A theoretical formulation of wave-vortex interactions
NASA Technical Reports Server (NTRS)
Wu, J. Z.; Wu, J. M.
1989-01-01
A unified theoretical formulation for wave-vortex interaction, designated the '(omega, Pi) framework,' is presented. Based on the orthogonal decomposition of fluid dynamic interactions, the formulation can be used to study a variety of problems, including the interaction of a longitudinal (acoustic) wave and/or a transverse (vortical) wave with a main vortex flow. Moreover, the formulation permits a unified treatment of wave-vortex interaction at various levels of approximation, where the normal 'piston' process and the tangential 'rubbing' process can be approximated differently.
Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Tarkenton, G. M.
1989-01-01
The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program, which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds, is described. The program can predict the noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat: one appropriate for subsonic sources, the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.
Nonlinear elasticity in rocks: A comprehensive three-dimensional description
Lott, Martin; Remillieux, Marcel; Garnier, Vincent; ...
2017-07-17
Here we study theoretically and experimentally the mechanisms of nonlinear and nonequilibrium dynamics in geomaterials through dynamic acoustoelasticity testing. In the proposed theoretical formulation, the classical theory of nonlinear elasticity is extended to include the effects of conditioning. This formulation is adapted to the context of dynamic acoustoelasticity testing in which a low-frequency "pump" wave induces a strain field in the sample and modulates the propagation of a high-frequency "probe" wave. Experiments are conducted to validate the formulation in a long thin bar of Berea sandstone. Several configurations of the pump and probe are examined: the pump successively consists of the first longitudinal and first torsional mode of vibration of the sample, while the probe is successively based on (pressure) $P$ and (shear) $S$ waves. The theoretical predictions reproduce many features of the elastic response observed experimentally, in particular the coupling between nonlinear and nonequilibrium dynamics and the three-dimensional effects resulting from the tensorial nature of elasticity.
Access point selection game with mobile users using correlated equilibrium.
Sohn, Insoo
2015-01-01
One of the most important issues in wireless local area network (WLAN) systems with multiple access points (APs) is the AP selection problem. Game theory is a mathematical tool used to analyze the interactions in multiplayer systems and has been applied to various problems in wireless networks. Correlated equilibrium (CE) is one of the powerful game theory solution concepts, which is more general than the Nash equilibrium for analyzing the interactions in multiplayer mixed strategy games. A game-theoretic formulation of the AP selection problem with mobile users is presented using a novel scheme based on a regret-based learning procedure. Through convergence analysis, we show that the joint actions based on the proposed algorithm achieve CE. Simulation results illustrate that the proposed algorithm is effective in a realistic WLAN environment with user mobility and achieves maximum system throughput based on the game-theoretic formulation.
Developing Emotion-Based Case Formulations: A Research-Informed Method.
Pascual-Leone, Antonio; Kramer, Ueli
2017-01-01
New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change and how it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan. Copyright © 2016 John Wiley & Sons, Ltd.
Pharmaceutical Perspective on Opalescence and Liquid-Liquid Phase Separation in Protein Solutions.
Raut, Ashlesha S; Kalonia, Devendra S
2016-05-02
Opalescence in protein solutions reduces the aesthetic appeal of a formulation and can indicate the presence of aggregates or be a precursor to phase separation in solution, signifying reduced product stability. Liquid-liquid phase separation of a protein solution into a protein-rich and a protein-poor phase has been well documented for globular proteins and recently observed for monoclonal antibody solutions, resulting in physical instability of the formulation. The present review discusses opalescence and liquid-liquid phase separation (LLPS) for therapeutic protein formulations. A brief discussion of theoretical concepts based on thermodynamics, kinetics, and light scattering is presented. The review also discusses the theoretical concepts behind intense light scattering in the vicinity of the critical point, termed "critical opalescence". Both opalescence and LLPS are affected by formulation factors including pH, ionic strength, protein concentration, temperature, and excipients. Literature reports on the effect of these formulation factors on attractive protein-protein interactions in solution, as assessed by second virial coefficient (B2) and cloud-point temperature (Tcloud) measurements, are also presented. The review also highlights the pharmaceutical implications of LLPS in protein solutions.
Experimental validation of ultrasonic guided modes in electrical cables by optical interferometry.
Mateo, Carlos; de Espinosa, Francisco Montero; Gómez-Ullate, Yago; Talavera, Juan A
2008-03-01
In this work, the dispersion curves of elastic waves propagating in electrical cables and in bare copper wires are obtained theoretically and validated experimentally. The theoretical model, based on Gazis equations formulated according to the global matrix methodology, is resolved numerically. Viscoelasticity and attenuation are modeled theoretically using the Kelvin-Voigt model. Experimental tests are carried out using interferometry. There is good agreement between the simulations and the experiments despite the peculiarities of electrical cables.
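For reference, the Kelvin-Voigt model mentioned above relates stress to strain and strain rate; in its simplest one-dimensional form (a standard constitutive sketch, not necessarily the exact form used in the paper) it reads $\sigma(t) = E\,\varepsilon(t) + \eta\,\dot{\varepsilon}(t)$, where $E$ is the elastic modulus and $\eta$ the viscosity, so that for harmonic waves the effective modulus becomes complex, $E^{*} = E + i\omega\eta$, and attenuation grows with frequency.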
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general formulation is presented for the analysis of steady and unsteady, subsonic and supersonic aerodynamics for complex aircraft configurations. The theoretical formulation, the numerical procedure, the description of the program SOUSSA (steady, oscillatory and unsteady, subsonic and supersonic aerodynamics), and numerical results are included. In particular, generalized forces for fully unsteady (complex frequency) aerodynamics of a wing-body configuration, AGARD wing-tail interference in both subsonic and supersonic flows, and flutter analysis results are presented. The theoretical formulation is based upon an integral equation that accommodates completely arbitrary motion. Steady and oscillatory aerodynamic flows are considered, as is small-amplitude, fully transient response in the time domain. The transient response yields the aerodynamic transfer function (the Laplace transform of the fully unsteady operator) for frequency-domain analysis, which is particularly convenient for linear systems analysis of the whole aircraft.
Metallized gelled monopropellants
NASA Technical Reports Server (NTRS)
Nieder, Erin G.; Harrod, Charles E.; Rodgers, Frederick C.; Rapp, Douglas C.; Palaszewski, Bryan A.
1992-01-01
Thermochemical calculations for seven metallized monopropellants were conducted to quantify theoretical specific impulse and density specific impulse performance. On the basis of theoretical performance, commercial availability of formulation constituents, and anticipated viscometric behavior, two metallized monopropellants were selected for formulation characterization: triethylene glycol dinitrate/ammonium perchlorate/aluminum and hydrogen peroxide/aluminum. Formulation goals were established, and monopropellant formulation compatibility and hazard sensitivity were experimentally determined. These experimental results indicate that the friction sensitivity, detonation susceptibility, and material handling difficulties of the evaluated monopropellant formulations and their constituents pose formidable barriers to their future application as metallized monopropellants.
Group theoretical formulation of free fall and projectile motion
NASA Astrophysics Data System (ADS)
Düztaş, Koray
2018-07-01
In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits regarding their relations to the physical orbits of particles and unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem with its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
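As a quick numerical illustration (a minimal sketch written for this summary, not code from the paper), the constant-acceleration maps ϕ_t acting on the phase space (x, v) compose as a one-parameter group, ϕ_s ∘ ϕ_t = ϕ_{s+t}:

import numpy as np

# Constant-acceleration flow on the phase space (x, v):
# phi_t(x, v) = (x + v*t + 0.5*a*t**2, v + a*t)
def phi(t, state, a=-9.81):
    x, v = state
    return np.array([x + v * t + 0.5 * a * t**2, v + a * t])

state0 = np.array([10.0, 2.0])   # initial height and velocity (one-dimensional free fall)
s, t = 0.7, 1.3
lhs = phi(s, phi(t, state0))     # apply phi_t, then phi_s
rhs = phi(s + t, state0)         # apply phi_{s+t} directly
print(np.allclose(lhs, rhs))     # True: the maps {phi_t} form a one-parameter group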
Three-dimensional compact explicit-finite difference time domain scheme with density variation
NASA Astrophysics Data System (ADS)
Tsuchiya, Takao; Maruta, Naoki
2018-07-01
In this paper, the density variation is implemented in the three-dimensional compact-explicit finite-difference time-domain (CE-FDTD) method. The formulation is first developed based on the continuity equation and the equation of motion, both of which include the density. Numerical demonstrations are performed for three-dimensional sound wave propagation in a layered medium composed of two regions of different density. The numerical results are compared with theoretical results to verify the proposed formulation.
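For context, a density-dependent formulation of this kind typically starts from the linearized continuity equation and equation of motion of acoustics (standard first-order equations, quoted here as background rather than as the paper's exact discretized scheme):

$\partial p/\partial t = -\rho c^{2}\,\nabla\cdot\mathbf{v}$,   $\rho\,\partial \mathbf{v}/\partial t = -\nabla p$,

where $\rho(\mathbf{x})$ is the locally varying density, $c$ the sound speed, $p$ the acoustic pressure and $\mathbf{v}$ the particle velocity.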
MultivariateResidues: A Mathematica package for computing multivariate residues
NASA Astrophysics Data System (ADS)
Larsen, Kasper J.; Rietkerk, Robbert
2018-01-01
Multivariate residues appear in many different contexts in theoretical physics and algebraic geometry. In theoretical physics, for example, they give the proper definition of generalized-unitarity cuts, and they play a central role in the Grassmannian formulation of the S-matrix by Arkani-Hamed et al. In realistic cases their evaluation can be non-trivial. In this paper we provide a Mathematica package for efficient evaluation of multivariate residues based on methods from computational algebraic geometry.
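For readers unfamiliar with the object being computed, the local (Grothendieck) residue of a rational $n$-form at an isolated common zero $p$ of the denominator factors is conventionally defined as (standard definition, not specific to the package's conventions)

$\operatorname{Res}_{p} = \dfrac{1}{(2\pi i)^{n}} \oint_{\Gamma} \dfrac{h(z)\, dz_1 \wedge \cdots \wedge dz_n}{f_1(z)\cdots f_n(z)}$,

where $\Gamma = \{|f_1(z)| = \epsilon_1, \dots, |f_n(z)| = \epsilon_n\}$ is a small cycle encircling $p$.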
Comparison of information theoretic divergences for sensor management
NASA Astrophysics Data System (ADS)
Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael
2011-06-01
In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they afford comparisons between distributions resulting from different types of sensors under different actions. The difference in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e., Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and design parameter.
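For concreteness, the two divergences being compared can be evaluated for discrete distributions as follows (a minimal sketch using the standard definitions; the tracking-specific Gaussian forms used in the paper are not reproduced here):

import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def renyi_divergence(p, q, alpha):
    """Renyi alpha-divergence; approaches D_KL(p || q) in the limit alpha -> 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

p, q = [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]
print(kl_divergence(p, q))            # K-L divergence
print(renyi_divergence(p, q, 0.5))    # Renyi divergence at design parameter alpha = 0.5
print(renyi_divergence(p, q, 0.999))  # close to the K-L value as alpha -> 1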
Estimating 3D positions and velocities of projectiles from monocular views.
Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P
2009-05-01
In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
NASA Technical Reports Server (NTRS)
Sadler, S. G.
1972-01-01
A mathematical model and computer program were implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. The theoretical formulation and analysis of results are presented.
Formulation and Solid State Characterization of Nicotinamide-based Co-crystals of Fenofibrate
Shewale, Sheetal; Shete, A. S.; Doijad, R. C.; Kadam, S. S.; Patil, V. A.; Yadav, A. V.
2015-01-01
The present investigation deals with the formulation of nicotinamide-based co-crystals of fenofibrate by different methods and the solid-state characterization of the prepared co-crystals. Fenofibrate and nicotinamide as a coformer in a 1:1 molar ratio were used to formulate molecular complexes by kneading, solution crystallization, antisolvent addition and solvent drop grinding methods. The prepared molecular complexes were characterized by powder X-ray diffractometry, differential scanning calorimetry, Fourier transform infrared spectroscopy, nuclear magnetic resonance spectroscopy and in vitro dissolution study. The considerable improvement in the dissolution rate of fenofibrate from the optimized co-crystal formulation was attributed to increased solubility: supersaturation from the fine co-crystals is reached faster because of the large specific surface area of the small particles, and phase transformation to pure fenofibrate is prevented. The in vitro dissolution study showed that the formation of co-crystals improves the dissolution rate of fenofibrate. Nicotinamide forms co-crystals with fenofibrate, both theoretically and practically. PMID:26180279
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize theoretical calculation and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function which is defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only the peak positions within a small range of the X-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
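The cost function described above can be sketched schematically as a weighted sum of an interatomic energy term and a peak-position penalty (the function names and the tolerance-based matching below are hypothetical placeholders; the paper's actual "crystallinity" definition and weighting are not reproduced):

import numpy as np

def cost(structure, observed_peaks, weight, energy_fn, simulated_peaks_fn, tol=0.1):
    """Schematic cost: interatomic potential energy plus a penalty rewarding structures
    whose simulated diffraction peaks fall near the observed peak positions."""
    energy = energy_fn(structure)                      # force-field energy (placeholder)
    peaks = np.asarray(simulated_peaks_fn(structure))  # simulated XRD peak positions (placeholder)
    observed = np.asarray(observed_peaks)
    matched = np.array([np.min(np.abs(peaks - t)) < tol for t in observed])
    penalty = 1.0 - matched.mean()                     # crude stand-in for a "crystallinity" figure of merit
    return energy + weight * penalty

# toy usage with dummy callables
print(cost(None, observed_peaks=[26.6, 36.0], weight=5.0,
           energy_fn=lambda s: 0.0,
           simulated_peaks_fn=lambda s: [21.0, 26.6, 36.1]))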
Theoretical study on the sound absorption of electrolytic solutions. I. Theoretical formulation.
Yamaguchi, T; Matsuoka, T; Koda, S
2007-04-14
A theory is formulated that describes the sound absorption of electrolytic solutions due to the relative motion of ions, including the formation of ion pairs. The theory is based on the Kubo-Green formula for the bulk viscosity. The time correlation function of the pressure is projected onto the bilinear product of the density modes of ions. The time development of the product of density modes is described by the diffusive limit of the generalized Langevin equation, and approximate expressions for the three- and four-body correlation functions required are given with the hypernetted-chain integral equation theory. Calculations on the aqueous solutions of model electrolytes are performed. It is demonstrated that the theory describes both the activated barrier crossing between contact and solvent-separated ion pairs and the Coulombic correlation between ions.
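The Kubo-Green (Green-Kubo) expression for the bulk viscosity that underlies the theory is, in its standard form (quoted for reference; the projection onto ionic density modes described above is not reproduced here),

$\zeta = \dfrac{V}{k_{B}T} \int_{0}^{\infty} \langle \delta P(t)\, \delta P(0)\rangle\, dt$,  with  $\delta P(t) = P(t) - \langle P\rangle$,

where $P(t)$ is the instantaneous pressure, $V$ the system volume and $k_{B}T$ the thermal energy.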
Visual Expertise as Embodied Practice
ERIC Educational Resources Information Center
Ivarsson, Jonas
2017-01-01
This study looks at the practice of thoracic radiology and follows a group of radiologists and radiophysicists in their efforts to find, discuss, and formulate issues or troubles ensuing from the implementation of a new radiographic imaging technology. Based in the theoretical tradition of ethnomethodology, it examines the local endogenous practices…
NASA Astrophysics Data System (ADS)
Guisasola, Jenaro; Ceberio, Mikel; Zubimendi, José Luis
2006-09-01
The study we present explores how first-year engineering students formulate hypotheses in order to construct their own problem-solving structure when confronted with problems in physics. Under the constructivist perspective of the teaching-learning process, the formulation of hypotheses plays a key role in contrasting the coherence of the students' ideas with the theoretical frame. The main research instrument used to identify students' reasoning is the written report produced by each student on how they attempted four problem-solving tasks in which they were asked explicitly to formulate hypotheses. The protocols used in the assessment of the solutions consisted of a semi-quantitative study based on grids designed for the analysis of written answers. In this paper we include two of the tasks used and the corresponding scheme for the categorisation of the answers; details of the other two tasks are also outlined. According to our findings, the majority of students judge a hypothesis to be plausible if it is congruent with their previous knowledge, without rigorously checking it against the theoretical framework explained in class.
Analysis of NASA JP-4 fire tests data and development of a simple fire model
NASA Technical Reports Server (NTRS)
Raj, P.
1980-01-01
The temperature, velocity and species concentration data obtained during the NASA fire tests (3 m, 7.5 m and 15 m diameter JP-4 fires) were analyzed. Utilizing the data analysis, a simple theoretical model was formulated to predict the temperature and velocity profiles in JP-4 fires. The theoretical model, which does not take into account the detailed chemistry of combustion, is capable of predicting the extent of necking of the fire near its base.
1991-07-01
provide poor representations of overdriven detonation. The Jones-Wilkins-Lee-Baker (JWLB) equation of state has been formulated to provide a more accurate representation...Chapman-Jouguet state. The resulting equation of state form, named Jones-Wilkins-Lee-Baker (JWLB), is $P = \sum_i A_i\left(1 - \frac{\lambda}{R_i V}\right)e^{-R_i V} + \frac{\lambda E}{V}$, where $\lambda = \sum_i \left(A_{\lambda i} V + B_{\lambda i}\right)e^{-R_{\lambda i} V} + \omega$, $V$ is the relative volume and $E$ is the specific internal energy. The JWLB equation of state form is based on a first-order expansion around the principal isentrope: $P_s = \sum_i A_i e^{-R_i V} + C V^{-(\omega+1)}$.
Theoretical Studies of Alfven Waves and Energetic Particle Physics in Fusion Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Liu
This report summarizes major theoretical findings in the linear as well as nonlinear physics of Alfvén waves and energetic particles in magnetically confined fusion plasmas. On the linear physics, a variational formulation, based on the separation of singular and regular spatial scales, for drift-Alfvén instabilities excited by energetic particles is established. This variational formulation is then applied to derive the general fishbone-like dispersion relations corresponding to the various Alfvén eigenmodes and energetic-particle modes. It is further employed to explore in depth the low-frequency Alfvén eigenmodes and to demonstrate the non-perturbative nature of the energetic particles. On the nonlinear physics, novel findings are obtained on both nonlinear wave-wave interactions and nonlinear wave-energetic particle interactions. It is demonstrated that both the energetic particles and the fine radial mode structures can qualitatively affect the nonlinear evolution of Alfvén eigenmodes. Meanwhile, a theoretical approach based on the Dyson equation is developed to treat self-consistently the nonlinear interactions between Alfvén waves and energetic particles, and is then applied to explain simulation results of energetic-particle modes. A list of relevant journal publications on the above findings is also included.
ERIC Educational Resources Information Center
Vowles, Kevin E.; Wetherell, Julie Loebach; Sorrell, John T.
2009-01-01
Cognitive behavior therapy (CBT) for chronic pain is effective, although a number of issues in need of clarification remain, including the processes by which CBT works, the role of cognitive changes in the achievement of outcomes, and the formulation of a coherent theoretical model. Recent developments in psychology have attempted to address these…
NASA Astrophysics Data System (ADS)
Shen, Yanfeng; Cesnik, Carlos E. S.
2016-04-01
This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized computing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation contribute to LISA's computational efficiency advantage over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced, and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation, and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
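As a minimal illustration of the penalty approach used for the crack-face contact (a one-dimensional sketch under assumed parameters, not the LISA/CUDA implementation itself):

import numpy as np

# Two point masses representing crack faces; a penalty spring resists interpenetration.
m, k_pen, dt = 1.0, 1.0e4, 1.0e-3    # mass, penalty (contact) stiffness, time step
x = np.array([0.0, 1.0])             # positions of the two faces
v = np.array([1.0, -1.0])            # closing velocities

for step in range(2000):
    gap = x[1] - x[0]
    # penalty force activates only when the faces interpenetrate (gap < 0)
    f_contact = -k_pen * gap if gap < 0.0 else 0.0
    f = np.array([-f_contact, f_contact])  # equal and opposite forces on the two faces
    v += (f / m) * dt                      # semi-implicit (symplectic) Euler update
    x += v * dt

print(x, v)  # the faces separate again after the contact-impact event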
Group theoretical approach to the Dirac operator on S²
NASA Astrophysics Data System (ADS)
Gutiérrez, Sergio; Huet, Idrish
2018-04-01
In this review we outline the group theoretical approach to formulating and solving the eigenvalue problem of the Dirac operator on the round 2-sphere conceived as the right coset S² = SU(2)/U(1). Starting from general symmetry considerations, we illustrate the formulation of the Dirac operator through left-action or right-action differential operators, whose properties on a right coset are quite different. The construction of the spinor space and the solution of the spectral problem using group theoretical methods are also presented.
ERIC Educational Resources Information Center
Tietze, Irene Nowell; Shakeshaft, Charol
An exploration in the context of feminist science of one theoretical basis of educational administration--Abraham Maslow's theory of human motivation and self-actualization--finds an androcentric bias in Maslow's methodology, philosophical underpinnings, and theory formulation. Maslow's hypothetico-deductive methodology was based on a…
Children's Responses to Fantasy in Relation to Their Stages of Intellectual Development.
ERIC Educational Resources Information Center
Harms, Jeanne McLain
Girls' responses to fantasy in children's literature as related to a conceptual framework (extrapolated from books of modern fantasy) of intellectual development (based on Piaget's theoretical formulations) were investigated. The three stages of thinking corresponded to the ages of the subjects: five year olds represented the preoperational stage,…
An Informational-Theoretical Formulation of the Second Law of Thermodynamics
ERIC Educational Resources Information Center
Ben-Naim, Arieh
2009-01-01
This paper presents a formulation of the second law of thermodynamics couched in terms of Shannon's measure of information. This formulation has an advantage over other formulations of the second law. First, it shows explicitly what it is that changes in a spontaneous process in an isolated system, which is traditionally referred to as the…
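The Shannon measure of information referred to above is, for a discrete probability distribution $\{p_i\}$ (standard definition),

$H(p_1, \dots, p_n) = -\sum_{i=1}^{n} p_i \log p_i$,

and the formulation expresses the quantity that changes in a spontaneous process in an isolated system in terms of this measure evaluated over the relevant distribution of states.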
Rothgangel, Andreas; Braun, Susy; de Witte, Luc; Beurskens, Anna; Smeets, Rob
2016-04-01
To describe the development and content of a clinical framework for mirror therapy (MT) in patients with phantom limb pain (PLP) following amputation. Based on an a priori formulated theoretical model, three sources of data collection were used to develop the clinical framework. First, the literature was reviewed for important clinical aspects of MT and for evidence of its effectiveness in patients with phantom limb pain. In addition, questionnaires and semi-structured interviews were used to analyze the clinical experiences and preferences of physical and occupational therapists and of patients suffering from PLP regarding the application of MT. All data were finally clustered into main and subcategories and used to complement and refine the theoretical model. For every main category of the a priori formulated theoretical model, several subcategories emerged from the literature search and the patient and therapist interviews. Based on these categories, we developed a clinical flowchart that incorporates the main and subcategories in a logical way according to the phases of methodical intervention defined by the Royal Dutch Society for Physical Therapy. In addition, we developed a comprehensive booklet that illustrates the individual steps of the clinical flowchart. In this study, a structured clinical framework for the application of MT in patients with PLP was developed. This framework is currently being tested for its effectiveness in a multicenter randomized controlled trial. © 2015 World Institute of Pain.
Multi-Stage Convex Relaxation Methods for Machine Learning
2013-03-01
Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.
Bidirectional composition on lie groups for gradient-based image alignment.
Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick
2010-09-01
In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, the BCL method, relies on the compositional framework to minimize the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.
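To make the compositional structure concrete, the sketch below (which assumes 2D affine warps in homogeneous coordinates and a hypothetical choice of generators; it is not the BCL parameterization itself) shows how an incremental update expressed in the Lie algebra is composed onto the current warp:

import numpy as np
from scipy.linalg import expm

# A few generators of the 2D affine group in homogeneous coordinates
GENERATORS = [
    np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]]),   # translation in x
    np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]]),   # translation in y
    np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]]),  # rotation
    np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]]),   # isotropic scaling
]

def compose_update(warp, delta):
    """Compositional update on the Lie group: warp <- warp @ exp(sum_i delta_i * G_i)."""
    increment = sum(d * g for d, g in zip(delta, GENERATORS))
    return warp @ expm(increment)

warp = np.eye(3)                                      # start from the identity warp
warp = compose_update(warp, [2.0, -1.0, 0.05, 0.01])  # one iteration's parameter update
print(warp)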
Determination of mixed mode (I/II) SIFs of cracked orthotropic materials
NASA Astrophysics Data System (ADS)
Chakraborty, D.; Chakraborty, Debaleena; Murthy, K. S. R. K.
2018-05-01
Strain gage techniques have been successfully but sparsely used for the determination of stress intensity factors (SIFs) of orthotropic materials. For mode I cases, a few works have reported strain gage based determination of the mode I SIF of orthotropic materials. However, for mixed mode (I/II) cases, neither a theoretical development of a strain gage based technique nor any recommended guidelines for the minimum number of strain gages and their locations have been reported in the literature for the determination of mixed mode SIFs. The authors recently proposed, for the first time, a theoretical formulation for using strain gages to determine mixed mode SIFs of orthotropic materials [1]. Based on these formulations, the present paper discusses a finite element (FE) based numerical simulation of the proposed strain gage technique employing [902/0]10S carbon-epoxy laminates with a slant edge crack. An FE based procedure has also been presented for determining the optimal radial locations of the strain gages a priori, before actual experiments. To substantiate the efficacy of the proposed technique, numerical simulations of strain gage based determination of mixed mode SIFs have been conducted. Results show that it is possible to accurately determine the mixed mode SIFs of orthotropic laminates when the strain gages are placed within the optimal radial locations estimated using the present formulation.
High-Performance Monopropellants and Catalysts Evaluated
NASA Technical Reports Server (NTRS)
Reed, Brian D.
2004-01-01
The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250 sec Isp goal.
On the combined gradient-stochastic plasticity model: Application to Mo-micropillar compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, A. A., E-mail: akonsta@civil.auth.gr; Zhang, X., E-mail: zhangxu26@126.com; Aifantis, E. C., E-mail: mom@mom.gen.auth.gr
2015-02-17
A formulation for addressing heterogeneous material deformation is proposed. It is based on the use of a stochasticity-enhanced gradient plasticity model implemented through a cellular automaton. The specific application is on Mo-micropillar compression, for which the irregularities of the strain bursts observed have been experimentally measured and theoretically interpreted through Tsallis' q-statistics.
NASA Technical Reports Server (NTRS)
Borg, S. F.
1976-01-01
A generalized applied group theory is developed, and it is shown that phenomena from a number of diverse disciplines may be included under the umbrella of a single theoretical formulation based upon the concept of a group consistent with the usual definition of this term.
The SEURAT-1 approach towards animal free human safety assessment.
Gocht, Tilman; Berggren, Elisabet; Ahr, Hans Jürgen; Cotgreave, Ian; Cronin, Mark T D; Daston, George; Hardy, Barry; Heinzle, Elmar; Hescheler, Jürgen; Knight, Derek J; Mahony, Catherine; Peschanski, Marc; Schwarz, Michael; Thomas, Russell S; Verfaillie, Catherine; White, Andrew; Whelan, Maurice
2015-01-01
SEURAT-1 is a European public-private research consortium that is working towards animal-free testing of chemical compounds and the highest level of consumer protection. A research strategy was formulated based on the guiding principle to adopt a toxicological mode-of-action framework to describe how any substance may adversely affect human health. The proof of the initiative will be in demonstrating the applicability of the concepts on which SEURAT-1 is built on three levels: (i) theoretical prototypes for adverse outcome pathways are formulated based on knowledge already available in the scientific literature on investigating the toxicological modes-of-action leading to adverse outcomes (addressing mainly liver toxicity); (ii) adverse outcome pathway descriptions are used as a guide for the formulation of case studies to further elucidate the theoretical model and to develop integrated testing strategies for the prediction of certain toxicological effects (i.e., those related to the adverse outcome pathway descriptions); (iii) further case studies target the application of knowledge gained within SEURAT-1 in the context of safety assessment. The ultimate goal would be to perform ab initio predictions based on a complete understanding of toxicological mechanisms. In the near-term, it is more realistic that data from innovative testing methods will support read-across arguments. Both scenarios are addressed with case studies for improved safety assessment. A conceptual framework for a rational integrated assessment strategy emerged from designing the case studies and is discussed in the context of international developments focusing on alternative approaches for evaluating chemicals using the new 21st century tools for toxicity testing.
A gauge-theoretic approach to gravity.
Krasnov, Kirill
2012-08-08
Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang-Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.
NASA Astrophysics Data System (ADS)
Boughariou, F.; Chouikhi, S.; Kallel, A.; Belgaroui, E.
2015-12-01
In this paper, we present a new theoretical and numerical formulation of the electrical and thermal breakdown phenomena, induced by charge packet dynamics, in low-density polyethylene (LDPE) insulating film under a high dc applied field. The theoretical physical formulation is composed of the equations of bipolar charge transport together with the thermo-electric coupled equation, associated for the first time in modeling with the bipolar transport problem. This coupled equation is solved with a finite-element numerical model. For the first time, all bipolar transport results are obtained under non-uniform temperature distributions in the sample bulk. The principal original results show the occurrence of a very sudden, abrupt increase in local temperature associated with a very sharp increase in the external and conduction current densities appearing during the steady state. The coupling between these electrical and thermal instabilities reflects physically the local coupling between electrical conduction and the thermal Joule effect. Results for the non-uniform temperature distributions induced by the non-uniform electrical conduction current are also presented for several times. According to our formulation, the strong injection current is the principal factor in the electrical and thermal breakdown of the polymer insulating material. This result is shown in this work. Our formulation is also validated experimentally.
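A generic form of the thermo-electric coupling referred to above is the heat-conduction equation with a Joule-heating source driven by the conduction current (a standard form, given here as an assumption about the coupled equation rather than its exact statement in the paper):

$\rho_m C_p\, \partial T/\partial t = \nabla\cdot(\kappa\,\nabla T) + \mathbf{j}_c\cdot\mathbf{E}$,

where $\rho_m$ is the mass density, $C_p$ the specific heat, $\kappa$ the thermal conductivity, $\mathbf{j}_c$ the conduction current density and $\mathbf{E}$ the local electric field; the $\mathbf{j}_c\cdot\mathbf{E}$ term is the local Joule effect that couples the electrical conduction to the temperature field.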
Leng, Donglei; Thanki, Kaushik; Fattal, Elias; Foged, Camilla; Yang, Mingshi
2017-08-25
Chronic obstructive pulmonary disease (COPD) is a complex disease, characterized by persistent airflow limitation and chronic inflammation. The purpose of this study was to design lipid-polymer hybrid nanoparticles (LPNs) loaded with the corticosteroid, budesonide, which could potentially be combined with small interfering RNA (siRNA) for COPD management. Here, we prepared LPNs based on the biodegradable polymer poly(dl-lactic-co-glycolic acid) (PLGA) and the cationic lipid dioleyltrimethylammonium propane (DOTAP) using a double emulsion solvent evaporation method. A quality-by-design (QbD) approach was adopted to define the optimal formulation parameters. The quality target product profile (QTPP) of the LPNs was identified based on risk assessment. Two critical formulation parameters (CFPs) were identified, including the theoretical budesonide loading and the theoretical DOTAP loading. The CFPs were linked to critical quality attributes (CQAs), which included the intensity-based hydrodynamic particle diameter (z-average), the polydispersity index (PDI), the zeta-potential, the budesonide encapsulation efficiency, the actual budesonide loading and the DOTAP encapsulation efficiency. A response surface methodology (RSM) was applied for the experimental design to evaluate the influence of the CFPs on the CQAs, and to identify the optimal operation space (OOS). All nanoparticle dispersions displayed monodisperse size distributions (PDI < 0.2) with z-averages of approximately 150 nm, suggesting that the size is not dependent on the investigated CFPs. In contrast, the zeta-potential was highly dependent on the theoretical DOTAP loading. Upon increased DOTAP loading, the zeta-potential reached a maximal point, after which it remained stable at the maximum value. This suggests that the LPN surface is covered by DOTAP, and that the DOTAP loading is saturable. The actual budesonide loading of the LPNs was mainly dependent on the initial amount of budesonide, and a clear positive effect was observed, which shows that the interaction between drug and PLGA increases when increasing the initial amount of budesonide. The OOS was modeled by applying the QTPP. The OOS had a budesonide encapsulation efficiency higher than 30%, a budesonide loading above 15 μg budesonide/mg PLGA, a zeta-potential higher than 35 mV and a DOTAP encapsulation efficiency above 50%. This study shows the importance of systematic formulation design for understanding the effect of formulation parameters on the characteristics of LPNs, eventually resulting in the identification of an OOS. Copyright © 2017 Elsevier B.V. All rights reserved.
A solution to the biodiversity paradox by logical deterministic cellular automata.
Kalmykov, Lev V; Kalmykov, Vyacheslav L
2015-06-01
The paradox of biological diversity is the key problem of theoretical ecology. The paradox consists in the contradiction between the competitive exclusion principle and the observed biodiversity. The principle is important as the basis for ecological theory. Using a relatively simple model, we show a mechanism of indefinite coexistence of complete competitors which violates the known formulations of the competitive exclusion principle. This mechanism is based on the timely recovery of limiting resources and their spatio-temporal allocation between competitors. Because of the limitations of black-box modeling, there has been a problem in formulating the exclusion principle correctly. Our white-box multiscale model of two-species competition is based on logical deterministic individual-based cellular automata. This approach provides automatic deductive inference on the basis of a system of axioms and gives direct insight into the mechanisms of the studied system; it is one of the most promising methods of artificial intelligence. We reformulate and generalize the competitive exclusion principle and explain why this formulation provides a solution to the biodiversity paradox. In addition, we propose a principle of competitive coexistence.
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1974-01-01
A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.
Ion Structure Near a Core-Shell Dielectric Nanoparticle
NASA Astrophysics Data System (ADS)
Ma, Manman; Gan, Zecheng; Xu, Zhenli
2017-02-01
A generalized image charge formulation is proposed for the Green's function of a core-shell dielectric nanoparticle for which theoretical and simulation investigations are rarely reported due to the difficulty of resolving the dielectric heterogeneity. Based on the formulation, an efficient and accurate algorithm is developed for calculating electrostatic polarization charges of mobile ions, allowing us to study related physical systems using the Monte Carlo algorithm. The computer simulations show that a fine-tuning of the shell thickness or the ion-interface correlation strength can greatly alter electric double-layer structures and capacitances, owing to the complicated interplay between dielectric boundary effects and ion-interface correlations.
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
Advanced turboprop noise prediction based on recent theoretical results
NASA Technical Reports Server (NTRS)
Farassat, F.; Padula, S. L.; Dunn, M. H.
1987-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code; a considerable increase in accuracy and speed of execution is observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Quantum electron-vibrational dynamics at finite temperature: Thermo field dynamics approach
NASA Astrophysics Data System (ADS)
Borrelli, Raffaele; Gelin, Maxim F.
2016-12-01
Quantum electron-vibrational dynamics in molecular systems at finite temperature is described using an approach based on the thermo field dynamics theory. This formulation treats temperature effects in the Hilbert space without introducing the Liouville space. A comparison with the theoretically equivalent density matrix formulation shows the key numerical advantages of the present approach. The solution of thermo field dynamics equations with a novel technique for the propagation of tensor trains (matrix product states) is discussed. Numerical applications to model spin-boson systems show that the present approach is a promising tool for the description of quantum dynamics of complex molecular systems at finite temperature.
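In thermo field dynamics, thermal averages are represented as expectation values in a 'thermal vacuum' defined on a doubled Hilbert space (standard construction, quoted here for orientation):

$|0(\beta)\rangle = Z(\beta)^{-1/2} \sum_{n} e^{-\beta E_{n}/2}\, |n\rangle \otimes |\tilde{n}\rangle$,  so that  $\langle A\rangle_{\beta} = \langle 0(\beta)|\, A\, |0(\beta)\rangle$,

where the $|\tilde{n}\rangle$ are states of a fictitious 'tilde' copy of the system and $Z(\beta)$ is the partition function; the doubling is what allows temperature to be treated in the Hilbert space without passing to the Liouville space.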
Mixed finite-element formulations in piezoelectricity and flexoelectricity.
Mao, Sheng; Purohit, Prashant K; Aravas, Nikolaos
2016-06-01
Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a 'weighted integral sense' to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application.
Thermal shock fracture in cross-ply fibre-reinforced ceramic-matrix composites
NASA Astrophysics Data System (ADS)
Kastritseas, C.; Smith, P. A.; Yeomans, J. A.
2010-11-01
The onset of matrix cracking due to thermal shock in a range of simple and multi-layer cross-ply laminates comprising a calcium aluminosilicate (CAS) matrix reinforced with Nicalon® fibres is investigated analytically. A comprehensive stress analysis under conditions of thermal shock, ignoring transient effects, is performed and fracture criteria based on either a recently derived model for the thermal shock resistance of unidirectional Nicalon®/glass ceramic-matrix composites or fracture mechanics considerations are formulated. The effect of material thickness on the apparent thermal shock resistance is also modelled. Comparison with experimental results reveals that the accuracy of the predictions is satisfactory and the reasons for some discrepancies are discussed. In addition, a theoretical argument based on thermal shock theory is formulated to explain the observed cracking patterns.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
A method for the mathematical modeling and control of robotic manipulators based on spatial-operator algebra, providing a concise representation and a simple, high-level theoretical framework for the solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms are derived immediately from abstract spatial-operator expressions by inspection. The transition from abstract formulation, through abstract solution, to detailed implementation of specific algorithms that compute the solution is greatly simplified. Complicated dynamical problems, such as two cooperating robot arms, are solved more easily.
Forensic case formulation: theoretical, ethical and practical issues.
Davies, Jason; Black, Susie; Bentley, Natalie; Nagi, Claire
2013-10-01
Forensic case formulation, of increasing interest to practitioners and researchers, raises many ethical, theoretical and practical issues for them. Systemic, contextual and individual factors which need to be considered include the multitude of staff often involved with any one individual, the pressure to 'get it right' because of the range of risk implications associated with individuals within forensic mental health settings, and individual parameters, for example reluctance to engage with services. Copyright © 2013 John Wiley & Sons, Ltd.
Models, Data, and War: a Critique of the Foundation for Defense Analyses.
1980-03-12
A scientific formulation; an "objective" solution; analysis of a squishy problem; a judgmental formulation; a potential for distortion; a subjective ... inextricably tied to those judgments. Different analysts, with apparently identical knowledge of a real world problem, may develop plausible formulations ... configured is a concrete theoretical statement. The formulation of a computer model--conceiving a mathematical representation of the real world
Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.
Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C
2014-08-01
Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared with several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
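To make the correspondence-matrix idea concrete, the sketch below builds a descriptor cost matrix and solves a simple one-to-one assignment. It is not the paper's sparsity-enforcing linear program, and the descriptor arrays, Euclidean cost, and max_cost threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_landmarks(desc_a, desc_b, max_cost=0.6):
    """Toy correspondence step: descriptor-distance cost matrix plus a
    one-to-one assignment. desc_a, desc_b are (n_a, d) and (n_b, d) arrays
    of landmark descriptors (illustrative, not the paper's reinforced
    self-similarity descriptors)."""
    cost = cdist(desc_a, desc_b, metric="euclidean")   # n_a x n_b cost matrix
    rows, cols = linear_sum_assignment(cost)           # optimal one-to-one matching
    keep = cost[rows, cols] < max_cost                 # crude outlier rejection
    return list(zip(rows[keep], cols[keep]))
```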
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
Objectives of the Airline Firm: Theory
NASA Technical Reports Server (NTRS)
Kneafsey, J. T.
1972-01-01
Theoretical models are formulated for airline firm operations that revolve around alternative formulations of the managerial goals which these firms are pursuing in practice. Consideration is given to the different objective functions which the companies are following in lieu of profit maximization.
A gauge-theoretic approach to gravity
Krasnov, Kirill
2012-01-01
Einstein's general relativity (GR) is a dynamical theory of the space–time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much simpler and more compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both gravity and Yang–Mills theory arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach. PMID:22792040
NASA Astrophysics Data System (ADS)
Shahbazi, AmirHossein; Koohian, Ata; Madanipour, Khosro
2017-01-01
In this paper, continuous-wave laser scribing of metal thin films is investigated theoretically and experimentally. A formulation is presented based on parameters such as beam power, spot size, scanning speed and fluence thresholds. The role of speed in the transient temperature and track width is studied numerically. Using the two frameworks of pulsed laser ablation of thin films and laser printing on paper, the relation between ablation width and scanning speed has been derived. Furthermore, various scanning speeds of a focused 450 nm continuous-wave laser diode with an elliptical beam spot were applied experimentally to a 290 nm copper thin film coated on glass. The beam power was 150 mW after spatial filtering. By fitting the theoretical formulation to the experimental data, the threshold fluence and energy were obtained as 13.2 J mm-2 and 414 μJ, respectively. An anticipated theoretical parameter named the equilibrium border was verified experimentally. It shows that in the scribing of the 290 nm copper thin film, at the distance where the intensity reaches about 1/e of its maximum value, the absorbed fluence on the surface is equal to zero. Therefore the application of a continuous laser to metal thin film ablation has a different mechanism from pulsed laser drilling and beam scanning in printers.
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Padula, S. L.
1986-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. A considerable increase in accuracy and speed of execution is observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Amphotericin B releasing topical nanoemulsion for the treatment of candidiasis and aspergillosis.
Sosa, Lilian; Clares, Beatriz; Alvarado, Helen L; Bozal, Nuria; Domenech, Oscar; Calpena, Ana C
2017-10-01
The present study was designed to develop a nanoemulsion formulation of Amphotericin B (AmB) for the treatment of skin candidiasis and aspergillosis. Several ingredients were selected on the basis of AmB solubility and compatibility with skin. The formulation that exhibited the best properties was selected from the pseudo-ternary phase diagram. After physicochemical characterization, its stability was assessed. Drug release and skin permeation studies were also accomplished. The antifungal efficacy and skin tolerability of the developed AmB nanoemulsion were demonstrated. Finally, our results showed that the developed AmB formulation could provide an effective local antifungal effect without theoretical systemic absorption, based on its skin retention capacity, which might avoid related side effects. These results suggested that the nanoemulsion may be an optimal therapeutic alternative for the treatment of skin fungal infections with AmB. Copyright © 2017 Elsevier Inc. All rights reserved.
Electrostatic forces in the Poisson-Boltzmann systems
NASA Astrophysics Data System (ADS)
Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray
2013-09-01
Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue of assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of these formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for their validity, making them applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities, such as atomic point charges, and with discontinuous dielectrics, as in the widely used classical piecewise-constant dielectric models.
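For reference, a commonly quoted form of the stress tensor for a Poisson-Boltzmann system couples the dielectric (Maxwell) part with an ionic osmotic-pressure term; the paper's derivation is more general, so the expression below is only an assumed standard form, not the paper's result.

```latex
% Commonly quoted stress tensor for a Poisson-Boltzmann system with a linear,
% isotropic dielectric \epsilon, bulk ion concentrations c_s^\infty, and ion
% charges q_s; the force density follows as the divergence of T.
T_{ij} = \epsilon\left(E_i E_j - \tfrac{1}{2}\,\delta_{ij}\,E^2\right)
         - \delta_{ij}\,k_B T \sum_{s} c_s^{\infty}
           \left[\exp\!\left(-\frac{q_s \phi}{k_B T}\right) - 1\right],
\qquad f_i = \partial_j T_{ij}.
```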
Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.
Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong
2006-10-01
We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen's minimum and the asymptotic behavior of flow flux at large Kn.
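The regularization idea referenced above can be illustrated at second order: project the non-equilibrium populations onto the momentum-flux Hermite mode and rebuild them from that projection. The D2Q9 sketch below shows only this standard second-order step; the paper's model retains higher-order moments, so this is an assumed simplification rather than the extended scheme itself.

```python
import numpy as np

# Standard D2Q9 lattice: discrete velocities, weights, and cs^2 = 1/3.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def regularize(f, feq):
    """Second-order regularization of non-equilibrium populations.
    f, feq: arrays of shape (9, nx, ny)."""
    fneq = f - feq
    # Non-equilibrium momentum flux Pi_ab = sum_i fneq_i c_ia c_ib
    Pi = np.einsum('ixy,ia,ib->abxy', fneq, c, c)
    # Q_i,ab = c_ia c_ib - cs^2 delta_ab
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    # Rebuild fneq from its second-order Hermite projection only
    fneq_reg = (w[:, None, None] / (2 * cs2**2)) * np.einsum('iab,abxy->ixy', Q, Pi)
    return feq + fneq_reg
```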
Replicator equations, maximal cliques, and graph isomorphism.
Pelillo, M
1999-11-15
We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations--a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
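A minimal sketch of the core idea, under the Motzkin-Straus correspondence: run discrete-time replicator dynamics on the adjacency matrix over the simplex and read a clique off the support of the limit point. The association-graph construction used for graph isomorphism in the paper is omitted, and the support threshold is an illustrative heuristic.

```python
import numpy as np

def replicator_clique(A, iters=5000, tol=1e-12):
    """Discrete-time replicator dynamics on the Motzkin-Straus program
    max x^T A x over the standard simplex (A: 0/1 adjacency matrix).
    The support of the limit point is returned as a clique estimate."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                   # barycenter start
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)             # replicator update
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return np.where(x > 1.0 / (2 * n))[0], x  # support threshold is a heuristic

# Example graph: a 5-cycle with one chord, so vertices 0, 1, 2 form a triangle.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1.0
print(replicator_clique(A)[0])
```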
Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints
NASA Astrophysics Data System (ADS)
Sembiring, Pasukat
2017-12-01
Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), which operates on non-negative data, has become one of the popular methods for reducing dimensions. The main strength of this method is non-negativity: an object is modeled as a combination of non-negative basic parts, which provides a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework for NMF formulations with nonlinear constraints.
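A minimal sketch of NMF in the ANLS framework described above: fix one factor and solve a nonnegative least-squares problem for the other, then alternate. The nonlinear constraints proposed in the paper are not reproduced; the random initialization and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(V, k, iters=50, seed=0):
    """Basic ANLS for V ~= W H with V >= 0, W (m x k) >= 0, H (k x n) >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # Fix W, solve min_{H >= 0} ||V - W H||_F column by column
        for j in range(n):
            H[:, j], _ = nnls(W, V[:, j])
        # Fix H, solve min_{W >= 0} ||V^T - H^T W^T||_F row by row
        for i in range(m):
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H
```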
Finite element model for MOI applications using A-V formulation
NASA Astrophysics Data System (ADS)
Xuan, L.; Shanker, B.; Udpa, L.; Shih, W.; Fitzpatrick, G.
2001-04-01
Magneto-optic imaging (MOI) is a relatively new sensor technology, an extension of bubble memory technology to NDT, that produces easy-to-interpret, real-time analog images. MOI systems use a magneto-optic (MO) sensor to produce analog images of magnetic flux leakage from surface and subsurface defects. The instrument's capability of detecting the relatively weak magnetic fields associated with subsurface defects depends on the sensitivity of the magneto-optic sensor. The availability of a theoretical model that can simulate MOI system performance is extremely important for optimization of the MOI sensor and hardware system. A nodal finite element model based on the magnetic vector potential formulation has been developed for simulating the MOI phenomenon. This model has been used for predicting the magnetic fields in a simple test geometry with corrosion dome defects. In the case of test samples with multiple discontinuities, a more robust model using the magnetic vector potential Ā and the electric scalar potential V is required. In this paper, a finite element model based on the A-V formulation is developed to model a complex circumferential crack under aluminum rivets in a dimpled countersink.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
Field theoretical prediction of a property of the tropical cyclone
NASA Astrophysics Data System (ADS)
Spineanu, F.; Vlad, M.
2014-01-01
The large scale atmospheric vortices (tropical cyclones, tornadoes) are complex physical systems combining thermodynamic and fluid-mechanical processes. The late phase of the evolution towards stationarity consists of vorticity concentration, a well known tendency to self-organization and a universal property of two-dimensional fluids. It may then be expected that the stationary state of the tropical cyclone has the same nature as the vortices of many other systems in nature: ideal (Euler) fluids, superconductors, the Bose-Einstein condensate, cosmic strings, etc. Indeed, it was found that there is a description of the atmospheric vortex in terms of a classical field theory. It is compatible with the more conventional treatment based on conservation laws, but the field theoretical model reveals properties that are almost inaccessible to the conventional formulation: it identifies the stationary states as being close to self-duality. This is of highest importance: self-duality is known to be the origin of all coherent structures known in natural systems. Therefore the field theoretical (FT) formulation finds that the quasi-coherent form of the atmospheric vortex (tropical cyclone) at stationarity is an expression of this particular property. In the present work we examine a strong property of the tropical cyclone, which arises naturally in the FT formulation: the equality of the masses of the particles associated with the matter field and with the gauge field in the FT model translates into the equality between the maximum radial extension of the tropical cyclone and the Rossby radius. For the cases where the FT model is a good approximation, we calculate characteristic quantities of the tropical cyclone and find good agreement with observational data.
Bananas, Doughnuts and Seismic Traveltimes
NASA Astrophysics Data System (ADS)
Dahlen, F. A.
2002-12-01
Most of what we know about the 3-D seismic heterogeneity of the mantle is based upon ray-theoretical traveltime tomography. In this infinite-frequency approximation, a measured traveltime anomaly depends only upon the wavespeed along an infinitesimally thin geometrical ray between a seismic source and a seismographic station. In this lecture I shall describe a new formulation of the seismic traveltime inverse problem which accounts for the ability of a finite-frequency wave to "feel" 3-D structure off of the source-receiver ray. Finite-frequency diffraction effects associated with this off-ray sensitivity act to "heal" the corrugations that develop in a wavefront propagating through a heterogeneous medium. Ray-theoretical tomography is based upon the premise that a seismic wave "remembers" all of the traveltime advances or delays that it accrues along its path, whereas actual finite-frequency waves "forget". I shall describe a number of recent analytical and numerical investigations, which have led to an improved theoretical understanding of this phenomenon.
Game Theoretic Resolution of Water Conflicts
NASA Astrophysics Data System (ADS)
Tyagi, H.; Gosain, A. K.; Khosa, R.
2017-12-01
Water disputes are of a multi-disciplinary nature and involve an array of natural, hydrological, social, political and economic issues. Operations Research based decision making methods have been found to facilitate mathematical analysis of such multifaceted problems that consist of multiple stakeholders and their conflicting objectives. Game Theoretic techniques like Metagame and Hypergame Analysis can provide a framework for conceptualizing water conflicts and envisaging their potential solutions. In the present research, firstly a Metagame model has been developed to identify the range of plausible equilibrium outcomes for resolving conflicts pertaining to water apportionments in a transboundary watercourse. Further, it has been observed that the contenders often hide their strategies from other players to get favorable water allocations. Consequently, there are widespread misinterpretations about the tactics of the competitors, and contenders have to formulate their strategies entirely based on their perception of others. Accordingly, a Hypergame study has also been conducted to model the probable misperceptions that may exist amongst the river riparians. Thus, the current study assesses the efficacy of Game Theoretic techniques as a possible redressal mechanism for water conflicts.
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments and a derivation of corresponding solutions considering a variety of real-world constraints are daunting theoretical and practical challenges. Current and future space-based platforms have to simultaneously operate as components of satellite formations and/or systems and at the same time, have a capability to evade potential collisions with other maneuver constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on Liapunov methodology that has been successfully implemented in a number of air and ground based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital propagator independent methods that are previously developed for Space Situational Awareness (SSA). This game theoretical but at the same time robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and Simplified General Perturbation-4 (SGP-4) propagator.
Theoretical modeling of PEB procedure on EUV resist using FDM formulation
NASA Astrophysics Data System (ADS)
Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo
2018-03-01
The semiconductor manufacturing industry has steadily reduced pattern sizes on the wafer for enhanced productivity and performance, and the extreme ultraviolet (EUV) light source is considered a promising solution for further downsizing. EUV lithography involves complex photochemical reactions in the photoresist, which makes it difficult to construct a theoretical framework that facilitates rigorous investigation of the underlying mechanisms. Thus, we formulated a finite difference method (FDM) model of the post exposure bake (PEB) process on a positive chemically amplified resist (CAR), involving acid diffusion coupled with the deprotection reaction. The model is based on Fick's second law for diffusion and a first-order chemical rate law for deprotection. Two kinetic parameters, the diffusion coefficient of the acid and the rate constant of deprotection, obtained from experiment and atomic-scale simulation, were applied to the model. As a result, we obtained the time evolution of the protection ratio of each functional group in the resist monomer, which can be used to predict the resulting polymer morphology after the overall chemical reactions. This achievement will be a cornerstone of multiscale modeling that provides fundamental understanding of the factors important for EUV performance and supports the rational design of the next-generation photoresist.
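A minimal 1-D sketch of the coupled model described above: explicit finite differences for Fick's second law on the acid concentration, and first-order deprotection kinetics for the protecting groups. The grid spacing, diffusion coefficient, rate constant, and initial acid profile are illustrative assumptions, not the calibrated parameters from the paper.

```python
import numpy as np

def peb_fdm(nx=200, dx=1.0, dt=0.01, steps=6000, D=5.0, k=0.05):
    """Acid A diffuses (dA/dt = D d2A/dx2) while protecting groups P
    deprotect with first-order kinetics dP/dt = -k*A*P."""
    x = np.arange(nx) * dx
    A = np.exp(-((x - nx * dx / 2) ** 2) / (2 * 20.0 ** 2))   # exposed acid profile
    P = np.ones(nx)                                           # fully protected resist
    r = D * dt / dx**2
    assert r < 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        lap = np.roll(A, 1) - 2 * A + np.roll(A, -1)          # periodic boundaries
        A = A + r * lap
        P = P * np.exp(-k * A * dt)                           # A frozen within each step
    return x, A, P
```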
Aeroelastic stability analysis of a Darrieus wind turbine
NASA Astrophysics Data System (ADS)
Popelka, D.
1982-02-01
An aeroelastic stability analysis was developed for predicting flutter instabilities on vertical axis wind turbines. The analytical model and mathematical formulation of the problem are described as well as the physical mechanism that creates flutter in Darrieus turbines. Theoretical results are compared with measured experimental data from flutter tests of the Sandia 2 Meter turbine. Based on this comparison, the analysis appears to be an adequate design evaluation tool.
Statement on nursing: a personal perspective.
McCutcheon, Tonna
2004-01-01
Contemporary nursing is based on a conglomerate of theoretical nursing models. These models each incorporate four central concepts: person, health, environment, and nursing. By defining these concepts, nurses develop an individual framework from which they base their nursing practice. As an aspiring nurse practitioner in the gastroenterology field, I have retrospectively assessed my personal definitions of person, health, environment, and nursing. From these definitions, I am able to incorporate specific theoretical frameworks into my personal belief system, thus formulating a basis for my nursing practice. This foundation comprises the influence of nursing theorists Jean Watson, Sister Callista Roy, Kolcaba, Florence Nightingale, and Ida J. Orlando; the Perioperative Patient-Focused Model; Watson's Theory of Human Caring; theories regarding transpersonal human caring and healing; and feminist theories. Therefore, this article describes self-examination of nursing care by defining central nursing concepts, acknowledging the influence of nursing theorists and theories, and developing a personal framework from which I base my nursing practice.
A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.
ERIC Educational Resources Information Center
Chambers, Jay G.
This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…
First-principles definition and measurement of planetary electromagnetic-energy budget.
Mishchenko, Michael I; Lock, James A; Lacis, Andrew A; Travis, Larry D; Cairns, Brian
2016-06-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
First-principles definition and measurement of planetary electromagnetic-energy budget
NASA Astrophysics Data System (ADS)
Mishchenko, M. I.; James, L.; Lacis, A. A.; Travis, L. D.; Cairns, B.
2016-12-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this talk we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated concepts of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
First-Principles Definition and Measurement of Planetary Electromagnetic-Energy Budget
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Lock, James A.; Lacis, Andrew A.; Travis, Larry D.; Cairns, Brian
2016-01-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
Nonlinear analysis of composite thin-walled helicopter blades
NASA Astrophysics Data System (ADS)
Kalfon, J. P.; Rand, O.
Nonlinear theoretical modeling of laminated thin-walled composite helicopter rotor blades is presented. The derivation is based on nonlinear geometry with a detailed treatment of the body loads in the axial direction which are induced by the rotation. While the in-plane warping is neglected, a three-dimensional generic out-of-plane warping distribution is included. The formulation may also handle varying thicknesses and mass distributions along the cross-sectional walls. The problem is solved by successive iterations in which a system of equations is constructed and solved for each cross-section. In this method, the differential equations in the spanwise direction are formulated and solved using a finite-difference scheme which allows simple adaptation of the spanwise discretization mesh during iterations.
Autonomous learning based on cost assumptions: theoretical studies and experiments in robot control.
Ribeiro, C H; Hemerly, E M
2000-02-01
Autonomous learning techniques are based on experience acquisition. In most realistic applications, experience is time-consuming: it implies sensor reading, actuator control and algorithmic update, constrained by the learning system dynamics. The information crudeness upon which classical learning algorithms operate makes such problems too difficult and unrealistic. Nonetheless, additional information for facilitating the learning process ideally should be embedded in such a way that the structural, well-studied characteristics of these fundamental algorithms are maintained. We investigate in this article a more general formulation of the Q-learning method that allows for a spreading of information derived from single updates towards a neighbourhood of the instantly visited state, and that converges to optimality. We show how this new formulation can be used as a mechanism to safely embed prior knowledge about the structure of the state space, and demonstrate it in a modified implementation of a reinforcement learning algorithm in a real robot navigation task.
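A minimal sketch of the spreading idea described above: the temporal-difference sample from the visited state is also applied, with a similarity weight, to nearby states. The neighbours and sigma callables encode prior knowledge about state-space structure and are illustrative assumptions; the exact weighting used in the paper may differ.

```python
import numpy as np

def q_update_with_spreading(Q, s, a, r, s_next, neighbours, sigma,
                            alpha=0.1, gamma=0.95):
    """One update of a tabular Q-function (2-D array Q[state, action]) that
    spreads the learning signal to states near the visited state s."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])          # standard Q-learning update
    for s_nb in neighbours(s):                     # user-supplied neighbourhood of s
        w = sigma(s, s_nb)                         # similarity weight in [0, 1]
        Q[s_nb, a] += alpha * w * (target - Q[s_nb, a])
    return Q
```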
Analytical Expressions for Thermo-Osmotic Permeability of Clays
NASA Astrophysics Data System (ADS)
Gonçalvès, J.; Ji Yu, C.; Matray, J.-M.; Tremosa, J.
2018-01-01
In this study, a new formulation for the thermo-osmotic permeability of natural pore solutions containing monovalent and divalent cations is proposed. The mathematical formulation proposed here is based on the theoretical framework supporting thermo-osmosis which relies on water structure alteration in the pore space of surface-charged materials caused by solid-fluid electrochemical interactions. The ionic content balancing the surface charge of clay minerals causes a disruption in the hydrogen bond network when more structured water is present at the clay surface. Analytical expressions based on our heuristic model are proposed and compared to the available data for NaCl solutions. It is shown that the introduction of divalent cations reduces the thermo-osmotic permeability by one third compared to the monovalent case. The analytical expressions provided here can be used to advantage for safety calculations in deep underground nuclear waste repositories.
Formation of Virtual Organizations in Grids: A Game-Theoretic Approach
NASA Astrophysics Data System (ADS)
Carroll, Thomas E.; Grosu, Daniel
The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
Conceptual Commitments of the LIDA Model of Cognition
NASA Astrophysics Data System (ADS)
Franklin, Stan; Strain, Steve; McCall, Ryan; Baars, Bernard
2013-06-01
Significant debate on fundamental issues remains in the subfields of cognitive science, including perception, memory, attention, action selection, learning, and others. Psychology, neuroscience, and artificial intelligence each contribute alternative and sometimes conflicting perspectives on the supervening problem of artificial general intelligence (AGI). Current efforts toward a broad-based, systems-level model of minds cannot await theoretical convergence in each of the relevant subfields. Such work therefore requires the formulation of tentative hypotheses, based on current knowledge, that serve to connect cognitive functions into a theoretical framework for the study of the mind. We term such hypotheses "conceptual commitments" and describe the hypotheses underlying one such model, the Learning Intelligent Distribution Agent (LIDA) Model. Our intention is to initiate a discussion among AGI researchers about which conceptual commitments are essential, or particularly useful, toward creating AGI agents.
Combustion characteristics of SMX and SMX based propellants
NASA Astrophysics Data System (ADS)
Reese, David A.
This work investigates the combustion of the new solid nitrate ester 2,3-hydroxymethyl-2,3-dinitro-1,4-butanediol tetranitrate (SMX, C6H8N6O16). SMX was synthesized for the first time in 2008. It has a melting point of 85 °C and an oxygen balance of 0% to CO2, allowing it to be used as an energetic additive or oxidizer in solid propellants. In addition to its neat combustion characteristics, this work also explores the use of SMX as a potential replacement for nitroglycerin (NG) in double base gun propellants and as a replacement for ammonium perchlorate in composite rocket propellants. The physical properties, sensitivity characteristics, and combustion behaviors of neat SMX were investigated. Its combustion is stable at pressures of up to at least 27.5 MPa (n = 0.81). The observed flame structure is nearly identical to that of other double base propellant ingredients, with a primary flame attached at the surface, a thick isothermal dark zone, and a luminous secondary flame wherein final recombination reactions occur. As a result, the burning rate and primary flame structure can be modeled using existing one-dimensional steady state techniques. A zero gas-phase activation energy approximation results in a good fit between modeled and observed behavior. Additionally, SMX was considered as a replacement for nitroglycerin in a double base propellant. Thermochemical calculations indicate improved performance when compared with the common double base propellant JA2 at SMX loadings above 40 wt%. Also, since SMX is a room temperature solid, migration may be avoided. Like other nitrate esters, SMX is susceptible to decomposition over long-term storage due to the presence of excess acid in the crystals; the addition of stabilizers (e.g., derivatives of urea) during synthesis should be sufficient to prevent this. Both unplasticized and plasticized propellants were formulated. Thermal analysis of the unplasticized propellant showed a distinct melt-recrystallization curve, which indicates that a solid phase solution is formed between SMX and NC, and that SMX would not act as a plasticizer. Analysis of propellant prepared with diethyleneglycol dinitrate (DEGDN) plasticizer indicates that the SMX is likely dissolved in the DEGDN. The plasticized material also showed hardness and modulus similar to JA2. However, both plasticized and unplasticized propellants exhibited deconsolidated burning at elevated pressures due to the high modulus of the propellant. Increased amounts of plasticizer or improved processing of the nitrocellulose should be investigated to remedy this issue. Safety characterization showed that the sensitivity of the plasticized propellant is similar to that of JA2. In short, replacing NG with SMX results in a new family of propellants with acceptable safety characteristics which may also offer improved theoretical performance. Finally, composite propellants based on SMX were theoretically and experimentally examined and compared to formulations based on ammonium perchlorate (AP). Thermochemical equilibrium calculations show that aluminized SMX-based formulations can achieve theoretical sea level specific impulse values upwards of 260 s, slightly lower than an AP-based composite. Both ignition sensitivity (tested via drop weight impact, electrostatic discharge, and BAM friction) and physical properties (hardness and thermal properties) are comparable to those of the AP-based formulations.
However, the SMX-based formulation could be detonated using a high explosive donor charge in contact with the propellant, as can other low smoke propellants. Differential scanning calorimetry of the SMX-based propellant indicated an exotherm onset of 140 °C, which corresponds to the known decomposition temperature of SMX. The propellant has a high burning rate of 1.57 cm/s at 6.89 MPa, with a pressure exponent of 0.85. This high pressure sensitivity might be addressed using various energetic and/or stabilizing additives. With high density and performance, smokeless combustion products, and stable combustion, SMX appears to be a viable replacement for existing energetic ingredients in a wide variety of propellant, explosive, and pyrotechnic applications.
NASA Astrophysics Data System (ADS)
Strom, C. S.; Bennema, P.
1997-03-01
A series of two articles discusses possible morphological evidence for oligomerization of growth units in the crystallization of tetragonal lysozyme, based on a rigorous graph-theoretic derivation of the F faces. In the first study (Part I), the growth layers are derived as valid networks satisfying the conditions of F slices in the context of the PBC theory using the graph-theoretic method implemented in program FFACE [C.S. Strom, Z. Krist. 172 (1985) 11]. The analysis is performed in monomeric and alternative tetrameric and octameric formulations of the unit cell, assuming tetramer formation according to the strongest bonds. F (flat) slices with thickness Rd_hkl (1/2 < R ≤ 1) are predicted theoretically in the forms 1 1 0, 0 1 1, 1 1 1. The relevant energies are established in the broken bond model. The relation between possible oligomeric specifications of the unit cell and combinatorially feasible F slice compositions in these orientations is explored.
An improved method for predicting the effects of flight on jet mixing noise
NASA Technical Reports Server (NTRS)
Stone, J. R.
1979-01-01
The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
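To illustrate the community-detection step that underlies the reduced-order model, the toy sketch below builds a small weighted interaction graph and extracts communities with a modularity-based method from networkx. The graph, its weights, and the choice of greedy modularity maximization are illustrative assumptions; the paper's network-theoretic pipeline and its vortical-interaction weights are not reproduced here.

```python
import networkx as nx
from networkx.algorithms import community

# Nodes stand in for vortical elements; edge weights mimic pairwise
# interaction strengths. Two tightly coupled groups share one weak link.
G = nx.Graph()
interactions = [(0, 1, 2.0), (1, 2, 1.8), (0, 2, 1.5),
                (3, 4, 2.2), (4, 5, 1.9), (3, 5, 1.7),
                (2, 3, 0.2)]                 # weak inter-community coupling
G.add_weighted_edges_from(interactions)

# Greedy modularity maximization; two communities {0,1,2} and {3,4,5} are expected.
communities = community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```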
Strength Property Estimation for Dry, Cohesionless Soils Using the Military Cone Penetrometer
1992-05-01
by Meier and Baladi (1988). Their methodology is based on a theoretical formulation of the CI problem using cavity expansion theory to relate cone... Baladi (1981), incorporates three mechanical properties (cohesion, friction angle, and shear modulus) and the total unit weight. Obviously, these four... unknown soil properties cannot be back-calculated directly from a single CI measurement. To ameliorate this problem, Meier and Baladi estimate the total
Differential geometry based solvation model I: Eulerian formulation
NASA Astrophysics Data System (ADS)
Chen, Zhan; Baker, Nathan A.; Wei, G. W.
2010-11-01
This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to bring all energy terms onto an equal footing. By optimizing the total free energy functional, we derive the coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. The matrix resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternating direction implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature.
Differential geometry based solvation model I: Eulerian formulation
Chen, Zhan; Baker, Nathan A.; Wei, G. W.
2010-01-01
This paper presents a differential geometry based model for the analysis and computation of the equilibrium property of solvation. Differential geometry theory of surfaces is utilized to define and construct smooth interfaces with good stability and differentiability for use in characterizing the solvent-solute boundaries and in generating continuous dielectric functions across the computational domain. A total free energy functional is constructed to couple polar and nonpolar contributions to the solvation process. Geometric measure theory is employed to rigorously convert a Lagrangian formulation of the surface energy into an Eulerian formulation so as to bring all energy terms onto an equal footing. By minimizing the total free energy functional, we derive the coupled generalized Poisson-Boltzmann equation (GPBE) and generalized geometric flow equation (GGFE) for the electrostatic potential and the construction of realistic solvent-solute boundaries, respectively. By solving the coupled GPBE and GGFE, we obtain the electrostatic potential, the solvent-solute boundary profile, and the smooth dielectric function, and thereby improve the accuracy and stability of implicit solvation calculations. We also design efficient second-order numerical schemes for the solution of the GPBE and GGFE. The matrix resulting from the discretization of the GPBE is accelerated with appropriate preconditioners. An alternating direction implicit (ADI) scheme is designed to improve the stability of solving the GGFE. Two iterative approaches are designed to solve the coupled system of nonlinear partial differential equations. Extensive numerical experiments are designed to validate the present theoretical model, test computational methods, and optimize numerical algorithms. Example solvation analyses of both small compounds and proteins are carried out to further demonstrate the accuracy, stability, efficiency and robustness of the present new model and numerical approaches. Comparison is given to both experimental and theoretical results in the literature. PMID:20938489
Schmalzl, Laura; Powers, Chivon; Henje Blom, Eva
2015-01-01
During recent decades numerous yoga-based practices (YBP) have emerged in the West, with their aims ranging from fitness gains to therapeutic benefits and spiritual development. Yoga is also beginning to spark growing interest within the scientific community, and yoga-based interventions have been associated with measurable changes in physiological parameters, perceived emotional states, and cognitive functioning. YBP typically involve a combination of postures or movement sequences, conscious regulation of the breath, and various techniques to improve attentional focus. However, so far little if any research has attempted to deconstruct the role of these different component parts in order to better understand their respective contribution to the effects of YBP. A clear operational definition of yoga-based therapeutic interventions for scientific purposes, as well as a comprehensive theoretical framework from which testable hypotheses can be formulated, is therefore needed. Here we propose such a framework, and outline the bottom-up neurophysiological and top-down neurocognitive mechanisms hypothesized to be at play in YBP. PMID:26005409
Schmalzl, Laura; Powers, Chivon; Henje Blom, Eva
2015-01-01
During recent decades numerous yoga-based practices (YBP) have emerged in the West, with their aims ranging from fitness gains to therapeutic benefits and spiritual development. Yoga is also beginning to spark growing interest within the scientific community, and yoga-based interventions have been associated with measurable changes in physiological parameters, perceived emotional states, and cognitive functioning. YBP typically involve a combination of postures or movement sequences, conscious regulation of the breath, and various techniques to improve attentional focus. However, so far little if any research has attempted to deconstruct the role of these different component parts in order to better understand their respective contribution to the effects of YBP. A clear operational definition of yoga-based therapeutic interventions for scientific purposes, as well as a comprehensive theoretical framework from which testable hypotheses can be formulated, is therefore needed. Here we propose such a framework, and outline the bottom-up neurophysiological and top-down neurocognitive mechanisms hypothesized to be at play in YBP.
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l_q norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
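As a baseline for the proximal machinery discussed above, the sketch below implements the proximal operator of the non-overlapping group Lasso penalty (block soft-thresholding). The overlapping case treated in the paper requires solving the smooth dual problem instead; the example vector and groups are illustrative.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 for non-overlapping groups:
    each block of v is shrunk toward zero by block soft-thresholding."""
    x = v.copy()
    for g in groups:                          # each g is an index array
        norm = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        x[g] = scale * v[g]
    return x

# Example: a 5-dimensional vector split into two groups
v = np.array([3.0, -1.0, 0.5, 2.0, -2.0])
print(prox_group_lasso(v, [np.array([0, 1]), np.array([2, 3, 4])], lam=1.0))
```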
Modeling NAPL dissolution from pendular rings in idealized porous media
NASA Astrophysics Data System (ADS)
Huang, Junqi; Christ, John A.; Goltz, Mark N.; Demond, Avery H.
2015-10-01
The dissolution rate of nonaqueous phase liquid (NAPL) often governs the remediation time frame at subsurface hazardous waste sites. Most formulations for estimating this rate are empirical and assume that the NAPL is the nonwetting fluid. However, field evidence suggests that some waste sites might be organic wet. Thus, formulations that assume the NAPL is nonwetting may be inappropriate for estimating the rates of NAPL dissolution. An exact solution to the Young-Laplace equation, assuming NAPL resides as pendular rings around the contact points of porous media idealized as spherical particles in a hexagonal close packing arrangement, is presented in this work to provide a theoretical prediction for NAPL-water interfacial area. This analytic expression for interfacial area is then coupled with an exact solution to the advection-diffusion equation in a capillary tube assuming Hagen-Poiseuille flow to provide a theoretical means of calculating the mass transfer rate coefficient for dissolution at the NAPL-water interface in an organic-wet system. A comparison of the predictions from this theoretical model with predictions from empirically derived formulations from the literature for water-wet systems showed a consistent range of values for the mass transfer rate coefficient, despite the significant differences in model foundations (water wetting versus NAPL wetting, theoretical versus empirical). This finding implies that, under these system conditions, the important parameter is interfacial area, with a lesser role played by NAPL configuration.
Dual algebraic formulation of differential GPS
NASA Astrophysics Data System (ADS)
Lannes, A.; Dur, S.
2003-05-01
A new approach to differential GPS is presented. The corresponding theoretical framework calls on elementary concepts of algebraic graph theory. The notion of double difference, which is related to that of closure in the sense of Kirchhoff, is revisited in this context. The Moore-Penrose pseudo-inverse of the closure operator plays a key role in the corresponding dual formulation. This approach, which is very attractive from a conceptual point of view, sheds a new light on the Teunissen formulation.
NASA Technical Reports Server (NTRS)
Stutzman, W. L.
1977-01-01
The theoretical fundamentals and mathematical definitions for calculations involved with dual polarized radio links are given. Detailed derivations and results are discussed for several formulations applied to a general dual polarized radio link.
On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\\infty}$ Control.
Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen
2018-04-01
In this paper, based on the adaptive critic learning technique, the control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue of integrating data-based control and event-triggering mechanism into establishing advanced adaptive critic systems.
2008-11-01
is particularly important in order to design a network that is realistically deployable. The goal of this project is the design of a theoretical framework to assess and predict the effectiveness and performance of networks and their loads.
NASA Astrophysics Data System (ADS)
Pombo, Claudia
2015-10-01
The art of memory started with Aristotle's questions on memory. During its long evolution, it received important contributions from alchemists, was transformed by Ramon Llull and apparently ended with Giordano Bruno, who is considered the best known representative of this art. This tradition did not disappear, but lives on in the formulations of our modern scientific theories. From its initial form as a method of keeping information via associations, it became a principle of classification and structuring of knowledge. This principle, which we here name differentiation with stratification, is a structural design behind classical mechanics. Integrating two different traditions of science in one structure, this physical theory became the modern paradigm of science. In this paper, we show that this principle can also be formulated as a set of questions. This is done via an analysis of theories, based on the epistemology of observational realism. This epistemology combines Rudolf Carnap's concept of a theory as a system of observational and theoretical languages with a criterion, based on analytical psychology, for separating observational languages. The 'nuclear' role of the observational laws, and the differentiations from this nucleus that reproduce the general cases of phenomena, reveal the memory art's heritage in the theories. We argue that this design is also present in special relativity and in quantum mechanics.
NASA Astrophysics Data System (ADS)
Rahmouni, Lyes; Mitharwal, Rajendra; Andriulli, Francesco P.
2017-11-01
This work presents two new volume integral equations for the Electroencephalography (EEG) forward problem which, differently from the standard integral approaches in the domain, can handle heterogeneities and anisotropies of the head/brain conductivity profiles. The new formulations translate to the quasi-static regime some volume integral equation strategies that have been successfully applied to high frequency electromagnetic scattering problems. This has been obtained by extending, to the volume case, the two classical surface integral formulations used in EEG imaging and by introducing an extra surface equation, in addition to the volume ones, to properly handle boundary conditions. Numerical results corroborate theoretical treatments, showing the competitiveness of our new schemes over existing techniques and qualifying them as a valid alternative to differential equation based methods.
Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi
2018-01-01
In this paper, we aim to tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system, considering the tracking error and the reference trajectory, is formulated and then addressed under the adaptive critic optimal control formulation, where an initial stabilizing controller is not needed. The approximate control law is derived via solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via the Lyapunov approach and also verified through simulation illustration. Copyright © 2017 Elsevier Ltd. All rights reserved.
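A minimal sketch of the Hamilton-Jacobi-Bellman step mentioned above, written for a generic nominal affine system \( \dot{x} = f(x) + g(x)u \) with quadratic cost; the actual augmented dynamics and weight matrices of the paper are assumptions here:

\[
0 = \min_u \Big[ \nabla V(x)^\top \big(f(x) + g(x)u\big) + x^\top Q x + u^\top R u \Big],
\qquad
u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V(x),
\]

with the critic network approximating the value function V and hence the optimal control law.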
NASA Astrophysics Data System (ADS)
Sakli, Hedi; Benzina, Hafedh; Aguili, Taoufik; Tao, Jun Wu
2009-08-01
This paper presents an analysis of a rectangular waveguide completely filled with longitudinally magnetized ferrite. The analysis is based on the formulation of the transverse operator method (TOM), followed by the application of the Galerkin method, which yields an eigenvalue equation system. The propagation constants of several homogeneous, anisotropic ferrite-loaded waveguide structures have been obtained. The results presented here show that the transverse operator formulation is not only an elegant theoretical form but also a powerful and efficient analysis method, useful for solving a number of propagation problems in electromagnetics. One advantage of this method is its fast convergence. Numerical examples are given for different cases and compared with published results; good agreement is obtained.
Revision of Paschen's Law Relating to the ESD of Aerospace Vehicle Surfaces
NASA Technical Reports Server (NTRS)
Hogue, Michael D.; Cox, Rachel E.; Mulligan, Jaysen; Kapat, Jayanta; Ahmed, Kareem; Wilson, Jennifer G.; Calle, Luz M.
2017-01-01
The purpose of this work is to develop a version of Paschen's law that takes into account the flow of ambient gas past electrode surfaces. Paschen's law does not consider the flow of gas past an aerospace vehicle whose surfaces may be triboelectrically charged by dust or ice crystal impingement while traversing the atmosphere. The basic hypothesis of this work is that the number of electron-ion pairs created per unit distance between electrode surfaces is mitigated by the electron-ion pairs removed per unit distance by the flow of gas. The revised theoretical model must be a function of the mean velocity v_xm of the ambient gas and reduce to Paschen's law when the mean velocity is zero. A new theoretical formulation of Paschen's law, taking into account the Mach number and compressible dynamic pressure, derived by the authors, will be discussed. This equation has been evaluated by wind tunnel experimentation. Initial data of the baseline wind tunnel experiments show results consistent with the hypothesis. This work may enhance the safety of aerospace vehicles through a redefinition of electrostatic launch commit criteria. It is also possible for new products, such as antistatic coatings, to be formulated based on this data.
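For reference, the zero-flow limit that the revised model must recover is the classical Paschen relation for the breakdown voltage between parallel electrodes; A and B are gas-dependent constants and γ is the secondary-electron emission coefficient. The flow-dependent correction derived by the authors is not reproduced here:

\[
V_b = \frac{B\,p d}{\ln(A\,p d) - \ln\!\left[\ln\!\left(1 + \tfrac{1}{\gamma}\right)\right]},
\]

where p is the gas pressure and d the gap distance between the electrodes.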
Resistance formulas in hydraulics-based models for routing debris flows
Chen, Cheng-lung; Ling, Chi-Hai
1997-01-01
The one-dimensional, cross-section-averaged flow equations formulated for routing debris flows down a narrow valley are identical to those for clear-water flow, except for the differences in the values of the flow parameters, such as the momentum (or energy) correction factor, resistance coefficient, and friction slope. Though these flow parameters for debris flow in channels with cross-sections of arbitrary geometric shape can only be determined empirically, the theoretical values of such parameters for debris flow in wide channels exist. This paper aims to derive the theoretical resistance coefficient and friction slope for debris flow in wide channels using a rheological model for highly-concentrated, rapidly-sheared granular flows, such as the generalized viscoplastic fluid (GVF) model. Formulating such resistance coefficient or friction slope is equivalent to developing a generally applicable resistance formula for routing debris flows. Inclusion of a nonuniform term in the expression of the resistance formula proves useful in removing the customary assumption that the spatially varied resistance at any section is equal to what would take place with the same rate of flow passing the same section under conditions of uniformity. This in effect implies an improvement in the accuracy of unsteady debris-flow computation.
Efficient solid rocket propulsion for access to space
NASA Astrophysics Data System (ADS)
Maggi, Filippo; Bandera, Alessio; Galfetti, Luciano; De Luca, Luigi T.; Jackson, Thomas L.
2010-06-01
Space launch activity is expected to grow in the next few years in order to follow the current trend of space exploitation for business purposes. Offering high specific thrust and volumetric specific impulse, and counting on decades of intense development, solid rocket propulsion is a good candidate for commercial access to space, even with common propellant formulations. Yet some drawbacks, such as low theoretical specific impulse, losses, and safety issues, motivate the search for more efficient propulsion systems by enhancing consolidated techniques. Focusing on delivered specific impulse, a consistent fraction of the losses can be ascribed to the multiphase medium inside the nozzle, which in turn is related to agglomeration; a reduction of agglomerate size is therefore sought. The present paper proposes a model based on heterogeneity characterization capable of describing the agglomeration trend for a standard aluminized solid propellant formulation. The material microstructure is characterized through two statistical descriptors (the pair correlation function and near-contact particles) that probe the mean metal pocket size inside the bulk. Given the real formulation and density of a propellant, a packing code generates the material representative, which is then statistically analyzed. Agglomerate predictions are successfully compared with experimental data at 5 bar for four different formulations.
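As a rough illustration of the kind of statistical descriptor mentioned above, the sketch below estimates a radial pair correlation g(r) from a set of particle centers; it ignores boundary corrections and is not the packing/agglomeration code used by the authors (the particle data and bin settings are assumed).

```python
import numpy as np

def pair_correlation(centers, r_max, n_bins, box_volume):
    """Naive estimate of g(r) from particle centers (no edge corrections)."""
    n = len(centers)
    density = n / box_volume
    # all pairwise distances for i < j
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)[np.triu_indices(n, k=1)]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dists, bins=edges)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # each pair is counted once, hence the factor 2 / n
    g = 2.0 * counts / (n * density * shell_volumes)
    r_centers = 0.5 * (edges[:-1] + edges[1:])
    return r_centers, g

# hypothetical usage: 500 random points in a unit cube
rng = np.random.default_rng(0)
points = rng.random((500, 3))
r, g = pair_correlation(points, r_max=0.3, n_bins=30, box_volume=1.0)
```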
[Euthanasia and the paradoxes of autonomy].
Siqueira-Batista, Rodrigo; Schramm, Fermin Roland
2008-01-01
The principle of respect for autonomy has proved very useful for bioethical arguments in favor of euthanasia. However unquestionable its theoretical efficacy, countless aporiae can be raised when conducting a detailed analysis of this concept, probably checkmating it. Based on such considerations, this paper investigates the principle of autonomy, starting with its origins in Greek and Christian traditions, and then charting some of its developments in Western cultures through to its modern formulation, a legacy of Immanuel Kant. The main paradoxes of this concept are then presented in the fields of philosophy, biology, psychoanalysis and politics, expounding several of the theoretical difficulties to be faced in order to make its applicability possible within the scope of decisions relating to the termination of life.
Thermal transmission of camouflage nets revisited
NASA Astrophysics Data System (ADS)
Jersblad, Johan; Jacobs, Pieter
2016-10-01
In this article we derive, from first principles, the correct formula for the thermal transmission of a camouflage net, based on the setup described in the US standard for lightweight camouflage nets. Furthermore, we compare the results and implications with the use of an incorrect formula that has been seen in several recent tenders. It is shown that the incorrect formulation not only gives rise to large errors, but also makes the result depend on the surrounding room temperature, which in the correct derivation cancels out. The theoretical results are compared with laboratory measurements and agree with them for the correct derivation. To summarize, we discuss the consequences for soldiers on the battlefield if incorrect standards and test methods are used in procurement processes.
Target Coverage in Wireless Sensor Networks with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao
2016-01-01
Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
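A small sketch of the joint-detection idea described above: under an independent probabilistic sensing model (here an assumed exponential decay with distance, not necessarily the model used in the paper), a target is ε-detected when the combined miss probability of the covering sensors drops below 1 − ε.

```python
import numpy as np

def detection_prob(distance, lam=1.0):
    """Assumed probabilistic sensing model: p = exp(-lam * distance)."""
    return np.exp(-lam * distance)

def jointly_detected(sensor_positions, target, eps, lam=1.0):
    """Target is eps-detected if 1 - prod(1 - p_i) >= eps (independent sensors)."""
    d = np.linalg.norm(sensor_positions - target, axis=1)
    p_miss = np.prod(1.0 - detection_prob(d, lam))
    return 1.0 - p_miss >= eps

# hypothetical usage with three sensors and one target
sensors = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
print(jointly_detected(sensors, np.array([0.4, 0.4]), eps=0.9))
```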
A "Networked-Hutong Siwei of Critiques" for Critical Teacher Education
ERIC Educational Resources Information Center
Qi, Jing
2014-01-01
This paper offers a conceptual basis for refashioning the formulation of critical teacher education. It argues that current critical teacher education is uncritically constructed upon key theoretical departures from critical theories. Drawing on Boltanski's critique of critical theories, the paper examines the ways these theoretical departures…
1990-08-01
evidence for a surprising degree of long-term skill retention. We formulated a theoretical framework, focusing on the importance of procedural reinstatement ... considerable forgetting over even relatively short retention intervals. We have been able to place these studies in the same general theoretical framework developed
Commognition as a Lens for Research
ERIC Educational Resources Information Center
Presmeg, Norma
2016-01-01
This paper is a commentary on the theoretical formulations of the five empirical papers in this special issue. All five papers use aspects of the theory of commognition as presented by Anna Sfard; however, even when the same notions (e.g., rituals or explorations) are incorporated into theoretical frameworks undergirding the research, these…
Competence and Drug Use: Theoretical Frameworks, Empirical Evidence and Measurement.
ERIC Educational Resources Information Center
Lindenberg, Cathy Strachan; Solorzano, Rosa; Kelley, Maureen; Darrow, Vicki; Gendrop, Sylvia C.; Strickland, Ora
1998-01-01
Discusses the Social Stress Model of Substance Abuse. Summarizes theoretical and conceptual formulations for the construct of competence, reviews empirical evidence for the association of competence with drug use, and describes the preliminary development of a multiscale instrument designed to assess drug-protective competence among low-income…
The Psychiatric Cultural Formulation: Applying Medical Anthropology in Clinical Practice
Aggarwal, Neil Krishan
2014-01-01
This paper considers revisions to the DSM-IV Outline for Cultural Formulation from the perspective of clinical practice. First, the paper explores the theoretical development of the Cultural Formulation. Next, a case presentation demonstrates challenges in its actual implementation. Finally, the paper recommends a set of questions for the clinician on barriers to care and countertransference. The development of a standardized, user-friendly format can increase the Cultural Formulation’s utilization among all psychiatrists beyond those specializing in cultural psychiatry. PMID:22418398
Gang, Wei-juan; Wang, Xin; Wang, Fang; Dong, Guo-feng; Wu, Xiao-dong
2015-08-01
The national standard "Regulations of Acupuncture-needle Manipulating Techniques" is one of the national Criteria of Acupuncturology, for which a total of 22 items have already been established. In the process of formulation, a series of common and specific problems were encountered. In the present paper, the authors expound these problems from three aspects, namely principles for formulation, methods for formulating criteria, and considerations about some problems. The formulation principles include the selection and regulation of principles for technique classification and technique-related key factors. The main methods for formulating criteria are 1) taking the literature as the theoretical foundation, 2) taking clinical practice as the supporting evidence, and 3) taking the expounded suggestions or conclusions through peer review.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties are investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
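In a Rayleigh-Ritz discretization of this kind, the natural modes ultimately come from a generalized matrix eigenvalue problem K φ = λ M φ; the sketch below shows only that final step, with stand-in stiffness and mass matrices rather than ones actually assembled from the spline basis and tip-body model.

```python
import numpy as np
from scipy.linalg import eigh

# stand-in symmetric positive-definite stiffness (K) and mass (M) matrices;
# in practice these would be assembled from the piecewise-polynomial spline basis
rng = np.random.default_rng(1)
A = rng.random((6, 6))
K = A @ A.T + 6 * np.eye(6)
B = rng.random((6, 6))
M = B @ B.T + 6 * np.eye(6)

# generalized eigenvalue problem K phi = lambda M phi
eigvals, eigvecs = eigh(K, M)
natural_frequencies = np.sqrt(eigvals)  # rad/s if K and M carry physical units
print(natural_frequencies)
```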
Formulation analysis and computation of an optimization-based local-to-nonlocal coupling method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, Marta; Bochev, Pavel Blagoveston
2017-01-01
In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach casts the coupling of the models as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of local-to-nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
A noise model for the evaluation of defect states in solar cells
Landi, G.; Barone, C.; Mauro, C.; Neitzert, H. C.; Pagano, S.
2016-01-01
A theoretical model, combining trapping/detrapping and recombination mechanisms, is formulated to explain the origin of random current fluctuations in silicon-based solar cells. In this framework, the comparison between dark and photo-induced noise allows the determination of important electronic parameters of the defect states. A detailed analysis of the electric noise, at different temperatures and for different illumination levels, is reported for crystalline silicon-based solar cells, in the pristine form and after artificial degradation with high energy protons. The evolution of the dominating defect properties is studied through noise spectroscopy. PMID:27412097
Conditions for quantum interference in cognitive sciences.
Yukalov, Vyacheslav I; Sornette, Didier
2014-01-01
We present a general classification of the conditions under which cognitive science, concerned, e.g. with decision making, requires the use of quantum theoretical notions. The analysis is done in the frame of the mathematical approach based on the theory of quantum measurements. We stress that quantum effects in cognition can arise only when decisions are made under uncertainty. Conditions for the appearance of quantum interference in cognitive sciences and the conditions when interference cannot arise are formulated. Copyright © 2013 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Chang, Ching L.; Jiang, Bo-Nan
1990-01-01
A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
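For context, a sketch of the first-order velocity-pressure-vorticity form of the Stokes problem on which such least-squares functionals are typically built; boundary terms and weightings are omitted, and the exact functional of the paper may differ:

\[
\boldsymbol{\omega} - \nabla \times \mathbf{u} = \mathbf{0}, \qquad
\nu\,\nabla \times \boldsymbol{\omega} + \nabla p = \mathbf{f}, \qquad
\nabla \cdot \mathbf{u} = 0,
\]

with the least-squares functional

\[
J(\mathbf{u}, p, \boldsymbol{\omega}) =
\|\boldsymbol{\omega} - \nabla \times \mathbf{u}\|_0^2 +
\|\nu\,\nabla \times \boldsymbol{\omega} + \nabla p - \mathbf{f}\|_0^2 +
\|\nabla \cdot \mathbf{u}\|_0^2 ,
\]

minimized over equal-order finite-element spaces for all three fields.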
Finite Volume Method for Pricing European Call Option with Regime-switching Volatility
NASA Astrophysics Data System (ADS)
Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM
2018-03-01
In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme can revert to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
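A sketch of the coupled pricing system behind a two-state regime-switching Black-Scholes model; the generator entries q_{ij} and the assumption of a regime-independent rate r are illustrative choices, not necessarily those of the paper:

\[
\frac{\partial V_i}{\partial t}
+ \tfrac{1}{2}\sigma_i^2 S^2 \frac{\partial^2 V_i}{\partial S^2}
+ r S \frac{\partial V_i}{\partial S} - r V_i
+ \sum_{j \neq i} q_{ij}\,(V_j - V_i) = 0,
\qquad i = 1, 2,
\]

with terminal condition \( V_i(S,T) = \max(S - K, 0) \) for a European call; when all \( \sigma_i \) coincide and \( q_{ij} = 0 \), the system reduces to the standard Black-Scholes equation, consistent with the reversion property noted above.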
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Moses; Qin, Hong; Gilson, Erik
2013-01-01
By extending the recently developed generalized Courant-Snyder theory for coupled transverse beam dynamics, we have constructed the Gaussian beam distribution and its projections with arbitrary mode emittance ratios. The new formulation has been applied to a continuously-rotating quadrupole focusing channel because the basic properties of this channel are known theoretically and could also be investigated experimentally in a compact setup such as the linear Paul trap configuration. The new formulation retains a remarkably similar mathematical structure to the original Courant-Snyder theory, and thus provides a powerful theoretical tool to investigate coupled transverse beam dynamics in general and more complex linear focusing channels.
Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach
2014-01-01
Paper DS-14-1028, to appear in the Special Issue on Stochastic Models, Control and Algorithms in Robotics, ASME Journal of Dynamic Systems, Measurement and Control. Authors: Devesh K. Jha, Yue Li, Thomas A. Wettergren, Asok ... The paper presents a robot path planning algorithm, called ν⋆, that was formulated in the framework of probabilistic finite state automata (PFSA) and language measure from a control-theoretic perspective.
Optimization Techniques for Analysis of Biological and Social Networks
2012-03-28
analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: implement the proposed algorithms, test and fine ... alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters ... systematic fashion under a unifying theoretical and algorithmic framework. Keywords: Optimization, Complex Networks, Social Network Analysis, Computational
The composition of heterogeneous control laws
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Astrom, Karl
1991-01-01
The fuzzy control literature and industrial practice provide certain nonlinear methods for combining heterogeneous control laws, but these methods have been very difficult to analyze theoretically. An alternate formulation and extension of this approach is presented that has several practical and theoretical benefits. An example of heterogeneous control is given and two alternate analysis methods are presented.
Roethlisberger, Dieter; Mahler, Hanns-Christian; Altenburger, Ulrike; Pappenberger, Astrid
2017-02-01
Parenteral products should aim toward being isotonic and euhydric (physiological pH). Yet, due to other considerations, this goal is often not reasonable or achievable. There are no clear allowable ranges related to pH and osmolality, and thus the objective of this review was to provide a better understanding of acceptable formulation pH, buffer strength, and osmolality taking into account the administration route (i.e., intramuscular, intravenous, subcutaneous) and administration technique (i.e., bolus, push, infusion). This evaluation was based on 3 different approaches: conventional, experimental, and parametric. The conventional way of defining formulation limits was based on standard pH and osmolality ranges. Experimental determination of titratable acidity or in vitro hemolysis testing provided additional drug product information. Finally, the parametric approach was based on the calculation of theoretical values such as (1) the maximal volume of injection which cannot shift the blood's pH or its molarity out of the physiological range and (2) a dilution ratio at the injection site, verifying in each case that threshold values are not exceeded. The combination of all 3 approaches can support the definition of acceptable pH, buffer strength, and osmolality of formulations and thus may reduce the risk of failure during preclinical and clinical development. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
2009-01-01
My experiences as a mentor of young investigators, along with conversations with a diverse pool of mentees, led me to question the ability of conventional research methods, problem formulation, and instruments to address the unique challenges of studying racial and ethnic minorities. Training of new investigators should prepare them to explore alternative research paradigms and atypical research strategies, such as community-based participatory research and Photovoice technique. Unconventional approaches to research may challenge common explanations for unmet needs, noncompliance with treatments, and poor service outcomes. Mentors may need to develop broader theoretical insights that will facilitate unconventional problem formulation. The teaching of scientific research and mentoring of young investigators who study minority populations should evolve along with the changing research environment. PMID:19246670
NASA Astrophysics Data System (ADS)
Lumentut, M. F.; Howard, I. M.
2013-03-01
Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two base input excitations, transverse and longitudinal. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using Laplace transformation for the multi-input mechanical vibrations to provide the multi-output dynamic displacement, velocity, voltage, current and power. The experimental and theoretical validations, reduced to the single-mode system, were shown to provide reasonable predictions. The model results from polar base excitation for off-axis input motions were validated with experimental results showing the change to the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.
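A reduced single-mode sketch of the kind of electromechanical model described above, written in modal coordinate η with coupling coefficient θ and load resistance R; sign conventions vary with the chosen polarity, and the paper's multi-mode, dual-input formulation is more general:

\[
\ddot{\eta} + 2\zeta\omega_n\dot{\eta} + \omega_n^2\eta - \theta\,v = -\Gamma\,\ddot{w}_{\mathrm{base}},
\qquad
C_p\,\dot{v} + \frac{v}{R} + \theta\,\dot{\eta} = 0,
\]

where \(v\) is the voltage across the resistive load, \(C_p\) the piezoelectric capacitance, and \(\Gamma\) a modal forcing coefficient of the base excitation; eliminating \(v\) in the Laplace domain yields the single-mode FRFs whose multi-mode counterparts are derived in the paper.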
Dust remobilization in fusion plasmas under steady state conditions
NASA Astrophysics Data System (ADS)
Tolias, P.; Ratynskaia, S.; De Angeli, M.; De Temmerman, G.; Ripamonti, D.; Riva, G.; Bykov, I.; Shalpegin, A.; Vignitchouk, L.; Brochard, F.; Bystrov, K.; Bardin, S.; Litnovsky, A.
2016-02-01
The first combined experimental and theoretical studies of dust remobilization by plasma forces are reported. The main theoretical aspects of remobilization in fusion devices under steady state conditions are analyzed. In particular, the dominant role of adhesive forces is highlighted and generic remobilization conditions—direct lift-up, sliding, rolling—are formulated. A novel experimental technique is proposed, based on controlled adhesion of dust grains on tungsten samples combined with detailed mapping of the dust deposition profile prior and post plasma exposure. Proof-of-principle experiments in the TEXTOR tokamak and the EXTRAP-T2R reversed-field pinch are presented. The versatile environment of the linear device Pilot-PSI allowed for experiments with different magnetic field topologies and varying plasma conditions that were complemented with camera observations.
Matas, Antonio J; Sanz, María José; Heredia, Antonio
2003-11-01
The main component present in the epicuticular waxes of needles of Pinus halepensis and of most conifers, the secondary alcohol nonacosan-10-ol, has been investigated by X-ray diffraction and differential scanning calorimetry. The results obtained from these physical techniques made it possible to establish a definitive structural model of the molecular arrangement of this wax, basically in good agreement with the model proposed by other authors on theoretical grounds. Biological implications of the proposed structure have also been formulated.
Xu, Xinxing; Li, Wen; Xu, Dong
2015-12-01
In this paper, we propose a new approach to improve face verification and person re-identification in the RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging faces data sets EUROCOM and CurtinFaces for face verification as well as the BIWI RGBD-ID data set for person re-identification demonstrate the effectiveness of our proposed approach.
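To make the test-time use of such a learned metric concrete, the sketch below applies a given Mahalanobis matrix M to decide whether two RGB feature vectors belong to the same person; learning M itself (the ITML+ cyclic projections with depth-based privileged information) is not reproduced here, and the threshold is an assumed value.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Distance under a learned positive semidefinite matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def same_identity(x, y, M, threshold=1.0):
    """Verification decision: accept the pair if the learned distance is small."""
    return mahalanobis_dist(x, y, M) < threshold

# hypothetical usage with a 4-D visual feature and an identity metric as placeholder
M = np.eye(4)
x = np.array([0.20, 0.10, 0.40, 0.30])
y = np.array([0.25, 0.05, 0.38, 0.31])
print(same_identity(x, y, M, threshold=0.2))
```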
Dynamics and control of DNA sequence amplification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marimuthu, Karthikeyan; Chakrabarti, Raj, E-mail: raj@pmc-group.com, E-mail: rajc@andrew.cmu.edu; Division of Fundamental Research, PMC Advanced Technology, Mount Laurel, New Jersey 08054
2014-10-28
DNA amplification is the process of replication of a specified DNA sequence in vitro through time-dependent manipulation of its external environment. A theoretical framework for determination of the optimal dynamic operating conditions of DNA amplification reactions, for any specified amplification objective, is presented based on first-principles biophysical modeling and control theory. Amplification of DNA is formulated as a problem in control theory with optimal solutions that can differ considerably from strategies typically used in practice. Using the Polymerase Chain Reaction as an example, sequence-dependent biophysical models for DNA amplification are cast as control systems, wherein the dynamics of the reaction are controlled by a manipulated input variable. Using these control systems, we demonstrate that there exists an optimal temperature cycling strategy for geometric amplification of any DNA sequence and formulate optimal control problems that can be used to derive the optimal temperature profile. Strategies for the optimal synthesis of the DNA amplification control trajectory are proposed. Analogous methods can be used to formulate control problems for more advanced amplification objectives corresponding to the design of new types of DNA amplification reactions.
Radiation and scattering from bodies of translation, volume 1
NASA Astrophysics Data System (ADS)
Medgyesi-Mitschang, L. N.
1980-04-01
An analytical formulation, based on the method of moments (MM) is described for solving electromagnetic problems associated with finite-length cylinders of arbitrary cross section, denoted in this report as bodies of translation (BOT). This class of bodies can be used to model structures with noncircular cross sections such as wings, fins, and aircraft fuselages. The theoretical development parallels in part the MM formulation developed earlier by Mautz and Harrington for bodies of revolution (BOR). Like the latter approach, a modal expansion is used to describe the unknown surface currents on the BOT. The present analysis has been developed to treat the far-field radiation and scattering from a BOT excited by active antennas or illuminated by a plane wave of arbitrary polarization and angle of incidence. In addition, the electric and magnetic near-field components are determined in the vicinity of active and passive apertures (slots). Using the Schelkunoff equivalence theorem, the aperture-coupled fields within a BOT are also obtained. The formulation has been implemented by a computer algorithm and validated using accepted data in the literature.
Yu, Wenjun; Ma, Mingyue; Chen, Xuemei; Min, Jiayu; Li, Lingru; Zheng, Yanfei; Li, Yingshuai; Wang, Ji; Wang, Qi
2017-01-01
Traditional Chinese medicine (TCM), Japanese-Chinese medicine, and Korean Sasang constitutional medicine have common origins. However, the constitutional medicines of China, Japan, and Korea differ because of the influence of geographical culture, social environment, national practices, and other factors. This paper aimed to compare the constitutional medicines of China, Japan, and Korea in terms of theoretical origin, constitutional classification, constitution and pathogenesis, clinical applications, and the basic studies that were conducted. The constitutional theories of the three countries are all derived from the Canon of Internal Medicine or Treatise on Febrile and Miscellaneous Diseases of Ancient China. However, the three countries have different constitutional classifications and criteria. Medical sciences in the three countries focus on the clinical applications of constitutional theory. They all agree that different pathogenic laws that guide the treatment of diseases govern different constitutions; thus, patients with different constitutions are treated differently. The three countries also differ in terms of drug formulations and medication. Japanese medicine is prescribed only based on constitution. Korean medicine is based on treatment, in which drugs cannot be mixed. TCM synthesizes the treatment model of constitution differentiation, disease differentiation, and syndrome differentiation with the therapeutic principle of treating disease according to three categories of etiologic factors, reflecting the constitution as the basis of individualized precision treatment. In conclusion, the constitutional medicines of China, Japan, and Korea have the same theoretical origin, but differ in constitutional classification, clinical application of constitutional theory to the treatment of diseases, drug formulations, and medication.
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2017-08-01
Distributed radar network systems have been shown to have many unique features. Due to their advantage of signal and spatial diversities, radar networks are attractive for target detection. In practice, the netted radars in radar networks are supposed to maximize their transmit power to achieve better detection performance, which may be in contradiction with low probability of intercept (LPI). Therefore, this paper investigates the problem of adaptive power allocation for radar networks in a cooperative game-theoretic framework such that the LPI performance can be improved. Taking into consideration both the transmit power constraints and the minimum signal to interference plus noise ratio (SINR) requirement of each radar, a cooperative Nash bargaining power allocation game based on LPI is formulated, whose objective is to minimize the total transmit power by optimizing the power allocation in radar networks. First, a novel SINR-based network utility function is defined and utilized as a metric to evaluate power allocation. Then, with the well-designed network utility function, the existence and uniqueness of the Nash bargaining solution are proved analytically. Finally, an iterative Nash bargaining algorithm is developed that converges quickly to a Pareto optimal equilibrium for the cooperative game. Numerical simulations and theoretic analysis are provided to evaluate the effectiveness of the proposed algorithm.
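A simplified sketch of the SINR-constrained power control that such a game-theoretic design builds on: each radar iteratively scales its power to just meet its SINR target given the others' interference. This is a classical fixed-point iteration for illustration, not the authors' Nash bargaining algorithm, and the gains, noise levels, and targets are assumed values.

```python
import numpy as np

def power_iteration(G, noise, sinr_target, p_init, n_iter=50):
    """G[i, j]: gain from transmitter j seen at receiver i (G[i, i] is the useful gain)."""
    p = p_init.copy()
    for _ in range(n_iter):
        interference = G @ p - np.diag(G) * p + noise
        # scale each power to exactly meet its SINR target given current interference
        p = sinr_target * interference / np.diag(G)
    return p

# hypothetical 3-radar example
G = np.array([[1.00, 0.10, 0.05],
              [0.08, 0.90, 0.10],
              [0.06, 0.12, 1.10]])
noise = np.full(3, 0.01)
targets = np.full(3, 2.0)   # minimum SINR requirement per radar
p = power_iteration(G, noise, targets, p_init=np.full(3, 0.1))
print(p)
```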
2017-07-21
Technology Branch (RVSW) is conducting a first-time experimental and theoretical investigation focused on evaluating new physical phenomena in the quasi ... bandgap energy, are formulated in our microscopic model for explaining the experimentally observed enhancements in both conduction- and valence ... experimental and theoretical study on the nature of carrier transport, of both electrons and holes, through narrow constricted crystalline Si "wall
Taniguchi, Chika; Kawabata, Yohei; Wada, Koichi; Yamada, Shizuo; Onoue, Satomi
2014-04-01
Drug release and oral absorption of drugs with pH-dependent solubility are influenced by the conditions in the gastrointestinal tract. In some cases, poor oral absorption has been observed for these drugs, causing insufficient drug efficacy. The pH-modification of a formulation could be a promising approach to overcome the poor oral absorption of drugs with pH-dependent solubility. The present review aims to summarize the pH-modifier approach and strategic analyses of microenvironmental pH for formulation design and development. We also provide literature- and patent-based examples of the application of pH-modification technology to solid dosage forms. For the pH-modification approach, the microenvironmental pH at the diffusion area can be altered by dissolving pH-modifying excipients in the formulation. The modulation of the microenvironmental pH could improve dissolution behavior of drugs with pH-dependent solubility, possibly leading to better oral absorption. According to this concept, the modulated level of microenvironmental pH and its duration can be key factors for improvement in drug dissolution. The measurement of microenvironmental pH and release of pH-modifier would provide theoretical insight for the selection of an appropriate pH-modifier and optimization of the formulation.
Teaching evidence-based medicine using a problem-oriented approach.
Hosny, Somaya; Ghaly, Mona S
2014-04-01
Faculty of Medicine, Suez Canal University is adopting an innovative curriculum. Evidence-based medicine (EBM) has been integrated into problem-based learning (PBL) sessions as a responsive, innovative paradigm for the practice and teaching of clinical medicine. The aim was to integrate EBM into the problem-based sessions of sixth-year students and to assess students' and tutors' satisfaction with this change. EBM training was conducted for 196 sixth-year students, including four theoretical and eight practical sessions. Sixteen EBM educational scenarios (problems) were formulated according to the sixth-year curriculum. Each problem was discussed in two sessions through the steps of EBM, namely: formulating PICO questions, searching for and appraising evidence, applying the evidence to the clinical scenario, and analysing the practice. Students' and tutors' satisfaction was evaluated using a 3-point rating questionnaire. The majority of students and faculty expressed their satisfaction about integrating EBM with PBL and agreed that the problems were more stimulating. However, 33.6% of students indicated that the available time was insufficient for searching the literature. Integrating EBM into PBL sessions tends to be more interesting and stimulating than traditional PBL sessions for final-year students and helps them to practice and implement EBM in a clinical context.
NASA Technical Reports Server (NTRS)
Hu, Fang; Pizzo, Michelle E.; Nark, Douglas M.
2017-01-01
It has been well-known that under the assumption of a constant uniform mean flow, the acoustic wave propagation equation can be formulated as a boundary integral equation, in both the time domain and the frequency domain. Compared with solving partial differential equations, numerical methods based on the boundary integral equation have the advantage of a reduced spatial dimension and, hence, requiring only a surface mesh. However, the constant uniform mean flow assumption, while convenient for formulating the integral equation, does not satisfy the solid wall boundary condition wherever the body surface is not aligned with the uniform mean flow. In this paper, we argue that the proper boundary condition for the acoustic wave should not have its normal velocity be zero everywhere on the solid surfaces, as has been applied in the literature. A careful study of the acoustic energy conservation equation is presented that shows such a boundary condition in fact leads to erroneous source or sink points on solid surfaces not aligned with the mean flow. A new solid wall boundary condition is proposed that conserves the acoustic energy and a new time domain boundary integral equation is derived. In addition to conserving the acoustic energy, another significant advantage of the new equation is that it is considerably simpler than previous formulations. In particular, tangential derivatives of the solution on the solid surfaces are no longer needed in the new formulation, which greatly simplifies numerical implementation. Furthermore, stabilization of the new integral equation by Burton-Miller type reformulation is presented. The stability of the new formulation is studied theoretically as well as numerically by an eigenvalue analysis. Numerical solutions are also presented that demonstrate the stability of the new formulation.
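For reference, the constant-uniform-mean-flow assumption discussed above corresponds to the convected wave equation for the acoustic pressure p, with mean flow U and sound speed c; the paper's new energy-conserving wall boundary condition and Burton-Miller stabilization are not reproduced here:

\[
\frac{1}{c^2}\left(\frac{\partial}{\partial t} + \mathbf{U}\cdot\nabla\right)^{2} p \;-\; \nabla^2 p = 0,
\]

which reduces to the ordinary wave equation when \(\mathbf{U} = \mathbf{0}\), and whose boundary integral reformulation underlies the time-domain equation derived in the paper.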
A Surface Formulation for Characteristic Modes of Material Bodies
1974-10-01
CHAPTER 3: CHARACTERISTIC MODES - A SURFACE FORMULATION. 3.1 Theoretical Development. The treatment of characteristic modes for perfectly ...
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
NASA Technical Reports Server (NTRS)
Rosenfeld, D.; Alterovitz, S. A.
1994-01-01
A theoretical study of the effects of strain on the base properties of ungraded and compositionally graded n-p-n SiGe Heterojunction Bipolar Transistors (HBT) is presented. The dependencies of the transverse hole mobility and longitudinal electron mobility upon strain, composition, and doping are formulated using published Monte-Carlo data and, consequently, the base resistance and transit time are modeled and calculated. The results are compared to results obtained using common formulas that ignore these dependencies. The differences between the two sets of results are shown. The paper's conclusion is that for the design, analysis, and optimization of high-frequency SiGe HBTs, the strain effects on the base properties cannot be ignored.
Verification of Gyrokinetic codes: theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia
2016-10-01
In fusion plasmas the strong magnetic field allows the fast gyro motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools, such as the variational formulation of dynamics, to systematize the basic equations of GK codes and assess the limits of their applicability. Indirect verification of the numerical scheme is proposed via the benchmark process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC), and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive and include the models implemented in ORB5 and GENE inside this hierarchy. At the computational level, detailed verification of global electromagnetic test cases based on the CYCLONE benchmark is considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
Nonrecursive formulations of multibody dynamics and concurrent multiprocessing
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Menon, Ramesh
1993-01-01
Since the late 1980's, research in recursive formulations of multibody dynamics has flourished. Historically, much of this research can be traced to applications of low dimensionality in mechanism and vehicle dynamics. Indeed, there is little doubt that recursive order N methods are the method of choice for this class of systems. This approach has the advantage that a minimal number of coordinates are utilized, parallelism can be induced for certain system topologies, and the method is of order N computational cost for systems of N rigid bodies. Despite the fact that many authors have dismissed redundant coordinate formulations as being of order N^3, and hence less attractive than recursive formulations, we present recent research that demonstrates that at least three distinct classes of redundant, nonrecursive multibody formulations consistently achieve order N computational cost for systems of rigid and/or flexible bodies. These formulations are as follows: (1) the preconditioned range space formulation; (2) penalty methods; and (3) augmented Lagrangian methods for nonlinear multibody dynamics. The first method can be traced to its foundation in equality constrained quadratic optimization, while the last two methods have been studied extensively in the context of coercive variational boundary value problems in computational mechanics. Until recently, however, they have not been investigated in the context of multibody simulation, and present theoretical questions unique to nonlinear dynamics. All of these nonrecursive methods have additional advantages with respect to recursive order N methods: (1) the formalisms retain the highly desirable order N computational cost; (2) the techniques are amenable to concurrent simulation strategies; (3) the approaches do not depend upon system topology to induce concurrency; and (4) the methods can be derived to balance the computational load automatically on concurrent multiprocessors. In addition to the presentation of the fundamental formulations, this paper presents new theoretical results regarding the rate of convergence of order N constraint stabilization schemes associated with the newly introduced class of methods.
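As a sketch of the redundant-coordinate setting discussed above, the constrained equations of motion and one common augmented Lagrangian regularization can be written as follows; the penalty parameter α and the multiplier update are shown schematically and represent a generic variant, not necessarily the authors' exact formulation:

\[
M(q)\,\ddot{q} + \Phi_q^{\top}\lambda = Q(q,\dot{q},t), \qquad \Phi(q,t) = 0,
\]

with the augmented Lagrangian approach replacing the exact multipliers by iterates

\[
\lambda^{(k+1)} = \lambda^{(k)} + \alpha\,\Phi(q),
\]

so that the constraint forces \( \Phi_q^{\top}\big(\lambda^{(k)} + \alpha\,\Phi\big) \) are applied without solving the full saddle-point system at every step.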
Theory of viscous transonic flow over airfoils at high Reynolds number
NASA Technical Reports Server (NTRS)
Melnik, R. E.; Chow, R.; Mead, H. R.
1977-01-01
This paper considers viscous flows with unseparated turbulent boundary layers over two-dimensional airfoils at transonic speeds. Conventional theoretical methods are based on boundary layer formulations which do not account for the effect of the curved wake and static pressure variations across the boundary layer in the trailing edge region. In this investigation an extended viscous theory is developed that accounts for both effects. The theory is based on a rational analysis of the strong turbulent interaction at airfoil trailing edges. The method of matched asymptotic expansions is employed to develop formal series solutions of the full Reynolds equations in the limit of Reynolds numbers tending to infinity. Procedures are developed for combining the local trailing edge solution with numerical methods for solving the full potential flow and boundary layer equations. Theoretical results indicate that conventional boundary layer methods account for only about 50% of the viscous effect on lift, the remaining contribution arising from wake curvature and normal pressure gradient effects.
Neutron die-away experiment for remote analysis of the surface of the moon and the planets, phase 3
NASA Technical Reports Server (NTRS)
Mills, W. R.; Allen, L. S.
1972-01-01
Continuing work on the two die-away measurements proposed to be made in the combined pulsed neutron experiment (CPNE) for analysis of lunar and planetary surfaces is described. This report documents research done during Phase 3. A general exposition of data analysis by the least-squares method and the related problem of the prediction of variance is given. A data analysis procedure for epithermal die-away data has been formulated. In order to facilitate the analysis, the number of independent material variables has been reduced to two: the hydrogen density and an effective oxygen density, the latter being determined uniquely from the nonhydrogeneous elemental composition. Justification for this reduction in the number of variables is based on a set of 27 new theoretical calculations. Work is described related to experimental calibration of the epithermal die-away measurement. An interim data analysis technique based solely on theoretical calculations seems to be adequate and will be used for future CPNE field tests.
COMBINATION OF DENSITY AND ENERGY MODULATION IN MICROBUNCHING ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Cheng Ying; Li, Rui
2016-05-01
Microbunching instability (MBI) has been one of the most challenging issues in the transport of high-brightness electron beams for modern recirculating or energy recovery linac machines. Recently we have developed and implemented a Vlasov solver [1] to calculate the microbunching gain for an arbitrary beamline lattice, based on the extension of existing theoretical formulation [2-4] for the microbunching amplification from an initial density perturbation to the final density modulation. For more thorough analyses, in addition to the case of (initial) density to (final) density amplification, we extend in this paper the previous formulation to more general cases, including energy to density, density to energy, and energy to energy amplifications for a recirculation machine. Such semi-analytical formulae are then incorporated into our Vlasov solver, and qualitative agreement is obtained when the semi-analytical Vlasov results are compared with particle tracking simulation using ELEGANT [5].
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
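For orientation, a commonly used form of the fractal mobile/immobile transport model with a Caputo time-fractional term is sketched below; the coefficients and the presence of an advection term are assumptions for illustration, and the paper works with an equivalent transformative Caputo (T-Caputo) reformulation of the fractional term:

\[
\frac{\partial C}{\partial t} + \beta\,\frac{\partial^{\alpha} C}{\partial t^{\alpha}}
= D\,\frac{\partial^{2} C}{\partial x^{2}} - v\,\frac{\partial C}{\partial x} + f(x,t),
\qquad 0 < \alpha < 1,
\]

where the fractional term models the exchange between the mobile and immobile phases; the Crank-Nicolson scheme of the paper discretizes this system in time and space.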
Modulation analysis of nonlinear beam refraction at an interface in liquid crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assanto, Gaetano; Smyth, Noel F.; Xia Wenjun
2011-09-15
A theoretical investigation of solitary wave refraction in nematic liquid crystals is undertaken. A modulation theory based on a Lagrangian formulation of the governing optical solitary wave equations is developed. The resulting low-dimensional equations are found to give solutions in excellent agreement with full numerical solutions of the governing equations, as well as with previous experimental studies. The analysis deals with a number of types of refraction from a more to a less optically dense medium, the most famous being the Goos-Haenchen shift upon total internal reflection.
NASA Astrophysics Data System (ADS)
Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua
2017-07-01
Taking into account of both white and colored noises, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate dynamics, such as the persistence and ergodicity, of plant infectious disease model with Markov conversion in a polluted environment. The thresholds of extinction and persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.
ERIC Educational Resources Information Center
Kondratieva, Margo; Winsløw, Carl
2018-01-01
We present a theoretical approach to the problem of the transition from Calculus to Analysis within the undergraduate mathematics curriculum. First, we formulate this problem using the anthropological theory of the didactic, in particular the notion of praxeology, along with a possible solution related to Klein's "Plan B": here,…
Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y., E-mail: yu.feng@unsw.edu.au; Lin, S.; Huang, S.
Although Tauc plot extrapolation has been widely adopted for extracting the bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals have been formulated based on a purely theoretical approach. This result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic bandgap, while the practice of extrapolating an elevated bandgap caused by quantum confinement is shown to be unjustified.
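For reference, the standard bulk Tauc relation for direct-allowed transitions that the paper generalizes is, with A a material constant,

\[
\alpha(h\nu)\,h\nu \;=\; A\,\big(h\nu - E_g\big)^{1/2},
\]

so that a plot of \((\alpha h\nu)^2\) versus \(h\nu\) is linear near the band edge and extrapolates to \(E_g\); per the abstract, the exponent 1/2 appropriate to the 3D bulk case tends toward 1 in the 0D limit, motivating the size-dependent intermediate power factor.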
NASA Astrophysics Data System (ADS)
Hütter, Markus; Svendsen, Bob
2013-11-01
An essential part in modeling out-of-equilibrium dynamics is the formulation of irreversible dynamics. In the latter, the major task consists in specifying the relations between thermodynamic forces and fluxes. In the literature, mainly two distinct approaches are used for the specification of force-flux relations. On the one hand, quasi-linear relations are employed, which are based on the physics of transport processes and fluctuation-dissipation theorems (de Groot and Mazur in Non-equilibrium thermodynamics, North Holland, Amsterdam, 1962, Lifshitz and Pitaevskii in Physical kinetics. Volume 10, Landau and Lifshitz series on theoretical physics, Pergamon Press, Oxford, 1981). On the other hand, force-flux relations are also often represented in potential form with the help of a dissipation potential (Šilhavý in The mechanics and thermodynamics of continuous media, Springer, Berlin, 1997). We address the question of how these two approaches are related. The main result of this presentation states that the class of models formulated by quasi-linear relations is larger than what can be described in a potential-based formulation. While the relation between the two methods is shown in general terms, it is demonstrated also with the help of three examples. The finding that quasi-linear force-flux relations are more general than dissipation-based ones also has ramifications for the general equation for non-equilibrium reversible-irreversible coupling (GENERIC: e.g., Grmela and Öttinger in Phys Rev E 56:6620-6632, 6633-6655, 1997, Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005). This framework has been formulated and used in two different forms, namely a quasi-linear (Öttinger and Grmela in Phys Rev E 56:6633-6655, 1997, Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005) and a dissipation potential-based (Grmela in Adv Chem Eng 39:75-129, 2010, Grmela in J Non-Newton Fluid Mech 165:980-986, 2010, Mielke in Continuum Mech Therm 23:233-256, 2011) form, respectively, relating the irreversible evolution to the entropy gradient. It is found that also in the case of GENERIC, the quasi-linear representation encompasses a wider class of phenomena as compared to the dissipation-based formulation. Furthermore, it is found that a potential exists for the irreversible part of the GENERIC if and only if one does for the underlying force-flux relations.
SEACAS Theory Manuals: Part III. Finite Element Analysis in Nonlinear Solid Mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laursen, T.A.; Attaway, S.W.; Zadoks, R.I.
1999-03-01
This report outlines the application of finite element methodology to large deformation solid mechanics problems, detailing also some of the key technological issues that effective finite element formulations must address. The presentation is organized into three major portions: first, a discussion of finite element discretization from the global point of view, emphasizing the relationship between a virtual work principle and the associated fully discrete system; second, a discussion of finite element technology, emphasizing the important theoretical and practical features associated with an individual finite element; and third, a detailed description of specific elements that enjoy widespread use, providing some examples of the theoretical ideas already described. Descriptions of problem formulation in nonlinear solid mechanics, nonlinear continuum mechanics, and constitutive modeling are given in three companion reports.
Three-dimensional waveform sensitivity kernels
NASA Astrophysics Data System (ADS)
Marquering, Henk; Nolet, Guust; Dahlen, F. A.
1998-03-01
The sensitivity of intermediate-period (~10-100s) seismic waveforms to the lateral heterogeneity of the Earth is computed using an efficient technique based upon surface-wave mode coupling. This formulation yields a general, fully fledged 3-D relationship between data and model without imposing smoothness constraints on the lateral heterogeneity. The calculations are based upon the Born approximation, which yields a linear relation between data and model. The linear relation ensures fast forward calculations and makes the formulation suitable for inversion schemes; however, higher-order effects such as wave-front healing are neglected. By including up to 20 surface-wave modes, we obtain Fréchet, or sensitivity, kernels for waveforms in the time frame that starts at the S arrival and which includes direct and surface-reflected body waves. These 3-D sensitivity kernels provide new insights into seismic-wave propagation, and suggest that there may be stringent limitations on the validity of ray-theoretical interpretations. Even recently developed 2-D formulations, which ignore structure out of the source-receiver plane, differ substantially from our 3-D treatment. We infer that smoothness constraints on heterogeneity, required to justify the use of ray techniques, are unlikely to hold in realistic earth models. This puts the use of ray-theoretical techniques into question for the interpretation of intermediate-period seismic data. The computed 3-D sensitivity kernels display a number of phenomena that are counter-intuitive from a ray-geometrical point of view: (1) body waves exhibit significant sensitivity to structure up to 500km away from the source-receiver minor arc; (2) significant near-surface sensitivity above the two turning points of the SS wave is observed; (3) the later part of the SS wave packet is most sensitive to structure away from the source-receiver path; (4) the sensitivity of the higher-frequency part of the fundamental surface-wave mode is wider than for its faster, lower-frequency part; (5) delayed body waves may considerably influence fundamental Rayleigh and Love waveforms. The strong sensitivity of waveforms to crustal structure due to fundamental-mode-to-body-wave scattering precludes the use of phase-velocity filters to model body-wave arrivals. Results from the 3-D formulation suggest that the use of 2-D and 1-D techniques for the interpretation of intermediate-period waveforms should seriously be reconsidered.
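The linearized data-model relation that the Born approximation provides can be summarized schematically as below; the symbols are generic placeholders rather than the paper's exact notation.

```latex
% Linearized data--model relation underlying the 3-D waveform sensitivity kernels (schematic)
\begin{equation}
  \delta u(t) \;=\; \int_{\oplus} K(\mathbf{x}, t)\,\delta m(\mathbf{x})\,\mathrm{d}^3\mathbf{x},
\end{equation}
% \delta u : waveform perturbation, \delta m : lateral heterogeneity,
% K : Fr\'echet (sensitivity) kernel built from coupled surface-wave modes.
```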
Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks
NASA Astrophysics Data System (ADS)
Huang, Lei; Pan, Feng
2007-09-01
In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a given multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as a theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing of each individual packet. Numerical results demonstrate that, using the route discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.
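A minimal sketch of a centralized reference computation, under the simplifying assumption that each link independently meets the delay budget with a known probability, is given below; the graph, probabilities, and function name are hypothetical and not taken from the paper.

```python
import heapq
import math

def most_reliable_route(graph, src, dst):
    """Find the route maximizing the product of per-link probabilities of
    meeting the delay budget, assuming link probabilities are independent.

    graph: dict node -> list of (neighbor, p_link) with 0 < p_link <= 1.
    Maximizing the product equals a shortest path on -log(p) edge weights.
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:          # reconstruct the route back to the source
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

toy = {"S": [("A", 0.9), ("B", 0.7)], "A": [("D", 0.8)], "B": [("D", 0.99)]}
print(most_reliable_route(toy, "S", "D"))  # S -> A -> D, success probability 0.72
```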
NASA Astrophysics Data System (ADS)
Rezazadeh, Ghader; Keyvani, Aliasghar; Sadeghi, Morteza H.; Bahrami, Manouchehr
2013-06-01
Effects of Ohmic resistance on MEMS/NEMS vibrating structures, which are often dismissed, may in some situations cause important changes in the resonance properties and impedance parameters of MEMS/NEMS-based circuits. This paper presents a theoretical model to investigate the problem precisely on a simple cantilever-substrate resonator. To this end, Ohm's current law and the charge conservation law are merged to obtain a differential equation for voltage propagation along the beam; because the scope of the problem is mainly nanostructures, modified couple stress theory is used to formulate the dynamic motion of the beam. The two governing equations, which are coupled and both nonlinear, are solved simultaneously using a Galerkin-based state-space formulation. The obtained results, which are in exact agreement with previous works, show that the dynamic pull-in voltage, switching time, and impedance of the structure as a MEMS capacitor, especially at frequencies higher than the natural resonance frequency, strongly rely on the electrical resistance of the beam and substrate material.
Developing a Theoretical Framework for Classifying Levels of Context Use for Mathematical Problems
ERIC Educational Resources Information Center
Almuna Salgado, Felipe
2016-01-01
This paper aims to revisit and clarify the term problem context and to develop a theoretical classification of the construct of levels of context use (LCU) to analyse how the context of a problem is used to formulate a problem in mathematical terms and to interpret the answer in relation to the context of a given problem. Two criteria and six…
REVIEWS OF TOPICAL PROBLEMS: Radio pulsars
NASA Astrophysics Data System (ADS)
Beskin, Vasilii S.
1999-11-01
Recent theoretical work concerning the magnetosphere of and radio emission from pulsars is reviewed in detail. Taking into account years of little or no cooperation between theory and observation and noting, in particular, that no systematic observations are in fact being made to check theoretical predictions, the key ideas underlying the theory of the pulsar magnetosphere are formulated and new observations aimed at verifying current models are discussed.
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general formulation for the analysis of steady and unsteady, subsonic and supersonic potential aerodynamics for arbitrary complex geometries is presented. The theoretical formulation, the numerical procedure, and numerical results are included. In particular, generalized forces for fully unsteady (complex frequency) aerodynamics for an AGARD coplanar wing-tail interfering configuration in both subsonic and supersonic flows are considered.
ADE-FDTD Scattered-Field Formulation for Dispersive Materials
Kong, Soon-Cheol; Simpson, Jamesina J.; Backman, Vadim
2009-01-01
This Letter presents a scattered-field formulation for modeling dispersive media using the finite-difference time-domain (FDTD) method. Specifically, the auxiliary differential equation method is applied to Drude and Lorentz media for a scattered field FDTD model. The present technique can also be applied in a straightforward manner to Debye media. Excellent agreement is achieved between the FDTD-calculated and exact theoretical results for the reflection coefficient in half-space problems.
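For context, the standard (total-field) auxiliary differential equation treatment of a Drude medium, which the scattered-field formulation above adapts, can be written schematically as:

```latex
% Standard auxiliary differential equation (ADE) for a Drude medium (schematic):
% the polarization current J is advanced alongside Maxwell's curl equations.
\begin{align}
  \nabla \times \mathbf{H} &= \varepsilon_0 \varepsilon_\infty \frac{\partial \mathbf{E}}{\partial t} + \mathbf{J}, \\
  \frac{\partial \mathbf{J}}{\partial t} + \gamma\,\mathbf{J} &= \varepsilon_0\, \omega_p^2\, \mathbf{E}.
\end{align}
% \omega_p : plasma frequency, \gamma : collision frequency, \varepsilon_\infty : high-frequency permittivity.
```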
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2016-02-01
We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.
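The generalized polynomial chaos representation and Galerkin projection referred to above can be summarized schematically as follows; the notation is generic rather than the authors' own.

```latex
% Generalized polynomial chaos (gPC) representation underlying a stochastic Galerkin
% discretization (schematic); \xi collects the random inputs.
\begin{equation}
  u(x, t, \xi) \;\approx\; \sum_{k=0}^{P} u_k(x, t)\, \Psi_k(\xi),
  \qquad
  \Bigl\langle \mathcal{R}\Bigl(\textstyle\sum_{k} u_k \Psi_k\Bigr),\, \Psi_j \Bigr\rangle = 0,
  \quad j = 0, \dots, P,
\end{equation}
% \Psi_k : orthogonal polynomials in \xi, \langle\cdot,\cdot\rangle : probabilistic inner product,
% \mathcal{R} : residual of the governing equations.
```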
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. A multi-fluid formulation is under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
Indicators of economic security of the region: a risk-based approach to assessing and rating
NASA Astrophysics Data System (ADS)
Karanina, Elena; Loginov, Dmitri
2017-10-01
The article presents the results of research of theoretical and methodical problems of strategy development for economic security of a particular region, justified by the composition of risk factors. The analysis of those risk factors is performed. The threshold values of indicators of economic security of regions were determined using the methods of socioeconomic statistics. The authors concluded that in modern Russian conditions it is necessary to pay great attention to the analysis of the composition and level of indicators of economic security of the region and, based on the materials of this analysis, to formulate more accurate decisions concerning the strategy of socio-economic development.
Beam-splitter switches based on zenithal bistable liquid-crystal gratings.
Zografopoulos, Dimitrios C; Beccherelli, Romeo; Kriezis, Emmanouil E
2014-10-01
The tunable optical diffractive properties of zenithal bistable nematic liquid-crystal gratings are theoretically investigated. The liquid-crystal orientation is rigorously solved via a tensorial formulation of the Landau-de Gennes theory and the optical transmission properties of the gratings are investigated via full-wave finite-element frequency-domain simulations. It is demonstrated that by proper design the two stable states of the grating can provide nondiffracting and diffracting operation, the latter with equal power splitting among different diffraction orders. An electro-optic switching mechanism, based on dual-frequency nematic materials, and its temporal dynamics are further discussed. Such gratings provide a solution towards tunable beam-steering and beam-splitting components with extremely low power consumption.
Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul
2014-01-01
We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups.
Donepezil dosing strategies: pharmacokinetic considerations.
Gomolin, Irving H; Smith, Candace; Jeitner, Thomas M
2011-10-01
Donepezil (Aricept) is a cholinesterase inhibitor approved for the treatment of Alzheimer's disease. Immediate release formulations of 5- and 10-mg tablets were approved by the Food and Drug Administration in the United States in 1996. In July 2010, the Food and Drug Administration approved a 23-mg sustained release (SR) formulation. The SR formulation may provide additional benefit to patients receiving 10 mg daily but the incidence of adverse reactions is increased. We derived plasma concentration profiles for higher dose immediate-release formulations (15 mg once daily, 10 mg twice daily, and 20 mg once daily) and for the profile anticipated to result from the 23-mg SR formulation. Our model predicts similar steady-state concentration profiles for 10 mg twice daily, 20 mg once daily, and 23 mg SR once daily. This provides the theoretical basis for incremental immediate release dose escalation to minimize the emergence of adverse reactions and the potential to offer a cost-effective alternative to the SR formulation with currently approved generic immediate release formulations.
Verification of Gyrokinetic codes: Theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia; Bottino, Alberto; Görler, Tobias; Sonnendrücker, Eric; Told, Daniel; Villard, Laurent
2017-05-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the subsequent transport. Naturally, these codes require thorough verification and validation. Here, we present a new and generic theoretical framework and specific numerical applications to test the faithfulness of the implemented models to theory and to verify the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which has rarely been done and therefore makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The verification of the numerical scheme is proposed via the benchmark effort. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations implemented in the ORB5 and GENE codes using the Lagrangian variational formulation. At the computational level, detailed verifications of global electromagnetic test cases developed from the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
The Uses and Abuses of the Acoustic Analogy in Helicopter Rotor Noise Prediction
NASA Technical Reports Server (NTRS)
Farassat, F.; Brentner, Kenneth S.
1987-01-01
This paper is theoretical in nature and addresses applications of the acoustic analogy in helicopter rotor noise prediction. It is argued that in many instances the acoustic analogy has not been used with care in rotor noise studies. By this it is meant that approximate or inappropriate formulations have been used. By considering various mechanisms of noise generation, such abuses are identified and the remedy is suggested. The mechanisms discussed are thickness, loading, quadrupole, and blade-vortex interaction noise. The quadrupole term of the Ffowcs Williams-Hawkings equation is written in a new form which separates the contributions of regions of high gradients such as shock surfaces. It is shown by order of magnitude studies that such regions are capable of producing noise with the same directivity as the thickness noise. The inclusion of this part of the quadrupole sources in current acoustic codes is quite practical. Some of the difficulties with the use of the loading noise formulations of the first author in predictions of blade-vortex interaction noise are discussed. It appears that there is a need for the development of new theoretical results based on the acoustic analogy in this area. Because of the impulsive character of the blade surface pressure, a time scale of integration different from that used in loading and thickness computations must be used in a computer code for prediction of blade-vortex interaction noise.
Mirkin, B M; Naumova, L G
2015-01-01
L.G. Ramensky (1884-1953) was an outstanding Soviet geobotanist of the first half of the 20th century. His theoretical legacy and its contribution to modern vegetation science are considered. L.G. Ramensky formulated the principle of the vegetation continuum, on the basis of which the modern paradigm of vegetation science took shape. The scientist contributed to the development of such important theoretical concepts as types of plant strategy, coenosis and coenobiosis (coexistence of species), patterns of interannual variability in plant communities, and ecological successions. Ramensky established unique ecological scales characterizing the distribution of 1400 species along gradients of soil moistening, richness, and salinization, as well as moistening variability, pastoral digression, and alluvial intensity. He argued against the mechanistic notions of V.N. Sukachev on biogeocoenosis structure. The scientist did not offer his own method of plant community classification, but his well-reasoned criticism of dominance-based classification played a great role in the adoption of floristic classification principles (the Braun-Blanquet approach) by phytocoenology in Russia.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
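A minimal Monte Carlo sketch of the "random chance" scenario described above is given below, assuming a hypothetical population in which each source holds a random subset of codes; all names and parameters are illustrative, not the paper's simulation setup.

```python
import random

def samples_to_saturation(codes_per_source, n_codes, rng=random.Random(0)):
    """Estimate the sample size needed to observe every code at least once
    under the 'random chance' (probability sampling) scenario.

    codes_per_source: list of sets, each holding the codes of one information source.
    """
    seen, n_sampled = set(), 0
    order = list(range(len(codes_per_source)))
    rng.shuffle(order)                    # random order of sampling sources
    for idx in order:
        n_sampled += 1
        seen |= codes_per_source[idx]
        if len(seen) == n_codes:          # theoretical saturation reached
            return n_sampled
    return None                           # saturation not reachable in this population

# toy population: 50 sources, 10 codes, each source holds each code with probability 0.3
pop_rng = random.Random(1)
pop = [{c for c in range(10) if pop_rng.random() < 0.3} for _ in range(50)]
print(samples_to_saturation(pop, n_codes=10))
```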
When Interpolation-Induced Reflection Artifact Meets Time-Frequency Analysis.
Lin, Yu-Ting; Flandrin, Patrick; Wu, Hau-Tieng
2016-10-01
While extracting the temporal dynamical features based on the time-frequency analyses, like the reassignment and synchrosqueezing transform, attracts more and more interest in biomedical data analysis, we should be careful about artifacts generated by interpolation schemes, in particular when the sampling rate is not significantly higher than the frequency of the oscillatory component we are interested in. We formulate the problem called the reflection effect and provide a theoretical justification of the statement. We also show examples in the anesthetic depth analysis with clear but undesirable artifacts. The artifact associated with the reflection effect exists not only theoretically but practically as well. Its influence is pronounced when we apply the time-frequency analyses to extract the time-varying dynamics hidden inside the signal. We have to carefully deal with the artifact associated with the reflection effect by choosing a proper interpolation scheme.
Advances in cognitive theory and therapy: the generic cognitive model.
Beck, Aaron T; Haigh, Emily A P
2014-01-01
For over 50 years, Beck's cognitive model has provided an evidence-based way to conceptualize and treat psychological disorders. The generic cognitive model represents a set of common principles that can be applied across the spectrum of psychological disorders. The updated theoretical model provides a framework for addressing significant questions regarding the phenomenology of disorders not explained in previous iterations of the original model. New additions to the theory include continuity of adaptive and maladaptive function, dual information processing, energizing of schemas, and attentional focus. The model includes a theory of modes, an organization of schemas relevant to expectancies, self-evaluations, rules, and memories. A description of the new theoretical model is followed by a presentation of the corresponding applied model, which provides a template for conceptualizing a specific disorder and formulating a case. The focus on beliefs differentiates disorders and provides a target for treatment. A variety of interventions are described.
Final Technical Report for "Reducing tropical precipitation biases in CESM"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types --- including marine stratocumulus, shallow cumulus, and deep cumulus --- are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.
Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry
Keiflin, Ronald; Janak, Patricia H.
2015-01-01
Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested.
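For reference, the temporal-difference form in which the reward prediction error is usually written in associative-learning models is shown below; this is standard background rather than a formula specific to this review.

```latex
% Standard temporal-difference form of the reward prediction error (RPE), schematic:
\begin{equation}
  \delta_t \;=\; r_t \;+\; \gamma\, V(s_{t+1}) \;-\; V(s_t),
\end{equation}
% r_t : received reward, V(s) : learned value of state s, \gamma : temporal discount factor.
```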
The Stack of Yang-Mills Fields on Lorentzian Manifolds
NASA Astrophysics Data System (ADS)
Benini, Marco; Schenkel, Alexander; Schreiber, Urs
2018-03-01
We provide an abstract definition and an explicit construction of the stack of non-Abelian Yang-Mills fields on globally hyperbolic Lorentzian manifolds. We also formulate a stacky version of the Yang-Mills Cauchy problem and show that its well-posedness is equivalent to a whole family of parametrized PDE problems. Our work is based on the homotopy theoretical approach to stacks proposed in Hollander (Isr. J. Math. 163:93-124, 2008), which we shall extend by further constructions that are relevant for our purposes. In particular, we will clarify the concretification of mapping stacks to classifying stacks such as BG con.
NASA Technical Reports Server (NTRS)
Mosher, R. A.; Palusinski, O. A.; Bier, M.
1982-01-01
A mathematical model has been developed which describes the steady state in an isoelectric focusing (IEF) system with ampholytes or monovalent buffers. The model is based on the fundamental equations describing the component dissociation equilibria, mass transport due to diffusion and electromigration, electroneutrality, and the conservation of charge. The validity and usefulness of the model has been confirmed by using it to formulate buffer systems in actual laboratory experiments. The model has been recently extended to include the evolution of transient states not only in IEF but also in other modes of electrophoresis.
Fractional dynamics using an ensemble of classical trajectories
NASA Astrophysics Data System (ADS)
Sun, Zhaopeng; Dong, Hao; Zheng, Yujun
2018-01-01
A trajectory-based formulation for fractional dynamics is presented in which the trajectories are generated deterministically. In this theoretical framework, we derive a new class of estimators in terms of the confluent hypergeometric function (1F1) to represent the Riesz fractional derivative. Using this method, simulations of free and confined Lévy flights are in excellent agreement with exact numerical and analytical results. In addition, barrier crossing in a bistable potential driven by Lévy noise of index α is investigated. In phase space, the behavior of the trajectories reveals the features of Lévy flight from a clearer perspective.
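As background, the Riesz fractional derivative that the estimators above approximate is commonly defined through its Fourier transform:

```latex
% Fourier-space definition of the Riesz fractional derivative of order \alpha
% (standard background; the paper's estimators approximate this operator):
\begin{equation}
  \mathcal{F}\!\left[ \frac{\partial^{\alpha} f}{\partial |x|^{\alpha}} \right](k)
  \;=\; -\,|k|^{\alpha}\, \hat{f}(k), \qquad 0 < \alpha \le 2 .
\end{equation}
```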
The Nuclear Energy Density Functional Formalism
NASA Astrophysics Data System (ADS)
Duguet, T.
The present document focuses on the theoretical foundations of the nuclear energy density functional (EDF) method. As such, it does not aim at reviewing the status of the field, at covering all possible ramifications of the approach or at presenting recent achievements and applications. The objective is to provide a modern account of the nuclear EDF formalism that is at variance with traditional presentations that rely, at one point or another, on a Hamiltonian-based picture. The latter is not general enough to encompass what the nuclear EDF method represents as of today. Specifically, the traditional Hamiltonian-based picture does not allow one to grasp the difficulties associated with the fact that currently available parametrizations of the energy kernel E[g',g] at play in the method do not derive from a genuine Hamilton operator, would the latter be effective. The method is formulated from the outset through the most general multi-reference, i.e. beyond mean-field, implementation such that the single-reference, i.e. "mean-field", derives as a particular case. As such, a key point of the presentation provided here is to demonstrate that the multi-reference EDF method can indeed be formulated in a mathematically meaningful fashion even if E[g',g] does not derive from a genuine Hamilton operator. In particular, the restoration of symmetries can be entirely formulated without making any reference to a projected state, i.e. within a genuine EDF framework. However, and as is illustrated in the present document, a mathematically meaningful formulation does not guarantee that the formalism is sound from a physical standpoint. The price at which the latter can be enforced as well in the future is eventually alluded to.
Formulation of a stable parenteral product; Clonidine Hydrochloride Injection.
Kostecka, D; Duncan, M R; Wagenknecht, D
1998-01-01
Clonidine Hydrochloride Injection (Duraclon) is a clear, colorless, preservative-free, pyrogen free, aqueous solution of clonidine hydrochloride. The indication for this product is for use as an adjunct in pain management, administered epidurally, when opiates are insufficient. The drug formulation was evaluated under both normal and stress conditions in the preformulation/formulation studies. The list of studies conducted includes a light sensitivity study, an oxygen sensitivity study, a pH/stability study, a stopper compatibility evaluation, a freeze-thaw study, and a stability study. Samples from the light, oxygen, pH/stability, and stability studies were evaluated for color, visual clarity, pH, potency, and chromatographic purity. Samples from the freeze-thaw study were evaluated for all of the above except chromatographic purity. The results for these studies demonstrate the stability of the product as formulated. The pH of this unbuffered product was consistently within the acceptance criteria. The product remained clear and colorless for the duration of each study. The values obtained for the potency and chromatographic purity assays showed no evidence of degradation. The reasons for the lack of degradation can be found in the molecular structure of the drug substance and the formulation of the drug product. Since the molecular structure is that of a Schiff base, it is theoretically possible, although difficult, to cleave the molecule. A catalyst would be required, and none of the possible catalysts are present in the formulation. The molecule could also be cleaved upon exposure to light, and the evidence indicates that the molecule does interact with light. This interaction is not to the degree, however, that product stability is affected. The formulation contains only the active drug substance and sodium chloride in water for injection with a pH of approximately 6. Although the product is unbuffered, the influence of the stoppers and glass vials upon the formulation pH was minimal. In addition, the stopper compatibility of the product is enhanced by the absence of chelating agents, preservatives, acids, and bases. Since the dilute concentrations of both the active and excipient are well below their solubility limits, no solubility related issues would be expected upon freezing and subsequent thawing. Clonidine Hydrochloride Injection, as formulated, does not require protection from light, oxygen, or freezing. The product shows acceptable stability within the pH range, and the rubber closure is compatible with the product. Real time stability data combined with statistical projections support a 36-month expiration date.
Bell, A J; Heath, M D; Hewings, S J; Skinner, M A
2015-11-01
Infectious disease vaccine potency is affected by antigen-adjuvant adsorption. WHO and EMA guidelines recommend limits and experimental monitoring of adsorption in vaccines and allergy immunotherapies. Adsorbed allergoids and MPL® in MATA-MPL allergy immunotherapy formulations effectively treat IgE-mediated allergy. Understanding vaccine antigen-adjuvant adsorption allows optimisation of potency and should be seen as good practice; however, current understanding is seldom applied to allergy immunotherapies. The adsorption of allergoid and MPL® to MCT in MATA-MPL allergy immunotherapy formulations was experimentally determined using specific allergen IgE allergenicity and MPL® content methods. Binding forces between MPL® and MCT were investigated by competition binding experiments. MATA-MPL samples with different allergoids gave results within 100-104% of the theoretical 50 μg/mL MPL® content. Unmodified drug substance samples showed significant desirable IgE antigenicity, 1040-170 QAU/mL. MATA-MPL supernatant samples with different allergoids gave results of ≤2 μg/mL MPL® and ≤0.1-1.4 QAU/mL IgE antigenicity, demonstrating approximately ≥96% and ≥99% adsorption, respectively. Allergoid and MPL® adsorption in different MATA-MPL allergy immunotherapy formulations is consistent and meets guideline recommendations. MCT formulations treated to disrupt electrostatic, hydrophobic and ligand-exchange interactions gave an MPL® content of ≤2 μg/mL in supernatant samples. MCT formulations treated to disrupt aromatic interactions gave an MPL® content of 73-92 μg/mL in supernatant samples. MPL® adsorption to l-tyrosine in MCT formulations is based on interactions between the 2-deoxy-2-aminoglucose backbone of MPL® and the aromatic ring of l-tyrosine in MCT, such as C-H⋯π interaction. MCT could be an alternative adjuvant depot for some infectious disease antigens.
Skau, Jutta K H; Bunthang, Touch; Chamnan, Chhoun; Wieringa, Frank T; Dijkhuizen, Marjoleine A; Roos, Nanna; Ferguson, Elaine L
2014-01-01
A new software tool, Optifood, developed by the WHO and based on linear programming (LP) analysis, has been developed to formulate food-based recommendations. This study discusses the use of Optifood for predicting whether formulated complementary food (CF) products can ensure dietary adequacy for target populations in Cambodia. Dietary data were collected by 24-h recall in a cross-sectional survey of 6- to 11-mo-old infants (n = 78). LP model parameters were derived from these data, including a list of foods, median serving sizes, and dietary patterns. Five series of LP analyses were carried out to model the target population's baseline diet and 4 formulated CF products [WinFood (WF), WinFood-Lite (WF-L), Corn-Soy-Blend Plus (CSB+), and Corn-Soy-Blend Plus Plus (CSB++)], which were added to the diet in portions of 33 g/d dry weight (DW) for infants aged 6-8 mo and 40 g/d DW for infants aged 9-11 mo. In each series of analyses, the nutritionally optimal diet and theoretical range, in diet nutrient contents, were determined. The LP analysis showed that baseline diets could not achieve the Recommended Nutrient Intake (RNI) for thiamin, riboflavin, niacin, folate, vitamin B-12, calcium, iron, and zinc (range: 14-91% of RNI in the optimal diets) and that none of the formulated CF products could cover the nutrient gaps for thiamin, niacin, iron, and folate (range: 22-86% of the RNI). Iron was the key limiting nutrient, for all modeled diets, achieving a maximum of only 48% of the RNI when CSB++ was included in the diet. Only WF and WF-L filled the nutrient gap for calcium. WF-L, CSB+, and CSB++ filled the nutrient gap for zinc (9- to 11-mo-olds). The formulated CF products improved the nutrient adequacy of complementary feeding diets but could not entirely cover the nutrient gaps. These results emphasize the value of using LP to evaluate special CF products during the intervention planning phase. The WF study was registered at controlled-trials.com as ISRCTN19918531.
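A toy linear program in the spirit of the "theoretical range" analyses described above is sketched below, assuming made-up foods, nutrient contents, and constraints; it is not Optifood's actual model.

```python
import numpy as np
from scipy.optimize import linprog

# maximize one nutrient (e.g. iron) subject to an energy budget and per-food
# serving limits; all numbers are illustrative placeholders
iron_per_serving = np.array([1.2, 0.3, 4.5])        # mg per serving of 3 hypothetical foods
energy_per_serving = np.array([120.0, 200.0, 90.0])  # kcal per serving
max_servings = [(0, 3), (0, 2), (0, 1)]              # allowed servings per day
energy_budget = 500.0                                # kcal available for these foods

# linprog minimizes, so negate the objective to maximize iron intake
res = linprog(c=-iron_per_serving,
              A_ub=[energy_per_serving], b_ub=[energy_budget],
              bounds=max_servings, method="highs")
print("maximum achievable iron (mg):", -res.fun)
```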
The Group Treatment of Bulimia.
ERIC Educational Resources Information Center
Weinstein, Harvey M.; Richman, Ann
1984-01-01
Bulimia has become an increasing problem in the college population. This article describes a group psychotherapeutic treatment approach to the problem. A theoretical formulation of the psychodynamics that may underlie the development of bulimia is offered. (Author/DF)
Hybrid Optimization in Urban Traffic Networks
DOT National Transportation Integrated Search
1979-04-01
The hybrid optimization problem is formulated to provide a general theoretical framework for the analysis of a class of traffic control problems which takes into account the role of individual drivers as independent decisionmakers. Different behavior...
Project : semi-autonomous parking for enhanced safety and efficiency.
DOT National Transportation Integrated Search
2016-04-01
Index coding, a coding formulation traditionally analyzed in the theoretical computer science and : information theory communities, has received considerable attention in recent years due to its value in : wireless communications and networking probl...
NASA Astrophysics Data System (ADS)
Rathod, Vishal
The objective of the present project was to develop ibuprofen-loaded nanostructured lipid carriers (IBU-NLCs) for topical ocular delivery based on substantial pre-formulation screening of the components and an understanding of the interplay between the formulation and process variables. The BCS Class II drug ibuprofen was selected as the model drug for the current study. IBU-NLCs were prepared by a melt emulsification and ultrasonication technique. Extensive pre-formulation studies were performed to screen the lipid components (solid and liquid) based on the drug's solubility and affinity as well as component compatibility. The results from DSC and XRD assisted in selecting the most suitable ratio to be utilized for future studies. Dynasan® 114 was selected as the solid lipid and Miglyol® 840 was selected as the liquid lipid based on preliminary lipid screening. The ratio of 6:4 was predicted to be the best based on its crystallinity index and thermal events. As there are many variables involved in further optimization of the formulation, a single design approach is not always adequate. A hybrid-design approach was applied by employing a Plackett-Burman design (PBD) for preliminary screening of 7 critical variables, followed by a Box-Behnken design (BBD), a subtype of response surface methodology (RSM) design, using the 2 relatively significant variables from the former design and incorporating the surfactant/co-surfactant ratio as the third variable. Comparatively, Kolliphor® HS15 demonstrated lower mean particle size (PS) and polydispersity index (PDI), and Kolliphor® P188 resulted in a zeta potential (ZP) < -20 mV during the surfactant screening and stability studies. Hence, the surfactant/co-surfactant ratio was employed as the third variable to understand its synergistic effect on the response variables. We selected PS, PDI, and ZP as critical response variables in the PBD since they significantly influence the stability and performance of NLCs. Formulations prepared using the BBD were further characterized and evaluated with respect to PS, PDI, ZP, and entrapment efficiency (EE) to identify the multi-factor interactions between the selected formulation variables. In vitro release studies were performed using a Spectra/Por dialysis membrane on a Franz diffusion cell with phosphate saline buffer (pH 7.4) as the medium. Samples for assay, EE, loading capacity (LC), solubility studies, and in vitro release were filtered using Amicon 50K and analyzed via a UPLC system (Waters) at a detection wavelength of 220 nm. Significant variables were selected through the PBD, and the third variable was incorporated based on the surfactant screening and stability studies for the next design. Assay of the BBD-based formulations was found to be within 95-104% of the theoretically calculated values. Further studies investigated PS, PDI, ZP, and EE. PS was found to be in the range of 103-194 nm with PDI ranging from 0.118 to 0.265. The ZP and EE were observed to be in the range of -22.2 to -11 mV and 90 to 98.7%, respectively. Drug release of 30% was observed from the optimized formulation in the first 6 h of in vitro studies, and the drug release showed a sustained release of ibuprofen thereafter over several hours. These values also confirm that the production method, and all other selected variables, effectively promoted the incorporation of ibuprofen in the NLCs. A Quality by Design (QbD) approach was successfully implemented in developing a robust ophthalmic formulation with superior physicochemical and morphometric properties. NLCs as a nanocarrier demonstrated a promising perspective for the topical delivery of poorly water-soluble drugs.
A new theoretical framework for modeling respiratory protection based on the beta distribution.
Klausner, Ziv; Fattal, Eyal
2014-08-01
The problem of modeling respiratory protection is well known and has been dealt with extensively in the literature. Often the efficiency of respiratory protection is quantified in terms of penetration, defined as the proportion of an ambient contaminant concentration that penetrates the respiratory protection equipment. Typically, the penetration modeling framework in the literature is based on the assumption that penetration measurements follow the lognormal distribution. However, the analysis in this study leads to the conclusion that the lognormal assumption is not always valid, making it less adequate for analyzing respiratory protection measurements. This work presents a formulation of the problem from first principles, leading to a stochastic differential equation whose solution is the probability density function of the beta distribution. The data of respiratory protection experiments were reexamined, and indeed the beta distribution was found to provide the data a better fit than the lognormal. We conclude with a suggestion for a new theoretical framework for modeling respiratory protection.
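A minimal sketch of comparing the beta and lognormal families on hypothetical penetration data, using standard SciPy maximum-likelihood fits, is shown below; the data are simulated for illustration only and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# hypothetical penetration measurements (fractions in (0, 1)); illustrative only
rng = np.random.default_rng(0)
pen = rng.beta(2.0, 30.0, size=200)

# fit both candidate families to the same data and compare log-likelihoods
a, b, loc, scale = stats.beta.fit(pen, floc=0, fscale=1)   # support fixed to [0, 1]
ll_beta = np.sum(stats.beta.logpdf(pen, a, b, loc, scale))

s, loc_ln, scale_ln = stats.lognorm.fit(pen, floc=0)
ll_lognorm = np.sum(stats.lognorm.logpdf(pen, s, loc_ln, scale_ln))

print(f"beta log-likelihood    : {ll_beta:.1f}")
print(f"lognorm log-likelihood : {ll_lognorm:.1f}")
```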
NASA Astrophysics Data System (ADS)
Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo
2018-05-01
An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
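A minimal sketch of the final identification step, fitting a stand-in one-parameter curve to a hypothetical extracted crack margin with differential evolution, is given below; the actual analytical projected-cutting-point curve is the one derived in the paper, not the placeholder used here.

```python
import numpy as np
from scipy.optimize import differential_evolution

# hypothetical extracted crack-margin points (x_i, y_i) from the 2D image
x_obs = np.linspace(0.0, 1.0, 50)
y_obs = 0.35 * np.sqrt(x_obs) + np.random.default_rng(0).normal(0, 0.01, x_obs.size)

def theoretical_margin(x, d_c):
    """Placeholder for the analytical curve parameterized by the critical DoC d_c."""
    return d_c * np.sqrt(x)

def deviation(params):
    (d_c,) = params
    return np.sum((theoretical_margin(x_obs, d_c) - y_obs) ** 2)

# minimize the margin-to-curve deviation over the assumed search bounds
result = differential_evolution(deviation, bounds=[(0.0, 1.0)], seed=0)
print("identified critical-DoC parameter:", result.x[0])
```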
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
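A compact sketch of the closed-form Kronecker kernel ridge regression solver that the unification above builds on is given below, assuming symmetric positive semi-definite kernels; the variable names and toy data are illustrative.

```python
import numpy as np

def kronecker_krr(K, G, Y, lam):
    """Closed-form Kronecker kernel ridge regression via eigendecompositions.

    K : (n, n) kernel over row objects, G : (m, m) kernel over column objects,
    Y : (n, m) label matrix, lam : ridge parameter.
    Solves (G kron K + lam*I) vec(A) = vec(Y) without forming the Kronecker product.
    """
    w, U = np.linalg.eigh(K)                  # K = U diag(w) U^T
    v, V = np.linalg.eigh(G)                  # G = V diag(v) V^T
    C = U.T @ Y @ V
    A = U @ (C / (np.outer(w, v) + lam)) @ V.T
    return K @ A @ G                          # fitted pairwise predictions

# toy example with random positive semi-definite kernels
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(8, 3)), rng.normal(size=(5, 4))
K, G = X1 @ X1.T, X2 @ X2.T
Y = rng.normal(size=(8, 5))
print(kronecker_krr(K, G, Y, lam=1.0).shape)  # (8, 5)
```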
NASA Astrophysics Data System (ADS)
Maurya, R. C.; Malik, B. A.; Mir, J. M.; Vishwakarma, P. K.; Rajak, D. K.; Jain, N.
2015-11-01
The present report pertains to the synthesis and combined experimental-DFT studies of a series of four novel mixed-ligand complexes of cobalt(II) of the general composition [Co(dha)(L)(H2O)2], where dhaH = dehydroacetic acid and LH = β-ketoenolates, viz., o-acetoacetotoluidide (o-aatdH), o-acetoacetanisidide (o-aansH), acetylacetone (acacH) or 1-benzoylacetone (1-bac). The resulting complexes were formulated based on elemental analysis, molar conductance, magnetic measurements, and mass spectrometric, IR, electronic, electron spin resonance and cyclic voltammetric studies. The TGA-based thermal behavior of one representative complex was evaluated. Molecular geometry optimizations and vibrational frequency calculations were performed with the Gaussian 09 software package using density functional theory (DFT) methods with the B3LYP/LANL2MB combination for dhaH and one of its complexes, [Co(dha)(1-bac)(H2O)2]. The theoretical data are in excellent agreement with the experimental results. Based on the experimental and theoretical data, a trans-octahedral structure has been proposed for the present class of complexes. Moreover, the complexes also showed satisfactory antibacterial activity.
NASA Technical Reports Server (NTRS)
Wilhelm, H. E.
1974-01-01
An analysis of the sputtering of metal surfaces and grids by ions of medium energies is given and it is shown that an exact, nonlinear, hyperbolic wave equation for the temperature field describes the transient transport of heat in metals. Quantum statistical and perturbation theoretical analysis of surface sputtering by low energy ions are used to develop the same expression for the sputtering rate. A transport model is formulated for the deposition of sputtered atoms on system components. Theoretical efforts in determining the potential distribution and the particle velocity distributions in low pressure discharges are briefly discussed.
Vibrational relaxation in hypersonic flow fields
NASA Technical Reports Server (NTRS)
Meador, Willard E.; Miner, Gilda A.; Heinbockel, John H.
1993-01-01
Mathematical formulations of vibrational relaxation are derived from first principles for application to fluid dynamic computations of hypersonic flow fields. Relaxation within and immediately behind shock waves is shown to be substantially faster than that described in current numerical codes. The result should be a significant reduction in nonequilibrium radiation overshoot in shock layers and in radiative heating of hypersonic vehicles; these results are precisely the trends needed to bring theoretical predictions more in line with flight data. Errors in existing formulations are identified and qualitative comparisons are made.
A review of depolarization modeling for earth-space radio paths at frequencies above 10 GHz
NASA Technical Reports Server (NTRS)
Bostian, C. W.; Stutzman, W. L.; Gaines, J. M.
1982-01-01
A review is presented of models for the depolarization, caused by scattering from raindrops and ice crystals, that limits the performance of dual-polarized satellite communication systems at frequencies above 10 GHz. The physical mechanisms of depolarization as well as theoretical formulations and empirical data are examined. Three theoretical models, the transmission, attenuation-derived, and scaling models, are described and their relative merits are considered.
A simple analytical model for dynamics of time-varying target leverage ratios
NASA Astrophysics Data System (ADS)
Lo, C. F.; Hui, C. H.
2012-03-01
In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
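A generic mean-reverting form that illustrates the roles of the adjustment pace, leverage volatility, and current ratio discussed above is shown below; it is a schematic single-firm equation, not the paper's coupled ensemble system treated with the nonlinear Fokker-Planck approach.

```latex
% Generic mean-reverting leverage dynamics of a single firm (schematic)
\begin{equation}
  \mathrm{d}L_t \;=\; \kappa\,\bigl(\theta_t - L_t\bigr)\,\mathrm{d}t \;+\; \sigma\,\mathrm{d}W_t ,
\end{equation}
% L_t : leverage ratio, \theta_t : time-varying target ratio, \kappa : pace of adjustment,
% \sigma : leverage volatility, W_t : standard Brownian motion.
```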
Duret, Christophe; Wauthoz, Nathalie; Sebti, Thami; Vanderbist, Francis; Amighi, Karim
2012-01-01
Purpose: Itraconazole (ITZ) dry powders for inhalation (DPI) composed of nanoparticles (NP) embedded in carrier microparticles were prepared and characterized. Methods: DPIs were initially produced by reducing the ITZ particle size to the nanometer range using high-pressure homogenization with tocopherol polyethylene glycol 1000 succinate (TPGS, 10% w/w ITZ) as a stabilizer. The optimized nanosuspension and the initial microsuspension were then spray-dried with different proportions of, or in the absence of, mannitol and/or sodium taurocholate. DPI characterization was performed using scanning electron microscopy for morphology, laser diffraction to evaluate the size-reduction process and the size of the dried NP when reconstituted in aqueous media, impaction studies using a multistage liquid impactor to determine the aerodynamic performance and the fine-particle fraction theoretically able to reach the lung, and dissolution studies to determine the solubility of ITZ. Results: Scanning electron microscopy micrographs showed that the DPI particles were composed of mannitol microparticles with embedded nano- or micro-ITZ crystals. The formulations prepared from the nanosuspension exhibited good flow properties and better fine-particle fractions, ranging from 46.2% ± 0.5% to 63.2% ± 1.7%, compared to the 23.1% ± 0.3% observed with the formulation produced from the initial microsuspension. Spray-drying affected the NP size by inducing irreversible aggregation, which could be minimized by the addition of mannitol and sodium taurocholate before the drying procedure. The ITZ NP-based DPI considerably increased the ITZ solubility (58 ± 2 increased to 96 ± 1 ng/mL) compared with that of raw ITZ or an ITZ microparticle-based DPI (<10 ng/mL). Conclusion: Embedding ITZ NP in inhalable microparticles is a very effective method to produce DPI formulations with optimal aerodynamic properties and enhanced ITZ solubility. These formulations could be applied to other poorly water-soluble drugs and could be a very effective alternative for treating invasive pulmonary aspergillosis.
Yan, Fei; Christmas, William; Kittler, Josef
2008-10-01
In this paper, we propose a multilayered data association scheme with graph-theoretic formulation for tracking multiple objects that undergo switching dynamics in clutter. The proposed scheme takes as input object candidates detected in each frame. At the object candidate level, "tracklets'' are "grown'' from sets of candidates that have high probabilities of containing only true positives. At the tracklet level, a directed and weighted graph is constructed, where each node is a tracklet, and the edge weight between two nodes is defined according to the "compatibility'' of the two tracklets. The association problem is then formulated as an all-pairs shortest path (APSP) problem in this graph. Finally, at the path level, by analyzing the APSPs, all object trajectories are identified, and track initiation and track termination are automatically dealt with. By exploiting a special topological property of the graph, we have also developed a more efficient APSP algorithm than the general-purpose ones. The proposed data association scheme is applied to tennis sequences to track tennis balls. Experiments show that it works well on sequences where other data association methods perform poorly or fail completely.
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented.
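For readers unfamiliar with the underlying robust counterparts, the deterministic constraints induced by the box and ellipsoidal uncertainty sets can be written schematically as below, using generic notation for nominal coefficients a, perturbation amplitudes â, and set sizes Ψ, Ω.

```latex
% Robust counterpart of a linear constraint \sum_j \tilde a_j x_j \le b
% with \tilde a_j = a_j + \hat a_j \xi_j (schematic)
\begin{align}
  \text{box set } (\|\xi\|_\infty \le \Psi):\quad
    & \sum_j a_j x_j + \Psi \sum_{j} \hat a_j\, |x_j| \;\le\; b, \\
  \text{ellipsoidal set } (\|\xi\|_2 \le \Omega):\quad
    & \sum_j a_j x_j + \Omega \sqrt{\sum_{j} \hat a_j^2\, x_j^2} \;\le\; b.
\end{align}
```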
Synthetic thrombus model for in vitro studies of laser thrombolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermes, R.E.; Trajkovska, K.
1998-07-01
Laser thrombolysis is the controlled ablation of a thrombus (blood clot) blockage in a living arterial system. Theoretical modeling of the interaction of laser light with thrombi relies on the ability to perform in vitro experiments with well characterized surrogate materials. A synthetic thrombus formulation may offer more accurate results when compared to in vivo clinical experiments. The authors describe the development of new surrogate materials based on formulations incorporating chick egg, guar gum, modified food starch, and a laser light absorbing dye. The sound speed and physical consistency of the materials were very close to porcine (arterial) and human (venous) thrombi. Photographic and videotape recordings of pulsed dye laser ablation experiments under various experimental conditions were used to evaluate the new material as compared to in vitro tests with human (venous) thrombus. The characteristics of ablation and mass removal were similar to those of real thrombi, and therefore provide a more realistic model for in vitro laser thrombolysis when compared to gelatin.
Sadhukhan, Banasree; Singh, Prashant; Nayak, Arabinda; ...
2017-08-09
We present a real-space formulation for calculating the electronic structure and optical conductivity of random alloys based on Kubo-Greenwood formalism interfaced with augmented space recursion technique formulated with the tight-binding linear muffin-tin orbital basis with the van Leeuwen–Baerends corrected exchange potential. This approach has been used to quantitatively analyze the effect of chemical disorder on the configuration averaged electronic properties and optical response of two-dimensional honeycomb siliphene Si_xC_{1-x} beyond the usual Dirac-cone approximation. We predicted the quantitative effect of disorder on both the electronic structure and optical response over a wide energy range, and the results are discussed in the light of the available experimental and other theoretical data. As a result, our proposed formalism may open up a facile way for planned band-gap engineering in optoelectronic applications.
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.
2017-09-01
Continuous fiber-reinforced composites are among the advanced materials with the greatest potential for commercialization in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established because of the complexity of their compositions and their incompletely understood failure mechanisms. This study proposes an effective three-dimensional damage modeling of a fibrous composite by combining analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using a modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.
A model-adaptivity method for the solution of Lennard-Jones based adhesive contact problems
NASA Astrophysics Data System (ADS)
Ben Dhia, Hachmi; Du, Shuimiao
2018-05-01
The surface micro-interaction model of Lennard-Jones (LJ) is used for adhesive contact problems (ACP). To address theoretical and numerical pitfalls of this model, a sequence of partitions of contact models is adaptively constructed to both extend and approximate the LJ model. It is formed by a combination of the LJ model with a sequence of shifted-Signorini (or, alternatively, -Linearized-LJ) models, indexed by a shift parameter field. For each model of this sequence, a weak formulation of the associated local ACP is developed. To track critical localized adhesive areas, a two-step strategy is developed: firstly, a macroscopic frictionless (as first approach) linear-elastic contact problem is solved once to detect contact separation zones. Secondly, at each shift-adaptive iteration, a micro-macro ACP is re-formulated and solved within the multiscale Arlequin framework, with significant reduction of computational costs. Comparison of our results with available analytical and numerical solutions shows the effectiveness of our global strategy.
A hybrid model for river water temperature as a function of air temperature and discharge
NASA Astrophysics Data System (ADS)
Toffolon, Marco; Piccolroaz, Sebastiano
2015-11-01
Water temperature controls many biochemical and ecological processes in rivers, and theoretically depends on multiple factors. Here we formulate a model to predict daily averaged river water temperature as a function of air temperature and discharge, with the latter variable being more relevant in some specific cases (e.g., snowmelt-fed rivers, rivers impacted by hydropower production). The model uses a hybrid formulation characterized by a physically based structure associated with a stochastic calibration of the parameters. The interpretation of the parameter values allows for better understanding of river thermal dynamics and the identification of the most relevant factors affecting it. The satisfactory agreement of different versions of the model with measurements in three different rivers (root mean square error smaller than 1 °C, at a daily timescale) suggests that the proposed model can represent a useful tool to synthetically describe medium- and long-term behavior, and capture the changes induced by varying external conditions.
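As a deliberately simplified illustration of the hybrid structure (a physically based right-hand side whose parameters are calibrated stochastically against observations), a minimal version of such a model could read

\[
\frac{dT_w}{dt} = a_1 + a_2\,T_a - a_3\,T_w + a_4\,\theta, \qquad \theta = \frac{Q}{\bar{Q}},
\]

where T_w is the river water temperature, T_a the air temperature, \theta the discharge normalized by its mean, and a_1, ..., a_4 are calibration parameters; the model versions actually compared in the paper include additional terms (e.g., seasonal forcing) and differ in how discharge enters.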
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
NASA Astrophysics Data System (ADS)
Latella, Ivan; Ben-Abdallah, Philippe; Biehs, Svend-Age; Antezza, Mauro; Messina, Riccardo
2017-05-01
A general theory of photon-mediated energy and momentum transfer in N-body planar systems out of thermal equilibrium is introduced. It is based on the combination of the scattering theory and the fluctuational-electrodynamics approach in many-body systems. By making a Landauer-like formulation of the heat transfer problem, explicit formulas for the energy transmission coefficients between two distinct slabs as well as the self-coupling coefficients are derived and expressed in terms of the reflection and transmission coefficients of the single bodies. We also show how to calculate local equilibrium temperatures in such systems. An analogous formulation is introduced to quantify momentum transfer coefficients describing Casimir-Lifshitz forces out of thermal equilibrium. Forces at thermal equilibrium are readily obtained as a particular case. As an illustration of this general theoretical framework, we show on three-body systems how the presence of a fourth slab can impact equilibrium temperatures in heat-transfer problems and equilibrium positions resulting from the forces acting on the system.
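For orientation, Landauer-like formulations of radiative heat transfer between two bodies at temperatures T_1 and T_2 generically take the form

\[
P_{1\to 2} = \int_0^\infty \frac{d\omega}{2\pi}\, \hbar\omega\,\bigl[n_1(\omega) - n_2(\omega)\bigr] \int \frac{d^2\mathbf{k}}{(2\pi)^2} \sum_{p} \mathcal{T}^{(p)}_{12}(\omega, \mathbf{k}), \qquad n_i(\omega) = \frac{1}{e^{\hbar\omega/k_B T_i} - 1},
\]

where the transmission coefficients \mathcal{T}^{(p)}_{12} (per polarization p and parallel wavevector \mathbf{k}) are the quantities the paper expresses through the individual slabs' reflection and transmission operators; this generic form is shown only for context and omits the self-coupling and momentum-transfer coefficients that the N-body theory also provides.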
Reolon, Luciano Antonio; Amaral-Machado, Lucas; Gremião, Maria Palmira Daflon; Guterres, Silvia S.
2018-01-01
Melanoma is the most aggressive and lethal type of skin cancer, with a poor prognosis because of the potential for metastatic spread. The aim was to develop innovative powder formulations for the treatment of metastatic melanoma based on micro- and nanocarriers containing 5-fluorouracil (5FU) for pulmonary administration, aiming at local and systemic action. Therefore, two innovative inhalable powder formulations were produced by spray-drying using chondroitin sulfate as a structuring polymer: (a) 5FU nanoparticles obtained by piezoelectric atomization (5FU-NS) and (b) 5FU microparticles with the mucoadhesive agent Methocel™ F4M for sustained release, produced by conventional spray drying (5FU-MS). The physicochemical and aerodynamic properties of both systems were evaluated in vitro, proving to be attractive for pulmonary delivery. The theoretical aerodynamic diameters obtained were 0.322 ± 0.07 µm (5FU-NS) and 1.138 ± 0.54 µm (5FU-MS). The fraction of respirable particles (FR%) was 76.84 ± 0.07% (5FU-NS) and 55.01 ± 2.91% (5FU-MS). The in vitro mucoadhesive properties exhibited significant adhesion efficiency in the presence of Methocel™ F4M. 5FU-MS and 5FU-NS were tested for their cytotoxic action on melanoma cancer cells (A2058 and A375), and both showed a cytotoxic effect similar to that of pure 5FU at concentrations 4.3- and 1.7-fold lower, respectively. PMID:29385692
Time-dependent theoretical treatments of the dynamics of electrons and nuclei in molecular systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deumens, E.; Diz, A.; Longo, R.
1994-07-01
An overview is presented of methods for time-dependent treatments of molecules as systems of electrons and nuclei. The theoretical details of these methods are reviewed and contrasted in the light of a recently developed time-dependent method called electron-nuclear dynamics. Electron-nuclear dynamics (END) is a formulation of the complete dynamics of electrons and nuclei of a molecular system that eliminates the necessity of constructing potential-energy surfaces. Because of its general formulation, it encompasses many aspects found in other formulations and can serve as a didactic device for clarifying many of the principles and approximations relevant in time-dependent treatments of molecular systems. The END equations are derived from the time-dependent variational principle applied to a chosen family of efficiently parametrized approximate state vectors. A detailed analysis of the END equations is given for the case of a single-determinantal state for the electrons and a classical treatment of the nuclei. The approach leads to a simple formulation of the fully nonlinear time-dependent Hartree-Fock theory including nuclear dynamics. The nonlinear END equations with the ab initio Coulomb Hamiltonian have been implemented at this level of theory in a computer program, ENDyne, and have been shown feasible for the study of small molecular systems. Implementation of the Austin Model 1 semiempirical Hamiltonian is discussed as a route to large molecular systems. The linearized END equations at this level of theory are shown to lead to the random-phase approximation for the coupled system of electrons and nuclei. The qualitative features of the general nonlinear solution are analyzed using the results of the linearized equations as a first approximation. Some specific applications of END are presented, and the comparison with experiment and other theoretical approaches is discussed.
ERIC Educational Resources Information Center
Connell, David B.
1982-01-01
General systems theory provides a theoretical framework for understanding stress and formulating problem-solving strategies. Both individuals and schools are systems, and general systems theory enables one to ask whether they are operating harmoniously and communicating effectively. (Author/RW)
Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation
NASA Astrophysics Data System (ADS)
Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro
2018-02-01
We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
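In schematic form (notation simplified from the paper), the basic ingredient is the relative entropy between the target outcome distribution and the approximating one,

\[
D(p\,\|\,q) = \sum_x p(x)\,\log\frac{p(x)}{q(x)},
\]

and the entropic incompatibility degree described above can be written, up to details of the optimization domains, as

\[
c(A,B) = \inf_{M}\; \sup_{\rho}\; \Bigl[ D\bigl(p^{\rho}_{A} \,\big\|\, p^{\rho}_{M_1}\bigr) + D\bigl(p^{\rho}_{B} \,\big\|\, p^{\rho}_{M_2}\bigr) \Bigr],
\]

where M ranges over approximate joint measurements with margins M_1, M_2, \rho over states, and p^{\rho}_{X} denotes the outcome distribution of X in the state \rho; this is a sketch of the structure, not the paper's exact definitions.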
Sedikides, Constantine; Gebauer, Jochen E
2010-02-01
In a meta-analysis, the authors test the theoretical formulation that religiosity is a means for self-enhancement. The authors operationalized self-enhancement as socially desirable responding (SDR) and focused on three facets of religiosity: intrinsic, extrinsic, and religion-as-quest. Importantly, they assessed two moderators of the relation between SDR and religiosity. Macro-level culture reflected countries that varied in degree of religiosity (from high to low: United States, Canada, United Kingdom). Micro-level culture reflected U.S. universities high (Christian) versus low (secular) on religiosity. The results were generally consistent with the theoretical formulation. Both macro-level and micro-level culture moderated the relation between SDR and religiosity: This relation was more positive in samples that placed higher value on religiosity (United States > Canada > United Kingdom; Christian universities > secular universities). The evidence suggests that religiosity is partly in the service of self-enhancement.
Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.
Fung, Wai-keung; Liu, Yun-hui
2003-12-01
Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Two difficulties arise in online robot behavior learning: (1) memory increases exponentially with time, and (2) it is difficult for operators to specify the required learning accuracy and to control learning attention before learning begins. To remedy these difficulties, an adaptive categorization mechanism is introduced into ART networks for categorizing perceptual and action patterns in this paper. A game-theoretic formulation of adaptive categorization for ART networks is proposed, in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance parameter update rule helps improve categorization performance in terms of category-number stability and solves the problem of selecting the initial vigilance parameter prior to pattern categorization in traditional ART networks. Behavior learning using a physical robot is conducted to demonstrate the effectiveness of the proposed adaptive categorization mechanism in ART networks.
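For readers unfamiliar with the role of the vigilance parameter, the sketch below shows a minimal ART-1-style categorization loop for binary patterns with a fixed vigilance rho; raising rho produces finer categories. The game-theoretic rule that adapts rho online, which is the paper's contribution, is not reproduced here, and all names and values are illustrative.

    import numpy as np

    def art1_categorize(patterns, rho=0.7, beta=1.0):
        """Minimal ART-1-style fast learning for binary patterns (vigilance rho in (0, 1])."""
        prototypes, labels = [], []
        for p in patterns:
            # Category choice: rank existing prototypes by choice score.
            scores = [np.sum(np.minimum(p, w)) / (beta + np.sum(w)) for w in prototypes]
            assigned = False
            for j in np.argsort(scores)[::-1]:
                w = prototypes[j]
                if np.sum(np.minimum(p, w)) / np.sum(p) >= rho:  # vigilance (match) test
                    prototypes[j] = np.minimum(p, w)             # fast learning: shrink template
                    labels.append(int(j))
                    assigned = True
                    break
            if not assigned:                                     # no resonance: create new category
                prototypes.append(p.astype(float).copy())
                labels.append(len(prototypes) - 1)
        return prototypes, labels

    rng = np.random.default_rng(0)
    patterns = (rng.random((20, 16)) < 0.4).astype(float)        # random binary inputs
    patterns[:, 0] = 1.0                                         # ensure no all-zero pattern
    protos, labels = art1_categorize(patterns, rho=0.8)
    print(len(protos), "categories for vigilance 0.8")

Increasing the vigilance forces more (smaller) categories and hence more memory, which is precisely the growth that the adaptive vigilance rule is designed to manage.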
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we will present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be cast as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
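A standard information-theoretic quantity that captures this notion of "worth" is the mutual information between the system health state X and a candidate sensor's measurement Y,

\[
I(X;Y) = H(X) - H(X \mid Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\]

i.e., the expected reduction in uncertainty about X obtained by observing Y; whether the model described above uses exactly this functional or a related one is not specified in the abstract, so the formula is given only as orientation.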
A PetriNet-Based Approach for Supporting Traceability in Cyber-Physical Manufacturing Systems
Huang, Jiwei; Zhu, Yeping; Cheng, Bo; Lin, Chuang; Chen, Junliang
2016-01-01
With the growing popularity of complex dynamic activities in manufacturing processes, traceability of the entire life of every product has drawn significant attention especially for food, clinical materials, and similar items. This paper studies the traceability issue in cyber-physical manufacturing systems from a theoretical viewpoint. Petri net models are generalized for formulating dynamic manufacturing processes, based on which a detailed approach for enabling traceability analysis is presented. Models as well as algorithms are carefully designed, which can trace back the lifecycle of a possibly contaminated item. A practical prototype system for supporting traceability is designed, and a real-life case study of a quality control system for bee products is presented to validate the effectiveness of the approach. PMID:26999141
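A minimal sketch of the backward-tracing idea, assuming a Petri-net-like event log in which each fired transition records the input items (tokens) consumed and the output items produced; the item names, transitions, and data structure are hypothetical, and the paper's models and algorithms are richer than this.

    # Each fired transition records which input items produced which output items.
    events = [
        {"transition": "mix",    "inputs": {"honey_lot_A", "wax_lot_B"}, "outputs": {"batch_1"}},
        {"transition": "bottle", "inputs": {"batch_1"},                  "outputs": {"jar_17", "jar_18"}},
    ]

    def trace_back(item, events):
        """Return every upstream item in the lifecycle of a possibly contaminated `item`."""
        upstream, frontier = set(), {item}
        while frontier:
            step = set()
            for e in events:
                if frontier & e["outputs"]:
                    step |= e["inputs"] - upstream
            upstream |= step
            frontier = step
        return upstream

    print(trace_back("jar_17", events))  # {'batch_1', 'honey_lot_A', 'wax_lot_B'} in some order

In a real cyber-physical setting the event log would be populated from the generalized Petri net models as the manufacturing process fires transitions.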
Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John
2007-01-01
The new version of the HZETRN deterministic transport code based on Green's function methods, and the incorporation of ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures to include off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures to compute ion beam broadening and a methodology towards achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we can compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.
Interpersonal psychotherapy for depressed adolescents (IPT-A).
Brunstein-Klomek, Anat; Zalsman, Gil; Mufson, Laura
2007-01-01
Recently the Food and Drug Administration (FDA) published a black box warning on the use of serotonin receptor reuptake inhibitors for adolescent depression. This situation makes the non-pharmacological therapeutic alternatives more relevant than ever before. The aim of this review is to introduce the theoretical formulation, practical application and efficacy studies of Interpersonal Psychotherapy for depressed adolescents (IPT-A). A review is offered of published papers in peer-reviewed journals, books and edited chapters using Medline and PsychInfo publications between 1966 and February 2005. IPT-A is an evidence-based psychotherapy for depressed adolescents in both hospital-based and community outpatient settings. IPT-A is a brief and efficient therapy for adolescent depression. Training programs for child psychologists and psychiatrists are recommended.
Research and Development of High-performance Explosives
Cornell, Rodger; Wrobel, Erik; Anderson, Paul E.
2016-01-01
Developmental testing of high explosives for military applications involves small-scale formulation, safety testing, and finally detonation performance tests to verify theoretical calculations. For newly developed formulations, the process begins with small-scale mixes, thermal testing, and impact and friction sensitivity. Only then do subsequent larger scale formulations proceed to detonation testing, which will be covered in this paper. Recent advances in characterization techniques have led to unparalleled precision in the characterization of the early-time evolution of detonations. The new technique of photonic Doppler velocimetry (PDV) for the measurement of detonation pressure and velocity will be shared and compared with traditional fiber-optic measurement of detonation velocity and plate-dent calculation of detonation pressure. In particular, the role of aluminum in explosive formulations will be discussed. Recent work has led to explosive formulations in which the aluminum reacts very early in the detonation product expansion. This enhanced reaction leads to changes in the detonation velocity and pressure due to reaction of the aluminum with oxygen in the expanding gas products. PMID:26966969
Kawakami, Kohsaku
2012-05-01
New chemical entities are required to possess physicochemical characteristics that result in acceptable oral absorption. However, many promising candidates need physicochemical modification or application of special formulation technology. This review discusses strategies for overcoming physicochemical problems during the development at the preformulation and formulation stages with emphasis on overcoming the most typical problem, low solubility. Solubility of active pharmaceutical ingredients can be improved by employing metastable states, salt forms, or cocrystals. Since the usefulness of salt forms is well recognized, it is the normal strategy to select the most suitable salt form through extensive screening in the current developmental study. Promising formulation technologies used to overcome the low solubility problem include liquid-filled capsules, self-emulsifying formulations, solid dispersions, and nanosuspensions. Current knowledge for each formulation is discussed from both theoretical and practical viewpoints, and their advantages and disadvantages are presented. Copyright © 2012 Elsevier B.V. All rights reserved.
Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.
2011-01-01
Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is unfeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistics theory that ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Differently from previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps with probabilities to reach any given cell EUR, which are useful to visualize geographical variations in prospectivity.
Multiplicative Multitask Feature Learning
Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu
2016-01-01
We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
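In schematic form (regularizers and notation simplified from the paper), the multiplicative framework decomposes each task's weight vector as w_t = c \odot v_t and solves

\[
\min_{c \ge 0,\, \{v_t\}} \;\sum_{t=1}^{T} \sum_{i=1}^{n_t} L\bigl(y_{ti},\, x_{ti}^{\top}(c \odot v_t)\bigr) \;+\; \lambda_1 \|c\|_p^p \;+\; \lambda_2 \sum_{t=1}^{T} \|v_t\|_q^q,
\]

where c is the component shared across all T tasks, v_t is task-specific, L is a loss, and the choices of p and q generate a family of formulations; the exact constraints and regularizers analyzed in the paper may differ from this sketch.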
Fly-by-feel aeroservoelasticity
NASA Astrophysics Data System (ADS)
Suryakumar, Vishvas Samuel
Recent experiments have suggested a strong correlation between local flow features on the airfoil surface such as the leading edge stagnation point (LESP), transition or the flow separation point with global integrated quantities such as aerodynamic lift. "Fly-By-Feel" refers to a physics-based sensing and control framework where local flow features are tracked in real-time to determine aerodynamic loads. This formulation offers possibilities for the development of robust, low-order flight control architectures. An essential contribution towards this objective is the theoretical development showing the direct relationship of the LESP with circulation for small-amplitude, unsteady, airfoil maneuvers. The theory is validated through numerical simulations and wind tunnel tests. With the availability of an aerodynamic observable, a low-order, energy-based control formulation is derived for aeroelastic stabilization and gust load alleviation. The sensing and control framework is implemented on the Nonlinear Aeroelastic Test Apparatus at Texas A&M University. The LESP is located using hot-film sensors distributed around the wing leading edge. Stabilization of limit cycle oscillations exhibited by a nonlinear wing section is demonstrated in the presence of gusts. Aeroelastic stabilization is also demonstrated on a flying wing configuration exhibiting body freedom flutter through numerical simulations.
Khetan, Abhishek; Krishnamurthy, Dilip; Viswanathan, Venkatasubramanian
2018-03-20
One route toward sustainable land and aerial transportation is based on electrified vehicles. To enable electrification in transportation, there is a need for high-energy-density batteries, and this has led to an enormous interest in lithium-oxygen batteries. Several critical challenges remain with respect to realizing a practical lithium-oxygen battery. In this article, we present a detailed overview of theoretical efforts to formulate design principles for identifying stable electrolytes and electrodes with the desired functionality and stability. We discuss design principles relating to electrolytes and the additional stability challenges that arise at the cathode-electrolyte interface. Based on a thermodynamic analysis, we discuss two important requirements for the cathode: the ability to nucleate the desired discharge product, Li2O2, and the ability to selectively activate only this discharge product while suppressing lithium oxide, the undesired secondary discharge product. We propose preliminary guidelines for determining the chemical stability of the electrode and illustrate the challenge associated with electrode selection using the examples of carbon cathodes and transition metals. We believe that a synergistic design framework for identifying electrolyte-electrode formulations is needed to realize a practical Li-O2 battery.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
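For context, a standard random-effects specification for log-transformed pharmacokinetic responses in a replicate crossover design (not necessarily the exact covariance structure used in the EQUIGEN analyses) is

\[
\log y_{ijk} = \mu + \gamma_i + \pi_j + \phi_{d(i,j)} + s_{ik} + \varepsilon_{ijk}, \qquad s_{ik} \sim N(0, \sigma_s^2), \quad \varepsilon_{ijk} \sim N(0, \sigma_e^2),
\]

where i indexes sequence, j period, k subject within sequence, and d(i,j) the formulation administered; \gamma, \pi, \phi are fixed sequence, period, and formulation effects, and the subject effects s_{ik} are random. Power and sample-size calculations then follow from the distribution of the estimated formulation contrast (test minus reference) under this model.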
NASA Astrophysics Data System (ADS)
Xu-Xu, J.; Barrero-Gil, A.; Velazquez, A.
2015-11-01
This paper presents a theoretical study of the coupling between a vortex-induced vibration (VIV) cylindrical resonator and its associated linear electromagnetic generator. The two-equation mathematical model is based on a dual-mass formulation in which the dominant masses are the stator and translator masses of the generator. The fluid-structure interaction implemented in the model equations follows the so-called ‘advanced forcing model’ whose closure relies on experimental data. The rationale to carry out the study is the fact that in these types of configurations there is a two-way interaction between the moving parts in such a way that their motions influence each other simultaneously, thereby affecting the energy actually harvested. It is believed that instead of mainly resorting to complementary numerical simulations, a theoretical model can shed some light on the nature of the interaction and, at the same time, provide scaling laws that can be used for practical design and optimization purposes. It has been found that the proposed configuration has a maximum hydrodynamic to mechanical to electrical conversion efficiency (based on the VIV resonator oscillation amplitude) of 8%. For a cylindrical resonator 10 cm long with a 2 cm diameter, this translates into an output power of 20 to 160 mW for water stream velocities in the range from 0.5 to 1 m/s.
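A generic dual-mass sketch of the kind of coupled system described (translator t, stator s, with an electromagnetic damping coefficient c_em standing in for the generator) could be written as

\[
m_t \ddot{x}_t + c_{em}(\dot{x}_t - \dot{x}_s) + k(x_t - x_s) = F_{\mathrm{VIV}}(t), \qquad
m_s \ddot{x}_s + c_s \dot{x}_s + k_s x_s - c_{em}(\dot{x}_t - \dot{x}_s) - k(x_t - x_s) = 0,
\]

with instantaneous harvested power P = c_{em}(\dot{x}_t - \dot{x}_s)^2. This is only a schematic of the two-equation structure, with the fluid forcing F_VIV placed on the translator for illustration; the paper's closure for the forcing follows the experimentally calibrated ‘advanced forcing model’.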
Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Kara, Imdat; Derya, Tusan
2011-09-01
The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. It would be beneficial to construct new formulations, which is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for DCVRP. One node-based and one arc (flow) based formulation for DCVRP are presented. Both formulations have O(n^2) binary variables and O(n^2) constraints, i.e., the number of decision variables and constraints grows as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (the Water's formulation referred to in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations by using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.
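As an illustration of what a two-index node-based formulation looks like (generic MTZ-style constraints, not the paper's exact ones, and with depot handling omitted), let the binary variable x_{ij} indicate that a vehicle travels directly from node i to node j, let q_i and d_{ij} denote demands and distances, and let Q and D be the capacity and distance limits; auxiliary node variables u_i (load on arrival) and w_i (distance traveled on arrival) then propagate along used arcs via

\[
u_j \ge u_i + q_j - Q\,(1 - x_{ij}), \qquad q_i \le u_i \le Q,
\]
\[
w_j \ge w_i + d_{ij} - D\,(1 - x_{ij}), \qquad w_i \le D,
\]

for all customer pairs i \ne j. Constraints of this type simultaneously eliminate subtours and enforce the capacity and distance restrictions using O(n^2) variables and constraints.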
Tangwa, G B
2004-02-01
In this paper, the author attempts to explore some of the problems connected with the formulation and application of international biomedical ethical guidelines, with particular reference to Africa. Recent attempts at revising and updating some international medical ethical guidelines have been bedevilled by intractable controversies and wrangling regarding both the content and formulation. From the vantage position of relative familiarity with both African and Western contexts, and the privilege of having been involved in the revision and updating of one of the international ethical guidelines, the author reflects broadly on these issues and attempts prescribing an approach from both the theoretical and practical angles liable to mitigate, if not completely eliminate, some of the problems and difficulties.
[Study on the optimal extraction process of chaihushugan powder].
Wang, Chun-yan; Zhang, Wan-ming; Zhang, Dan-shen; An, Fang; Tian, Jia-ming
2009-11-01
To study the optimal extraction process of chaihushugan powder by orthogonal design. An RP-HPLC method was developed for the determination of saikosaponin a, ferulic acid, hesperidin and paeoniflorin in chaihushugan powder. The contents of these components and the extraction yield were selected as assessment indices. Four factors were studied with an L9(3^4) orthogonal design: the alcohol concentration, the amount of alcohol, the duration of extraction and the number of extractions. The optimal extraction conditions were 80% alcohol at 10 times the amount of crude herb, extracted twice for 90 min each time. This study provides a theoretical basis for the development of the chaihushugan powder formulation.
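For readers unfamiliar with the L9(3^4) design, the sketch below lists the standard L9 orthogonal array and the usual level-mean analysis used to pick the best setting of each factor; the response values are placeholders, not the study's data, and the factor order is assumed.

    import numpy as np

    # Standard L9(3^4) orthogonal array: 9 runs x 4 three-level factors
    # (here assumed to be alcohol concentration, alcohol amount, extraction time, extraction count).
    L9 = np.array([
        [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
        [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
        [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
    ])
    response = np.array([62.1, 70.4, 68.9, 74.2, 81.5, 77.0, 71.3, 79.8, 75.6])  # hypothetical composite index

    for f in range(L9.shape[1]):
        level_means = [response[L9[:, f] == level].mean() for level in (1, 2, 3)]
        print(f"factor {f + 1}: level means = {np.round(level_means, 2)}")

The level with the highest mean response for each factor would be selected as the optimum, which is how an L9 screening arrives at conditions such as those reported above.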
Initial Development and Validation of the Mexican Intercultural Competence Scale
Torres, Lucas
2013-01-01
The current project sought to develop the Mexican Intercultural Competence Scale (MICS), which assesses group-specific skills and attributes that facilitate effective cultural interactions, among adults of Mexican descent. Study 1 involved an Exploratory Factor Analysis (N = 184) that identified five factors including Ambition/Perseverance, Networking, the Traditional Latino Culture, Family Relationships, and Communication. In Study 2, a Confirmatory Factor Analysis provided evidence for the 5-factor model for adults of Mexican origin living in the Midwest (N = 341) region of the U.S. The general findings are discussed in terms of a competence-based formulation of cultural adaptation and include theoretical and clinical implications. PMID:24058890
Simple radiative transfer model for relationships between canopy biomass and reflectance
NASA Technical Reports Server (NTRS)
Park, J. K.; Deering, D. W.
1982-01-01
A modified Kubelka-Munk model has been utilized to derive useful equations for the analysis of apparent canopy reflectance. Based on the solution to the model simple working equations were formulated by employing reflectance characteristic parameters. The relationships derived show the asymptotic nature of reflectance data that is typically observed in remote sensing studies of plant biomass. They also establish the range of expected apparent canopy reflectance values for specific plant canopy types. The usefulness of the simplified equations was demonstrated by the exceptionally close fit of the theoretical curves to two separately acquired data sets for alfalfa and shortgrass prairie canopies.
Stability and sensitivity of ABR flow control protocols
NASA Astrophysics Data System (ADS)
Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong
1998-10-01
This tutorial paper surveys the important issues in stability and sensitivity analysis of ABR flow control of ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variations of available bandwidth with delayed feedback control, misbehaving components, and interactions between higher layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be the fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.
Smith-Osborne, Alexa; Felderhoff, Brandi
2014-01-01
Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.
The free-electron laser - Maxwell's equations driven by single-particle currents
NASA Technical Reports Server (NTRS)
Colson, W. B.; Ride, S. K.
1980-01-01
It is shown that if single particle currents are coupled to Maxwell's equations, the resulting set of self-consistent nonlinear equations describes the evolution of the electron beam and the amplitude and phase of the free-electron-laser field. The formulation is based on the slowly varying amplitude and phase approximation, and the distinction between microscopic and macroscopic scales, which distinguishes the microscopic bunching from the macroscopic pulse propagation. The capabilities of this new theoretical approach become apparent when its predictions for the ultrashort pulse free-electron laser are compared to experimental data; the optical pulse evolution, determined simply and accurately, agrees well with observations.
Analysis of a human phenomenon: self-concept.
LeMone, P
1991-01-01
This analysis of self-concept includes an examination of definitions, historical perspectives, theoretical basis, and closely related terms. Antecedents, consequences, defining attributes, and a definition were formulated based on the analysis. The purpose of the analysis was to provide support for the use of the label "self-concept" as a broad category that encompasses the self-esteem, identity, and body-image nursing diagnoses within Taxonomy I. This classification could allow the use of a broad diagnostic label to better describe conditions that necessitate nursing care. It may also further explain the relationships between and among those diagnoses that describe human responses to disturbance of any component of the self-concept.
[Medical power and the crisis in bonds of trust within contemporary medicine].
Azeredo, Yuri Nishijima; Schraiber, Lilia Blima
2016-01-01
Based on the Brazilian context, this paper addresses medical power in terms of the current conflicts in the intersubjective relationships that doctors establish in their work, conflicts considered here as a product of a crisis of trust connected to recent historical transformations in medical practice. Reading these conflicts as questions of an ethical and moral order, we use Hannah Arendt's theoretical formulations to further analyze this crisis of trust. In this way, utilizing the concepts of "crisis," "tradition," "power," "authority," and "natality," we search for new meanings regarding these conflicts, enabling new paths and solutions that avoid nostalgia for the past.
Gradient and size effects on spinodal and miscibility gaps
NASA Astrophysics Data System (ADS)
Tsagrakis, Ioannis; Aifantis, Elias C.
2018-05-01
A thermodynamically consistent model of strain gradient elastodiffusion is developed. Its formulation is based on the enhancement of a robust theory of gradient elasticity, known as GRADELA, to account for a Cahn-Hilliard type of diffusion. Linear stability analysis is employed to determine the influence of concentration and strain gradients on the spinodal decomposition. For finite domains, spherically symmetric conditions are considered, and size effects on spinodal and miscibility gaps are discussed. The theoretical predictions are in agreement with the experimental trends, i.e., both gaps shrink as the grain diameter decreases and they are completely eliminated for crystals smaller than a critical size.
Acoustic characteristics of the medium with gradient change of impedance
NASA Astrophysics Data System (ADS)
Hu, Bo; Yang, Desen; Sun, Yu; Shi, Jie; Shi, Shengguo; Zhang, Haoyang
2015-10-01
The medium with gradient change of acoustic impedance is a new acoustic structure developed from multiple-layer structures. In this paper, an inclusion is introduced and a new set of equations is developed. Better acoustic properties can be obtained with a medium whose acoustic impedance changes gradually. The theoretical formulation is developed systematically to demonstrate how this method can be applied. The sound reflection and absorption coefficients were obtained. Finally, the validity and correctness of the method are assessed by simulations. The results show that appropriate design of the medium parameters can improve underwater acoustic properties.
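The motivation for grading the impedance can be seen from the standard normal-incidence reflection coefficient at a step between two media of characteristic acoustic impedances Z_1 and Z_2,

\[
R = \frac{Z_2 - Z_1}{Z_2 + Z_1},
\]

which vanishes only when the impedances match; replacing the abrupt step by a gradual transition keeps the local mismatch, and hence the reflection, small at every depth. This classical relation is given only as background and is not the new set of equations derived in the paper.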
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
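A minimal numerical sketch of the entropy estimator (the cell partition, ensemble, and spreading law below are placeholder choices, not those of the paper): bin an ensemble of orbits into phase-space cells, compute the Shannon entropy of the occupation fractions, and watch it grow as the ensemble spreads.

    import numpy as np

    def shannon_entropy(points, bins=50, bounds=(0.0, 1.0)):
        """Shannon entropy of an orbit ensemble from the occupation of phase-space cells."""
        hist, _ = np.histogramdd(points, bins=bins, range=[bounds] * points.shape[1])
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(1)
    ensemble = 0.5 + 0.01 * rng.standard_normal((2000, 2))          # small initial ensemble in a 2-D section
    for t in (0, 10, 100, 1000):
        # Placeholder diffusive spreading standing in for iterating the actual dynamics.
        spread = ensemble + np.sqrt(2.0e-5 * t) * rng.standard_normal(ensemble.shape)
        S = shannon_entropy(np.clip(spread, 0.0, 1.0))
        print(f"t = {t:5d}   S = {S:.3f}")

The time derivative S' extracted from such a sequence is the quantity that, after normalization, plays the role of the diffusion coefficient in the near-ergodic limit.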
NASA Technical Reports Server (NTRS)
Cook, W. J.
1973-01-01
A theoretical study of heat transfer in zero-pressure-gradient hypersonic laminar boundary layers for various gases, with particular application to the flows produced in an expansion tube facility, was conducted. A correlation based on results obtained from solutions to the governing equations for five gases was formulated. Particular attention was directed toward the laminar boundary layer on shock tube splitter plates in carbon dioxide flows generated by high speed shock waves. Computer analysis of the splitter plate boundary layer flow provided information that is useful in interpreting experimental data obtained in shock tube gas radiation studies.
Generalized fractional diffusion equations for subdiffusion in arbitrarily growing domains
NASA Astrophysics Data System (ADS)
Angstmann, C. N.; Henry, B. I.; McGann, A. V.
2017-10-01
The ubiquity of subdiffusive transport in physical and biological systems has led to intensive efforts to provide robust theoretical models for these phenomena. These models often involve fractional derivatives. The important physical extension of this work to processes occurring in growing materials has proven highly nontrivial. Here we derive evolution equations for modeling subdiffusive transport in a growing medium. The derivation is based on a continuous-time random walk. The concise formulation of these evolution equations requires the introduction of a new, comoving, fractional derivative. The implementation of the evolution equation is illustrated with a simple model of subdiffusing proteins in a growing membrane.
Tracer attenuation in groundwater
NASA Astrophysics Data System (ADS)
Cvetkovic, Vladimir
2011-12-01
The self-purifying capacity of aquifers strongly depends on the attenuation of waterborne contaminants, i.e., irreversible loss of contaminant mass on a given scale as a result of coupled transport and transformation processes. A general formulation of tracer attenuation in groundwater is presented. Basic sensitivities of attenuation to macrodispersion and retention are illustrated for a few typical retention mechanisms. Tracer recovery is suggested as an experimental proxy for attenuation. Unique experimental data of tracer recovery in crystalline rock compare favorably with the theoretical model that is based on diffusion-controlled retention. Non-Fickian hydrodynamic transport has potentially a large impact on field-scale attenuation of dissolved contaminants.
Amplified total internal reflection: theory, analysis, and demonstration of existence via FDTD.
Willis, Keely J; Schneider, John B; Hagness, Susan C
2008-02-04
The explanation of wave behavior upon total internal reflection from a gainy medium has defied consensus for 40 years. We examine this question using both the finite-difference time-domain (FDTD) method and theoretical analyses. FDTD simulations of a localized wave impinging on a gainy half space are based directly on Maxwell's equations and make no underlying assumptions. They reveal that amplification occurs upon total internal reflection from a gainy medium; conversely, amplification does not occur for incidence below the critical angle. Excellent agreement is obtained between the FDTD results and an analytical formulation that employs a new branch cut in the complex "propagation-constant" plane.
Static and dynamic response of a sandwich structure under axial compression
NASA Astrophysics Data System (ADS)
Ji, Wooseok
This thesis is concerned with a combined experimental and theoretical investigation of the static and dynamic response of an axially compressed sandwich structure. For the static response problem of sandwich structures, a two-dimensional mechanical model is developed to predict the global and local buckling of a sandwich beam, using classical elasticity. The face sheet and the core are assumed to be linear elastic orthotropic continua in a state of planar deformation. General buckling deformation modes (periodic and non-periodic) of the sandwich beam are considered. On the basis of the model developed here, the validation and accuracy of several previous theories are discussed for different geometric and material properties of a sandwich beam. The appropriate incremental stress and conjugate incremental finite strain measure for the instability problem of the sandwich beam, and the corresponding constitutive model, are addressed. The formulation used in the commercial finite element package is discussed in relation to the formulation adopted in the theoretical derivation. The dynamic response problem of a sandwich structure subjected to axial impact by a falling mass is also investigated. The dynamic counterpart of the celebrated Euler buckling problem is formulated first and solved by considering the case of a slender column that is impacted by a falling mass. A new notion, that of the time to buckle, "t*", is introduced, which is the corresponding critical quantity analogous to the critical load in static Euler buckling. The dynamic bifurcation buckling analysis is extended to thick sandwich structures using an elastic foundation model. A comprehensive set of impact test results for sandwich columns with various configurations is presented. Failure mechanisms and the temporal history of how a sandwich column responds to axial impact are discussed through the experimental results. The experimental results are compared against analytical dynamic buckling studies and finite element based simulation of the impact event.
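For reference, the static benchmark to which the time to buckle t* is the dynamic analogue is the classical Euler critical load

\[
P_{\mathrm{cr}} = \frac{\pi^2 E I}{(K L)^2},
\]

for a column of length L, flexural rigidity EI and effective-length factor K; in the dynamic problem the applied load history, rather than a single load level, determines whether and when buckling occurs, which is what t* quantifies in the thesis.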
From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.
Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T
2018-01-01
Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.
Coastal Planning for Sustainable Maritime Management
NASA Astrophysics Data System (ADS)
Hakim, F.; Santoso, E. B.; Supriharjo, R.
2017-08-01
Kendari Bay is a unique asset and tourist attraction for the residents of the city of Kendari. The coastal area, with all its potential as green open space, mangrove forest, and play areas, is still the main destination for visitors. The function of the Kendari Bay area as a tourist attraction makes it a place with potential as a center of vibrant economic activity and social interaction. Unfortunately, the area has not yet been properly arranged, so its integrated development is not optimal. It is therefore important to promote a development concept for the area as a coastal tourist destination in order to improve its function. The concept for developing the coastal area of Kendari Bay as a tourist area is formulated from development criteria that influence its capacity to attract tourists. The criteria are derived from the factors that play a role in the development of tourist areas and are further explored through qualitative descriptive analysis based on information from respondents. The resulting criteria were consolidated through descriptive analysis against theoretical references from the literature and regulations concerning criteria for tourism development. To formulate the concept of tourism development, a qualitative descriptive analysis technique was used, with validation by triangulation. The concept of tourism development based on the potential of the region is divided into three zones: the core zone, the direct supporting zone and the indirect supporting zone. The macro spatial concept addresses the development of the area through improved accessibility to tourist attractions, while the micro spatial concept includes improvements and additions to the activities in each zone to provide convenient facilities for tourists.
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
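A minimal sketch of this kind of saturation simulation, under assumed parameters (the number of codes and the per-source probability of observing a code are hypothetical, not taken from the paper): it estimates the mean sample size needed to observe every code at least once under the "random chance" scenario.

```python
import random

def sample_size_to_saturation(n_codes=30, p_observe=0.2, n_runs=2000, seed=1):
    """Mean number of sources sampled until every code has been observed once.

    Each sampled source is assumed to exhibit each code independently with
    probability p_observe ("random chance" sampling).
    """
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(n_runs):
        seen, steps = set(), 0
        while len(seen) < n_codes:
            steps += 1
            seen.update(c for c in range(n_codes) if rng.random() < p_observe)
        total_steps += steps
    return total_steps / n_runs

if __name__ == "__main__":
    # Saturation depends far more on the probability of observing codes
    # than on the sheer number of codes, as the abstract argues.
    for p in (0.1, 0.2, 0.4):
        print(f"p_observe={p}: ~{sample_size_to_saturation(p_observe=p):.1f} sources")
```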
Dependence and caring in clinical communication: The relevance of attachment and other theories
Salmon, Peter; Young, Bridget
2009-01-01
Objective Clinical relationships are usually asymmetric, being defined by patients’ dependence and practitioners’ care. Our aims are to: (i) identify literature that can contribute to theory for researching and teaching clinical communication from this perspective; (ii) highlight where theoretical development is needed; and (iii) test the utility of the emerging theory by identifying whether it leads to implications for educational practice. Methods Selective and critical review of research concerned with dependence and caring in clinical and non-clinical relationships. Results Attachment theory helps to understand patients’ need to seek safety in relationships with expert and authoritative practitioners but is of limited help in understanding practitioners’ caring. Different theories that formulate practitioners’ care as altruistic, rewarded by personal connection or as a contract indicate the potential importance of practitioners’ emotions, values and sense of role in understanding their clinical communication. Conclusion Extending the theoretical grounding of clinical communication can accommodate patients’ dependence and practitioners’ caring without return to medical paternalism. Practice implications A broader theoretical base will help educators to address the inherent subjectivity of clinical relationships, and researchers to distinguish scientific questions about how patients and clinicians are from normative questions about how they should be. PMID:19157761
Dependence and caring in clinical communication: the relevance of attachment and other theories.
Salmon, Peter; Young, Bridget
2009-03-01
Clinical relationships are usually asymmetric, being defined by patients' dependence and practitioners' care. Our aims are to: (i) identify literature that can contribute to theory for researching and teaching clinical communication from this perspective; (ii) highlight where theoretical development is needed; and (iii) test the utility of the emerging theory by identifying whether it leads to implications for educational practice. Selective and critical review of research concerned with dependence and caring in clinical and non-clinical relationships. Attachment theory helps to understand patients' need to seek safety in relationships with expert and authoritative practitioners but is of limited help in understanding practitioners' caring. Different theories that formulate practitioners' care as altruistic, rewarded by personal connection or as a contract indicate the potential importance of practitioners' emotions, values and sense of role in understanding their clinical communication. Extending the theoretical grounding of clinical communication can accommodate patients' dependence and practitioners' caring without return to medical paternalism. A broader theoretical base will help educators to address the inherent subjectivity of clinical relationships, and researchers to distinguish scientific questions about how patients and clinicians are from normative questions about how they should be.
Quantitative confirmation of diffusion-limited oxidation theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, K.T.; Clough, R.L.
1990-01-01
Diffusion-limited (heterogeneous) oxidation effects are often important for studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rate. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters α and β. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of α and β derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.
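A minimal sketch of a profile calculation in this spirit (hedged: the saturable consumption law and the roles given to alpha and beta below are generic assumptions about diffusion-limited oxidation models, not the paper's exact governing expression): it solves a steady-state reaction-diffusion balance across an oxygen-surrounded sheet and returns the normalized oxygen profile.

```python
import numpy as np
from scipy.integrate import solve_bvp

def oxidation_profile(alpha=8.0, beta=2.0, n=101):
    """Normalized oxygen concentration c(x) across a sheet, x in [0, 1].

    alpha ~ consumption relative to permeation; beta ~ saturation of the
    kinetic rate law (both are hypothetical placeholders).
    """
    def rhs(x, y):
        c, dc = y
        # Steady state: D * c'' = rate(c), with a saturable rate ~ c / (1 + beta*c)
        return np.vstack([dc, alpha * c / (1.0 + beta * c)])

    def bc(ya, yb):
        # Both surfaces in equilibrium with the surrounding oxygen: c = 1
        return np.array([ya[0] - 1.0, yb[0] - 1.0])

    x = np.linspace(0.0, 1.0, n)
    y_guess = np.vstack([np.ones(n), np.zeros(n)])
    sol = solve_bvp(rhs, bc, x, y_guess)
    return sol.x, sol.y[0]

if __name__ == "__main__":
    x, c = oxidation_profile()
    print("oxygen level at the sheet centre:", round(float(c[len(c) // 2]), 3))
```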
Vibrational response analysis of tires using a three-dimensional flexible ring-based model
NASA Astrophysics Data System (ADS)
Matsubara, Masami; Tajiri, Daiki; Ise, Tomohiko; Kawamura, Shozo
2017-11-01
Tire vibration characteristics influence noise, vibration, and harshness. Hence, there have been many investigations of the dynamic responses of tires. In this paper, we present new formulations for the prediction of tire tread vibrations below 150 Hz using a three-dimensional flexible ring-based model. The ring represents the tread including the belt, and the springs represent the tire sidewall stiffness. The equations of motion for lateral, longitudinal, and radial vibration on the tread are derived based on the assumption of inextensional deformation. Many of the associated numerical parameters are identified from experimental tests. Unlike most studies of flexible ring models, which mainly discussed radial and circumferential vibration, this study presents steady response functions concerning not only radial and circumferential but also lateral vibration using the three-dimensional flexible ring-based model. The results of impact tests described confirm the theoretical findings. The results show reasonable agreement with the predictions.
Reconciling Verbal and Nonverbal Models of Dyadic Communication
ERIC Educational Resources Information Center
Firestone, Ira J.
1977-01-01
This paper examines two distinct theoretical descriptions of dyadic communication, the distance-equilibrium and reciprocity formulations, and shows that they carry divergent implications for changes that can occur in interpersonal relations. Presented at the Western Psychological Association meeting, April 1975. (Author)
TRANSLATIONAL INVARIANCE IN NUCLEATION THEORIES: THEORETICAL FORMULATION. (R826768)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Stress, deformation, conservation, and rheology: a survey of key concepts in continuum mechanics
Major, J.J.
2013-01-01
This chapter provides a brief survey of key concepts in continuum mechanics. It focuses on the fundamental physical concepts that underlie derivations of the mathematical formulations of stress, strain, hydraulic head, pore-fluid pressure, and conservation equations. It then shows how stresses are linked to strain and rates of distortion through some special cases of idealized material behaviors. The goal is to equip the reader with a physical understanding of key mathematical formulations that anchor continuum mechanics in order to better understand theoretical studies published in geomorphology.
1979-01-01
synthesis proceeds by ignoring unacceptable syntax or other errors, protection against subsequent execution of a faulty reaction scheme can be...resulting TAPE9. During subroutine synthesis and reaction processing, a search is made (for each secondary electron collision encountered) to...program library, which can be catalogued and saved if any future specialized modifications (beyond the scope of the synthesis capability of LASER
A game theoretic approach to a finite-time disturbance attenuation problem
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1991-01-01
A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.
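A minimal sketch in the same spirit (hedged: a standard discrete-time, full-state-information game Riccati recursion with illustrative matrices; the paper treats the continuous-time, measurement-feedback problem, which this does not reproduce): it checks whether a candidate attenuation level gamma admits a saddle-point solution over a finite horizon.

```python
import numpy as np

def game_riccati_feasible(A, B, D, Q, R, QN, gamma, N):
    """Run the finite-horizon game Riccati recursion backwards for N steps.

    Returns True if the disturbance-attenuation level gamma is achievable,
    i.e., the existence condition holds at every step.
    """
    P = QN.copy()
    for _ in range(N):
        # Existence condition: the disturbance's quadratic term must stay concave.
        if np.any(np.linalg.eigvalsh(np.eye(D.shape[1]) - D.T @ P @ D / gamma**2) <= 0):
            return False
        M = B @ np.linalg.solve(R, B.T) - D @ D.T / gamma**2
        P = Q + A.T @ P @ np.linalg.solve(np.eye(len(A)) + M @ P, A)
    return True

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like plant
    B = np.array([[0.0], [0.1]])
    D = np.array([[0.05], [0.1]])
    Q, R, QN = np.eye(2), np.eye(1), np.eye(2)
    for gamma in (0.5, 1.0, 2.0, 5.0):
        print(f"gamma = {gamma}: achievable = {game_riccati_feasible(A, B, D, Q, R, QN, gamma, N=50)}")
```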
Soares, Cássia Baldini; Santos, Vilmar Ezequiel Dos; Campos, Célia Maria Sivalli; Lachtim, Sheila Aparecida Ferreira; Campos, Fernanda Cristina
2011-12-01
We propose, from a Marxist perspective on the construction of knowledge, a theoretical and methodological framework for understanding social values by capturing everyday representations. We assume that scientific research brings together different dimensions: epistemological, theoretical, and methodological, the last of which, consistently with the other instances, proposes a set of operating procedures and techniques for capturing and analyzing the reality under study in order to expose the investigated object. The study of values reveals how essential they are to the formation of judgments and choices; there are values that reflect the dominant ideology, spanning all social classes, but there are also values that reflect class interests, which are not universal and are formed in relationships and social activities. Based on the Marxist theory of consciousness, representations are discursive formulations of everyday life - opinion or conviction - issued by subjects about their reality, and constitute a coherent way of understanding and exposing social values: focus groups prove suitable for grasping opinions, while interviews show potential for exposing convictions.
Intraspecific scaling laws of vascular trees.
Huo, Yunlong; Kassab, Ghassan S
2012-01-07
A fundamental physics-based derivation of intraspecific scaling laws of vascular trees has not been previously realized. Here, we provide such a theoretical derivation for the volume-diameter and flow-length scaling laws of intraspecific vascular trees. In conjunction with the minimum energy hypothesis, this formulation also results in diameter-length, flow-diameter and flow-volume scaling laws. The intraspecific scaling predicts the volume-diameter power relation with a theoretical exponent of 3, which is validated by the experimental measurements for the three major coronary arterial trees in swine (where a least-squares fit of these measurements has exponents of 2.96, 3 and 2.98 for the left anterior descending artery, left circumflex artery and right coronary artery trees, respectively). This scaling law as well as others agrees very well with the measured morphometric data of vascular trees in various other organs and species. This study is fundamental to the understanding of morphological and haemodynamic features in a biological vascular tree and has implications for vascular disease.
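A minimal sketch of the validation step (synthetic data with an assumed prefactor and noise level, not the swine coronary measurements): the exponent of a volume-diameter power law is estimated by least squares in log-log space and compared with the theoretical value of 3.

```python
import numpy as np

rng = np.random.default_rng(0)
diameters = rng.uniform(0.1, 3.0, size=200)                           # arbitrary units
volumes = 0.5 * diameters**3 * rng.lognormal(0.0, 0.15, size=200)     # noisy V ~ D^3

# Linear regression of log(V) on log(D): the slope is the power-law exponent.
slope, intercept = np.polyfit(np.log(diameters), np.log(volumes), 1)
print(f"fitted volume-diameter exponent: {slope:.2f} (theory predicts 3)")
```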
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2018-01-01
This study applies the theoretical findings on the circularly-symmetric complex normal ratio distribution of Yan and Ren (2016) [1,2] to transmissibility-based modal analysis from a statistical viewpoint. A probabilistic model of the transmissibility function in the vicinity of the resonant frequency is formulated in the modal domain, and some insightful comments are offered. It theoretically reveals that the statistics of the transmissibility function around the resonant frequency depend solely on the 'noise-to-signal' ratio and the mode shapes. As a sequel to the development of the probabilistic model of the transmissibility function in the modal domain, this study poses the process of modal identification in the context of a Bayesian framework by borrowing a novel paradigm. Implementation issues unique to the proposed approach are resolved by a Lagrange multiplier approach. This study also explores the possibility of applying Bayesian analysis to distinguishing harmonic components from structural ones. The approaches are verified through simulated data and experimental test data. The uncertainty behavior due to the variation of different factors is also discussed in detail.
Influence of dispatching rules on average production lead time for multi-stage production systems.
Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus
2013-08-01
In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 links the average production lead time to the "processing time weighted production lead time" for the multi-stage production systems analytically. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, can be proved theoretically in Theorem 2 for a single stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule and can be used as a dispatching rule independent indicator for single-stage production systems.
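A minimal sketch of the kind of simulation study mentioned above (hedged: a single-stage shop with assumed exponential arrivals and processing times, not the authors' multi-stage model): it compares the average production lead time under FIFO and shortest-processing-time (SPT) dispatching.

```python
import random

def avg_lead_time(rule="FIFO", n_jobs=20000, utilization=0.9, seed=2):
    """Average lead time (completion minus arrival) on a single machine."""
    rng = random.Random(seed)
    mean_proc = 1.0
    arrivals, t = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(utilization / mean_proc)                # inter-arrival time
        arrivals.append((t, rng.expovariate(1.0 / mean_proc)))       # (arrival, processing time)

    clock, i, waiting, total_lead, completed = 0.0, 0, [], 0.0, 0
    while completed < n_jobs:
        # Admit every job that has arrived by the current clock.
        while i < n_jobs and arrivals[i][0] <= clock:
            waiting.append(arrivals[i])
            i += 1
        if not waiting:
            clock = arrivals[i][0]
            continue
        if rule == "SPT":
            waiting.sort(key=lambda job: job[1])                     # shortest processing time first
        arrival, proc = waiting.pop(0)
        clock += proc
        total_lead += clock - arrival
        completed += 1
    return total_lead / n_jobs

if __name__ == "__main__":
    for rule in ("FIFO", "SPT"):
        print(rule, round(avg_lead_time(rule), 2))
```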
Organizing the Confusion Surrounding Workaholism: New Structure, Measure, and Validation
Shkoler, Or; Rabenu, Edna; Vasiliu, Cristinel; Sharoni, Gil; Tziner, Aharon
2017-01-01
Since “workaholism” was coined, a considerable body of research was conducted to shed light on its essence. After at least 40 years of studying this important phenomenon, a large variety of definitions, conceptualizations, and measures emerged. In order to try and bring more integration and consensus to this construct, the current research was conducted in two phases. We aimed to formulate a theoretical definitional framework for workaholism, capitalizing upon the Facet Theory Approach. Two basic facets were hypothesized: A. Modalities of workaholism, with three elements: cognitive, emotional, and instrumental; and B. Resources of workaholism with two elements: time and effort. Based on this definitional framework, a structured questionnaire was conceived. In the first phase, the new measure was validated with an Israeli sample comparing two statistical procedures; Factor Analysis (FA) and Smallest Space Analysis (SSA). In the second phase, we aimed to replicate the findings, and to contrast the newly-devised questionnaire with other extant workaholism measures, with a Romanian sample. Theoretical implications and future research suggestions are discussed. PMID:29097989
Evaporation of LOX under supercritical and subcritical conditions
NASA Technical Reports Server (NTRS)
Yang, A. S.; Hsieh, W. H.; Kuo, K. K.; Brown, J. J.
1993-01-01
The evaporation of LOX under supercritical and subcritical conditions was studied experimentally and theoretically. In the experiments, the evaporation rate and surface temperature were measured for a LOX strand vaporizing in helium environments at pressures ranging from 5 to 68 atmospheres. Gas sampling and chromatography analysis were also employed to profile the gas composition above the LOX surface for the purpose of model validation. A comprehensive theoretical model was formulated and solved numerically to simulate the evaporation process of LOX at high pressures. The model was based on the conservation equations of mass, momentum, energy, and species concentrations for a multicomponent system, with consideration of gravitational body force, solubility of ambient gases in the liquid, and variable thermophysical properties. Good agreement between predictions and measured oxygen mole fraction profiles was obtained. The effect of pressure on the distribution of the Lewis number, as well as the effect of a variable diffusion coefficient, was further examined to elucidate the high-pressure transport behavior exhibited in the LOX vaporization process.
Geometry of quantum Hall states: Gravitational anomaly and transport coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Can, Tankut, E-mail: tcan@scgp.stonybrook.edu; Laskin, Michael; Wiegmann, Paul B.
2015-11-15
We show that universal transport coefficients of the fractional quantum Hall effect (FQHE) can be understood as a response to variations of spatial geometry. Some transport properties are essentially governed by the gravitational anomaly. We develop a general method to compute correlation functions of FQH states in a curved space, where local transformation properties of these states are examined through local geometric variations. We introduce the notion of a generating functional and relate it to geometric invariant functionals recently studied in geometry. We develop two complementary methods to study the geometry of the FQHE. One method is based on iterating a Ward identity, while the other is based on a field theoretical formulation of the FQHE through a path integral formalism.
Metasurface-based anti-reflection coatings at optical frequencies
NASA Astrophysics Data System (ADS)
Monti, Alessio; Alù, Andrea; Toscano, Alessandro; Bilotti, Filiberto
2018-05-01
In this manuscript, we propose a metasurface approach for the reduction of electromagnetic reflection from an arbitrary air‑dielectric interface. The proposed technique exploits the exotic optical response of plasmonic nanoparticles to achieve complete cancellation of the field reflected by a dielectric substrate by means of destructive interference. Differently from other, earlier anti-reflection approaches based on nanoparticles, our design scheme is supported by a simple transmission-line formulation that allows a closed-form characterization of the anti-reflection performance of a nanoparticle array. Furthermore, since the working principle of the proposed devices relies on an average effect that does not critically depend on the array geometry, our approach enables low-cost production and easy scalability to large sizes. Our theoretical considerations are supported by full-wave simulations confirming the effectiveness of this design principle.
Using Wavelet Bases to Separate Scales in Quantum Field Theory
NASA Astrophysics Data System (ADS)
Michlin, Tracie L.
This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles. These theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block-diagonalize truncated field-theoretic Hamiltonians by scale. This eliminates the fine-scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine resolution limits.
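A minimal sketch, purely illustrative of scale separation in a Daubechies basis (assuming the PyWavelets package is available; this is an ordinary signal decomposition, not the field-theoretic construction itself): a signal is split into a coarse piece plus detail coefficients at successively finer resolutions, and a resolution truncation discards the finest scales.

```python
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 40 * x)

# Multi-level decomposition in a Daubechies-3 basis: one coarse approximation
# plus detail coefficients at finer and finer resolutions.
coeffs = pywt.wavedec(signal, "db3", level=4)
for level, c in enumerate(coeffs):
    print(f"scale block {level}: {len(c)} coefficients, energy {np.sum(c**2):.2f}")

# A resolution truncation: zero out the two finest detail levels and reconstruct.
truncated = coeffs[:-2] + [np.zeros_like(c) for c in coeffs[-2:]]
coarse_signal = pywt.waverec(truncated, "db3")[:len(signal)]
print("max reconstruction error after truncation:",
      float(np.abs(signal - coarse_signal).max()))
```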
Scalable algorithms for three-field mixed finite element coupled poromechanics
NASA Astrophysics Data System (ADS)
Castelletto, Nicola; White, Joshua A.; Ferronato, Massimiliano
2016-12-01
We introduce a class of block preconditioners for accelerating the iterative solution of coupled poromechanics equations based on a three-field formulation. The use of a displacement/velocity/pressure mixed finite-element method, combined with a first-order backward difference formula for the approximation of time derivatives, produces a sequence of linear systems with a 3 × 3 unsymmetric and indefinite block matrix. The preconditioners are obtained by approximating the two-level Schur complement with the aid of physically-based arguments that can also be generalized in a purely algebraic approach. A theoretical and experimental analysis is presented that provides evidence of the robustness, efficiency and scalability of the proposed algorithm. The performance is also assessed for a real-world challenging consolidation experiment of a shallow formation.
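A minimal sketch of the general idea (hedged: a toy two-field saddle-point system with an approximate Schur complement built from diag(A), not the authors' three-field poromechanics blocks or their specific approximations): a block-triangular preconditioner is applied inside GMRES through a LinearOperator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 60
A = sp.diags(np.linspace(1.0, 2.0, n)) + 0.1 * sp.random(n, n, density=0.05, random_state=0)
B = sp.identity(n) + 0.5 * sp.random(n, n, density=0.05, random_state=1)
K = sp.bmat([[A, B.T], [B, None]], format="csc")        # 2x2 indefinite block system

# Approximate Schur complement using only the diagonal of A, a cheap surrogate
# for the physically-based approximations discussed in the abstract.
A_diag_inv = sp.diags(1.0 / A.diagonal())
S_hat = (-(B @ A_diag_inv @ B.T)).tocsc()
A_solve, S_solve = spla.splu(sp.csc_matrix(A)), spla.splu(S_hat)

def apply_preconditioner(r):
    """Block lower-triangular solve: first the A block, then the Schur block."""
    r1, r2 = r[:n], r[n:]
    y1 = A_solve.solve(r1)
    y2 = S_solve.solve(r2 - B @ y1)
    return np.concatenate([y1, y2])

M = spla.LinearOperator(K.shape, matvec=apply_preconditioner)
b = np.ones(2 * n)
x, info = spla.gmres(K, b, M=M, atol=1e-10)
print("preconditioned GMRES converged:", info == 0,
      " residual:", float(np.linalg.norm(K @ x - b)))
```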
NASA Astrophysics Data System (ADS)
Schiff, Dominique
I would like to write a few words in memory of Volodya, for the occasion of this workshop organized under his name, and will start by publicly remembering his essential presence in our theoretical physics world. We in Orsay (LPT, the theoretical physics laboratory) had the incredible luck to meet him for the first time in 1992, when we could finally invite this famous and celebrated figure of modern particle physics. He gave a series of lectures that year, the Orsay lectures on confinement, in which he mainly developed the picture of confinement based on light quarks; these led to many discussions that helped open the road of a search that is actually still going on... The way he gave his talks nobody will forget. He always started by describing the field he would talk about in a very passionate and extraordinary way. He would say, "I have a picture," which forced even the most distant spectators to participate in an actively engaged vision of the problem he was talking about. He built with enthusiasm the theoretical image that led to the result he wanted to show. He will remain in our memory as a rare model of intellectual passion, one that led him to formulate precious theoretical results in a unique way. Thank you Volodya... Note from Publisher: This article contains the abstract only.
Modeling Flow in Porous Media with Double Porosity/Permeability.
NASA Astrophysics Data System (ADS)
Seyed Joodat, S. H.; Nakshatrala, K. B.; Ballarini, R.
2016-12-01
Although several continuum models are available to study the flow of fluids in porous media with two pore-networks [1], they lack a firm theoretical basis. In this poster presentation, we will present a mathematical model with a firm thermodynamic basis and a robust computational framework for studying flow in porous media that exhibit double porosity/permeability. The mathematical model will be derived by appealing to the maximization of the rate of dissipation hypothesis, which ensures that the model is in accord with the second law of thermodynamics. We will also present important properties that the solutions under the model satisfy, along with an analytical solution procedure based on the Green's function method. On the computational front, a stabilized mixed finite element formulation will be derived based on the variational multi-scale formalism. The equal-order interpolation, which is computationally the most convenient, is stable under this formulation. The performance of this formulation will be demonstrated using patch tests, a numerical convergence study, and representative problems. It will be shown that the pressure and velocity profiles under the double porosity/permeability model are qualitatively and quantitatively different from the corresponding ones under the classical Darcy equations. Finally, it will be illustrated that the surface pore-structure is not sufficient to characterize the flow through a complex porous medium, which makes a case for using advanced characterization tools like micro-CT. References [1] G. I. Barenblatt, I. P. Zheltov, and I. N. Kochina, "Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks [strata]," Journal of Applied Mathematics and Mechanics, vol. 24, pp. 1286-1303, 1960.
Coherent and incoherent ultrasound backscatter from cell aggregates.
de Monchy, Romain; Destrempes, François; Saha, Ratan K; Cloutier, Guy; Franceschini, Emilie
2016-09-01
The effective medium theory (EMT) was recently developed to model the ultrasound backscatter from aggregating red blood cells [Franceschini, Metzger, and Cloutier, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 58, 2668-2679 (2011)]. The EMT assumes that aggregates can be treated as homogeneous effective scatterers, which have effective properties determined by the aggregate compactness and the acoustical characteristics of the cells and the surrounding medium. In this study, the EMT is further developed to decompose the differential backscattering cross section of a single cell aggregate into coherent and incoherent components. The coherent component corresponds to the squared norm of the average scattering amplitude from the effective scatterer, and the incoherent component considers the variance of the scattering amplitude (i.e., the mean squared norm of the fluctuation of the scattering amplitude around its mean) within the effective scatterer. A theoretical expression for the incoherent component based on the structure factor is proposed and compared with another formulation based on the Gaussian direct correlation function. This theoretical improvement is assessed using computer simulations of ultrasound backscatter from aggregating cells. The consideration of the incoherent component based on the structure factor allows us to approximate the simulations satisfactorily for a product of the wavenumber times the aggregate radius kr_ag around 2.
Dynamics and Control of Newtonian and Viscoelastic Fluids
NASA Astrophysics Data System (ADS)
Lieu, Binh K.
Transition to turbulence represents one of the most intriguing natural phenomena. Flows that are smooth and ordered may become complex and disordered as the flow strength increases. This process is known as transition to turbulence. In this dissertation, we develop theoretical and computational tools for analysis and control of transition and turbulence in shear flows of Newtonian, such as air and water, and complex viscoelastic fluids, such as polymers and molten plastics. Part I of the dissertation is devoted to the design and verification of sensor-free and feedback-based strategies for controlling the onset of turbulence in channel flows of Newtonian fluids. We use high fidelity simulations of the nonlinear flow dynamics to demonstrate the effectiveness of our model-based approach to flow control design. In Part II, we utilize systems theoretic tools to study transition and turbulence in channel flows of viscoelastic fluids. For flows with strong elastic forces, we demonstrate that flow fluctuations can experience significant amplification even in the absence of inertia. We use our theoretical developments to uncover the underlying physical mechanism that leads to this high amplification. For turbulent flows with polymer additives, we develop a model-based method for analyzing the influence of polymers on drag reduction. We demonstrate that our approach predicts drag reducing trends observed in full-scale numerical simulations. In Part III, we develop mathematical framework and computational tools for calculating frequency responses of spatially distributed systems. Using state-of-the-art automatic spectral collocation techniques and new integral formulation, we show that our approach yields more reliable and accurate solutions than currently available methods.
Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A
2014-01-01
This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation--two primary, yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with our formulation of this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on nonverbal communication provides a theoretically grounded and acceptable way to provide clinicians with awareness of their nonverbal communication style. We provide implications for the design of such visual feedback that encourages empathic patient-centered communication and include considerations of metaphor, color, size, position, and timing of feedback. Ambient visual feedback from SSP holds promise as an acceptable means for facilitating empathic patient-centered nonverbal communication.
Mechanochemical mechanism for reaction of aluminium nano- and micrometre-scale particles.
Levitas, Valery I
2013-11-28
A recently suggested melt-dispersion mechanism (MDM) for fast reaction of aluminium (Al) nano- and a few micrometre-scale particles during fast heating is reviewed. Volume expansion of 6% during Al melting produces pressure of several GPa in a core and tensile hoop stresses of 10 GPa in an oxide shell. Such stresses cause dynamic fracture and spallation of the shell. After spallation, an unloading wave propagates to the centre of the particle and creates a tensile pressure of 3-8 GPa. Such a tensile pressure exceeds the cavitation strength of liquid Al and disperses the melt into small, bare clusters (fragments) that fly at a high velocity. Reaction of the clusters is not limited by diffusion through a pre-existing oxide shell. Some theoretical and experimental results related to the MDM are presented. Various theoretical predictions based on the MDM are in good qualitative and quantitative agreement with experiments, which resolves some basic puzzles in combustion of Al particles. Methods to control and improve reactivity of Al particles are formulated, which are exactly opposite to the current trends based on diffusion mechanism. Some of these suggestions have experimental confirmation.
A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems
Merrick, Kathryn E.; Shafi, Kamran
2013-01-01
An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797
A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems.
Merrick, Kathryn E; Shafi, Kamran
2013-01-01
An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots.
Tangwa, G
2004-01-01
In this paper, the author attempts to explore some of the problems connected with the formulation and application of international biomedical ethical guidelines, with particular reference to Africa. Recent attempts at revising and updating some international medical ethical guidelines have been bedevilled by intractable controversies and wrangling regarding both content and formulation. From the vantage position of relative familiarity with both African and Western contexts, and the privilege of having been involved in the revision and updating of one of the international ethical guidelines, the author reflects broadly on these issues and attempts to prescribe an approach, from both theoretical and practical angles, likely to mitigate, if not completely eliminate, some of the problems and difficulties. PMID:14872078
School Principals in Spain: An Unstable Professional Identity
ERIC Educational Resources Information Center
Ritacco Real, Maximiliano; Bolívar Botía, Antonio
2018-01-01
The article proposes an emerging approach in research on school leadership, within the framework of the "International Successful School Principalship Project (ISSPP)", where one of the three key research strands is "Principals' identities". It formulates, first, the theoretical framework for the professional identity from a…
Interrelationship of Personality Disorders: Theoretical Formulations and Anecdotal Evidence.
ERIC Educational Resources Information Center
Vincent, Ken R.
1987-01-01
Attempts to define interrelationship of personality disorders. Discusses relationships between and among three major groupings of Diagnostic and Statistical Manual of Mental Disorders. Suggests that passive aggressive, avoidant, and borderline personality disorders serve as bridges between these groupings. Discusses placement within groupings with…
Teaching Critical Thinking by Examining Assumptions
ERIC Educational Resources Information Center
Yanchar, Stephen C.; Slife, Brent D.
2004-01-01
We describe how instructors can integrate the critical thinking skill of examining theoretical assumptions (e.g., determinism and materialism) and implications into psychology courses. In this instructional approach, students formulate questions that help them identify assumptions and implications, use those questions to identify and examine the…
Bayesian accounts of covert selective attention: A tutorial review.
Vincent, Benjamin T
2015-05-01
Decision making and optimal observer models offer an important theoretical approach to the study of covert selective attention. While their probabilistic formulation allows quantitative comparison to human performance, the models can be complex and their insights are not always immediately apparent. Part 1 establishes the theoretical appeal of the Bayesian approach, and introduces the way in which probabilistic approaches can be applied to covert search paradigms. Part 2 presents novel formulations of Bayesian models of 4 important covert attention paradigms, illustrating optimal observer predictions over a range of experimental manipulations. Graphical model notation is used to present models in an accessible way and Supplementary Code is provided to help bridge the gap between model theory and practical implementation. Part 3 reviews a large body of empirical and modelling evidence showing that many experimental phenomena in the domain of covert selective attention are a set of by-products. These effects emerge as the result of observers conducting Bayesian inference with noisy sensory observations, prior expectations, and knowledge of the generative structure of the stimulus environment.
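A minimal sketch in the spirit of these optimal-observer formulations (hedged: a generic Gaussian-noise covert search model with an assumed sensitivity d', not one of the paper's specific four paradigms): the posterior probability of the target's location is computed from one noisy observation per location.

```python
import numpy as np

def posterior_target_location(observations, d_prime=1.5, prior=None):
    """P(target at location i | observations).

    Assumes the target adds +d_prime to the unit-variance Gaussian response at
    its true location, so the per-location log likelihood ratio is
    d_prime * x_i - d_prime**2 / 2.
    """
    x = np.asarray(observations, dtype=float)
    n = x.size
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, dtype=float)
    log_lr = d_prime * x - 0.5 * d_prime**2
    weights = prior * np.exp(log_lr)
    return weights / weights.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_loc, n_loc, d_prime = 2, 8, 1.5
    obs = rng.normal(0.0, 1.0, n_loc)
    obs[true_loc] += d_prime                     # signal present at the true location
    print(np.round(posterior_target_location(obs, d_prime), 3))
```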
A General No-Cloning Theorem for an infinite Multiverse
NASA Astrophysics Data System (ADS)
Gauthier, Yvon
2013-10-01
In this paper, I formulate a general no-cloning theorem which covers the quantum-mechanical and the theoretical quantum information cases as well as the cosmological multiverse theory. However, the main argument is topological and does not involve the peculiar copier devices of the quantum-mechanical and information-theoretic approaches to the no-cloning thesis. It is shown that a combinatorial set-theoretic treatment of the mathematical and physical spacetime continuum in cosmological or quantum-mechanical terms forbids an infinite (countable or uncountable) number of exact copies of finite elements (states) in the uncountable multiverse cosmology. The historical background draws on ideas from Weyl to Conway and Kochen on the free will theorem in quantum mechanics.
Schilling, Kristian; Krause, Frank
2015-01-01
Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications altering their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficiency but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations > 50 g/L, mostly along with high amounts of sugar-based excipients. As a consequence, all routine aggregation analysis methods, such as size-exclusion chromatography, cannot monitor the size distribution at those original conditions, but only after dilution and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) allows to analyze samples directly in the product formulation, both with limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: Commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which a comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies up to 147 g/L employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data fit approach, we were able to provide a reliable estimation of the minimum content of soluble aggregates in the original formulations of two antibodies. Limitations of the procedure are discussed.
Jensen, Ditte Krohn; Jensen, Linda Boye; Koocheki, Saeid; Bengtson, Lasse; Cun, Dongmei; Nielsen, Hanne Mørck; Foged, Camilla
2012-01-10
Matrix systems based on biocompatible and biodegradable polymers like the United States Food and Drug Administration (FDA)-approved polymer poly(DL-lactide-co-glycolide acid) (PLGA) are promising for the delivery of small interfering RNA (siRNA) due to favorable safety profiles, sustained release properties and improved colloidal stability, as compared to polyplexes. The purpose of this study was to design a dry powder formulation based on cationic lipid-modified PLGA nanoparticles intended for treatment of severe lung diseases by pulmonary delivery of siRNA. The cationic lipid dioleoyltrimethylammoniumpropane (DOTAP) was incorporated into the PLGA matrix to potentiate the gene silencing efficiency. The gene knock-down level in vitro was positively correlated with the weight ratio of DOTAP in the particles, and 73% silencing was achieved in the presence of 10% (v/v) serum at 25% (w/w) DOTAP. Optimal properties were found for nanoparticles modified with 15% (w/w) DOTAP, which reduced the gene expression by 54%. This formulation was spray-dried with mannitol into nanocomposite microparticles of an aerodynamic size appropriate for lung deposition. The spray-drying process did not affect the physicochemical properties of the readily re-dispersible nanoparticles, and most importantly, the in vitro gene silencing activity was preserved during spray-drying. The siRNA content in the powder was similar to the theoretical loading and the siRNA was intact, suggesting that the siRNA is preserved during the spray-drying process. Finally, X-ray powder diffraction analysis demonstrated that mannitol remained in a crystalline state upon spray-drying with PLGA nanoparticles, suggesting that the sugar excipient might exert its stabilizing effect by steric inhibition of the interactions between adjacent nanoparticles. This study demonstrates that spray-drying is an excellent technique for engineering dry powder formulations of siRNA nanoparticles, which might enable the local delivery of biologically active siRNA directly to the lung tissue.
Higher-Order Fermi-Liquid Corrections for an Anderson Impurity Away from Half Filling
NASA Astrophysics Data System (ADS)
Oguri, Akira; Hewson, A. C.
2018-03-01
We study the higher-order Fermi-liquid relations of Kondo systems for arbitrary impurity-electron fillings, extending the many-body quantum theoretical approach of Yamada and Yosida. It includes, partly, a microscopic clarification of the related achievements based on Nozières' phenomenological description: Filippone, Moca, von Delft, and Mora [Phys. Rev. B 95, 165404 (2017), 10.1103/PhysRevB.95.165404]. In our formulation, the Fermi-liquid parameters such as the quasiparticle energy, damping, and transport coefficients are related to each other through the total vertex Γ_{σσ';σ'σ}(ω, ω'; ω', ω), which may be regarded as a generalized Landau quasiparticle interaction. We obtain exactly this function up to linear order with respect to the frequencies ω and ω' using the antisymmetry and analytic properties. The coefficients acquire additional contributions of three-body fluctuations away from half filling through the nonlinear susceptibilities. We also apply the formulation to nonequilibrium transport through a quantum dot, and clarify how the zero-bias peak evolves in a magnetic field.
Network representations of angular regions for electromagnetic scattering
2017-01-01
Network modeling in electromagnetics is an effective technique for treating scattering problems involving canonical and complex structures. Geometries constituted of angular regions (wedges) together with planar layers can now be approached with the Generalized Wiener-Hopf Technique supported by network representation in the spectral domain. Although the network representations in spectral planes are of great importance in themselves, the aim of this paper is to present a theoretical basis and a general procedure for the formulation of complex scattering problems using network representation for the Generalized Wiener-Hopf Technique, starting essentially from the wave equation. In particular, while the spectral network representations are relatively well known for planar layers, the network modelling for an angular region requires a new theory that is developed in this paper. With this theory we complete the formulation of a network methodology whose effectiveness is demonstrated by application to a complex scattering problem, with practical solutions given in terms of GTD/UTD diffraction coefficients and total far fields for engineering applications. The methodology can be applied to other fields of physics. PMID:28817573
Integrated control-system design via generalized LQG (GLQG) theory
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.
1989-01-01
Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large scale, complex systems, such as high performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory will be reviewed, including both continuous-time and discrete-time (sampled-data) formulations.
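A minimal sketch of the baseline that such a generalized formulation extends (standard LQG synthesis via two Riccati equations; the plant matrices and weights below are toy values, not from the paper, and none of the additional GLQG constraints are included):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # toy plant dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)                  # state and control weights
W, V = 0.1 * np.eye(2), 0.01 * np.eye(1)     # process and measurement noise intensities

P = solve_continuous_are(A, B, Q, R)         # regulator Riccati equation
S = solve_continuous_are(A.T, C.T, W, V)     # filter (Kalman) Riccati equation
K = np.linalg.solve(R, B.T @ P)              # state-feedback gain
L = S @ C.T @ np.linalg.inv(V)               # estimator gain

# Closed-loop dynamics of the observer-based compensator (separation principle).
Acl = np.block([[A, -B @ K], [L @ C, A - B @ K - L @ C]])
print("closed-loop stable:", bool(np.all(np.linalg.eigvals(Acl).real < 0)))
```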
Higher-Order Fermi-Liquid Corrections for an Anderson Impurity Away from Half Filling.
Oguri, Akira; Hewson, A C
2018-03-23
We study the higher-order Fermi-liquid relations of Kondo systems for arbitrary impurity-electron fillings, extending the many-body quantum theoretical approach of Yamada and Yosida. It includes, partly, a microscopic clarification of the related achievements based on Nozières' phenomenological description: Filippone, Moca, von Delft, and Mora [Phys. Rev. B 95, 165404 (2017), 10.1103/PhysRevB.95.165404]. In our formulation, the Fermi-liquid parameters such as the quasiparticle energy, damping, and transport coefficients are related to each other through the total vertex Γ_{σσ';σ'σ}(ω, ω'; ω', ω), which may be regarded as a generalized Landau quasiparticle interaction. We obtain exactly this function up to linear order with respect to the frequencies ω and ω' using the antisymmetry and analytic properties. The coefficients acquire additional contributions of three-body fluctuations away from half filling through the nonlinear susceptibilities. We also apply the formulation to nonequilibrium transport through a quantum dot, and clarify how the zero-bias peak evolves in a magnetic field.
Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan
2013-08-22
Within the framework of constrained density functional theory (CDFT), the diabatic or charge localized states of electron transfer (ET) have been constructed. Based on the diabatic states, inner reorganization energy λin has been directly calculated. For solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm⁻¹ for TCNE/TCNE⁻ and 5939 cm⁻¹ for TTF/TTF⁺ reactions, agreeing well with available experimental results of 7250 cm⁻¹ and 5810 cm⁻¹, respectively.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
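A minimal sketch of a deterministic bottleneck-style iteration (hedged: a generic variant run on a random toy joint distribution p(x, y); the exact update rules and cost function of the cited method may differ): each x is assigned to the cluster that best trades compression against predictive power about y.

```python
import numpy as np

def dib_like_clustering(p_xy, n_clusters, beta=5.0, n_iter=50, seed=0):
    """Deterministic encoder f(x) -> t found by alternating assignment updates."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    assign = rng.integers(0, n_clusters, size=n_x)          # initial deterministic encoder
    for _ in range(n_iter):
        q_t = np.array([p_x[assign == t].sum() for t in range(n_clusters)]) + 1e-12
        q_y_given_t = np.vstack([
            (p_xy[assign == t].sum(axis=0) / q_t[t]) if (assign == t).any()
            else np.full(n_y, 1.0 / n_y)
            for t in range(n_clusters)
        ]) + 1e-12
        # Score of sending x to t: favour populous clusters, penalise the KL
        # divergence between p(y|x) and the cluster's predictive distribution.
        kl = (p_y_given_x[:, None, :] *
              np.log(p_y_given_x[:, None, :] / q_y_given_t[None, :, :])).sum(axis=2)
        assign = np.argmax(np.log(q_t)[None, :] - beta * kl, axis=1)
    return assign

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p_xy = rng.random((12, 4))
    p_xy /= p_xy.sum()
    print(dib_like_clustering(p_xy, n_clusters=3))
```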
Arteyeva, Natalia V; Azarov, Jan E
The aim of the study was to differentiate the effects of dispersion of repolarization (DOR) and action potential duration (APD) on T-wave parameters considered as indices of DOR, namely, the Tpeak-Tend interval, T-wave amplitude, and T-wave area. The T-wave was simulated over a wide physiological range of DOR and APD using a realistic rabbit model based on experimental data. A simplified mathematical formulation of T-wave formation was developed. Both the simulations and the mathematical formulation showed that the Tpeak-Tend interval and T-wave area are linearly proportional to DOR irrespective of the APD range, while T-wave amplitude is non-linearly proportional to DOR and inversely proportional to the minimal repolarization time, or minimal APD value. The Tpeak-Tend interval and T-wave area are the most accurate DOR indices independent of APD. T-wave amplitude can be considered an index of DOR when the level of APD is taken into account.
NASA Astrophysics Data System (ADS)
Sadhukhan, Banasree; Singh, Prashant; Nayak, Arabinda; Datta, Sujoy; Johnson, Duane D.; Mookerjee, Abhijit
2017-08-01
We present a real-space formulation for calculating the electronic structure and optical conductivity of random alloys based on the Kubo-Greenwood formalism interfaced with the augmented space recursion technique [Mookerjee, J. Phys. C 6, 1340 (1973), 10.1088/0022-3719/6/8/003], formulated with the tight-binding linear muffin-tin orbital basis with the van Leeuwen-Baerends corrected exchange potential [Singh, Harbola, Hemanadhan, Mookerjee, and Johnson, Phys. Rev. B 93, 085204 (2016), 10.1103/PhysRevB.93.085204]. This approach has been used to quantitatively analyze the effect of chemical disorder on the configuration-averaged electronic properties and optical response of two-dimensional honeycomb siliphene SixC1-x beyond the usual Dirac-cone approximation. We predicted the quantitative effect of disorder on both the electronic structure and optical response over a wide energy range, and the results are discussed in the light of the available experimental and other theoretical data. Our proposed formalism may open up a facile way for planned band-gap engineering in optoelectronic applications.
NASA Technical Reports Server (NTRS)
Kosmahl, H. G.
1982-01-01
A theoretical investigation of three-dimensional relativistic klystron action is described. The relativistic axisymmetric equations of motion are derived from the time-dependent Lagrangian function for a charged particle in electromagnetic fields. An analytical expression for the fringing RF electric and magnetic fields within and in the vicinity of the interaction gap, and the space-charge forces between axially and radially elastic, deformable rings of charge, are both included in the formulation. This makes possible an accurate computation of electron motion through the tunnel of the cavities and the drift tube spaces. The method of analysis is based on a Lagrangian formulation. Bunching is computed using a disk model of the electron stream, in which the stream is divided into axisymmetric disks of equal charge and each disk is assumed to consist of a number of concentric rings of equal charge. The individual representative groups of electrons are followed through the interaction gaps and drift tube spaces. Induced currents and voltages in the interacting cavities are calculated by invoking the Shockley-Ramo theorem.
Modeling of combustion processes of stick propellants via combined Eulerian-Lagrangian approach
NASA Technical Reports Server (NTRS)
Kuo, K. K.; Hsieh, K. C.; Athavale, M. M.
1988-01-01
This research is motivated by the improved ballistic performance of large-caliber guns using stick propellant charges. A comprehensive theoretical model for predicting the flame spreading, combustion, and grain deformation phenomena of long, unslotted stick propellants is presented. The formulation is based upon a combined Eulerian-Lagrangian approach to simulate special characteristics of the two phase combustion process in a cartridge loaded with a bundle of sticks. The model considers five separate regions consisting of the internal perforation, the solid phase, the external interstitial gas phase, and two lumped parameter regions at either end of the stick bundle. For the external gas phase region, a set of transient one-dimensional fluid-dynamic equations using the Eulerian approach is obtained; governing equations for the stick propellants are formulated using the Lagrangian approach. The motion of a representative stick is derived by considering the forces acting on the entire propellant stick. The instantaneous temperature and stress fields in the stick propellant are modeled by considering the transient axisymmetric heat conduction equation and dynamic structural analysis.
Pikal, M J; Cardon, S; Bhugra, Chandan; Jameel, F; Rambhatla, S; Mascarenhas, W J; Akay, H U
2005-01-01
Theoretical models of the freeze-drying process are potentially useful to guide the design of a freeze-drying process as well as to obtain information not readily accessible by direct experimentation, such as moisture distribution and glass transition temperature, Tg, within a vial during processing. Previous models were either restricted to the steady state and/or to one-dimensional problems. While such models are useful, the restrictions seriously limit applications of the theory. An earlier work from these laboratories presented a nonsteady state, two-dimensional model (which becomes a three-dimensional model with an axis of symmetry) of sublimation and desorption that is quite versatile and allows the user to investigate a wide variety of heat and mass transfer problems in both primary and secondary drying. The earlier treatment focused on the mathematical details of the finite element formulation of the problem and on validation of the calculations. The objective of the current study is to provide the physical rationale for the choice of boundary conditions, to validate the model by comparison of calculated results with experimental data, and to discuss several representative pharmaceutical applications. To validate the model and evaluate its utility in studying distribution of moisture and glass transition temperature in a representative product, calculations for a sucrose-based formulation were performed, and selected results were compared with experimental data. THEORETICAL MODEL: The model is based on a set of coupled differential equations resulting from constraints imposed by conservation of energy and mass, where numerical results are obtained using finite element analysis. Use of the model proceeds via a "modular software package" supported by Technalysis Inc. (Passage/Freeze Drying). This package allows the user to define the problem by inputting shelf temperature, chamber pressure, container properties, product properties, and numerical analysis parameters required for the finite element analysis. Most input data are either available in the literature or may be easily estimated. Product resistance to water vapor flow, mass transfer coefficients describing secondary drying, and container heat transfer coefficients must normally be measured. Each element (i.e., each small subsystem of the product) may be assigned different values of product resistance to accurately describe the nonlinear resistance behavior often shown by real products. During primary drying, the chamber pressure and shelf temperature may be varied in steps. During secondary drying, the change in gas composition from pure water to mostly inert gas is calculated by the model from the instantaneous water vapor flux and the input pumping capacity of the freeze dryer. Comparison of the theoretical results with the experimental data for a 3% sucrose formulation is generally satisfactory. Primary drying times agree within two hours, and the product temperature vs. time curves in primary drying agree within about ±1 °C. The residual moisture vs. time curve is predicted by the theory within the likely experimental error, and the lack of large variation in moisture within the vial (i.e., top vs. side vs. bottom) is also correctly predicted by theory. The theoretical calculations also provide the time variation of "Tg-T" during both primary and secondary drying, where T is product temperature and Tg is the glass transition temperature of the product phase.
The calculations demonstrate that with a secondary drying protocol using a rapid ramp of shelf temperature, the product temperature does rise above Tg during early secondary drying, perhaps being a factor in the phenomenon known as "cake shrinkage." The theoretical results of in-process product temperature, primary drying time, and moisture content mapping and history are consistent with the experimental results, suggesting the theoretical model should be useful in process development and "trouble-shooting" applications.
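For orientation, the classical steady-state description that such nonsteady, multidimensional models generalize balances the vial heat input against the heat consumed by sublimation (symbols below are the conventional ones and are not necessarily those of the paper):
\[
\frac{\mathrm{d}m}{\mathrm{d}t}=\frac{P_{0}(T_{p})-P_{c}}{R_{p}+R_{s}},
\qquad
A_{v}K_{v}\,\bigl(T_{s}-T_{p}\bigr)=\Delta H_{s}\,\frac{\mathrm{d}m}{\mathrm{d}t},
\]
where P0(Tp) is the vapor pressure of ice at the product temperature Tp, Pc the chamber pressure, Rp and Rs the product and stopper resistances to vapor flow, Av and Kv the vial cross-sectional area and heat transfer coefficient, Ts the shelf temperature, and ΔHs the heat of sublimation. The finite element model relaxes the steady-state and one-dimensional restrictions of this balance.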
Steady-state equation of water vapor sorption for CaCl2-based chemical sorbents and its application
Zhang, Haiquan; Yuan, Yanping; Sun, Qingrong; Cao, Xiaoling; Sun, Liangliang
2016-01-01
Green CaCl2-based chemical sorbent has been widely used in sorption refrigeration, air purification and air desiccation. Methods to improve the sorption rate have been extensively investigated, but the corresponding theoretical formulations have not been reported. In this paper, a sorption system of solid-liquid coexistence is established based on the hypothesis of steady-state sorption. The combination of theoretical analysis and experimental results indicates that the system can be described by steady-state sorption process. The steady-state sorption equation, μ = (η − γT) , was obtained in consideration of humidity, temperature and the surface area. Based on engineering applications and this equation, two methods including an increase of specific surface area and adjustment of the critical relative humidity (γ) for chemical sorbents, have been proposed to increase the sorption rate. The results indicate that the CaCl2/CNTs composite with a large specific surface area can be obtained by coating CaCl2 powder on the surface of carbon nanotubes (CNTs). The composite reached sorption equilibrium within only 4 h, and the sorption capacity was improved by 75% compared with pure CaCl2 powder. Furthermore, the addition of NaCl powder to saturated CaCl2 solution could significantly lower the solution’s γ. The sorption rate was improved by 30% under the same environment. PMID:27682811
Steady-state equation of water vapor sorption for CaCl2-based chemical sorbents and its application
NASA Astrophysics Data System (ADS)
Zhang, Haiquan; Yuan, Yanping; Sun, Qingrong; Cao, Xiaoling; Sun, Liangliang
2016-09-01
Green CaCl2-based chemical sorbent has been widely used in sorption refrigeration, air purification and air desiccation. Methods to improve the sorption rate have been extensively investigated, but the corresponding theoretical formulations have not been reported. In this paper, a sorption system of solid-liquid coexistence is established based on the hypothesis of steady-state sorption. The combination of theoretical analysis and experimental results indicates that the system can be described by steady-state sorption process. The steady-state sorption equation, μ = (η - γT) , was obtained in consideration of humidity, temperature and the surface area. Based on engineering applications and this equation, two methods including an increase of specific surface area and adjustment of the critical relative humidity (γ) for chemical sorbents, have been proposed to increase the sorption rate. The results indicate that the CaCl2/CNTs composite with a large specific surface area can be obtained by coating CaCl2 powder on the surface of carbon nanotubes (CNTs). The composite reached sorption equilibrium within only 4 h, and the sorption capacity was improved by 75% compared with pure CaCl2 powder. Furthermore, the addition of NaCl powder to saturated CaCl2 solution could significantly lower the solution’s γ. The sorption rate was improved by 30% under the same environment.
Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; del Barco, Enrique; Nijhuis, Christian A.
2016-01-01
We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489
Stroh, Mark; Addy, Carol; Wu, Yunhui; Stoch, S Aubrey; Pourkavoos, Nazaneen; Groff, Michelle; Xu, Yang; Wagner, John; Gottesdiener, Keith; Shadle, Craig; Wang, Hong; Manser, Kimberly; Winchell, Gregory A; Stone, Julie A
2009-03-01
We describe how modeling and simulation guided program decisions following a randomized placebo-controlled single-rising oral dose first-in-man trial of compound A where an undesired transient blood pressure (BP) elevation occurred in fasted healthy young adult males. We proposed a lumped-parameter pharmacokinetic-pharmacodynamic (PK/PD) model that captured important aspects of the BP homeostasis mechanism. Four conceptual units characterized the feedback PD model: a sinusoidal BP set point, an effect compartment, a linear effect model, and a system response. To explore approaches for minimizing the BP increase, we coupled the PD model to a modified PK model to guide oral controlled-release (CR) development. The proposed PK/PD model captured the central tendency of the observed data. The simulated BP response obtained with theoretical release rate profiles suggested some amelioration of the peak BP response with CR. This triggered subsequent CR formulation development; we used actual dissolution data from these candidate CR formulations in the PK/PD model to confirm a potential benefit in the peak BP response. Though this paradigm has yet to be tested in the clinic, our model-based approach provided a common rational framework to more fully utilize the limited available information for advancing the program.
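As a purely illustrative sketch of the kind of lumped-parameter PK/PD structure described (not the authors' model; all parameter names and values below are hypothetical, and the feedback "system response" unit is omitted), an effect compartment driven by a PK concentration profile and superimposed on a sinusoidal blood-pressure set point can be coded as:

```python
import numpy as np

def simulate_bp(t, cp, ke0=0.5, slope=2.0, bp0=120.0, amp=5.0, period=24.0):
    """Illustrative effect-compartment PD sketch on a sinusoidal set point.

    All names and values are hypothetical; the feedback ("system response")
    component of the published model is omitted here for brevity.
    t   : time grid (h);   cp : plasma concentration from any PK model
    ke0 : effect-site equilibration rate (1/h);   slope : mmHg per unit conc.
    """
    ce = np.zeros_like(cp)
    for i in range(1, len(t)):                 # Euler step of dCe/dt = ke0*(Cp - Ce)
        dt = t[i] - t[i - 1]
        ce[i] = ce[i - 1] + ke0 * (cp[i - 1] - ce[i - 1]) * dt
    setpoint = bp0 + amp * np.sin(2 * np.pi * t / period)   # circadian set point
    return setpoint + slope * ce               # linear effect model

# usage: a one-compartment oral PK profile (hypothetical constants) driving the PD model
t = np.linspace(0.0, 24.0, 241)
ka, ke, dose_over_v = 1.2, 0.2, 10.0
cp = dose_over_v * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
bp = simulate_bp(t, cp)
```

Coupling a slower, controlled-release input function into cp in place of the first-order absorption term is the kind of exercise the abstract describes for exploring the peak blood-pressure response.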
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
Field theoretic perspectives of the Wigner function formulation of the chiral magnetic effect
NASA Astrophysics Data System (ADS)
Wu, Yan; Hou, De-fu; Ren, Hai-cang
2017-11-01
We assess the applicability of the Wigner function formulation in its present form to the chiral magnetic effect and note some issues regarding the conservation and the consistency of the electric current in the presence of an inhomogeneous and time-dependent axial chemical potential. The problems are rooted in the ultraviolet divergence of the underlying field theory associated with the axial anomaly and can be fixed with the Pauli-Villars regularization of the Wigner function. The chiral magnetic current with a nonconstant axial chemical potential is calculated with the regularized Wigner function and the phenomenological implications are discussed.
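For reference, the homogeneous benchmark that any such calculation must reproduce is the standard chiral magnetic current for a constant axial chemical potential (natural units, one Dirac fermion of charge e):
\[
\vec{j}=\frac{e^{2}}{2\pi^{2}}\,\mu_{5}\,\vec{B},
\]
the paper's concern being the corrections and regularization issues that arise once μ5 varies in space and time.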
A novel formulation for unsteady counterflow flames using a thermal-conductivity-weighted coordinate
NASA Astrophysics Data System (ADS)
Weiss, Adam D.; Vera, Marcos; Liñán, Amable; Sánchez, Antonio L.; Williams, Forman A.
2018-01-01
A general formulation is given for the description of reacting mixing layers in stagnation-type flows subject to both time-varying strain and pressure. The salient feature of the formulation is the introduction of a thermal-conductivity-weighted transverse coordinate that leads to a compact transport operator that facilitates numerical integration and theoretical analysis. For steady counterflow mixing layers, the associated transverse mass flux is shown to be effectively linear in terms of the new coordinate, so that the conservation equations for energy and chemical species uncouple from the mass and momentum conservation equations, thereby greatly simplifying the solution. Comparisons are shown with computations of diffusion flames with infinitely fast reaction using both the classic Howarth-Dorodnitzyn density-weighted coordinate and the new thermal-conductivity-weighted coordinate, illustrating the advantages of the latter. Also, as an illustrative application of the formulation to the computation of unsteady counterflows, the flame response to harmonically varying strain is examined in the linear limit.
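For orientation, the classical Howarth-Dorodnitzyn variable referred to above is the density-weighted transverse coordinate
\[
\eta_{\mathrm{HD}}(y,t)=\int_{0}^{y}\frac{\rho(y',t)}{\rho_{\mathrm{ref}}}\,\mathrm{d}y';
\]
the coordinate introduced in the paper is constructed in the same integral fashion but with a weight built from the thermal conductivity rather than the density (its exact definition and normalization are given in the paper and are not reproduced here), which is what yields the compact transport operator mentioned in the abstract.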
Some Theoretical Aspects of Nonzero Sum Differential Games and Applications to Combat Problems
1971-06-01
(Table-of-contents fragment from the report:) … the Equilibrium Solution; Hamilton-Jacobi-Bellman Partial Differential Equations; Influence Function Differential Equations …; … Linearly; Problem Statement; Formulation of HJB Equations, Influence Function Equations, and the TPBVP; Control Laws; Conditions for Influence Function Continuity along Singular Surfaces
Acting and Reacting: Youth's Behavior in Corrupt Educational Settings
ERIC Educational Resources Information Center
Sabic-El-Rayess, Amra
2014-01-01
With its broader employability to the issues of underperformance that may emerge in educational systems internationally, this empirical study redefines and expands Albert Hirschman's theory of voice, exit, and loyalty within higher education. The article formulates a new education-embedded theoretical framework that explains reactionary behaviors…
Michel Foucault's Theory of Rhetoric as Epistemic.
ERIC Educational Resources Information Center
Foss, Sandra K.; Gill, Ann
1987-01-01
Formulates a middle-level theory that explains the process by which rhetoric is epistemic, using Foucault's notion of the discursive formation as a starting point. Discusses five theoretical units derived from Foucault--discursive practices, rules, roles, power, and knowledge--and relationships among them. Analyzes Disneyland, using Foucault's…
Adlerian and Analytic Theory: A Case Presentation.
ERIC Educational Resources Information Center
Myers, Kathleen M.; Croake, James W.
1984-01-01
Makes a theoretical comparison between Adlerian and analytic formulations of family assessment in a case study involving a recently divorced couple and a child with encopresis. Discusses the family relationship in terms of object relations theory emphasizing intrapsychic experience, and Adlerian theory emphasizing the purposes of behavior. (JAC)
NASA Astrophysics Data System (ADS)
Iradat, R. D.; Alatas, F.
2017-09-01
Simple harmonic motion is considered a relatively complex concept for students to understand. This study attempts to implement laboratory activities that focus on solving contextual problems related to the concept. A group of senior high school students participated in this pre-experimental study with a one-group pretest-posttest research design. The laboratory activities had a positive impact on improving students’ scientific skills, such as formulating goals, conducting experiments, applying laboratory tools, and collecting data. This study therefore adds theoretical and practical knowledge to be considered when teaching complicated physics concepts.
Hartog, Iris; Scherer-Rath, Michael; Kruizinga, Renske; Netjes, Justine; Henriques, José; Nieuwkerk, Pythia; Sprangers, Mirjam; van Laarhoven, Hanneke
2017-09-01
Falling seriously ill is often experienced as a life event that causes conflict with people's personal goals and expectations in life and evokes existential questions. This article presents a new humanities approach to the way people make meaning of such events and how this influences their quality of life. Incorporating theories on contingency, narrative identity, and quality of life, we developed a theoretical model entailing the concepts life event, worldview, ultimate life goals, experience of contingency, narrative meaning making, narrative integration, and quality of life. We formulate testable hypotheses and describe the self-report questionnaire that was developed based on the model.
Dynamic Models and Coordination Analysis of Reverse Supply Chain with Remanufacturing
NASA Astrophysics Data System (ADS)
Yan, Nina
In this paper, we establish a reverse supply chain system with one manufacturer and one retailer under demand uncertainty. Distinguishing between the retailer's recycling process and the manufacturer's remanufacturing process, we formulate a two-stage dynamic model for a reverse supply chain based on remanufacturing. Using a buyback contract as the coordination mechanism and applying dynamic programming, the optimal decision problems for each stage are analyzed. It is concluded that the reverse supply chain system can be coordinated under the given condition. Finally, we carry out numerical calculations to analyze the expected profits of the manufacturer and the retailer under different recovery rates and recovery prices, and the outcomes validate the theoretical analyses.
Modeling of vortex generated sound in solid propellant rocket motors
NASA Technical Reports Server (NTRS)
Flandro, G. A.
1980-01-01
There is considerable evidence based on both full scale firings and cold flow simulations that hydrodynamically unstable shear flows in solid propellant rocket motors can lead to acoustic pressure fluctuations of significant amplitude. Although a comprehensive theoretical understanding of this problem does not yet exist, procedures were explored for generating useful analytical models describing the vortex shedding phenomenon and the mechanisms of coupling to the acoustic field in a rocket combustion chamber. Since combustion stability prediction procedures cannot be successful without incorporation of all acoustic gains and losses, it is clear that a vortex driving model comparable in quality to the analytical models currently employed to represent linear combustion instability must be formulated.
Clement, Matthieu; Meunie, Andre
2010-01-01
The object of this article is to examine the relation between social inequalities and pollution. First of all we provide a survey demonstrating that, from a theoretical point of view, a decrease in inequality has an uncertain impact on the environment. Second, on the basis of these conceptual considerations, we propose an econometric analysis based on panel data (fixed-effects and dynamic panel data models) concerning developing and transition countries for the 1988-2003 period. We examine specifically the effect of inequality on the extent of local pollution (sulphur dioxide emissions and organic water pollution) by integrating the Gini index into the formulation of the environmental Kuznets' curve.
Yang, Shiju; Li, Chuandong; Huang, Tingwen
2016-03-01
The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated in this paper by using periodically intermittent control. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper are also applicable to fuzzy models of complex networks and to general neural networks. Numerical simulations are provided to verify the effectiveness of the theoretical results.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on the average dwell time technique, the mode-dependent average dwell time technique, and a multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks; these conditions are delay-dependent and formulated as linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples.
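As a minimal illustration of what "formulated as linear matrix inequalities" means in practice, the sketch below checks a generic, delay-free Lyapunov stability LMI with CVXPY; it is not the delay-dependent, switching-dependent condition derived in the paper, and the system matrix and margins are hypothetical:

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])            # hypothetical system matrix of x' = A x
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                              # small margins to approximate strict inequalities
constraints = [P >> eps * np.eye(n),                     # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]      # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible (system exponentially stable):", prob.status == cp.OPTIMAL)
```

The paper's conditions play the same role but involve Lyapunov-Krasovskii terms for the delays and dwell-time restrictions on the switching signal.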
The report of the Gravity Field Workshop
NASA Astrophysics Data System (ADS)
Smith, D. E.
1982-04-01
A Gravity Field Workshop was convened to review the actions which could be taken prior to a GRAVSAT mission to improve the Earth's gravity field model. This review focused on the potential improvements in the Earth's gravity field which could be obtained using the current satellite and surface gravity data base. In particular, actions to improve the quality of the gravity field determination through refined measurement corrections, selected data augmentation and a more accurate reprocessing of the data were considered. In addition, recommendations were formulated which define actions which NASA should take to develop the necessary theoretical and computation techniques for gravity model determination and to use these approaches to improve the accuracy of the Earth's gravity model.
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
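To make the algorithmic ingredients concrete, here is a generic ADMM iteration for an ℓ1-regularized least-squares problem; it is a real-valued, illustrative sketch rather than the paper's STAP-specific formulation (which works with complex filter weights and its own splitting), and all names below are assumptions:

```python
import numpy as np

def admm_l1_ls(A, b, lam, rho=1.0, n_iter=200):
    """Generic ADMM for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1 (real-valued sketch).

    The splitting x = z turns the problem into an augmented Lagrangian with a
    scaled dual variable u; z carries the sparsity via soft thresholding.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Cache the factorization of (A^T A + rho I) reused by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft thresholding, the proximal operator of (lam/rho)*||.||_1
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the scaled multiplier
        u += x - z
    return z
```

In the STAP setting the unknowns are complex, so the soft-thresholding step acts on magnitudes and the matrix A is assembled from space-time snapshots; those details follow the paper rather than this sketch.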
Nature-based supportive care opportunities: a conceptual framework.
Blaschke, Sarah; O'Callaghan, Clare C; Schofield, Penelope
2018-03-22
Given preliminary evidence for positive health outcomes related to contact with nature for cancer populations, research is warranted to ascertain possible strategies for incorporating nature-based care opportunities into oncology contexts as additional strategies for addressing multidimensional aspects of cancer patients' health and recovery needs. The objective of this study was to consolidate existing research related to nature-based supportive care opportunities and generate a conceptual framework for discerning relevant applications in the supportive care setting. Drawing on research investigating nature-based engagement in oncology contexts, a two-step analytic process was used to construct a conceptual framework for guiding nature-based supportive care design and future research. Concept analysis methodology generated new representations of understanding by extracting and synthesising salient concepts. Newly formulated concepts were transposed to findings from related research about patient-reported and healthcare expert-developed recommendations for nature-based supportive care in oncology. Five theoretical concepts (themes) were formulated describing patients' reasons for engaging with nature and the underlying needs these interactions address. These included: connecting with what is genuinely valued, distancing from the cancer experience, meaning-making and reframing the cancer experience, finding comfort and safety, and vital nurturance. Eight shared patient and expert recommendations were compiled, which address the identified needs through nature-based initiatives. Eleven additional patient-reported recommendations attend to beneficial and adverse experiential qualities of patients' nature-based engagement and complete the framework. The framework outlines salient findings about helpful nature-based supportive care opportunities for ready access by healthcare practitioners, designers, researchers and patients themselves.
Randomized Prediction Games for Adversarial Machine Learning.
Rota Bulo, Samuel; Biggio, Battista; Pillai, Ignazio; Pelillo, Marcello; Roli, Fabio
In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time, e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve security of learning algorithms against evasion attacks, as it results in hiding information about the classifier to the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this paper, we overcome this limitation by proposing a randomized prediction game, namely, a noncooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach allows one to improve the tradeoff between attack detection and false alarms with respect to the state-of-the-art secure classifiers, even against attacks that are different from those hypothesized during design, on application examples including handwritten digit recognition, spam, and malware detection.
A comparative study on stress and compliance based structural topology optimization
NASA Astrophysics Data System (ADS)
Hailu Shimels, G.; Dereje Engida, W.; Fakhruldin Mohd, H.
2017-10-01
Most structural topology optimization problems have been formulated and solved either to minimize the compliance of a structure under a volume constraint or to minimize its weight under stress constraints. Although much research has been conducted on these two formulation techniques separately, there is no clear comparative study between the two approaches. This paper compares these formulation techniques so that an end user or designer can choose the more suitable one for the problem at hand. Benchmark problems under the same boundary and loading conditions are defined and solved, and the results are compared across the two formulations. Simulation results show that the two formulation techniques depend on the type of loading and boundary conditions defined. The maximum stress induced in the design domain is higher when the design domain is formulated using the compliance-based formulation. Optimal layouts from the compliance-minimization formulation are more complex than stress-based ones, which may make manufacturing of the optimal layouts challenging. Optimal layouts from compliance-based formulations depend on the amount of material to be distributed, whereas optimal layouts from the stress-based formulation depend on the type of material used to define the design domain. The high computational time of stress-based topology optimization remains a challenge because stress constraints are defined at the element level. Results also show that adjustment of the convergence criteria can be an alternative way to reduce the maximum stress developed in the optimal layouts. Therefore, a designer or end user should choose a formulation based on the design domain defined and the boundary conditions considered.
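Schematically, the two problem classes being compared can be stated in common density-based notation as follows (the paper's exact constraint sets, bounds, and penalization may differ):
\[
\min_{\mathbf{x}}\; c(\mathbf{x})=\mathbf{F}^{\mathsf{T}}\mathbf{U}(\mathbf{x})
\quad\text{s.t.}\quad \frac{V(\mathbf{x})}{V_{0}}\le f,\qquad 0<x_{\min}\le x_{e}\le 1,
\]
for compliance minimization under a volume constraint, versus
\[
\min_{\mathbf{x}}\; V(\mathbf{x})
\quad\text{s.t.}\quad \sigma_{e}^{\mathrm{vM}}(\mathbf{x})\le\sigma_{\mathrm{allow}}\;\;\forall e,\qquad 0<x_{\min}\le x_{e}\le 1,
\]
for volume (weight) minimization under element-level stress constraints; the per-element constraints in the second problem are the source of the computational cost noted above.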
Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo
2017-06-26
Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of coronary arteries reconstructed from images acquired with various imaging modalities. Because optical coherence tomography (OCT) images can provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-based FFR (OCT-FFR) simulations by coupling a three-dimensional (3D) computational fluid dynamics (CFD) model built from geometrically correct OCT images with a lumped parameter model (LPM) based on vessel lengths extracted from coronary X-ray angiography (CAG) data, and to validate the method clinically. To simulate coronary hemodynamics, we developed a fast and accurate method that combines a CFD model of an OCT-based region of interest (ROI) with an LPM of the coronary microvasculature and veins, where the LPM is based on vessel lengths extracted from CAG images. Using this vessel-length-based approach, we describe a theoretical formulation for the total resistance of the LPM from the 3D CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-FFR calculation clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate the 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is computationally fast, enabling clinicians to obtain solutions within the hospital.
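For reference, the quantity being computed is the standard pressure-derived fractional flow reserve,
\[
\mathrm{FFR}=\frac{\bar{P}_{d}}{\bar{P}_{a}},
\]
the ratio of mean pressure distal to the stenosis to mean aortic (proximal) pressure under hyperemic flow; in the simulations the distal pressure comes from the OCT-based 3D CFD domain, with the downstream microvascular boundary conditions supplied by the vessel-length-based lumped parameter model.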
On the phase lag of turbulent dissipation in rotating tidal flows
NASA Astrophysics Data System (ADS)
Zhang, Qianjiang; Wu, Jiaxue
2018-03-01
Field observations of rotating tidal flows in a shallow tidally swept sea reveal that a notable phase lag of both shear production and turbulent dissipation increases with height above the seafloor. These vertical delays of turbulent quantities are approximately equivalent in magnitude to that of squared mean shear. The shear production approximately equals turbulent dissipation over the phase-lag column, and thus a main mechanism of phase lag of dissipation is mean shear, rather than vertical diffusion of turbulent kinetic energy. By relating the phase lag of dissipation to that of the mean shear, a simple formulation with constant eddy viscosity is developed to describe the phase lag in rotating tidal flows. An analytical solution indicates that the phase lag increases linearly with height subjected to a combined effect of tidal frequency, Coriolis parameter and eddy viscosity. The vertical diffusion of momentum associated with eddy viscosity produces the phase lag of squared mean shear, and resultant delay of turbulent quantities. Its magnitude is inhibited by Earth's rotation. Furthermore, a theoretical formulation of the phase lag with a parabolic eddy viscosity profile can be constructed. A first-order approximation of this formulation is still a linear function of height, and its magnitude is approximately 0.8 times that with constant viscosity. Finally, the theoretical solutions of phase lag with realistic viscosity can be satisfactorily justified by realistic phase lags of dissipation.
The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.
de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M
2017-06-01
This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation patterns with the measured safety outcomes. Finally, results provided evidence of discriminant validity between the SCEQ and safety climate. We conclude that the SCEQ is a valid, reliable instrument supported by a theoretical framework, and it is useful to measure the enactment of safety culture in NPPs.
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances
NASA Astrophysics Data System (ADS)
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-01
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
Fractional motion model for characterization of anomalous diffusion from NMR signals.
Fan, Yang; Gao, Jia-Hong
2015-07-01
Measuring molecular diffusion has been used to characterize the properties of living organisms and porous materials. NMR is able to detect the diffusion process in vivo and noninvasively. The fractional motion (FM) model is appropriate to describe anomalous diffusion phenomenon in crowded environments, such as living cells. However, no FM-based NMR theory has yet been established. Here, we present a general formulation of the FM-based NMR signal under the influence of arbitrary magnetic field gradient waveforms. An explicit analytic solution of the stretched exponential decay format for NMR signals with finite-width Stejskal-Tanner bipolar pulse magnetic field gradients is presented. Signals from a numerical simulation matched well with the theoretical prediction. In vivo diffusion-weighted brain images were acquired and analyzed using the proposed theory, and the resulting parametric maps exhibit remarkable contrasts between different brain tissues.
Fractional motion model for characterization of anomalous diffusion from NMR signals
NASA Astrophysics Data System (ADS)
Fan, Yang; Gao, Jia-Hong
2015-07-01
Measuring molecular diffusion has been used to characterize the properties of living organisms and porous materials. NMR is able to detect the diffusion process in vivo and noninvasively. The fractional motion (FM) model is appropriate to describe anomalous diffusion phenomenon in crowded environments, such as living cells. However, no FM-based NMR theory has yet been established. Here, we present a general formulation of the FM-based NMR signal under the influence of arbitrary magnetic field gradient waveforms. An explicit analytic solution of the stretched exponential decay format for NMR signals with finite-width Stejskal-Tanner bipolar pulse magnetic field gradients is presented. Signals from a numerical simulation matched well with the theoretical prediction. In vivo diffusion-weighted brain images were acquired and analyzed using the proposed theory, and the resulting parametric maps exhibit remarkable contrasts between different brain tissues.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances.
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-22
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
Nishawala, Vinesh V.; Ostoja-Starzewski, Martin; Leamy, Michael J.; ...
2015-09-10
Peridynamics is a non-local continuum mechanics formulation that can handle spatial discontinuities as the governing equations are integro-differential equations which do not involve gradients such as strains and deformation rates. This paper employs bond-based peridynamics. Cellular Automata is a local computational method which, in its rectangular variant on interior domains, is mathematically equivalent to the central difference finite difference method. However, cellular automata does not require the derivation of the governing partial differential equations and provides for common boundary conditions based on physical reasoning. Both methodologies are used to solve a half-space subjected to a normal load, known as Lamb's Problem. The results are compared with the theoretical solution from classical elasticity and with experimental results.
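For context, the bond-based peridynamic equation of motion solved in such studies is (in the standard notation, which may differ slightly from the paper's):
\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)=\int_{\mathcal{H}_{\mathbf{x}}}\mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\bigr)\,\mathrm{d}V_{\mathbf{x}'}+\mathbf{b}(\mathbf{x},t),
\]
where H_x is the horizon of point x, f the pairwise bond force, and b the body force density; no spatial derivatives of u appear, which is what lets discontinuities be handled without special treatment.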
Song, H Francis; Wang, Xiao-Jing
2014-12-01
Small-world networks-complex networks characterized by a combination of high clustering and short path lengths-are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
NASA Astrophysics Data System (ADS)
Song, H. Francis; Wang, Xiao-Jing
2014-12-01
Small-world networks—complex networks characterized by a combination of high clustering and short path lengths—are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
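As an illustration of the idea, the sketch below generates an undirected WS-type network directly from a distance-dependent connection probability; the particular probabilities are an assumed form chosen so that the expected degree equals k, not necessarily the paper's exact parameterization:

```python
import numpy as np

def ws_distance_model(n, k, beta, seed=None):
    """WS-type small-world graph from a distance-dependent connection probability.

    Assumed (illustrative) form: nodes at ring distance d <= k/2 connect with
    probability 1 - beta + beta*k/(n-1), all other pairs with probability
    beta*k/(n-1). The expected degree is exactly k, and beta interpolates between
    a ring lattice (beta=0) and an Erdos-Renyi-like random graph (beta=1),
    mirroring the rewiring parameter of the original WS model.
    """
    rng = np.random.default_rng(seed)
    p_far = beta * k / (n - 1)
    p_near = 1.0 - beta + p_far
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))        # circular (ring) distance
            p = p_near if d <= k // 2 else p_far
            if rng.random() < p:
                A[i, j] = A[j, i] = 1
    return A

# usage: a 1000-node network with mean degree 10 and 10% long-range weight
A = ws_distance_model(1000, 10, 0.1, seed=0)
```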
Understanding Developmental Reversals in False Memory: Reply to Ghetti (2008) and Howe (2008)
ERIC Educational Resources Information Center
Brainerd, C. J.; Reyna, V. F.; Ceci, S. J.; Holliday, R. E.
2008-01-01
S. Ghetti (2008) and M. L. Howe (2008) presented probative ideas for future research that will deepen scientific understanding of developmental reversals on false memory and establish boundary conditions for these counterintuitive patterns. Ghetti extended the purview of current theoretical principles by formulating hypotheses about how…
Tracing Bodylines: The Body in Feminist Poststructural Research
ERIC Educational Resources Information Center
Somerville, Margaret
2004-01-01
This paper traces body lines in feminist poststructural research by identifying the conditions under which research into the lived body can be brought into discursive relation with contemporary theoretical formulations of the body. It begins by identifying the erasure of the corporeal body in the somatophobia of essentialism and the exclusive…
The Eight-Step Method to Great Group Work
ERIC Educational Resources Information Center
Steward, Sally; Swango, Jill
2004-01-01
Many science teachers already understand the importance of cooperative learning in the classroom and during lab exercises. From a theoretical perspective, students working in groups learn teamwork and discussion techniques as well as how to formulate and ask questions amongst themselves. From a practical standpoint, group work saves precious…
Some Thoughts about Togetherness: An Introduction.
ERIC Educational Resources Information Center
van Oers, Bert; Hannikainen, Maritta
2001-01-01
Discusses the need to study the social interactive dimension of learning, attempting to formulate a definition of togetherness on a theoretical basis. Explores processes in early childhood that relate to understanding how children learn to maintain togetherness in their group activities, and how a strategy for togetherness may prepare children for…
The structure of shock wave in a gas consisting of ideally elastic, rigid spherical molecules
NASA Technical Reports Server (NTRS)
Cheremisin, F. G.
1972-01-01
Principal approaches to the theoretical study of the shock layer structure are examined. The choice of a molecular model is discussed and three procedures are formulated. These include a numerical calculation method, solution of the kinetic relaxation equation, and solution of the Boltzmann equation.
The Importance of Proving the Null
ERIC Educational Resources Information Center
Gallistel, C. R.
2009-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is…
Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.
ERIC Educational Resources Information Center
Mullen, Kenneth; Ennis, Daniel M.
1987-01-01
Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)
Collaborative Textbook Selection: A Case Study Leading to Practical and Theoretical Considerations
ERIC Educational Resources Information Center
Czerwionka, Lori; Gorokhovsky, Bridget
2015-01-01
This case study developed a collaborative approach to the selection of a Spanish language textbook. The collaborative process consisted of six steps, detailed in this article: team building, generating evaluation criteria, formulating a meaningful rubric, selecting prospective textbooks, calculating rubric results, and reflectively reviewing…
The Nature and Determinants of Intranet Discontinuance after Mandatory Adoption
ERIC Educational Resources Information Center
Cho, Inho
2008-01-01
This research examines post-adoption behavior (discontinuance versus continuance) with the context of Intranet use. Multiple theories are used as theoretical frameworks to extend information communication technology research to the case of post-adoption behavior. Three research questions and six sets of hypotheses are formulated to distinguish…
A theoretical investigation of ground effects on USB configurations
NASA Technical Reports Server (NTRS)
Lan, C. E.
1979-01-01
A formulation predicts the variation of circulation forces and jet reaction forces in ground proximity as a function of ground height. The predicted results agree well with available experimental data. It is shown that the wing-alone theory is not capable of predicting the ground effect for USB configurations.
A SCREENING MODEL FOR SIMULATING DNAPL FLOW AND TRANSPORT IN POROUS MEDIA: THEORETICAL DEVELOPMENT
There exists a need for a simple tool that will allow us to analyze a DNAPL contamination scenario from free-product release to transport of soluble constituents to downgradient receptor wells. The objective of this manuscript is to present the conceptual model and formulate the ...
Perlovsky, Leonid I
2016-01-01
Is it possible to turn psychology into "hard science"? Physics of the mind follows the fundamental methodology of physics in all areas where physics has been developed. What is common among Newtonian mechanics, statistical physics, quantum physics, thermodynamics, theory of relativity, astrophysics… and a theory of superstrings? What is common to all areas of physics is the methodology of physics discussed in the first few lines of the paper. Is physics of the mind possible? Is it possible to describe the mind based on a few first principles, as physics does? The mind with its variabilities and uncertainties, the mind from perception and elementary cognition to emotions and abstract ideas, to high cognition. Is it possible to turn psychology and neuroscience into "hard" sciences? The paper discusses established first principles of the mind, their mathematical formulations, and a mathematical model of the mind derived from these first principles, mechanisms of concepts, emotions, instincts, behavior, language, cognition, intuitions, conscious and unconscious, abilities for symbols, functions of the beautiful and musical emotions in cognition and evolution. Some of the theoretical predictions have been experimentally confirmed. This research won national and international awards. In addition to summarizing existing results, the paper describes new developments, both theoretical and experimental. The paper discusses unsolved theoretical problems as well as experimental challenges for future research.
Perlovsky, Leonid I.
2016-01-01
Is it possible to turn psychology into “hard science”? Physics of the mind follows the fundamental methodology of physics in all areas where physics has been developed. What is common among Newtonian mechanics, statistical physics, quantum physics, thermodynamics, theory of relativity, astrophysics… and a theory of superstrings? What is common to all areas of physics is the methodology of physics discussed in the first few lines of the paper. Is physics of the mind possible? Is it possible to describe the mind based on a few first principles, as physics does? The mind with its variabilities and uncertainties, the mind from perception and elementary cognition to emotions and abstract ideas, to high cognition. Is it possible to turn psychology and neuroscience into “hard” sciences? The paper discusses established first principles of the mind, their mathematical formulations, and a mathematical model of the mind derived from these first principles, mechanisms of concepts, emotions, instincts, behavior, language, cognition, intuitions, conscious and unconscious, abilities for symbols, functions of the beautiful and musical emotions in cognition and evolution. Some of the theoretical predictions have been experimentally confirmed. This research won national and international awards. In addition to summarizing existing results, the paper describes new developments, both theoretical and experimental. The paper discusses unsolved theoretical problems as well as experimental challenges for future research. PMID:27895558
The hydrodynamic description of pseudorapidity distributions at lower energies at BNL-RHIC
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Jin; Huang, Yan; Zhang, Hai-Li; Zhang, Yu
2017-04-01
The hot and dense matter produced in nucleus-nucleus collisions is supposed to expand according to unified hydrodynamics, one of the few theoretical models that can be worked out exactly. The solution is then used to formulate the rapidity distribution of charged particles frozen out from the fluid on the space-like hypersurface with a fixed temperature, T_FO. A comparison is made between the theoretical predictions and the experimental measurements carried out by the PHOBOS Collaboration at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL) in different centrality Au-Au and Cu-Cu collisions at √s_NN = 19.6 and 22.4 GeV, respectively. The theoretical results are in good accordance with the experimental data.
NASA Technical Reports Server (NTRS)
Everhart, Joel Lee
1988-01-01
A theoretical examination of the slotted-wall flow field is conducted to determine the appropriate wall pressure drop (or boundary condition) equation. This analysis improves the understanding of the fluid physics of these types of flow fields and helps in evaluating the uncertainties and limitations existing in previous mathematical developments. It is shown that the resulting slotted-wall boundary condition contains contributions from the airfoil-induced streamline curvature and the non-linear, quadratic, slot crossflow in addition to an often neglected linear term which results from viscous shearing in the slot. Existing and newly acquired experimental data are examined in the light of this formulation and theoretical developments.
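Schematically, a wall pressure drop containing the three contributions named above can be written as
\[
\frac{\Delta p}{q_{\infty}}=A\,\frac{\mathrm{d}\theta}{\mathrm{d}x}+B\,\theta+C\,\theta\,\lvert\theta\rvert,
\]
with θ the crossflow angle at the slotted wall: the streamwise-derivative term represents the airfoil-induced streamline curvature, the linear term the viscous shearing in the slot, and the quadratic term the slot crossflow. This is only a schematic form consistent with the terms listed in the abstract; the coefficients and exact variables are those derived in the original work and are not reproduced here.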
Stout, Stephen M.; Nielsen, Jace; Welage, Lynda S.; Shea, Michael; Brook, Robert; Kerber, Kevin; Bleske, Barry E.
2010-01-01
Studies have demonstrated an influence of dosage release formulations on drug interactions and enantiomeric plasma concentrations. Metoprolol is a commonly used β-adrenergic antagonist metabolized by CYP2D6. The CYP2D6 inhibitor paroxetine has previously been shown to interact with metoprolol tartrate. This open-label, randomized, four-phase crossover study assessed the potential differential effects of paroxetine on the stereoselective pharmacokinetics of immediate release (IR) tartrate and extended release (ER) succinate metoprolol formulations. Ten healthy subjects received metoprolol IR (50 mg) and ER (100 mg) with and without paroxetine coadministration. Blood samples were collected over 24 hours for determination of metoprolol plasma enantiomer concentrations. Paroxetine coadministration significantly increased S- and R-metoprolol AUC(0–24 h) by 4- and 5-fold, respectively, for IR, and 3- and 4-fold, respectively, for ER. S/R AUC ratios significantly decreased. These results demonstrate a pharmacokinetic interaction between paroxetine and both formulations of metoprolol. The interaction is greater with R-metoprolol, and stereoselective metabolism is lost. This could theoretically result in greater β-blockade and lost cardioselectivity. The magnitude of the interaction was similar between metoprolol formulations, which may be attributable to the low doses and drug input rates employed. PMID:20400652
NASA Astrophysics Data System (ADS)
Baysal, Gulcin; Kalav, Berdan; Karagüzel Kayaoğlu, Burçak
2017-10-01
This study aimed to determine the effect of pigment concentration on the fastness and colour values of thermal and ultraviolet (UV) curable pigment printing on synthetic leather. For this purpose, thermal curable solvent-based and UV curable water-based formulations were prepared with different pigment concentrations (3, 5 and 7%) and applied separately by the screen printing technique using a screen printing machine. Samples printed with solvent-based formulations were thermally cured, and samples printed with water-based formulations were cured using a UV curing machine equipped with gallium and mercury (Ga/Hg) lamps at room temperature. The crock fastness values of samples printed with solvent-based formulations showed that increasing the pigment concentration had no effect on either dry or wet crock fastness. On the other hand, in samples printed with UV curable water-based formulations, dry crock fastness was improved and rated as very good for all pigment concentrations; however, increasing the pigment concentration adversely affected wet crock fastness, and lower values were observed. As the energy level of each irradiation source increased, the fastness values improved. Compared with samples printed with solvent-based formulations, samples printed with UV curable water-based formulations yielded higher K/S values at all pigment concentrations. The results suggest that higher K/S values can be obtained with UV curable water-based formulations at a lower pigment concentration than with solvent-based formulations.
Physics of mind: Experimental confirmations of theoretical predictions.
Schoeller, Félix; Perlovsky, Leonid; Arseniev, Dmitry
2018-02-02
What is common among Newtonian mechanics, statistical physics, thermodynamics, quantum physics, the theory of relativity, astrophysics and the theory of superstrings? All these areas of physics have in common a methodology, which is discussed in the first few lines of the review. Is a physics of the mind possible? Is it possible to describe how a mind adapts in real time to changes in the physical world through a theory based on a few basic laws? From perception and elementary cognition to emotions and abstract ideas allowing high-level cognition and executive functioning, at nearly all levels of study, the mind shows variability and uncertainties. Is it possible to turn psychology and neuroscience into so-called "hard" sciences? This review discusses several established first principles for the description of mind and their mathematical formulations. A mathematical model of mind is derived from these principles. This model includes mechanisms of instincts, emotions, behavior, cognition, concepts, language, intuitions, and imagination. We clarify fundamental notions such as the opposition between the conscious and the unconscious, the knowledge instinct and aesthetic emotions, as well as humans' universal abilities for symbols and meaning. In particular, the review discusses at length the evolutionary and cognitive functions of aesthetic emotions and musical emotions. Several theoretical predictions are derived from the model, some of which have been experimentally confirmed. These empirical results are summarized, and we introduce new theoretical developments. Several unsolved theoretical problems are posed, as well as new experimental challenges for future research. Copyright © 2017. Published by Elsevier B.V.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue, resulting in inconsistent usage and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity and (b) determination of the reference as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter equal to the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting-state EEGs. Toward this end, we leverage the MRI and EEG data of 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs, with a known ground truth, show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. They also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly for practical applications, an average lead field is shown to give results comparable to the individual lead field. Finally, selection of the regularization parameter with generalized cross-validation (GCV) is shown to be close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse-solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
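The estimators rAR and rREST above are defined in the paper and are not reproduced here; the sketch below only illustrates the generic ingredients named in the abstract: a ridge/MAP solution of a linear inverse problem whose regularization parameter plays the role of a noise-to-signal variance ratio, and a generalized cross-validation score for choosing it. The lead field, the data vector and the parameter grid are all hypothetical.

import numpy as np

def map_estimate(L, v, lam):
    # Ridge/MAP solution of argmin ||v - L x||^2 + lam ||x||^2 (illustrative only).
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ v)

def gcv_score(L, v, lam):
    # Generalized cross-validation score for one candidate regularization value.
    m, n = L.shape
    H = L @ np.linalg.solve(L.T @ L + lam * np.eye(n), L.T)  # hat matrix
    resid = v - H @ v
    return (np.sum(resid ** 2) / m) / ((1.0 - np.trace(H) / m) ** 2)

# Hypothetical sizes: 32 channels, 200 source amplitudes, one time sample.
rng = np.random.default_rng(0)
L_field = rng.standard_normal((32, 200))
v = rng.standard_normal(32)
candidates = np.logspace(-3, 2, 20)            # candidate noise-to-signal ratios
lam_best = min(candidates, key=lambda lam: gcv_score(L_field, v, lam))
x_hat = map_estimate(L_field, v, lam_best)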
NASA Astrophysics Data System (ADS)
Gomez-Diaz, Juan Sebastian
This Ph.D. dissertation presents a multidisciplinary work involving the development of different novel formulations applied to the accurate and efficient analysis of a wide variety of new structures, devices, and phenomena in the microwave frequency region. The objectives of the present work can be divided into three main research lines: (1) The first research line is devoted to the Green's function analysis of multilayered enclosures with convex arbitrarily-shaped cross section. For this purpose, three accurate spatial-domain formulations are developed at the Green's function level. These techniques are then efficiently incorporated into a mixed-potential integral equation framework, which allows the fast and accurate analysis of multilayered printed circuits in shielded enclosures. The study of multilayered shielded circuits has led to the development of a novel hybrid waveguide-microstrip filter technology, which is light, compact, low-loss and presents important advantages for the space industry. (2) The second research line is related to the impulse-regime study of metamaterial-based composite right/left-handed (CRLH) structures and the subsequent theoretical and practical demonstration of several novel optically-inspired phenomena and applications at microwaves, in both the guided and the radiative regions. This study allows the development of new devices for ultra-wideband and high data-rate communication systems. Besides, this research line also deals with the simple and accurate characterization of CRLH leaky-wave antennas using transmission line theory. (3) The third and last research line presents a novel CRLH parallel-plate waveguide leaky-wave antenna structure, and a rigorous iterative modal-based technique for its fast and complete characterization, including a systematic calculation of the antenna's physical dimensions. It is important to point out that all the theoretical developments and novel structures presented in this work have been numerically confirmed, using both in-house software and commercial full-wave simulations, and experimentally verified through measurements of fabricated prototypes.
NASA Astrophysics Data System (ADS)
Raj, Indu; Mozetic, Miran; Jayachandran, V. P.; Jose, Jiya; Thomas, Sabu; Kalarikkal, Nandakumar
2018-07-01
Antimicrobial, antibiofilm adherent, fracture resistant nano zinc oxide (ZnO NP) formulations based on poly methyl methacrylate (PMMA) matrix were developed using a facile ex situ compression moulding technique. These formulations demonstrated potent, long-term biofilm-resisting effects against Candida albicans (9000 CFU to 1000 CFU) and Streptococcus mutans. Proposed mechanism of biofilm resistance was the release of metallic ions/metal oxide by ‘particle-corrosion’. MTT and cellular proliferation assays confirmed both qualitatively and quantitatively equal human skin fibroblast cell line proliferations (approximately 75%) on both PMMA/ZnO formulation and neat PMMA. Mechanical performance was evaluated over a range of filler loading, and theoretical models derived from Einstein, Guth, Thomas and Quemade were chosen to predict the modulus of the nanoformulations. All the models gave better fitting at lower filler content, which could be due to restricted mobility of the polymer chains by the constrained zone/interfacial rigid amorphous zone and also due to stress absorption by the highly energized NPs. Fracture mechanics were clearly described based on substantial experimental evidence surrounding crack prevention in the initial zones of fracture. Filler‑polymer interactions at the morphological and structural levels were elucidated through FTIR, XRD, SEM, TEM and AFM analyses. Major clinical challenges in cancer patient rehabilitation and routine denture therapy are frequent breakage of the prostheses and microbial colonization on the prostheses/tissues. In the present study, we succeeded in developing an antimicrobial, mechanically improved fracture resistant, biocompatible nanoformulation in a facile manner without the bio-toxic effects of surface modifiers/functionalization. This PMMA/ZnO nanoformulation could serve as a cost effective breakthrough biomaterial in the field of prosthetic rehabilitation and local drug delivery scaffolds for abused tissues.
Beig, Avital; Agbaria, Riad; Dahan, Arik
2013-01-01
The purpose of this study was to investigate the impact of oral cyclodextrin-based formulation on both the apparent solubility and intestinal permeability of lipophilic drugs. The apparent solubility of the lipophilic drug dexamethasone was measured in the presence of various HPβCD levels. The drug’s permeability was measured in the absence vs. presence of HPβCD in the rat intestinal perfusion model, and across Caco-2 cell monolayers. The role of the unstirred water layer (UWL) in dexamethasone’s absorption was studied, and a simplified mass-transport analysis was developed to describe the solubility-permeability interplay. The PAMPA permeability of dexamethasone was measured in the presence of various HPβCD levels, and the correlation with the theoretical predictions was evaluated. While the solubility of dexamethasone was greatly enhanced by the presence of HPβCD (K(1:1) = 2311 M⁻¹), all experimental models showed that the drug’s permeability was significantly reduced following the cyclodextrin complexation. The UWL was found to have no impact on the absorption of dexamethasone. A mass transport analysis was employed to describe the solubility-permeability interplay. The model enabled excellent quantitative prediction of dexamethasone’s permeability as a function of the HPβCD level. This work demonstrates that when using cyclodextrins in solubility-enabling formulations, a tradeoff exists between solubility increase and permeability decrease that must not be overlooked. This tradeoff was found to be independent of the unstirred water layer. The transport model presented here can aid in striking the appropriate solubility-permeability balance in order to achieve optimal overall absorption. PMID:23874557
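The mass-transport analysis itself is not reproduced in the abstract; the sketch below encodes only the common simplifying assumption that the uncomplexed (free) drug fraction drives membrane permeation, so apparent permeability scales with the free fraction set by the reported 1:1 complexation constant. The baseline permeability, the HPβCD molar mass and the concentration grid are illustrative assumptions.

# Free-fraction sketch (assumption): only uncomplexed drug crosses the membrane.
K_11 = 2311.0          # reported 1:1 complexation constant, M^-1
P0 = 1.0               # baseline apparent permeability, arbitrary units (hypothetical)
MW_HPBCD = 1400.0      # assumed molar mass of HPbetaCD, g/mol

def free_fraction(cd_molar):
    # Fraction of drug left uncomplexed when cyclodextrin is in large excess.
    return 1.0 / (1.0 + K_11 * cd_molar)

def apparent_permeability(cd_molar):
    # Permeability drops as the cyclodextrin level, and hence complexation, rises.
    return P0 * free_fraction(cd_molar)

for cd_percent in (0.5, 2.0, 5.0):             # % w/v HPbetaCD, illustrative grid
    cd_molar = cd_percent * 10.0 / MW_HPBCD    # 1% w/v = 10 g/L
    print(cd_percent, round(apparent_permeability(cd_molar), 3))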
f(T) teleparallel gravity and cosmology.
Cai, Yi-Fu; Capozziello, Salvatore; De Laurentis, Mariafelicia; Saridakis, Emmanuel N
2016-10-01
Over recent decades, the role of torsion in gravity has been extensively investigated along the main direction of bringing gravity closer to its gauge formulation and incorporating spin in a geometric description. Here we review various torsional constructions, from teleparallel, to Einstein-Cartan, and metric-affine gauge theories, resulting in extending torsional gravity in the paradigm of f(T) gravity, where f(T) is an arbitrary function of the torsion scalar. Based on this theory, we further review the corresponding cosmological and astrophysical applications. In particular, we study cosmological solutions arising from f(T) gravity, both at the background and perturbation levels, in different eras along the cosmic expansion. The f(T) gravity construction can provide a theoretical interpretation of the late-time acceleration of the universe, alternative to a cosmological constant, and it can easily accommodate the regular thermal expanding history, including the radiation and cold dark matter dominated phases. Furthermore, if one traces back to very early times, a sufficiently long period of inflation can be achieved for a certain class of f(T) models and hence can be investigated through cosmic microwave background observations; alternatively, the Big Bang singularity can be avoided at even earlier moments due to the appearance of non-singular bounces. Various observational constraints, especially the bounds coming from large-scale structure data in the case of f(T) cosmology, as well as the behavior of gravitational waves, are described in detail. Moreover, the spherically symmetric and black hole solutions of the theory are reviewed. Additionally, we discuss various extensions of the f(T) paradigm. Finally, we consider the relation with other modified gravitational theories, such as those based on curvature, like f(R) gravity, trying to illuminate the question of which formulation, or combination of formulations, might be more suitable for quantization ventures and cosmological applications.
On the Ice Nucleation Spectrum
NASA Technical Reports Server (NTRS)
Barahona, D.
2012-01-01
This work presents a novel formulation of the ice nucleation spectrum, i.e. the function relating the ice crystal concentration to cloud formation conditions and aerosol properties. The new formulation is physically based and explicitly accounts for the dependency of the ice crystal concentration on temperature, supersaturation, cooling rate, and particle size, surface area and composition. This is achieved by introducing the concepts of the ice nucleation coefficient (the number of ice germs present in a particle) and the nucleation probability dispersion function (the distribution of ice nucleation coefficients within the aerosol population). The new formulation is used to generate ice nucleation parameterizations for the homogeneous freezing of cloud droplets and for heterogeneous deposition ice nucleation on dust and soot ice nuclei. For homogeneous freezing, it was found that increasing the dispersion in the droplet volume distribution increases the fraction of supercooled droplets in the population. For heterogeneous ice nucleation, the new formulation consistently describes singular and stochastic behavior within a single framework. Using a fundamentally stochastic approach, both cooling rate independence and constancy of the ice nucleation fraction over time, features typically associated with singular behavior, were reproduced. Analysis of the temporal dependency of the ice nucleation spectrum suggested that experimental methods that measure the ice nucleation fraction over a few seconds would tend to underestimate the ice nuclei concentration. It is shown that inferring the aerosol heterogeneous ice nucleation properties from measurements of the onset supersaturation and temperature may carry significant error, as the variability in ice nucleation properties within the aerosol population is not accounted for. This work provides a simple and rigorous ice nucleation framework where theoretical predictions, laboratory measurements and field campaign data can be reconciled, and that is suitable for application in atmospheric modeling studies.
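The parameterizations derived in the paper are not reproduced here; the sketch below only illustrates the two concepts the abstract introduces, under the assumption of a Poisson link between the ice nucleation coefficient (mean number of ice germs per particle) and the freezing probability, with a lognormal form standing in for the nucleation probability dispersion function. All numbers are illustrative.

import numpy as np

def crystal_concentration(n_aerosol, median_coeff, geo_std, samples=100_000):
    # Average the per-particle freezing probability, 1 - exp(-coefficient), over an
    # assumed lognormal dispersion of ice nucleation coefficients in the population.
    rng = np.random.default_rng(1)
    coeffs = rng.lognormal(np.log(median_coeff), np.log(geo_std), samples)
    p_freeze = 1.0 - np.exp(-coeffs)   # probability that at least one germ nucleates
    return n_aerosol * p_freeze.mean()

# Illustrative call: 100 particles per litre, median coefficient 0.1, spread factor 3.
print(crystal_concentration(n_aerosol=100.0, median_coeff=0.1, geo_std=3.0))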
Brain, behaviour and mathematics: Are we using the right approaches? [review article]
NASA Astrophysics Data System (ADS)
Perez Velazquez, Jose Luis
2005-12-01
Mathematics is used in the biological sciences mostly as a quantifying tool, for it is the science of numbers after all. There is a long-standing interest in the application of mathematical methods and concepts to neuroscience in attempts to decipher brain activity. While there has been very wide use of mathematical/physical methodologies, less effort has been made to formulate a comprehensive and integrative theory of brain function. This review concentrates on recent developments, uses and abuses of mathematical formalisms and techniques that are being applied in brain research, particularly the current trend of using dynamical systems theory to unravel the global, collective dynamics of brain activity. It is worth emphasising that the theoretician-neuroscientist, eager to apply mathematical analysis to neuronal recordings, has to consider carefully some crucial anatomo-physiological assumptions that may not be as accurate as the specific methods require. On the other hand, the experimentalist neuro-physicist, with an inclination to implement mathematical thoughts in brain science, has to make an effort to comprehend the bases of the theoretical concepts that can be used as frameworks or as analysis methods for brain electrophysiological recordings, and to critically inspect the accuracy of the interpretations of the results on neurophysiological grounds. It is hoped that this brief overview of anatomical and physiological presumptions and their relation to theoretical paradigms will help clarify some particular points of interest in current trends in brain science, and may provoke further reflection on how certain or uncertain it is to conceptualise brain function based on these theoretical frameworks, if the physiological and experimental constraints are not as accurate as the models prescribe.
The Role of Additional Pulses in Electropermeabilization Protocols
Suárez, Cecilia; Soba, Alejandro; Maglietti, Felipe; Olaiz, Nahuel; Marshall, Guillermo
2014-01-01
Electropermeabilization (EP)-based protocols, such as those applied in medicine, food processing or environmental management, are well established and widely used. The applied voltage, as well as the tissue electric conductivity, are of utmost importance for assessing the final electropermeabilized area and thus EP effectiveness. Experimental results from the literature report that, under certain EP protocols, consecutive pulses increase tissue electric conductivity and even the amount of permeabilization. Here we introduce a theoretical model that takes this effect into account in the application of an EP-based protocol, together with its validation against experimental measurements. The theoretical model describes the electric field distribution by a nonlinear Laplace equation with a variable conductivity coefficient depending on the electric field, the temperature and the number of pulses, and by Pennes' bioheat equation for temperature variations. In the experiments, a vegetable tissue model (potato slice) is used for measuring electric currents and the tissue electropermeabilized area under different EP protocols. Experimental measurements show that, during sequential pulses and keeping the applied voltage constant, the electric current density and the blackened (electropermeabilized) area increase. This behavior can only be attributed to a rise in the electric conductivity due to the higher number of pulses. Accordingly, we present a theoretical model of an EP protocol that correctly predicts the increase in electric current density observed experimentally as pulses are added. The model also demonstrates that the electric current increase is due to a rise in the electric conductivity, in turn induced by temperature and pulse number, with no significant changes in the electric field distribution. The EP model introduced, based on a novel formulation of the electric conductivity, leads to a more realistic description of the EP phenomenon, hopefully providing more accurate predictions of treatment outcomes. PMID:25437512
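The full model above couples a nonlinear Laplace equation to Pennes' bioheat equation; the one-dimensional sketch below drops the thermal part and uses a purely hypothetical sigma(E, n) that grows with the local field and the pulse count, only to show how such a conductivity raises the computed current density as pulses are added. The geometry, voltage and all coefficients are made up.

import numpy as np

L_gap = 0.01                    # electrode spacing, m (illustrative)
N = 101                         # grid points
V_applied = 500.0               # applied voltage, V (illustrative)
dx = L_gap / (N - 1)

def sigma(E, n_pulses):
    # Hypothetical sigmoid increase with |E| plus a small pulse-number contribution.
    base, rise, E_th = 0.03, 0.3, 4.0e4          # S/m, S/m, V/m
    return base + rise / (1.0 + np.exp(-(E - E_th) / 5.0e3)) + 0.005 * n_pulses

def mean_current_density(n_pulses, iters=400):
    phi = np.linspace(V_applied, 0.0, N)         # initial linear potential guess
    for _ in range(iters):                       # Picard/Gauss-Seidel iteration
        E_face = np.abs(np.diff(phi)) / dx       # field magnitude on cell faces
        s = sigma(E_face, n_pulses)
        for i in range(1, N - 1):                # d/dx( s dphi/dx ) = 0, fixed ends
            phi[i] = (s[i - 1] * phi[i - 1] + s[i] * phi[i + 1]) / (s[i - 1] + s[i])
    E_face = np.abs(np.diff(phi)) / dx
    return float(np.mean(sigma(E_face, n_pulses) * E_face))   # A/m^2

for n in (1, 4, 8):
    print(n, round(mean_current_density(n), 1))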
MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.
2000-01-01
Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically-based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.
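The abstract describes the consensus SECs as estimates of the central tendency of guidelines sharing a narrative intent; the sketch below assumes the geometric mean as that central-tendency statistic and uses made-up total-PCB guideline values grouped into the three narrative classes.

import numpy as np

# Consensus sediment effect concentration (SEC) per narrative class, taken here as the
# geometric mean of the guideline values in that class; all values are illustrative.
guidelines_by_class = {
    "threshold_effect": [22.7, 34.1, 50.0],
    "midrange_effect":  [180.0, 277.0, 400.0],
    "extreme_effect":   [676.0, 1100.0, 1700.0],
}
for label, values in guidelines_by_class.items():
    sec = float(np.exp(np.mean(np.log(values))))   # geometric mean
    print(label, round(sec, 1))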
Nanoscale molecular communication networks: a game-theoretic perspective
NASA Astrophysics Data System (ADS)
Jiang, Chunxiao; Chen, Yan; Ray Liu, K. J.
2015-12-01
Currently, communication between nanomachines is an important topic for the development of novel devices. To implement a nanocommunication system, diffusion-based molecular communication is considered as a promising bio-inspired approach. Various technical issues about molecular communications, including channel capacity, noise and interference, and modulation and coding, have been studied in the literature, while the resource allocation problem among multiple nanomachines has not been well investigated, which is a very important issue since all the nanomachines share the same propagation medium. Considering the limited computation capability of nanomachines and the expensive information exchange cost among them, in this paper, we propose a game-theoretic framework for distributed resource allocation in nanoscale molecular communication systems. We first analyze the inter-symbol and inter-user interference, as well as bit error rate performance, in the molecular communication system. Based on the interference analysis, we formulate the resource allocation problem as a non-cooperative molecule emission control game, where the Nash equilibrium is found and proved to be unique. In order to improve the system efficiency while guaranteeing fairness, we further model the resource allocation problem using a cooperative game based on the Nash bargaining solution, which is proved to be proportionally fair. Simulation results show that the Nash bargaining solution can effectively ensure fairness among multiple nanomachines while achieving comparable social welfare performance with the centralized scheme.
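The paper's interference and bit-error-rate analysis defines the actual utilities; the sketch below substitutes a hypothetical log-SINR-style utility just to show the mechanics of a Nash bargaining allocation of emission rates, with a zero disagreement point and made-up noise, coupling and budget constants.

import numpy as np
from scipy.optimize import minimize

n_users, noise, cross_gain, q_max = 3, 0.1, 0.3, 1.0
disagreement = np.zeros(n_users)              # utility if no agreement is reached

def utilities(q):
    # Hypothetical utility: log(1 + own emission / (noise + coupled interference)).
    interference = cross_gain * (q.sum() - q)
    return np.log1p(q / (noise + interference))

def neg_nash_product(q):
    gains = utilities(q) - disagreement
    if np.any(gains <= 0):
        return 1e6                            # outside the bargaining set
    return -np.sum(np.log(gains))             # log of the Nash product

res = minimize(neg_nash_product, x0=np.full(n_users, 0.5 * q_max),
               bounds=[(1e-6, q_max)] * n_users)
print("emission rates:", res.x, "utilities:", utilities(res.x))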
Life cycle, individual thrift, and the wealth of nations.
Modigliani, F
1986-11-07
One theory of the determinants of individual and national thrift has come to be known as the life cycle hypothesis of saving. The state of the art on the eve of the formulation of the hypothesis some 30 years ago is reviewed. Then the theoretical foundations of the model in its original formulation and later amendment are set forth, calling attention to various implications, some distinctive to it and some counterintuitive. A number of crucial empirical tests, both at the individual and the aggregate level, are presented as well as some applications of the life cycle hypothesis of saving to current policy issues.
du Plessis, Lissinda H; Marais, Etienne B; Mohammed, Faruq; Kotzé, Awie F
2014-01-01
In the last decades several new biotechnologically-based therapeutics have been developed due to progress in genetic engineering. A growing challenge facing pharmaceutical scientists is formulating these compounds into oral dosage forms with adequate bioavailability. An increasingly popular approach to formulate biotechnology-based therapeutics is the use of lipid based formulation technologies. This review highlights the importance of lipid based drug delivery systems in the formulation of oral biotechnology based therapeutics including peptides, proteins, DNA, siRNA and vaccines. The different production procedures used to achieve high encapsulation efficiencies of the bioactives are discussed, as well as the factors influencing the choice of excipient. Lipid based colloidal drug delivery systems including liposomes and solid lipid nanoparticles are reviewed with a focus on recent advances and updates. We further describe microemulsions and self-emulsifying drug delivery systems and recent findings on bioactive delivery. We conclude the review with a few examples on novel lipid based formulation technologies.
Control of chemical dynamics by lasers: theoretical considerations.
Kondorskiy, Alexey; Nanbu, Shinkoh; Teranishi, Yoshiaki; Nakamura, Hiroki
2010-06-03
Theoretical ideas are proposed for the laser control of chemical dynamics. Chemical dynamics involves the following three elementary processes: (i) motion of the wave packet on a single adiabatic potential energy surface, (ii) excitation/de-excitation or pump/dump of the wave packet, and (iii) nonadiabatic transitions at conical intersections of potential energy surfaces. A variety of chemical dynamics can be controlled if we can control these three elementary processes as we desire. For (i) we have formulated the semiclassical guided optimal control theory, which can be applied to multidimensional real systems. The quadratic or periodic frequency chirping method can achieve process (ii) with high efficiency, close to 100%. Concerning process (iii), the directed momentum method, in which a predetermined momentum vector is given to the initial wave packet, makes it possible to enhance the desired transitions at conical intersections. In addition to these three processes, the intriguing phenomenon of complete reflection in the nonadiabatic-tunneling type of potential curve crossing can also be used to control a certain class of chemical dynamics. The basic ideas and theoretical formulations are provided for the above-mentioned processes. To demonstrate the effectiveness of these control methods, numerical examples are shown for the following processes: (a) vibrational photoisomerization of HCN, (b) selective and complete excitation of the fine-structure levels of K and Cs atoms, (c) photoconversion of cyclohexadiene to hexatriene, and (d) photodissociation of OHCl to O + HCl.
[Application of β-cyclodextrin in the formulation of ODT tablets containing ibuprofen].
Zimmer, Łukasz; Kasperek, Regina; Poleszak, Ewa
2014-01-01
An oral disintegrating tablet (ODT) dissolves or disintegrates in saliva and is then swallowed. The diluent in a direct compression formulation has a dual role: it increases the bulk of the dosage form and it promotes binding of the constituent particles of the formulation. Hence, the selection of the diluent is important in tablets produced by the direct compression method. The aim of this work was to examine the feasibility of preparing and optimizing an oral disintegrating tablet formulation using β-cyclodextrin as a diluent. Round 400 mg tablets were prepared by the direct compression method on a single-punch tablet press with flat plain-faced punches. β-CD and MCC (microcrystalline cellulose, Vivapur 102), together making up 60% of the formulation, were used in different proportions in all formulations, and 5% Kollidon CL was added as a superdisintegrant. The eight formulations prepared were assessed for weight variation, thickness, disintegration time, hardness and dissolution rate according to FP IX. The dissolution test was performed at 37°C using the paddle method at 50 rpm with 900 mL phosphate buffer (pH 6.8) as the dissolution medium. The content of ibuprofen sodium was found to be within ±5% of the theoretical value. Hardness values of the tablets were in the range 0.11-0.15 kG/mm2. Friability of the tablets lower than 1% indicates that the developed formulations can be processed and handled without excessive care. Disintegration time was in the range of 86 to 161 s. The results confirm the good mechanical properties of tablets containing β-CD. A composition with 20% β-CD and 40% MCC best met the requirements of an optimum formulation; its properties were similar to those of Ludiflash, the formulation used for comparison. In the present study, a higher concentration of β-cyclodextrin was found to improve the hardness of the tablets without increasing the disintegration time.
Improving the Critic Learning for Event-Based Nonlinear H∞ Control Design.
Wang, Ding; He, Haibo; Liu, Derong
2017-10-01
In this paper, we aim at improving the critic learning criterion to cope with event-based nonlinear H∞ state feedback control design. First, the H∞ control problem is regarded as a two-player zero-sum game, and the adaptive critic mechanism is used to achieve the minimax optimization in an event-based environment. Then, based on an improved updating rule, the event-based optimal control law and the time-based worst-case disturbance law are obtained approximately by training a single critic neural network. An initial stabilizing control is no longer required during the implementation of the new algorithm. Next, the closed-loop system is formulated as an impulsive model, and its stability is handled by incorporating the improved learning criterion. The infamous Zeno behavior of the present event-based design is also avoided through theoretical analysis of the lower bound of the minimal intersample time. Finally, applications to aircraft dynamics and a robot arm plant are carried out to verify the efficient performance of the proposed design method.
Formalization of Generalized Constraint Language: A Crucial Prelude to Computing With Words.
Khorasani, Elham S; Rahimi, Shahram; Calvert, Wesley
2013-02-01
The generalized constraint language (GCL), introduced by Zadeh, serves as a basis for computing with words (CW). It provides an agenda to express the imprecise and fuzzy information embedded in natural language and allows reasoning with perceptions. Despite its fundamental role, the definition of GCL has remained informal since its introduction by Zadeh, and to our knowledge, no attempt has been made to formulate a rigorous theoretical framework for GCL. Such formalization is necessary for further theoretical and practical advancement of CW for two important reasons. First, it provides the underlying infrastructure for the development of useful inference patterns based on sound theories. Second, it determines the scope of GCL and hence facilitates the translation of natural language expressions into GCL. This paper is an attempt to step in this direction by providing a formal syntax together with a compositional semantics for GCL. A soundness theorem is defined, and Zadeh's deduction rules are proved to be valid in the defined semantics. Furthermore, a discussion is provided on how the proposed language may be used in practice.
Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions
NASA Astrophysics Data System (ADS)
Barré, J.; Carrillo, J. A.; Degond, P.; Peurichard, D.; Zatorska, E.
2018-02-01
We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to look for the numerical solution of the time-fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time-fractional derivatives using finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion of the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give a priori error estimates in two- and three-dimensional cases. In order to verify our theoretical analysis, we give some numerical results computed with Matlab. PMID:25184148
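The abstract says the Caputo derivatives are discretized with finite differences without naming the scheme; the sketch below shows the widely used L1 approximation for order 0 < alpha < 1 as one such discretization (the paper's own scheme, and the orders appearing in the telegraph equation, may differ), checked against the exact Caputo derivative of t^2.

import math
import numpy as np

def caputo_l1(u, dt, alpha):
    # L1 finite-difference approximation of the Caputo derivative of order
    # 0 < alpha < 1 at every time level, given samples u[0..N] with step dt.
    n_steps = len(u) - 1
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n_steps)]
    d = np.zeros(n_steps + 1)
    for n in range(1, n_steps + 1):
        d[n] = coef * sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
    return d

# Check against the exact Caputo derivative of t^2, namely 2 t^(2-alpha) / Gamma(3-alpha).
alpha, dt = 0.8, 1.0e-2
t = np.arange(0.0, 1.0 + dt, dt)
approx = caputo_l1(t ** 2, dt, alpha)
exact = 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
print(abs(approx[-1] - exact[-1]))    # small discretization error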
Syed, Haroon Khalid; Liew, Kai Bin; Loh, Gabriel Onn Kit; Peh, Kok Khiang
2015-03-01
A stability-indicating HPLC-UV method for the determination of curcumin in Curcuma longa extract and emulsion was developed. The system suitability parameters, theoretical plates (N), tailing factor (T), capacity factor (K'), height equivalent of a theoretical plate (H) and resolution (Rs) were calculated. Stress degradation studies (acid, base, oxidation, heat and UV light) of curcumin were performed in emulsion. It was found that N>6500, T<1.1, K' was 2.68-3.75, HETP about 37 and Rs was 1.8. The method was linear from 2 to 200 μg/mL with a correlation coefficient of 0.9998. The intra-day precision and accuracy for curcumin were ⩽0.87% and ⩽2.0%, while the inter-day precision and accuracy values were ⩽2.1% and ⩽-1.92. Curcumin degraded in emulsion under acid, alkali and UV light. In conclusion, the stability-indicating method could be employed to determine curcumin in bulk and emulsions. Copyright © 2014 Elsevier Ltd. All rights reserved.
A finite element simulation of biological conversion processes in landfills.
Robeck, M; Ricken, T; Widmann, R
2011-04-01
Landfills are the most common way of waste disposal worldwide. Biological processes convert the organic material into an environmentally harmful landfill gas, which contributes to the greenhouse effect. After the depositing of waste has stopped, the conversion processes continue and emissions last for several decades, up to 100 years and longer. A good prediction of these processes is of high importance for landfill operators as well as for authorities, but suitable models for a realistic description of landfill processes are still rather poor. In order to take the strongly coupled conversion processes into account, a constitutive three-dimensional model based on the multiphase Theory of Porous Media (TPM) has been developed at the University of Duisburg-Essen. The theoretical formulations are implemented in the finite element code FEAP. With the presented calculation concept we are able to simulate the coupled processes that occur in an actual landfill. The model's theoretical background and the simulation results, including a successfully completed simulation of a real landfill body, are presented. Copyright © 2010 Elsevier Ltd. All rights reserved.
Photonic Molecule Lasers Revisited
NASA Astrophysics Data System (ADS)
Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J.
2014-05-01
Photonic molecules (PMs) formed by coupling two or more optical resonators are ideal candidates for the fabrication of integrated microlasers, photonic molecule lasers. Whereas most calculations on PM lasers have been based on cold-cavity (passive) modes, i.e. quasi-bound states, a recently formulated steady-state ab initio laser theory (SALT) offers the possibility to take into account the spectral properties of the underlying gain transition, its position and linewidth, as well as to incorporate an arbitrary pump profile. We will combine two theoretical approaches to characterize the lasing properties of PM lasers: for two-dimensional systems, the generalized Lorenz-Mie theory will be used to obtain the resonant modes of the coupled molecules in an active medium described by SALT. Not only is the theoretical description then more complete, but the use of an active medium also provides additional parameters to control, engineer and harness the lasing properties of PM lasers for ultra-low-threshold and directional single-mode emission. We will extend our recent study and present new results for a number of promising geometries. The authors acknowledge financial support from NSERC (Canada) and the CERC in Photonic Innovations of Y. Messaddeq.
Theoretical Technology Research for the International Solar Terrestrial Physics (ISTP) Program
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, Maha; Curtis, Steve (Technical Monitor)
2002-01-01
During the last four years the UCLA (University of California, Los Angeles) IGPP (Institute of Geophysics and Planetary Physics) Space Plasma Simulation Group has continued its theoretical effort to develop a Mission Oriented Theory (MOT) for the International Solar Terrestrial Physics (ISTP) program. This effort has been based on a combination of approaches: analytical theory, large-scale kinetic (LSK) calculations, global magnetohydrodynamic (MHD) simulations and self-consistent plasma kinetic (SCK) simulations. These models have been used to formulate a global interpretation of local measurements made by the ISTP spacecraft. The regions of application of the MOT cover most of the magnetosphere: solar wind, low- and high-latitude magnetospheric boundary, near-Earth and distant magnetotail, and auroral region. The most recent investigations include: plasma processes in the electron foreshock, response of the magnetospheric cusp, particle entry into the magnetosphere, sources of observed distribution functions in the magnetotail, transport of oxygen ions, self-consistent evolution of the magnetotail, substorm studies, effects of explosive reconnection, and auroral acceleration simulations. A complete list of the activities completed under the grant follows.
Validating the Mexican American Intergenerational Caregiving Model
ERIC Educational Resources Information Center
Escandon, Socorro
2011-01-01
The purpose of this study was to substantiate and further develop a previously formulated conceptual model of Role Acceptance in Mexican American family caregivers by exploring the theoretical strengths of the model. The sample consisted of women older than 21 years of age who self-identified as Hispanic, were related through consanguinal or…
Adult Learners' Emotions in Online Learning
ERIC Educational Resources Information Center
Zembylas, Michalinos
2008-01-01
The aim of the research study reported in this article was to investigate how adult learners talk about their emotions in the context of a year-long online course, the first online course these adults take, as part of a distance education program. The theoretical and methodological approach focused on formulating an account of how emotion…
Barriers to College Access for Latino/a Adolescents: A Comparison of Theoretical Frameworks
ERIC Educational Resources Information Center
Gonzalez, Laura M.
2015-01-01
A comprehensive description of barriers to college access for Latino/a adolescents is an important step toward improving educational outcomes. However, relevant scholarship on barriers has not been synthesized in a way that promotes coherent formulation of intervention strategies or constructive scholarly discussion. The goal of this article is to…
Archive and Database as Metaphor: Theorizing the Historical Record
ERIC Educational Resources Information Center
Manoff, Marlene
2010-01-01
Digital media increase the visibility and presence of the past while also reshaping our sense of history. We have extraordinary access to digital versions of books, journals, film, television, music, art and popular culture from earlier eras. New theoretical formulations of database and archive provide ways to think creatively about these changes…
Dementia and Depression: A Process Model for Differential Diagnosis.
ERIC Educational Resources Information Center
Hill, Carrie L.; Spengler, Paul M.
1997-01-01
Delineates a process model for mental-health counselors to follow in formulating a differential diagnosis of dementia and depression in adults 65 years and older. The model is derived from empirical, theoretical, and clinical sources of evidence. Explores components of the clinical interview, of hypothesis formation, and of hypothesis testing.…
ERIC Educational Resources Information Center
Hansen, Janne Hedegaard
2012-01-01
In this article, I will argue that a theoretical identification of the limit to inclusion is needed in the conceptual identification of inclusion. On the one hand, inclusion is formulated as a vision that is, in principle, limitless. On the other hand, there seems to be an agreement that inclusion has a limit in the pedagogical practice. However,…
The Professional Negotiator: Role Conflict, Role Ambiguity and Motivation To Work.
ERIC Educational Resources Information Center
Medford, Robert E.; Miskel, Cecil
The investigation examined the relationship among role conflict, role ambiguity, and motivation to work of teacher-negotiators. The theoretical rationale for the study was formulated from the finding of Walton and McKersie, Deutsch, Vidmar and McGrath, and Blum concerning the negotiator's conflict with his adversary, his dependence on his…
ERIC Educational Resources Information Center
Nager, Nancy, Ed.; Shapiro, Edna K., Ed.
This book reviews the history of the developmental-interactive approach, a formulation rooted in developmental psychology and educational practice, progressively informing educational thinking since the early 20th century. The book describes and analyzes key assumptions and assesses the compatibility of new theoretical approaches, focuses on…
The Great Fallacy of the H Plus Ion and the True Nature of H3O Plus.
ERIC Educational Resources Information Center
Giguere, Paul A.
1979-01-01
Experimental and theoretical data are presented which verify the existence of the hydronium ion. This existence was confirmed directly by x-ray and neutron diffraction in hydrochloric acid. The abandonment of the erroneous hydrogen ion formulation, and of names such as proton hydrate, is recommended. (BT)
Reforming Teacher Education through a Professionally Applied Study of Teaching
ERIC Educational Resources Information Center
Ure, Christine Leslie
2010-01-01
This paper presents a review of research of teacher education and the formulation of a model of teacher development that encompasses five domains of knowledge. The model provides a curriculum and pedagogical framework for initial teacher education that links together the theoretical, practical and professional elements of teaching and learning.…
An Analysis of the Community College Concept in the Socialist Republic of Viet Nam
ERIC Educational Resources Information Center
Epperson, Cynthia K.
2010-01-01
The purpose of this study was to discover if core characteristics exist forming a Vietnamese community college model and to determine if the characteristics would explain the model. This study utilized three theoretical orientations while reviewing the existing literature, formulating the research questions, examining the data and drawing…
ERIC Educational Resources Information Center
Benson, J. Kenneth; And Others
Interagency relationships have an important bearing upon the effectiveness with which public services are provided to disadvantaged populations. The present study examines interagency interactions and service delivery to the disadvantaged from both an empirical and a theoretical perspective. The findings may be helpful both in the formulation of…
A stochastic dynamic model for human error analysis in nuclear power plants
NASA Astrophysics Data System (ADS)
Delgado-Loperena, Dharma
Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it includes concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve any other formula used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from the simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on the understanding of the patterns of human error analysis can be gleaned, helping to reduce and prevent undesirable events.
Intermediate-energy nuclear chemistry workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, G.W.; Giesler, G.C.; Liu, L.C.
1981-05-01
This report contains the proceedings of the LAMPF Intermediate-Energy Nuclear Chemistry Workshop held in Los Alamos, New Mexico, June 23-27, 1980. The first two days of the Workshop were devoted to invited review talks highlighting current experimental and theoretical research activities in intermediate-energy nuclear chemistry and physics. Working panels representing major topic areas carried out in-depth appraisals of present research and formulated recommendations for future research directions. The major topic areas were Pion-Nucleus Reactions, Nucleon-Nucleus Reactions and Nuclei Far from Stability, Mesonic Atoms, Exotic Interactions, New Theoretical Approaches, and New Experimental Techniques and New Nuclear Chemistry Facilities.
PCB intake from sport fishing along the Northern Illinois shore of Lake Michigan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pellettieri, M.B.; Hallenbeck, W.H.; Brenniman, G.R.
1996-12-31
Polychlorinated biphenyls (PCBs) are chlorinated hydrocarbons with an empirical formula of C12H(10-x)Clx. The biphenyl can have from one to 10 chlorine substitutions, resulting in 209 theoretical congeners. Commercial formulations of PCBs are complex mixtures of congeners; 125 congeners have been identified in commercial formulations. PCBs have entered the aquatic environment by industrial discharge, airborne deposition, and release from sediments. The most likely route of non-occupational human exposure to PCBs is from consumption of contaminated fish. PCBs are considered to be the most important contaminants in fish from the Great Lakes. Hence, in 1993 the Great Lakes Fish and Advisory Task Force developed a fish consumption advisory for the Great Lakes which incorporated a Health Protection Value (HPV) of 3.5 µg of PCBs/day. This study combines the creel species, weight, and length distribution data with PCB monitoring data to quantitate the theoretical intake of PCBs by sport fishermen in the Chicago area. 6 refs., 3 tabs.
Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal-conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation-specific problems. Volume 2 describes generalizations of those formulations.
Imbedded-Fracture Formulation of THMC Processes in Fractured Media
NASA Astrophysics Data System (ADS)
Yeh, G. T.; Tsai, C. H.; Sung, R.
2016-12-01
Fractured media consist of porous materials and fracture networks. There exist four approaches to mathematically formulating THMC (Thermal-Hydrology-Mechanics-Chemistry) process models in such systems: (1) Equivalent Porous Media, (2) Dual Porosity or Dual Continuum, (3) Heterogeneous Media, and (4) Discrete Fracture Network. The first approach cannot explicitly explore the interactions between porous materials and fracture networks. The second approach introduces too many extra parameters (namely, exchange coefficients) between the two media. The third approach may make the problems too stiff because the degree of material heterogeneity may be too large. The fourth approach ignores the interaction between porous materials and fracture networks. This talk presents an alternative approach in which fracture networks are modeled with a lower dimension than the surrounding porous materials. A theoretical derivation of the mathematical formulations will be given, and an example will be illustrated to show the feasibility of this approach.
NASA Astrophysics Data System (ADS)
El-Kelany, Kh. E.; Ravoux, C.; Desmarais, J. K.; Cortona, P.; Pan, Y.; Tse, J. S.; Erba, A.
2018-06-01
Lanthanide sesquioxides are strongly correlated materials characterized by highly localized unpaired electrons in the f band. Theoretical descriptions based on standard density functional theory (DFT) formulations are known to be unable to correctly describe their peculiar electronic and magnetic features. In this study, electronic and magnetic properties of the first four lanthanide sesquioxides in the series are characterized through a reliable description of spin localization as ensured by hybrid functionals of the DFT, which include a fraction of nonlocal Fock exchange. Because of the high localization of the f electrons, multiple metastable electronic configurations are possible for their ground state depending on the specific partial occupation of the f orbitals: the most stable configuration is here found and characterized for all systems. Magnetic ordering is explicitly investigated, and the higher stability of an antiferromagnetic configuration with respect to the ferromagnetic one is predicted. The critical role of the fraction of exchange on the description of their electronic properties (notably, on spin localization and on the electronic band gap) is addressed. In particular, a recently proposed theoretical approach based on a self-consistent definition—through the material dielectric response—of the optimal fraction of exchange in hybrid functionals is applied to these strongly correlated materials.
Exact comprehensive equations for the photon management properties of silicon nanowire
Li, Yingfeng; Li, Meicheng; Li, Ruike; Fu, Pengfei; Wang, Tai; Luo, Younan; Mbengue, Joseph Michel; Trevor, Mwenya
2016-01-01
Unique photon management (PM) properties of the silicon nanowire (SiNW) make it an attractive building block for a host of nanowire photonic devices including photodetectors, chemical and gas sensors, waveguides, optical switches, solar cells, and lasers. However, the lack of efficient equations for the quantitative estimation of the SiNW's PM properties limits the rational design of such devices. Herein, we establish comprehensive equations to evaluate several important performance features for the PM properties of SiNW, based on theoretical simulations. Firstly, the relationships between the resonant wavelengths (RW), where SiNW can harvest light most effectively, and the size of SiNW are formulated. Then, equations for the light-harvesting efficiency at RW, which determines the single-frequency performance limit of SiNW-based photonic devices, are established. Finally, equations for the light-harvesting efficiency of SiNW over the full spectrum, which are of great significance in photovoltaics, are established. Furthermore, using these equations, we have derived four extra formulas to estimate the optimal size of SiNW for light-harvesting. These equations can reproduce the majority of the reported experimental and theoretical results with only ~5% deviations. Our study fills a gap in quantitatively predicting the SiNW's PM properties, which will contribute significantly to its practical applications. PMID:27103087
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
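For readers unfamiliar with the linear special case mentioned above: for a linear projection w^T X, a Fisher-type Rayleigh quotient is maximized by solving a small generalized eigenvalue problem, with w proportional to Sw^{-1}(mu1 - mu0). The sketch below is illustrative only (it is not the QUADRO algorithm, which addresses the sparse, quadratic, high-dimensional case) and shows this baseline in a few lines of Python:

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X0 = rng.normal(size=(n, p))                        # class 0
X1 = rng.normal(size=(n, p)) + [1.0, 0.5, 0, 0, 0]  # class 1, shifted mean

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled within-class scatter

# The quotient (w^T (mu1 - mu0))^2 / (w^T Sw w) is maximized by the
# Fisher discriminant direction w ~ Sw^{-1} (mu1 - mu0).
w = np.linalg.solve(Sw, mu1 - mu0)
w /= np.linalg.norm(w)
print(w)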
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, S. M.
1991-01-01
The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, explicit forms for the corresponding material tangent stiffness tensors are developed, and these are valid for the entire deformation range; i.e., with both distinct as well as repeated principal-stretch values. Throughout the analysis the various implications of the underlying property of separability of the strain-energy functions are exploited, thus leading to compact final forms of the tensor expressions. In particular, this facilitated the treatment of complex cases of uncoupled volumetric/deviatoric formulations for incompressible materials. The forms derived are also amenable for use with symbolic-manipulation packages for systematic code generation.
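For reference, the separable Ogden-type strain-energy function discussed here is commonly written in terms of the principal stretches as

W(\lambda_1, \lambda_2, \lambda_3) = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p} \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3 \right),

and it is this additive separability in the individual stretches that permits compact tangent-stiffness expressions, including the limiting forms needed when two or three principal stretches coincide (the notation above is the textbook form, not necessarily the authors' exact parametrization).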
Theoretical aspects of self-assembly of proteins: A Kirkwood-Buff-theory approach
NASA Astrophysics Data System (ADS)
Ben-Naim, Arieh
2013-06-01
A new approach to the problem of self-assembly of proteins induced by temperature, pressure, or changes in solute concentration is presented. The problem is formulated in terms of Le Chatelier principle, and a solution is sought in terms of the Kirkwood-Buff theory of solutions. In this article we focus on the pressure and solute effects on the association-dissociation equilibrium. We examine the role of both hydrophobic and hydrophilic effects. We argue that the latter are more important than the former. The solute effect, on the other hand, depends on the preferential solvation of the monomer and the aggregate with respect to solvent and co-solvent molecules. An experimental approach based on model compounds to study these effects is suggested.
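For context, the central quantities of the Kirkwood-Buff theory invoked here are the integrals

G_{ij} = 4\pi \int_0^{\infty} \left[ g_{ij}(r) - 1 \right] r^2 \, dr,

where g_{ij}(r) is the pair correlation function between species i and j; differences between such integrals quantify the preferential solvation of the monomer and the aggregate with respect to solvent and co-solvent molecules that the abstract refers to.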
Forward multiple scattering corrections as function of detector field of view
NASA Astrophysics Data System (ADS)
Zardecki, A.; Deepak, A.
1983-06-01
The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small-angle approximation. The method is approximate in the sense that a further approximation is made beyond the small-angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as the size of the detector's aperture, for three different values of the optical depth tau (1.0, 4.0 and 10.0). Three cases of aperture size were considered, namely, equal to, smaller than, or larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three cases is clearly evident.
Computational methods for the identification of spatially varying stiffness and damping in beams
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1986-01-01
A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.
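A representative form of the hybrid model described here, sketched in standard notation (a generic Euler-Bernoulli beam with spatially varying flexural stiffness EI(x) and Kelvin-Voigt damping coefficient c_D I(x), not necessarily the exact system treated in the report), is

\rho(x)\, u_{tt}(t,x) + \frac{\partial^2}{\partial x^2} \left[ EI(x)\, u_{xx}(t,x) + c_D I(x)\, u_{xxt}(t,x) \right] = f(t,x),

together with boundary conditions at the clamped root and at the tip body; the inverse problem then seeks the coefficient functions that minimize a least-squares misfit between the model output and the measured response.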
Basis for paraxial surface-plasmon-polariton packets
NASA Astrophysics Data System (ADS)
Martinez-Herrero, Rosario; Manjavacas, Alejandro
2016-12-01
We present a theoretical framework for the study of surface-plasmon polariton (SPP) packets propagating along a lossy metal-dielectric interface within the paraxial approximation. Using a rigorous formulation based on the plane-wave spectrum formalism, we introduce a set of modes that constitute a complete basis set for the solutions of Maxwell's equations for a metal-dielectric interface in the paraxial approximation. The use of this set of modes allows us to fully analyze the evolution of the transversal structure of SPP packets beyond the single plane-wave approximation. As a paradigmatic example, we analyze the case of a Gaussian SPP mode, for which, exploiting the analogy with paraxial optical beams, we introduce a set of parameters that characterize its propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez-Moreno, P.; Instituto 'Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada; Zozor, S.
The Rényi and Shannon entropies are information-theoretic measures which have made it possible to formulate the position-momentum uncertainty principle in a much more adequate and stringent way than the (variance-based) Heisenberg-like relation. Moreover, they are closely related to various energetic density functionals of quantum systems. Here we derive upper bounds on these quantities in terms of the second-order moment
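For a probability density ρ (in position or momentum space), the measures referred to here are

S[\rho] = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r})\, d\mathbf{r}, \qquad R_p[\rho] = \frac{1}{1-p} \ln \int \rho(\mathbf{r})^{p}\, d\mathbf{r} \quad (p > 0,\ p \neq 1),

with the Shannon entropy recovered from the Rényi entropy in the limit p → 1.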
Design of helicopter rotor blades for optimum dynamic characteristics
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ko, T.; Korn, A. E.; Rossow, M. P.
1982-01-01
The possibilities and the limitations of tailoring blade mass and stiffness distributions to give an optimum blade design in terms of weight, inertia, and dynamic characteristics are investigated. Changes in mass or stiffness distribution used to place rotor frequencies at desired locations are determined. Theoretical limits to the amount of frequency shift are established. Realistic constraints on blade properties based on weight, mass moment of inertia, size, strength, and stability are formulated. The extent to which hub loads can be minimized by proper choice of the EI distribution is determined. Emphasis is placed on configurations that are simple enough to yield clear, fundamental insight into the structural mechanisms, yet sufficiently complex to produce a realistic optimum rotor blade.
NASA Astrophysics Data System (ADS)
Patel, Mrunali R.; Patel, Rashmin B.; Parikh, Jolly R.; Patel, Bharat G.
2016-04-01
Isotretinoin was formulated in novel microemulsion-based gel formulation with the aim of improving its solubility, skin tolerability, therapeutic efficacy, skin-targeting efficiency and patient compliance. Microemulsion was formulated by the spontaneous microemulsification method using 8 % isopropyl myristate, 24 % Labrasol, 8 % plurol oleique and 60 % water as an external phase. All plain and isotretinoin-loaded microemulsions were clear and showed physicochemical parameters for the desired topical delivery and stability. The permeation profiles of isotretinoin through rat skin from selected microemulsion formulation followed zero-order kinetics. Microemulsion-based gel was prepared by incorporating Carbopol®971 in optimized microemulsion formulation having suitable skin permeation rate and skin uptake. Microemulsion-based gel showed desired physicochemical parameters and demonstrated advantage over marketed formulation in improving the skin tolerability of isotretinoin, indicating its potential in improving topical delivery of isotretinoin. The developed microemulsion-based gel may be a potential drug delivery vehicle for targeted topical delivery of isotretinoin in the treatment of acne.
Graphene-Based Liquid-Gated Field Effect Transistor for Biosensing: Theory and Experiments
Reiner-Rozman, Ciril; Larisika, Melanie; Nowak, Christoph; Knoll, Wolfgang
2015-01-01
We present an experimental and theoretical characterization of reduced graphene oxide (rGO) based FETs used for biosensing applications. The presented approach shows a complete result analysis and theoretically predictable electrical properties. The formulation was tested for the analysis of the device performance in the liquid-gate mode of operation with variation of the ionic strength and pH values of the electrolytes in contact with the FET. The dependence on the Debye length was confirmed experimentally and theoretically, utilizing the Debye length as a working parameter and thus defining the limits of applicability for the presented rGO-FETs. Furthermore, the FETs were tested for the sensing of biomolecules (bovine serum albumin (BSA) as reference) binding to gate-immobilized anti-BSA antibodies and analyzed using the Langmuir binding theory for the description of the equilibrium surface coverage as a function of the bulk (analyte) concentration. The obtained binding coefficients for BSA are found to be the same as results reported in the literature, hence confirming the applicability of the devices. The FETs used in the experiments were fabricated using wet-chemically synthesized graphene, display high electron and hole mobility (μ), and provide strong sensitivity even to small potential changes (caused by changes in pH, ion concentration, or molecule adsorption). The binding coefficient for the BSA-anti-BSA interaction shows behavior corresponding to the Langmuir adsorption theory, with a Limit of Detection (LOD) in the picomolar concentration range. The presented approach shows high reproducibility and sensitivity and a good agreement of the experimental results with the calculated data. PMID:25791463
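The Langmuir description of the equilibrium surface coverage used in that analysis takes the familiar form

\theta(c) = \frac{c}{c + K_D},

where c is the bulk analyte (BSA) concentration and K_D the dissociation constant; fitting the sensor response versus concentration to this form is what yields the binding coefficient quoted above (the symbols here are generic, not the paper's notation).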
NASA Astrophysics Data System (ADS)
Chatterjee, Sudip K.; Khan, Saba N.; Chaudhuri, Partha Roy
2014-12-01
An ultra-wide 1646 nm (1084-2730 nm), continuous-wave, single-pump parametric amplification spanning the near-infrared to short-wave infrared band (NIR-SWIR) in a host lead-silicate-based binary multi-clad microstructure fiber (BMMF) is analyzed and reported. This ultra-broadband (widest reported to date) parametric amplification with gain of more than 10 dB is theoretically achieved by combining a low-power input pump source of ~7 W with a short, ~70 cm length of nonlinear BMMF through accurately engineered multi-order dispersion coefficients. A highly efficient theoretical formulation based on four-wave mixing (FWM) is worked out to determine the fiber's chromatic dispersion (D) profile, which is used to optimise the gain-bandwidth and the ripple of the parametric gain profile. It is seen that by appropriately controlling the higher-order dispersion coefficients (up to sixth order), a great enhancement in the gain-bandwidth (2-3 times) can be achieved when operating very close to the zero-dispersion wavelength (ZDW) in the anomalous dispersion regime. Moreover, the proposed theoretical model can predict the maximum realizable spectral width and the required pump detuning (w.r.t. the ZDW) of any advanced complex microstructured fiber. Our thorough investigation of the wide variety of broadband gain spectra obtained as an integral part of this research work opens up the way for realizing amplification in the SWIR region, located far from the pump (NIR), where good amplifiers currently do not exist.
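For orientation, in the standard single-pump FWM picture the parametric gain coefficient is (a textbook sketch, not the authors' exact formulation)

g(\Omega) = \sqrt{(\gamma P_0)^2 - (\kappa/2)^2}, \qquad \kappa = \Delta\beta + 2\gamma P_0, \qquad \Delta\beta \approx 2 \sum_{m = 2, 4, 6} \frac{\beta_m}{m!}\, \Omega^{m},

where P_0 is the pump power, γ the nonlinear coefficient, Ω the pump-signal detuning, and β_m the dispersion coefficients at the pump wavelength; retaining terms up to β_6 is what allows the gain bandwidth near the ZDW to be flattened and extended in the manner described.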
Stability and Interaction of Coherent Structure in Supersonic Reactive Wakes
NASA Technical Reports Server (NTRS)
Menon, Suresh
1983-01-01
A theoretical formulation and analysis is presented for a study of the stability and interaction of coherent structure in reacting free shear layers. The physical problem under investigation is a premixed hydrogen-oxygen reacting shear layer in the wake of a thin flat plate. The coherent structure is modeled as a periodic disturbance and its stability is determined by the application of linearized hydrodynamic stability theory, which results in a generalized eigenvalue problem for reactive flows. A detailed stability analysis of the reactive wake for neutral, symmetrical and antisymmetrical disturbances is presented. The reactive stability criteria are shown to be quite different from classical non-reactive stability criteria. The interaction between the mean flow, coherent structure and fine-scale turbulence is theoretically formulated using the von Kármán integral technique. Both time averaging and conditional phase averaging are necessary to separate the three types of motion. The resulting integro-differential equations can then be solved subject to initial conditions with appropriate shape functions. In the laminar flow transition region of interest, the spatial interaction between the mean motion and coherent structure is calculated for both non-reactive and reactive conditions and compared with experimental data wherever available. The fine-scale turbulent motion is determined by the application of integral analysis to the fluctuation equations. Since at present this turbulence model is still untested, turbulence is modeled in the interaction problem by a simple algebraic eddy viscosity model. The applicability of the integral turbulence model formulated here is studied parametrically by integrating these equations for the simple case of self-similar mean motion with assumed shape functions. The effect of the motion of the coherent structure is studied and very good agreement is obtained with previous experimental and theoretical works for non-reactive flow. For the reactive case, lack of experimental data made direct comparison difficult. It was determined that the growth rate of the disturbance amplitude is lower for the reactive case. The results indicate that the reactive flow stability is in qualitative agreement with experimental observation.
Global Langevin model of multidimensional biomolecular dynamics.
Schaudinnus, Norbert; Lickert, Benjamin; Biswas, Mithun; Stock, Gerhard
2016-11-14
Molecular dynamics simulations of biomolecular processes are often discussed in terms of diffusive motion on a low-dimensional free energy landscape F(x). To provide a theoretical basis for this interpretation, one may invoke the system-bath ansatz à la Zwanzig. That is, by assuming a time scale separation between the slow motion along the system coordinate x and the fast fluctuations of the bath, a memory-free Langevin equation can be derived that describes the system's motion on the free energy landscape F(x), which is damped by a friction field and driven by a stochastic force that is related to the friction via the fluctuation-dissipation theorem. While the theoretical formulation of Zwanzig typically assumes a highly idealized form of the bath Hamiltonian and the system-bath coupling, one would like to extend the approach to realistic data-based biomolecular systems. Here a practical method is proposed to construct an analytically defined global model of structural dynamics. Given a molecular dynamics simulation and adequate collective coordinates, the approach employs an "empirical valence bond"-type model which is suitable to represent multidimensional free energy landscapes as well as an approximate description of the friction field. Adopting alanine dipeptide and a three-dimensional model of heptaalanine as simple examples, the resulting Langevin model is shown to reproduce the results of the underlying all-atom simulations. Because the Langevin equation can also be shown to satisfy the underlying assumptions of the theory (such as a delta-correlated Gaussian-distributed noise), the global model provides a correct, albeit empirical, realization of Zwanzig's formulation. As an application, the model can be used to investigate the dependence of the system on parameter changes and to predict the effect of site-selective mutations on the dynamics.
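The overdamped Langevin model referred to in the abstract can be sketched, for collective coordinates x and free energy landscape F(x), as

\Gamma(x)\, \dot{x} = -\nabla F(x) + \xi(t), \qquad \langle \xi(t)\, \xi(t')^{\mathsf{T}} \rangle = 2 k_B T\, \Gamma(x)\, \delta(t - t'),

so that the friction field Γ(x) and the Gaussian, delta-correlated noise ξ(t) are linked by the fluctuation-dissipation theorem; this generic form (not necessarily the authors' exact parametrization) is the consistency condition the data-driven model is checked against.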
Prashanth, N. S.; Marchal, Bruno; Kegels, Guy; Criel, Bart
2014-01-01
Performance of local health services managers at district level is crucial to ensure that health services are of good quality and cater to the health needs of the population in the area. In many low- and middle-income countries, health services managers are poorly equipped with public health management capacities needed for planning and managing their local health system. In the south Indian Tumkur district, a consortium of five non-governmental organizations partnered with the state government to organize a capacity-building program for health managers. The program consisted of a mix of periodic contact classes, mentoring and assignments and was spread over 30 months. In this paper, we develop a theoretical framework in the form of a refined program theory to understand how such a capacity-building program could bring about organizational change. A well-formulated program theory enables an understanding of how interventions could bring about improvements and an evaluation of the intervention. In the refined program theory of the intervention, we identified various factors at individual, institutional, and environmental levels that could interact with the hypothesized mechanisms of organizational change, such as staff’s perceived self-efficacy and commitment to their organizations. Based on this program theory, we formulated context–mechanism–outcome configurations that can be used to evaluate the intervention and, more specifically, to understand what worked, for whom and under what conditions. We discuss the application of program theory development in conducting a realist evaluation. Realist evaluation embraces principles of systems thinking by providing a method for understanding how elements of the system interact with one another in producing a given outcome. PMID:25121081
The family living the child recovery process after hospital discharge.
Pinto, Júlia Peres; Mandetta, Myriam Aparecida; Ribeiro, Circéa Amalia
2015-01-01
The aim was to understand the meaning attributed by the family to its experience in the recovery process of a child affected by an acute disease after discharge, and to develop a theoretical model of this experience. Symbolic interactionism was adopted as the theoretical reference, and grounded theory as the methodological reference. Data were collected through interviews and participant observation with 11 families, totaling 15 interviews. A theoretical model consisting of two interactive phenomena was formulated from the analysis: Mobilizing to restore functional balance, and Suffering from the possibility of the child's readmission. The family remains alert to identify early changes in the child's health in an attempt to avoid rehospitalization. The effects of the disease and hospitalization continue to manifest in family functioning, causing suffering even after the child's discharge and recovery.
A critical review of the field application of a mathematical model of malaria eradication
Nájera, J. A.
1974-01-01
A malaria control field research trial in northern Nigeria was planned with the aid of a computer simulation based on Macdonald's mathematical model of malaria epidemiology. Antimalaria attack was based on a combination of mass drug administration (chloroquine and pyrimethamine) and DDT house spraying. The observed results were at great variance with the predictions of the model. The causes of these discrepancies included inadequate estimation of the model's basic variables, and overestimation, in planning the simulation, of the effects of the attack measures and of the degree of perfection attainable by their application. The discrepancies were to a great extent also due to deficiencies in the model. An analysis is made of those considered to be the most important. It is concluded that research efforts should be encouraged to increase our knowledge of the basic epidemiological factors, their variation and correlations, and to formulate more realistic and useful theoretical models. PMID:4156197
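Macdonald's model centres on the basic reproduction rate of malaria, commonly quoted (in one standard notation, not necessarily that of the original report) as

z_0 = \frac{m\, a^2\, b\, p^{n}}{-r \ln p},

where m is the mosquito density per person, a the human-biting rate, b the proportion of infective bites that produce infection, p the daily mosquito survival probability, n the extrinsic incubation period in days, and r the human recovery rate; the sensitivity of z_0 to uncertain estimates of these quantities is precisely the kind of issue behind the discrepancies discussed above.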
The ODD protocol: A review and first update
Grimm, Volker; Berger, Uta; DeAngelis, Donald L.; Polhill, J. Gary; Giske, Jarl; Railsback, Steve F.
2010-01-01
The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
Student Engagement: A Principle-Based Concept Analysis.
Bernard, Jean S
2015-08-04
A principle-based concept analysis of student engagement was used to examine the state of the science across disciplines. Four major perspectives of philosophy of science guided the analysis and provided a framework for the study of interrelationships and integration of conceptual components, which then resulted in the formulation of a theoretical definition. Findings revealed student engagement as a dynamic, reiterative process marked by positive behavioral, cognitive, and affective elements exhibited in pursuit of deep learning. This process is influenced by a broader sociocultural environment bound by contextual preconditions of self-investment, motivation, and a valuing of learning. Outcomes of student engagement include satisfaction, sense of well-being, and personal development. Findings of this analysis prove relevant to nursing education as faculty transition from traditional teaching paradigms, incorporate learner-centered strategies, and adopt innovative pedagogical methodologies. It lends support for curricular reform, development of more accurate evaluative measures, and creation of meaningful teaching-learning environments within the discipline.
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Wang, R.
1990-01-01
This paper describes the results of the application of three well known 3D magnetic vector potential (MVP) based finite element formulations for the computation of magnetostatic fields in electrical devices. The three methods were identically applied to three practical examples, the first of which contains only one medium (free space), while the second and third examples contained a mix of free space and iron. The first of these methods is based on the unconstrained curl-curl of the MVP, while the second and third methods are predicated upon constraining the divergence of the MVP to zero (Coulomb gauge). It was found that the two latter methods cease to give useful and meaningful results when the global solution region contains a mix of media of high and low permeabilities. Furthermore, it was found that their results do not achieve the intended zero constraint on the divergence of the MVP.
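In MVP-based magnetostatics, the governing equation being discretized in all three formulations is, in standard form,

\nabla \times \left( \nu\, \nabla \times \mathbf{A} \right) = \mathbf{J}, \qquad \mathbf{B} = \nabla \times \mathbf{A},

with ν the reluctivity (inverse permeability) and J the source current density; the second and third formulations additionally impose the Coulomb gauge ∇·A = 0, which is the constraint whose enforcement across high/low-permeability interfaces the paper finds problematic.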
The pair correlation function of krypton in the critical region: theory and experiment
NASA Astrophysics Data System (ADS)
Barocchi, F.; Chieux, P.; Fontana, R.; Magli, R.; Meroni, A.; Parola, A.; Reatto, L.; Tau, M.
1997-10-01
We present the results of high-precision measurements of the structure factor S(k) of krypton in the near-critical region of the liquid-vapour phase transition, for values of k ranging upward from 1.5. The experimental results are compared with a theoretical calculation based on the hierarchical reference theory (HRT) with an accurate potential which includes two- and three-body contributions. The theory is based on a new implementation of HRT in which we avoid the use of hard spheres as a reference system. With this soft-core formulation we find a generally good agreement with experiments both at large k, where S(k) probes the short-range correlations, as well as at small k, where critical fluctuations become dominant. Also, for the density derivative of the pair correlation function there is an overall good agreement between theory and experiment.
Pharmaceutical Particle Engineering via Spray Drying
2007-01-01
This review covers recent developments in the area of particle engineering via spray drying. The last decade has seen a shift from empirical formulation efforts to an engineering approach based on a better understanding of particle formation in the spray drying process. Microparticles with nanoscale substructures can now be designed and their functionality has contributed significantly to stability and efficacy of the particulate dosage form. The review provides concepts and a theoretical framework for particle design calculations. It reviews experimental research into parameters that influence particle formation. A classification based on dimensionless numbers is presented that can be used to estimate how excipient properties in combination with process parameters influence the morphology of the engineered particles. A wide range of pharmaceutical application examples—low density particles, composite particles, microencapsulation, and glass stabilization—is discussed, with specific emphasis on the underlying particle formation mechanisms and design concepts. PMID:18040761
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable progress on its implementation has been made in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
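For reference, the constant-modulus idea invoked above is classically expressed by the Godard/CMA cost J = E[(|y|^2 - R2)^2] and its stochastic-gradient update; the short Python sketch below shows that generic update (it is not the Lagrangian-based algorithms of this report):

import numpy as np

def cma_update(w, x, mu, R2):
    # One constant-modulus-algorithm step.
    #   w  : current filter taps (complex ndarray)
    #   x  : current input regressor, same length as w
    #   mu : step size
    #   R2 : dispersion constant E[|s|^4] / E[|s|^2] of the source
    y = np.vdot(w, x)                         # filter output y = w^H x
    e = (np.abs(y) ** 2 - R2) * np.conj(y)    # constant-modulus error term
    return w - mu * e * x                     # gradient step on (|y|^2 - R2)^2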
NASA Astrophysics Data System (ADS)
Ganesan, Venkat; Fredrickson, Glenn H.
The science and engineering of materials is entering a new era of so-called "designer materials", wherein, based upon the properties required for a particular application, a material is designed by exploiting the self-assembly of appropriately chosen molecular constituents [1]. The desirable and marketable properties of such materials, which include plastic alloys, block and graft copolymers, and polyelectrolyte solutions, complexes, and gels, depend critically on the ability to control and manipulate morphology by adjusting a combination of molecular and macroscopic variables. For example, styrene-butadiene block copolymers can be devised that serve either as rigid, tough, transparent thermoplastics or as soft, flexible, thermoplastic elastomers, by appropriate control of copolymer architecture and styrene/butadiene ratio. In this case, the property profiles are intimately connected to the extent and type of nanoscale self-assembly that is established within the material. One of the main challenges confronting the successful design of nano-structured polymers is the development of a basic understanding of the relationship between the molecular details of the polymer formulation and the morphology that is achieved. Unfortunately, such relationships are still mainly determined by trial-and-error experimentation. A purely experimental program in pursuit of this objective proves cumbersome, primarily due to the broad parameter space accessible at the time of synthesis and formulation. Consequently, there is a significant motivation for the development of computational tools that can enable a rational exploration of the parameter space.
Boksa, Kevin; Otte, Andrew; Pinal, Rodolfo
2014-09-01
A novel method for the simultaneous production and formulation of pharmaceutical cocrystals, matrix-assisted cocrystallization (MAC), is presented. Hot-melt extrusion (HME) is used to create cocrystals by coprocessing the drug and coformer in the presence of a matrix material. Carbamazepine (CBZ), nicotinamide (NCT), and Soluplus were used as a model drug, coformer, and matrix, respectively. The MAC product containing 80:20 (w/w) cocrystal:matrix was characterized by differential scanning calorimetry, Fourier transform infrared spectroscopy, and powder X-ray diffraction. A partial least squares (PLS) regression model was developed for quantifying the efficiency of cocrystal formation. The MAC product was estimated to be 78% (w/w) cocrystal (theoretical 80%), with approximately 0.3% mixture of free (unreacted) CBZ and NCT, and 21.6% Soluplus (theoretical 20%) with the PLS model. A physical mixture (PM) of a reference cocrystal (RCC), prepared by precipitation from solution, and Soluplus resulted in faster dissolution relative to the pure RCC. However, the MAC product with the exact same composition resulted in considerably faster dissolution and higher maximum concentration (∼five-fold) than those of the PM. The MAC product consists of high-quality cocrystals embedded in a matrix. The processing aspect of MAC plays a major role on the faster dissolution observed. The MAC approach offers a scalable process, suitable for the continuous manufacturing and formulation of pharmaceutical cocrystals.
Gray, Meeghan E; Cameron, Elissa Z
2010-01-01
The efficacy of contraceptive treatments has been extensively tested, and several formulations are effective at reducing fertility in a range of species. However, these formulations should minimally impact the behavior of individuals and populations before a contraceptive is used for population manipulation, but these effects have received less attention. Potential side effects have been identified theoretically and we reviewed published studies that have investigated side effects on behavior and physiology of individuals or population-level effects, which provided mixed results. Physiological side effects were most prevalent. Most studies reported a lack of secondary effects, but were usually based on qualitative data or anecdotes. A meta-analysis on quantitative studies of side effects showed that secondary effects consistently occur across all categories and all contraceptive types. This contrasts with the qualitative studies, suggesting that anecdotal reports are insufficient to investigate secondary impacts of contraceptive treatment. We conclude that more research is needed to address fundamental questions about secondary effects of contraceptive treatment and experiments are fundamental to conclusions. In addition, researchers are missing a vital opportunity to use contraceptives as an experimental tool to test the influence of reproduction, sex and fertility on the behavior of wildlife species.
A Free Energy Principle for Biological Systems
Friston, Karl
2012-01-01
This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker-Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, the measure of its (random) attracting set, or the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and the corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences, e.g., in molecular or evolutionary biology. PMID:23204829
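The variational free energy at the heart of this principle can be written, for a density q(s) over external states s encoded by the internal states and for observations o, as

F = \mathbb{E}_{q(s)}\left[ \ln q(s) - \ln p(o, s) \right] = D_{\mathrm{KL}}\left[ q(s) \,\|\, p(s \mid o) \right] - \ln p(o) \;\geq\; -\ln p(o),

so that minimizing F both bounds the surprise -ln p(o), keeping the system within a small set of external states, and drives q(s) toward the posterior p(s|o) (the notation is the generic variational-Bayes form, used here only as a summary).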
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.; Yurkin, Maxim A.
2018-07-01
Although free space cannot generate electromagnetic waves, the majority of existing accounts of frequency-domain electromagnetic scattering by particles and particle groups are based on the postulate of existence of an impressed incident field, usually in the form of a plane wave. In this tutorial we discuss how to account for the actual existence of impressed source currents rather than impressed incident fields. Specifically, we outline a self-consistent theoretical formalism describing electromagnetic scattering by an arbitrary finite object in the presence of arbitrarily distributed impressed currents, some of which can be far removed from the object and some can reside in its vicinity, including inside the object. To make the resulting formalism applicable to a wide range of scattering-object morphologies, we use the framework of the volume integral equation formulation of electromagnetic scattering, couple it with the notion of the transition operator, and exploit the fundamental symmetry property of this operator. Among novel results, this tutorial includes a streamlined proof of fundamental symmetry (reciprocity) relations, a simplified derivation of the Foldy equations, and an explicit analytical expression for the transition operator of a multi-component scattering object.
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
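The dictionary-learning building block underlying stage 1 can be summarized, in generic notation (illustrative, not the exact MMDL objective), as

\min_{D, Z}\; \tfrac{1}{2} \lVert X - D Z \rVert_F^2 + \lambda \lVert Z \rVert_1 \quad \text{s.t.}\quad \lVert d_k \rVert_2 \leq 1 \ \ \forall k,

where the columns of X hold the imaging features, the atoms d_k of the dictionary D capture structure shared across sources, and the sparse codes Z provide the unsupervised feature representation passed to the multi-task learner in stage 2.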
High Resolution, Large Deformation 3D Traction Force Microscopy
López-Fagundo, Cristina; Reichner, Jonathan; Hoffman-Kim, Diane; Franck, Christian
2014-01-01
Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients. PMID:24740435
Flores-Céspedes, Francisco; Martínez-Domínguez, Gerardo P; Villafranca-Sánchez, Matilde; Fernández-Pérez, Manuel
2015-09-30
The botanical insecticide azadirachtin was incorporated in alginate-based granules to obtain controlled release formulations (CRFs). The basic formulation [sodium alginate (1.47%) - azadirachtin (0.28%) - water] was modified by the addition of biosorbents, obtaining homogeneous hybrid hydrogels with high azadirachtin entrapment efficiency. The effect on azadirachtin release rate caused by the incorporation of biosorbents such as lignin, humic acid, and olive pomace in alginate formulation was studied by immersion of the granules in water under static conditions. The addition of the biosorbents to the basic alginate formulation reduces the rate of release because the lignin-based formulation produces a slower release. Photodegradation experiments showed the potential of the prepared formulations in protecting azadirachtin against simulated sunlight, thus improving its stability. The results showed that formulation prepared with lignin provided extended protection. Therefore, this study provides a new procedure to encapsulate the botanical insecticide azadirachtin, improving its delivery and photostability.
Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel
2010-12-21
How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C(10)E(4), C(12)E(5), C(12)E(6), and C(10)E(8)) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
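A minimal sketch of such a design loop, assuming a hypothetical predictive model dst_model(x, t) standing in for the MSB adsorption kinetics model and SciPy's SLSQP solver standing in for SNOPT, might look as follows:

import numpy as np
from scipy.optimize import minimize

t_grid = np.logspace(-2, 2, 50)        # times at which the DST profiles are compared (s)

def dst_model(x, t):
    # Hypothetical placeholder for the MSB model: predicted dynamic surface
    # tension (mN/m) for mixture composition x at time t.
    return 72.0 - 30.0 * np.dot(x, [0.9, 1.0, 1.1, 0.8]) * (1.0 - np.exp(-t / 10.0))

gamma_target = dst_model(np.array([0.1, 0.4, 0.4, 0.1]), t_grid)   # desired DST profile

def misfit(x):
    # Least-squares distance between predicted and desired DST profiles.
    return np.sum((dst_model(x, t_grid) - gamma_target) ** 2)

cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]   # mole fractions sum to one
bounds = [(0.0, 1.0)] * 4                                   # C10E4, C12E5, C12E6, C10E8
res = minimize(misfit, x0=np.full(4, 0.25), method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x)   # mixture composition that most closely reproduces the target profile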
On the Miller-Tucker-Zemlin Based Formulations for the Distance Constrained Vehicle Routing Problems
NASA Astrophysics Data System (ADS)
Kara, Imdat
2010-11-01
The Vehicle Routing Problem (VRP) is an extension of the well known Traveling Salesman Problem (TSP) and has many practical applications in the fields of distribution and logistics. When the VRP includes distance-based constraints, it is called the Distance Constrained Vehicle Routing Problem (DVRP). However, the literature addressing the DVRP is scarce. In this paper, existing two-index integer programming formulations having Miller-Tucker-Zemlin based subtour elimination constraints are reviewed. Existing formulations are simplified, and the resulting formulation is presented as formulation F1. It is shown that the distance-bounding constraints of formulation F1 may not generate the distance traveled up to the related node. To remedy this, we redefine the auxiliary variables of the formulation and propose a second formulation, F2, with new and easy-to-use distance-bounding constraints. Adaptation of the second formulation to cases with additional restrictions, such as a minimal distance traveled by each vehicle, or with other objectives, such as minimizing the longest distance traveled, is discussed.
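One common way to write MTZ-style distance-bounding constraints of the kind discussed here is, with binary arc variables x_{ij}, arc lengths d_{ij}, route-length limit D, depot 0, and auxiliary variables y_i intended to equal the distance traveled upon arrival at node i (a generic sketch in the spirit of the paper, not its exact constraint set):

y_j \;\geq\; y_i + d_{ij}\, x_{ij} - D\,(1 - x_{ij}) \qquad \forall\, i \neq j,\ \ i, j \neq 0,
d_{0j}\, x_{0j} \;\leq\; y_j \;\leq\; D - d_{j0}\, x_{j0} \qquad \forall\, j \neq 0.

Constraints of this type simultaneously eliminate subtours and enforce the distance limit, and the point stressed above is that the auxiliary variables must be defined so that y_i truly accumulates the distance traveled up to node i.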
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR—viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
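The reconstruction objective underlying this approach is the standard PWLS form with a spatially varying penalty strength, sketched here in generic notation:

\hat{\mu} = \arg\min_{\mu}\; (y - A\mu)^{\mathsf{T}} W (y - A\mu) + \sum_{j} \beta_j\, R_j(\mu),

where A is the forward projector, W the statistical weighting, R_j a local quadratic roughness penalty about voxel j, and β_j the penalty strength at that voxel; the three implementations compared above differ only in how β_j is chosen (constant, certainty-based for a uniform PSF, or task-driven to maximize the local detectability index d′).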
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions and leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors and is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for the error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and the discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and the predictions of the linear theory for the error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that the differences between simulations and theoretical predictions are minimized. Our study establishes that a field-theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
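The push-pull module mentioned above is easy to simulate directly; the toy Gillespie run below shows the intrinsic noise in the active-substrate copy number around its deterministic mean. Rates and copy numbers are invented, and the kinase and phosphatase activities are folded into two effective rate constants, so this is only a cartoon of the networks treated in the paper.

```python
# Toy Gillespie (SSA) simulation of a single push-pull module.
import numpy as np

rng = np.random.default_rng(1)
N = 200                        # total substrate copies
xp = 0                         # active (phosphorylated) copies
k_act, k_deact = 0.05, 0.1     # effective per-copy activation / deactivation rates

t, t_end = 0.0, 200.0
times, traj = [0.0], [xp]
while t < t_end:
    a1 = k_act * (N - xp)      # propensity: X -> X*
    a2 = k_deact * xp          # propensity: X* -> X
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)
    if rng.random() < a1 / a0:
        xp += 1
    else:
        xp -= 1
    times.append(t); traj.append(xp)

# event-averaged active fraction (a crude estimate) vs. the deterministic prediction
print("mean active fraction:", np.mean(traj) / N,
      "deterministic:", k_act / (k_act + k_deact))
```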
Advances in development, scale-up and manufacturing of microbicide gels, films, and tablets.
Garg, Sanjay; Goldman, David; Krumme, Markus; Rohan, Lisa C; Smoot, Stuart; Friend, David R
2010-12-01
Vaginal HIV microbicides are topical, self-administered products designed to prevent or significantly reduce transmission of HIV infection in women. The earliest microbicide candidates developed were formulated as coitally dependent (used around the time of sex) gels and creams. All microbicide candidates tested in Phase III clinical trials so far have been gel products with non-specific mechanisms of action. Recently, however, research has focused on compounds containing highly potent and specific anti-retrovirals. These specific anti-retrovirals are being formulated as primary dosage forms such as vaginal gels or in alternative dosage forms such as fast-dissolve films and tablets. Recent innovations also include the development of combination products of highly active antiviral drugs, such as reverse transcriptase inhibitors and entry inhibitors, which would theoretically be more effective and would reduce the possibility of drug resistance. In this article, an overview of recent advances in microbicide gel, film, and tablet formulations is presented, along with issues pertaining to scale-up, formulation, and evaluation challenges and regulatory guidelines. This article forms part of a special supplement covering presentations on gels, tablets, and films from the symposium on "Recent Trends in Microbicide Formulations" held on 25 and 26 January 2010, Arlington, VA. Copyright © 2010 Elsevier B.V. All rights reserved.
Efficacy and toxicological studies of cremophor EL free alternative paclitaxel formulation.
Utreja, Puneet; Jain, Subheet; Yadav, Subodh; Khandhuja, K L; Tiwary, A K
2011-11-01
In the present study, a Cremophor EL-free paclitaxel elastic liposomal formulation consisting of soya phosphatidylcholine and the biosurfactant sodium deoxycholate was developed and optimized. The toxicological profile, antitumor efficacy and hemolytic toxicity of the paclitaxel elastic liposomal formulation were evaluated in comparison with the Cremophor EL-based marketed formulation. Paclitaxel elastic liposomal formulations were prepared and characterized in vitro, ex vivo and in vivo. A single-dose toxicity study of the paclitaxel elastic liposomal and marketed formulations was carried out in the dose range of 10, 20, 40, 80, 120, 160 and 200 mg/kg. Cytotoxicity of the developed formulation was evaluated using the small cell lung cancer cell line (A549). Antitumor activity of the developed formulation was compared with the marketed formulation using the Cytoselect™ 96-well cell transformation assay. In vivo administration of the paclitaxel elastic liposomal formulation into mice showed a 6-fold increase in the Maximum Tolerated Dose (MTD) in comparison to the marketed formulation. Similarly, the LD50 (141.6 mg/kg) was found to increase significantly compared with the marketed formulation (16.7 mg/kg). The antitumor assay revealed a large reduction of tumor density with the paclitaxel elastic liposomal formulation. A reduction in hemolytic toxicity was also observed with the paclitaxel elastic liposomal formulation in comparison to the marketed formulation. The carrier-based approach for paclitaxel delivery demonstrated a significant reduction in toxicity compared with the Cremophor EL-based marketed formulation following intraperitoneal administration in a mouse model. The reduced toxicity and enhanced anti-cancer activity of the elastic liposomal formulation strongly indicate its potential for safe and effective delivery of paclitaxel.
NASA Astrophysics Data System (ADS)
An, C.; Parker, G.; Ma, H.; Naito, K.; Moodie, A. J.; Fu, X.
2017-12-01
Models of river morphodynamics consist of three elements: (1) a treatment of flow hydraulics, (2) a formulation relating some aspect of sediment transport to flow hydraulics, and (3) a description of sediment conservation. In the case of unidirectional river flow, the Exner equation of sediment conservation is commonly described in terms of a flux-based formulation, in which bed elevation variation is related to the streamwise gradient of sediment transport rate. An alternate formulation of the Exner equation, however, is the entrainment-based formulation in which bed elevation variation is related to the difference between the entrainment rate of bed sediment into suspension and the deposition rate of suspended sediment onto the bed. In the flux-based formulation, sediment transport is regarded to be in a local equilibrium state (i.e., sediment transport rate locally equals sediment transport capacity). However, the entrainment-based formulation does not require this constraint; the sediment transport rate may lag in space and time behind the changing flow conditions. In modeling the fine-grained Lower Yellow River, it is usual to treat sediment conservation in terms of an entrainment-based (nonequilibrium) rather than a flux-based (equilibrium) formulation with the consideration that fine-grained sediment may be entrained at one place but deposited only at some distant location downstream. However, the differences in prediction between the two formulations are still not well known, and the entrainment formulation may not always be necessary for the Lower Yellow River. Here we study this problem by comparing the results of flux-based and entrainment-based morphodynamics under conditions typical of the Yellow River, using sediment transport equations specifically designed for the Lower Yellow River. We find, somewhat unexpectedly, that in a treatment of a 200-km reach using uniform sediment, there is little difference between the two formulations unless the sediment fall velocity is arbitrarily greatly reduced. A consideration of sediment mixtures, however, shows that the two formulations give very different patterns of grain sorting. We explain this in terms of the structures of the two Exner equations for sediment mixtures, and define conditions for applicability of each formulation.
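A schematic way to see the difference between the two formulations discussed above is to update a 1D bed with each of them under the same imposed capacity profile. The sketch below does this for uniform sediment; the hydraulics, the capacity curve, and all coefficients are invented placeholders rather than the Yellow River relations used in the study.

```python
# Schematic 1D comparison of flux-based vs. entrainment-based Exner updates.
import numpy as np

L, nx = 10_000.0, 200
dx = L / nx
x = np.linspace(0, L, nx)
U, h, lam = 1.5, 3.0, 0.4                # velocity (m/s), depth (m), bed porosity
vs = 0.01                                 # sediment fall velocity (m/s)
qcap = 1e-4 * (1 + 0.5 * np.exp(-((x - 4000) / 800) ** 2))   # capacity with a local bump (m^2/s)

dt, nt = 5.0, 5000
eta_flux = np.zeros(nx)                   # bed elevation, flux-based run
eta_entr = np.zeros(nx)                   # bed elevation, entrainment-based run
C = np.full(nx, qcap[0] / (U * h))        # depth-averaged concentration, entrainment run

for _ in range(nt):
    # flux-based: bed responds instantly to the gradient of transport capacity
    eta_flux[1:] -= dt / (1 - lam) * np.diff(qcap) / dx

    # entrainment-based: E set by capacity, D by fall velocity and local concentration
    E = vs * qcap / (U * h)               # entrainment flux (m/s); E = D when C is at capacity
    D = vs * C                            # deposition flux (m/s)
    # upwind advection of suspended sediment plus exchange with the bed
    C[1:] += dt * (-U * np.diff(C) / dx + (E[1:] - D[1:]) / h)
    eta_entr += dt * (D - E) / (1 - lam)

print("max |d_eta| flux-based:", np.abs(eta_flux).max(),
      " entrainment-based:", np.abs(eta_entr).max())
```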
Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia
2017-01-01
Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches. PMID:28459355
Classification of materials for conducting spheroids based on the first order polarization tensor
NASA Astrophysics Data System (ADS)
Khairuddin, TK Ahmad; Mohamad Yunos, N.; Aziz, ZA; Ahmad, T.; Lionheart, WRB
2017-09-01
Polarization tensor is an old terminology in mathematics and physics with many recent industrial applications, including medical imaging, nondestructive testing and metal detection. In these applications, it is theoretically formulated based on mathematical modelling in electrics, electromagnetics or both. Generally, the polarization tensor represents the perturbation in the electric or electromagnetic fields due to the presence of conducting objects and hence it also describes the objects. Understanding the properties of the polarization tensor is necessary and important in order to apply it. Therefore, in this study, when the conducting object is a spheroid, we show that the polarization tensor is positive-definite if and only if the conductivity of the object is greater than one. In contrast, we also prove that the polarization tensor is negative-definite if and only if the conductivity of the object is between zero and one. These features categorize the conductivity of the spheroid based on its polarization tensor and can then help to classify the material of the spheroid.
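The stated classification is easy to check numerically for a concrete spheroid if one uses the textbook first-order polarization tensor of an ellipsoid, M = (k − 1)|B| [I + (k − 1)D]^(−1), with D the diagonal matrix of depolarization factors. The sketch below (with an assumed prolate geometry) inspects the eigenvalue signs; it illustrates, but does not prove, the result in the abstract.

```python
# Definiteness of the first-order polarization tensor of a prolate spheroid vs. conductivity k.
import numpy as np

def polarization_tensor(k, a, c):
    """Textbook ellipsoid PT for a prolate spheroid with semi-axes a = b < c and conductivity k."""
    e = np.sqrt(1 - a ** 2 / c ** 2)                     # eccentricity
    dz = (1 - e ** 2) / (2 * e ** 3) * (np.log((1 + e) / (1 - e)) - 2 * e)
    D = np.diag([(1 - dz) / 2, (1 - dz) / 2, dz])        # depolarization factors (trace 1)
    vol = 4.0 / 3.0 * np.pi * a * a * c
    return (k - 1) * vol * np.linalg.inv(np.eye(3) + (k - 1) * D)

def classify(M):
    ev = np.linalg.eigvalsh(M)
    if np.all(ev > 0):
        return "positive-definite  => conductivity > 1"
    if np.all(ev < 0):
        return "negative-definite  => conductivity in (0, 1)"
    return "indefinite"

for k in (5.0, 0.3):
    print("k =", k, "->", classify(polarization_tensor(k, a=1.0, c=3.0)))
```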
Mutheeswaran, S; Pandikumar, P; Chellappandian, M; Ignacimuthu, S; Duraipandiyan, V; Logamanian, M
2014-04-11
The Siddha system of traditional medicine has been practiced in Tamil Nadu. This system of medicine has a high number of non-institutionally trained practitioners, but studies on their traditional medicinal knowledge are not adequate. The present study aimed to document and analyze the sastric (traditional) formulations used by the non-institutionally trained Siddha medical practitioners in the Virudhunagar and Tirunelveli districts of Tamil Nadu, India. After obtaining prior informed consent, interviews were conducted with 115 non-institutionally trained Siddha medical practitioners about the sastric formulations they used for treatment. The successive free listing method was adopted to collect the data, and the data were analyzed by calculating the Informant Consensus Factor (Fic) and Informant Agreement Ratio (IAR). The study documented data regarding 194 sastric formulations, which were classified into plant-, mineral- and animal-based formulations. Quantitative analysis showed that 62.5% of the formulations were plant based, while the mineral-based formulations had a high mean number of citations and versatile uses. Gastrointestinal (12.0%), kapha (11.3%) and dermatological (10.8%) ailments had a high percentage of citations. Jaundice had a high Fic value (0.750), followed by the dermatological ailments. The illness categories with high Fic values under each type of formulation were as follows: jaundice, aphrodisiac and urinary ailments (plant based); jaundice, cuts & wounds and dermatological ailments (mineral based); and hemorrhoids, kapha ailments and heart ailments (animal based). The scientific studies conducted with important formulations under each illness category are discussed. The present study indicated the importance of some illnesses over others and the inclusion of new illnesses under each formulation. The ingredients used to prepare these formulations have shown varying degrees of scientific evidence; generally, limited studies were available on their efficacy as formulations. Further in-depth studies on the formulations with high IAR values and on illness categories with high Fic values will be helpful to improve the health status of the people. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
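For readers unfamiliar with the consensus measure used above, the informant consensus factor for an illness category is Fic = (Nur − Nt)/(Nur − 1), where Nur is the number of use reports in that category and Nt the number of distinct formulations cited for it. The sketch below computes it from a small invented set of use reports; the IAR is not shown.

```python
# Informant consensus factor Fic from (illness category, formulation) use reports.
from collections import defaultdict

# one row per use report from an interview -- invented for illustration
use_reports = [
    ("jaundice", "F1"), ("jaundice", "F1"), ("jaundice", "F2"), ("jaundice", "F1"),
    ("dermatological", "F3"), ("dermatological", "F4"), ("dermatological", "F3"),
    ("gastrointestinal", "F5"), ("gastrointestinal", "F6"),
]

by_category = defaultdict(list)
for category, formulation in use_reports:
    by_category[category].append(formulation)

for category, cited in by_category.items():
    nur, nt = len(cited), len(set(cited))                 # use reports and distinct formulations
    fic = (nur - nt) / (nur - 1) if nur > 1 else 0.0
    print(f"{category:17s} Nur={nur}  Nt={nt}  Fic={fic:.3f}")
```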
Investigation of Flame Driving and Flow Turning in Axial Solid Rocket Instabilities
1993-08-31
theoretically sound, it is hard to correlate first- and second-order quantities with measured experimental data. Therefore, a new theoretical formulation... For the flow turning loss, an analysis was developed that allows for the expansion of all... a porous plate that behaves as an acoustically 'hard' termination.
ERIC Educational Resources Information Center
Saltmarsh, Sue
2015-01-01
This paper draws on theoretical insights from Michel de Certeau to formulate a response to questions of whether, and in what ways, poststructural policy analysis can "transcend critique to offer potential grounds for alternative social and political strategies in education". The paper offers a discussion of how Certeau's concern with how…
ERIC Educational Resources Information Center
Moody-Ramirez, Mia; Scott, Lakia M.
2015-01-01
Using a feminist lens and a constructivist approach as the theoretical framework, we used rap lyrics and videos to help college students explore mass media's representation of the "independent" Black woman and the concept of "independence" in general. Students must be able to formulate their own concept of independence to…
A Theoretical Model of Segmented Youth Labor Markets and the School to Work Transition.
ERIC Educational Resources Information Center
Vrooman, John
Recurring evidence that workers with similar skills do not necessarily earn the same wages led to the formulation of an alternative to the conventional market theory, namely, the segmented market theory. This theory posits that certain skills are distributed not among prospective employees but among jobs, in relation to the technology of those…
Drugs and Addict Lifestyles. National Institute on Drug Abuse Research Issues 7.
ERIC Educational Resources Information Center
Ferguson, Patricia, Ed.; And Others
This report is the seventh in a series intended to summarize the empirical research findings and major theoretical approaches relating to the issues of drug use and abuse. This volume reviews the research undertaken to describe the lifestyle histories of heroin users. These research findings are formulated and detailed to provide the reader with…
ERIC Educational Resources Information Center
Franks, David D.; Marolla, Joseph
1976-01-01
A theoretical and operational rationale is presented for the development of multidimensional measures of self-esteem. Self-esteem is conceptualized as a function of two processes reflected appraisals of significant others in one's social environment in the form of social approval, and the individual's feelings of efficacy and competence derived…
A Methodology for Validation of High Resolution Combat Models
1988-06-01
Theoretical issues treated include the teleological problem, the epistemological problem, and the uncertainty principle. "The Teleological Problem"--how a model by its nature formulates an explicit cause-and-effect relationship that excludes other... "experts" in establishing the standard for reality. Generalization from personal experience is often hampered by the parochial aspects of the
Examining the University-Profession Divide: An Inquiry into a Teacher Education Program's Practices
ERIC Educational Resources Information Center
Sivia, Awneet; MacMath, Sheryl
2016-01-01
This paper focuses on the divide between the university as a site of teacher education and the profession of practicing teachers. We employed a theoretical inquiry methodology on a singular case study which included formulating questions about the phenomena of the university-profession divide (UPD), analysing constituents of the UPD, and…
Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship
ERIC Educational Resources Information Center
Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.
2010-01-01
The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…
Teaching the Conceptual Scheme "The Particle Nature of Matter" in the Elementary School.
ERIC Educational Resources Information Center
Pella, Milton O.; And Others
Conclusions of an extensive project aimed to prepare lessons and associated materials related to teaching concepts included in the scheme "The Particle Nature of Matter" for grades two through six are presented. The hypothesis formulated for the project was that children in elementary schools can learn theoretical concepts related to the particle…
Understanding Hong Kong Business Teachers in Action: The Case of Formulation of Teaching Strategies
ERIC Educational Resources Information Center
Yu, Christina Wai Mui
2009-01-01
This article examines four categories of teaching strategy used in business classes by a group of 26 secondary school business teachers in Hong Kong, using grounded theoretical coding techniques in the analysis. Each of the teaching categories is illustrated with typical extracts from interviews and is discussed in relation to its effectiveness…
Earth's rotation in the framework of general relativity: rigid multipole moments
NASA Astrophysics Data System (ADS)
Klioner, S. A.; Soffel, M.; Xu, Ch.; Wu, X.
A set of equations describing the rotational motion of the Earth relative to the GCRS is formulated in the approximation of rigidly rotating multipoles. The external bodies are supposed to be mass monopoles. The derived set of formulas is supposed to form the theoretical basis for a practical post-Newtonian theory of Earth precession and nutation.
PUBLIC EDUCATION FOR DISTURBED CHILDREN IN NEW YORK CITY, APPLICATION AND THEORY.
ERIC Educational Resources Information Center
BERKOWITZ, PEARL H.; ROTHMAN, ESTHER P.
CONCERNED WITH PUBLIC EDUCATION FOR DISTURBED CHILDREN, VARIOUS AUTHORS DISCUSS PROGRAMS OF THE NEW YORK CITY PUBLIC SCHOOL SYSTEM AND PRESENT SOME THEORETICAL FORMULATIONS. PROGRAMS CONSIDERED ARE (1) "EDUCATING DISTURBED CHILDREN IN NEW YORK CITY--AN HISTORICAL OVERVIEW" BY PEARL H. BERKOWITZ AND ESTHER P. ROTHMAN, (2) "THESE ARE OUR CHILDREN"…
Shuttling between Worlds: Quandaries of Performing Queered Research in Asian American Contexts
ERIC Educational Resources Information Center
Varney, Joan
2008-01-01
This article explores how the tensions that grow out of being a researcher in my community of queer Asian Americans lead to the formulation of a different kind of ethnographic approach. A hybrid notion of identity can require and inform a hybrid or poststructural ethnographic practice. This hybridized research method draws upon theoretical strands…
NASA Astrophysics Data System (ADS)
Herrendoerfer, R.; van Dinther, Y.; Gerya, T.
2015-12-01
To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake-cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant for estimating the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we implemented rate- and state-dependent friction (RSF) and adaptive time-stepping into our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby to first order validating our implementation. By incorporating adaptive time-stepping based on a fraction of the characteristic slip distance over the maximum slip velocity, we are able to resolve stick-slip events and increase computational speed. In this better-resolved framework, we examine the role of aseismic slip in the megathrust cycle and its dependence on subduction velocity.
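As a point of reference for the stick-slip transition mentioned above, the sketch below integrates the classic quasi-dynamic spring-slider with rate- and state-dependent friction (aging law) using an adaptive-step ODE solver; instabilities appear when the stiffness falls below k_c = σ(b − a)/Dc. Parameters are lab-scale placeholders, not the continuum subduction model of the abstract.

```python
# Quasi-dynamic RSF spring-slider (aging law) with adaptive time-stepping via solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

sigma, a, b = 5e6, 0.010, 0.015      # normal stress (Pa), RSF parameters
Dc, V0, f0 = 1e-5, 1e-6, 0.6         # characteristic slip (m), reference velocity, reference friction
eta = 5e6                            # radiation damping G/(2*cs) (Pa s/m)
Vpl = 1e-6                           # loading velocity (m/s)
kc = sigma * (b - a) / Dc            # critical stiffness
k = 0.4 * kc                         # k < kc => stick-slip instabilities

def rhs(t, y):
    V, theta = y
    dtheta = 1.0 - V * theta / Dc                                     # aging law
    dV = (k * (Vpl - V) - sigma * b / theta * dtheta) / (sigma * a / V + eta)
    return [dV, dtheta]

y0 = [0.9 * Vpl, Dc / Vpl]                                            # near steady state
sol = solve_ivp(rhs, (0.0, 500.0), y0, method="LSODA",
                rtol=1e-8, atol=1e-12, max_step=1.0)
print("accepted steps:", len(sol.t))
print("peak slip velocity (m/s):", sol.y[0].max(), " min (m/s):", sol.y[0].min())
```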
Furnes, Bodil; Dysvik, Elin
2010-01-01
Objective: Based on the present authors’ research and several approaches to grief related to loss by death and nonmalignant chronic pain, the paper suggests a new integrated theoretical framework for intervention in clinical settings. Methods: An open qualitative review of the literature on grief theories was performed searching for a new integrated approach in the phenomenological tradition. We then investigated the relationship between grief, loss and chronic nonmalignant pain, looking for main themes and connections and how these could be best understood in a more holistic manner. Results: Two main themes were formulated, “relearning the world” and “adaptation”. Between these themes a continuous movement emerged involving experience such as: “despair and hope”, “lack of understanding and insight”, “meaning disruption and increased meaning”, and “bodily discomfort and reintegrated body”. These were identified as paired subthemes. Conclusions: Grief as a distinctive experience means that health care must be aimed at each individual experience and situation. Grief experience and working with grief are considered in terms of relearning the world while walking backwards and living forwards, as described in our integrated model. We consider that this theoretical framework regarding grief should offer an integrated foundation for health care workers who are working with people experiencing grief caused by death or chronic pain. PMID:20622913
The nonstationary strain filter in elastography: Part I. Frequency dependent attenuation.
Varghese, T; Ophir, J
1997-01-01
The accuracy and precision of the strain estimates in elastography depend on a myriad of factors. A clear understanding of the various factors (noise sources) that plague strain estimation is essential to obtain quality elastograms. The nonstationary variation in the performance of the strain filter due to frequency-dependent attenuation and lateral and elevational signal decorrelation is analyzed in this and the companion paper for the cross-correlation-based strain estimator. In this paper, we focus on the role of frequency-dependent attenuation in the performance of the strain estimator. The reduction in the signal-to-noise ratio (SNRs) of the RF signal, and the center frequency and bandwidth downshift with frequency-dependent attenuation, are incorporated into the strain filter formulation. Both linear and nonlinear frequency dependence of attenuation are theoretically analyzed. Monte Carlo simulations are used to corroborate the theoretically predicted results. Experimental results illustrate the deterioration in the precision of the strain estimates with depth in a uniformly elastic phantom. Theoretical, simulation and experimental results indicate the importance of high SNRs values in the RF signals, because the strain estimation sensitivity, elastographic SNRe and dynamic range deteriorate rapidly with a decrease in the SNRs. In addition, a shift of the strain filter toward higher strains is observed at large depths in tissue due to the center frequency downshift.
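A quick numerical illustration of the center-frequency downshift invoked above: for a Gaussian transmit spectrum and attenuation that is linear in frequency, the power-weighted centroid of the received spectrum moves down approximately linearly with depth (by about 2·α·σ²·z in Np units). The pulse parameters and attenuation coefficient below are generic, not those of the paper.

```python
# Center-frequency downshift of a Gaussian pulse under linear frequency-dependent attenuation.
import numpy as np

f = np.linspace(0.0, 20e6, 4000)                 # frequency axis (Hz)
f0, sigma = 5e6, 1.0e6                           # transmit center frequency and RMS bandwidth
alpha = 0.5 / 8.686 * 100 / 1e6                  # 0.5 dB/(cm MHz) converted to Np/(m Hz)

S0 = np.exp(-(f - f0) ** 2 / (2 * sigma ** 2))   # transmit amplitude spectrum

for z in (0.0, 0.02, 0.05, 0.08):                # one-way depths (m)
    S = S0 * np.exp(-2 * alpha * f * z)          # two-way attenuation of the amplitude spectrum
    fc = np.trapz(f * S ** 2, f) / np.trapz(S ** 2, f)   # power-weighted centroid
    print(f"depth {100 * z:4.0f} cm   centroid {fc / 1e6:5.2f} MHz")
```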
Ionic-Liquid-Based Paclitaxel Preparation: A New Potential Formulation for Cancer Treatment.
Chowdhury, Md Raihan; Moshikur, Rahman Md; Wakabayashi, Rie; Tahara, Yoshiro; Kamiya, Noriho; Moniruzzaman, Muhammad; Goto, Masahiro
2018-06-04
Paclitaxel (PTX) injection (i.e., Taxol) has been used as an effective chemotherapeutic treatment for various cancers. However, the current Taxol formulation contains Cremophor EL, which causes hypersensitivity reactions during intravenous administration and precipitation by aqueous dilution. This communication reports the preliminary results on the ionic liquid (IL)-based PTX formulations developed to address the aforementioned issues. The formulations were composed of PTX/cholinium amino acid ILs/ethanol/Tween-80/water. A significant enhancement in the solubility of PTX was observed with considerable correlation with the density and viscosity of the ILs, and with the side chain of the amino acids used as anions in the ILs. Moreover, the formulations were stable for up to 3 months. The driving force for the stability of the formulation was hypothesized to be the involvement of different types of interactions between the IL and PTX. In vitro cytotoxicity and antitumor activity of the IL-based formulations were evaluated on HeLa cells. The IL vehicles without PTX were found to be less cytotoxic than Taxol, while both the IL-based PTX formulation and Taxol exhibited similar antitumor activity. Finally, in vitro hypersensitivity reactions were evaluated on THP-1 cells and found to be significantly lower with the IL-based formulation than Taxol. This study demonstrated that specially designed ILs could provide a potentially safer alternative to Cremophor EL as an effective PTX formulation for cancer treatment giving fewer hypersensitivity reactions.
Equivalent formulations of “the equation of life”
NASA Astrophysics Data System (ADS)
Ao, Ping
2014-07-01
Motivated by progress in theoretical biology a recent proposal on a general and quantitative dynamical framework for nonequilibrium processes and dynamics of complex systems is briefly reviewed. It is nothing but the evolutionary process discovered by Charles Darwin and Alfred Wallace. Such general and structured dynamics may be tentatively named “the equation of life”. Three equivalent formulations are discussed, and it is also pointed out that such a quantitative dynamical framework leads naturally to the powerful Boltzmann-Gibbs distribution and the second law in physics. In this way, the equation of life provides a logically consistent foundation for thermodynamics. This view clarifies a particular outstanding problem and further suggests a unifying principle for physics and biology.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Establishing a theory for deuteron-induced surrogate reactions
NASA Astrophysics Data System (ADS)
Potel, G.; Nunes, F. M.; Thompson, I. J.
2015-09-01
Background: Deuteron-induced reactions serve as surrogates for neutron capture into compound states. Although these reactions are of great applicability, no theoretical efforts have been invested in this direction over the last decade. Purpose: The goal of this work is to establish on firm grounds a theory for deuteron-induced neutron-capture reactions. This includes formulating elastic and inelastic breakup in a consistent manner. Method: We describe this process both in post- and prior-form distorted wave Born approximation following previous works and discuss the differences in the formulation. While the convergence issues arising in the post formulation can be overcome in the prior formulation, in this case one still needs to take into account additional terms due to nonorthogonality. Results: We apply our method to 93Nb(d,p)X at Ed = 15 and 25 MeV and are able to obtain a good description of the data. We look at the various partial wave contributions, as well as elastic versus inelastic contributions. We also connect our formulation with transfer to neutron bound states. Conclusions: Our calculations demonstrate that the nonorthogonality term arising in the prior formulation is significant and is at the heart of the long-standing controversy between the post and the prior formulations of the theory. We also show that the cross sections for these reactions are angular-momentum dependent and therefore the commonly used Weisskopf limit is inadequate. Finally, we make important predictions for the relative contributions of elastic breakup and nonelastic breakup and call for elastic-breakup measurements to further constrain our model.
Dependence of tropical cyclone development on coriolis parameter: A theoretical model
NASA Astrophysics Data System (ADS)
Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda
2018-03-01
A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.
Theoretical relation between halo current-plasma energy displacement/deformation in EAST
NASA Astrophysics Data System (ADS)
Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen
2018-04-01
In this paper, a theoretical model for calculating the halo current has been developed. To our knowledge, no theoretical calculation of the halo current has been reported so far; this is the first use of a purely theoretical approach. The research started by calculating points for the plasma energy in terms of poloidal and toroidal magnetic field orientations. While calculating these points, the work was extended to calculate the halo current and to develop the theoretical model. Two cases were considered for analyzing the plasma energy when it flows downward/upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and the corresponding mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. The halo current was first established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to relate the theoretical results to experimental results, so that the plasma behavior in any tokamak can be evaluated in advance.
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
Formulation and method for preparing gels comprising hydrous aluminum oxide
Collins, Jack L.
2014-06-17
Formulations useful for preparing hydrous aluminum oxide gels contain a metal salt including aluminum, an organic base, and a complexing agent. Methods for preparing gels containing hydrous aluminum oxide include heating a formulation to a temperature sufficient to induce gel formation, where the formulation contains a metal salt including aluminum, an organic base, and a complexing agent.
Formulation and method for preparing gels comprising hydrous cerium oxide
Collins, Jack L; Chi, Anthony
2013-05-07
Formulations useful for preparing hydrous cerium oxide gels contain a metal salt including cerium, an organic base, and a complexing agent. Methods for preparing gels containing hydrous cerium oxide include heating a formulation to a temperature sufficient to induce gel formation, where the formulation contains a metal salt including cerium, an organic base, and a complexing agent.
Gamma-Ray Bursts and Fast Transients. Multi-wavelength Observations and Multi-messenger Signals
NASA Astrophysics Data System (ADS)
Willingale, R.; Mészáros, P.
2017-07-01
The current status of observations and theoretical models of gamma-ray bursts and some other related transients, including ultra-long bursts and tidal disruption events, is reviewed. We consider the impact of multi-wavelength data on the formulation and development of theoretical models for the prompt and afterglow emission including the standard fireball model utilizing internal shocks and external shocks, photospheric emission, the role of the magnetic field and hadronic processes. In addition, we discuss some of the prospects for non-photonic multi-messenger detection and for future instrumentation, and comment on some of the outstanding issues in the field.
Vehicular headways on signalized intersections: theory, models, and reality
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Šleis, Jiří
2015-01-01
We discuss statistical properties of vehicular headways measured on signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Sequentially, the multifarious families of statistical distributions (commonly used to fit real-road headway statistics) are confronted with these criteria, and with original empirical time clearances gauged among neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that an arrangement of vehicles on an intersection is a consequence of the general stochastic nature of queueing systems, rather than a consequence of traffic rules, driver estimation processes, or decision-making procedures.
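The kind of confrontation described above can be sketched in a few lines: draw (here, synthetic) clearances, fit a candidate distribution, and apply a goodness-of-fit test. A gamma law is used purely as an example of a commonly used family; it is not the distribution advocated in the paper, and the KS p-value is only indicative when parameters are estimated from the same data.

```python
# Fit a candidate headway distribution to (synthetic) clearances and check the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
headways = rng.gamma(shape=2.2, scale=0.9, size=500)        # synthetic clearances (s)

shape, loc, scale = stats.gamma.fit(headways, floc=0.0)     # fit with location pinned at 0
ks_stat, p_value = stats.kstest(headways, "gamma", args=(shape, loc, scale))

print(f"fitted shape={shape:.2f}, scale={scale:.2f}")
print(f"KS statistic={ks_stat:.3f}, p-value={p_value:.3f}  (indicative only: parameters were fitted)")
print("mean headway (s):", headways.mean())
```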
A practical guide to density matrix embedding theory in quantum chemistry
Wouters, Sebastian; Jimenez-Hoyos, Carlos A.; Sun, Qiming; ...
2016-05-09
Density matrix embedding theory (DMET) (Knizia, G.; Chan, G. K.-L. Phys. Rev. Lett. 2012, 109, 186404) provides a theoretical framework to treat finite fragments in the presence of a surrounding molecular or bulk environment, even when there is significant correlation or entanglement between the two. In this work, we give a practically oriented and explicit description of the numerical and theoretical formulation of DMET. Here, we also describe in detail how to perform self-consistent DMET optimizations. We explore different embedding strategies with and without a self-consistency condition in hydrogen rings, beryllium rings, and a sample SN2 reaction.
Gao, Dashan; Vasconcelos, Nuno
2009-01-01
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
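The cascade identified above (linear filtering, divisive normalization, rectification, spatial pooling) can be mimicked in a few lines. The sketch below uses a difference-of-Gaussians filter and simple box-filter pooling on a synthetic pop-out image; it is a loose stand-in, not the paper's derived optimal detector.

```python
# A cartoon of the filtering / normalization / rectification / pooling cascade.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, size=(128, 128))
img[60:68, 60:68] += 1.0                                   # pop-out target on a noisy background

response = gaussian_filter(img, 1.0) - gaussian_filter(img, 4.0)        # linear (center-surround) filtering
energy = response ** 2                                                  # rectification (squaring)
normalized = energy / (uniform_filter(energy, size=31) + energy.std())  # divisive normalization
saliency = uniform_filter(normalized, size=9)                           # spatial pooling

peak = np.unravel_index(saliency.argmax(), saliency.shape)
print("most salient location:", peak, "(target centred near (64, 64))")
```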
[Use of theories and models on papers of a Latin-American journal in public health, 2000 to 2004].
Cabrera Arana, Gustavo Alonso
2007-12-01
To characterize the frequency and type of use of theories or models in papers of a Latin-American journal in public health between 2000 and 2004. The Revista de Saúde Pública was chosen because of its history of periodic publication without interruption and its current impact on the scientific communication of the area. A standard procedure was applied for reading and classifying articles into an arbitrary typology of four levels, according to the depth of the use of models or theoretical references to describe problems or issues, to formulate methods and to discuss results. Of the 482 articles included, 421 (87%) were research studies, 42 (9%) reviews or special contributions and 19 (4%) opinion texts or essays. Of the 421 research studies, 286 (68%) had a quantitative focus, 110 (26%) qualitative and 25 (6%) mixed. Reference to theories or models is uncommon: only 90 (19%) articles mentioned a theory or model. According to the depth of use, 29 (6%) were classified as type I, 9 (2%) as type II, 6 (1.3%) as type III and the 46 remaining texts (9.5%) as type IV. Reference to models was nine-fold more frequent than the use of theoretical references. The ideal use, type IV, occurred in one of every ten articles studied. It is relevant to make explicit the theories and models used when approaching topics, formulating hypotheses, designing methods and discussing findings in papers.
A monopole model for annihilation line emission from the Galactic center
NASA Astrophysics Data System (ADS)
Wang, D. Y.; Peng, Q. H.
Two traditional theoretical interpretations of the observed plasmapause are compared, namely, the plasmapause as: (1) the boundary between closed flux tubes that have been in the inner magnetosphere for several days and those that have recently drifted in from the magnetotail or (2) the last closed electric equipotential. Although the two interpretations become equivalent in the case where the electric-field pattern is steady for several days, interpretation 1 seems theoretically more secure for typical magnetospheric conditions. The results of old theoretical studies of the effects of time variations in the electric-field pattern on the shape of the plasmapause are reviewed briefly. The formulation of the present version of the Rice Convection Model is also reviewed. Preliminary results of recent computations of quiet-time electric fields, carried out with this model, are presented and discussed.
Riddles of masculinity: gender, bisexuality, and thirdness.
Fogel, Gerald I
2006-01-01
Clinical examples are used to illuminate several riddles of masculinity-ambiguities, enigmas, and paradoxes in relation to gender, bisexuality, and thirdness-frequently seen in male patients. Basic psychoanalytic assumptions about male psychology are examined in the light of advances in female psychology, using ideas from feminist and gender studies as well as important and now widely accepted trends in contemporary psychoanalytic theory. By reexamining basic assumptions about heterosexual men, as has been done with ideas concerning women and homosexual men, complexity and nuance come to the fore to aid the clinician in treating the complex characterological pictures seen in men today. In a context of rapid historical and theoretical change, the use of persistent gender stereotypes and unnecessarily limiting theoretical formulations, though often unintended, may mask subtle countertransference and theoretical blind spots, and limit optimal clinical effectiveness.
Thermoacoustics of solids: A pathway to solid state engines and refrigerators
NASA Astrophysics Data System (ADS)
Hao, Haitian; Scalo, Carlo; Sen, Mihir; Semperlotti, Fabio
2018-01-01
Thermoacoustic oscillations have been one of the most exciting discoveries of the physics of fluids in the 19th century. Since its inception, scientists have formulated a comprehensive theoretical explanation of the basic phenomenon which has later found several practical applications to engineering devices. To date, all studies have concentrated on the thermoacoustics of fluid media where this fascinating mechanism was exclusively believed to exist. Our study shows theoretical and numerical evidence of the existence of thermoacoustic instabilities in solid media. Although the underlying physical mechanism exhibits some interesting similarities with its counterpart in fluids, the theoretical framework highlights relevant differences that have important implications on the ability to trigger and sustain the thermoacoustic response. This mechanism could pave the way to the development of highly robust and reliable solid-state thermoacoustic engines and refrigerators.
Game theoretic power allocation and waveform selection for satellite communications
NASA Astrophysics Data System (ADS)
Shu, Zhihui; Wang, Gang; Tian, Xin; Shen, Dan; Pham, Khanh; Blasch, Erik; Chen, Genshe
2015-05-01
Game theory is a useful method to model interactions between agents with conflicting interests. In this paper, we set up a Game Theoretic Model for Satellite Communications (SATCOM) to solve the interaction between the transmission pair (blue side) and the jammer (red side) to reach a Nash Equilibrium (NE). First, the IFT Game Application Model (iGAM) for SATCOM is formulated to improve the utility of the transmission pair while considering the interference from a jammer. Specifically, in our framework, the frame error rate performance of different modulation and coding schemes is used in the game theoretic solution. Next, the game theoretic analysis shows that the transmission pair can choose the optimal waveform and power given the received power from the jammer. We also describe how the jammer chooses the optimal power given the waveform and power allocation from the transmission pair. Finally, simulations are implemented for the iGAM and the simulation results show the effectiveness of the SATCOM power allocation, waveform selection scheme, and jamming mitigation.
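A stripped-down version of the interaction described above is a two-player zero-sum matrix game: the transmitter picks a (waveform, power) pair, the jammer picks a jamming power, and the entry is the resulting throughput. The sketch below searches for a pure-strategy Nash equilibrium (saddle point) by best-response checking; the payoff numbers are invented, and the paper's iGAM utilities and FER-based payoffs are not reproduced.

```python
# Saddle-point (pure NE) search in a toy transmitter-vs-jammer payoff matrix.
import numpy as np

# throughput (Mbps) for transmitter strategies (rows) vs. jammer power levels (columns) -- invented
payoff = np.array([
    [4.0, 2.5, 1.0],   # (waveform A, low power)
    [5.5, 2.0, 0.8],   # (waveform A, high power)
    [6.5, 1.5, 0.4],   # (waveform B, high power)
])

row_best = payoff.argmax(axis=0)     # transmitter best response to each jammer column
col_best = payoff.argmin(axis=1)     # jammer best response to each transmitter row

saddles = [(i, j) for j, i in enumerate(row_best) if col_best[i] == j]
if saddles:
    i, j = saddles[0]
    print(f"pure NE: transmitter strategy {i}, jammer strategy {j}, value {payoff[i, j]} Mbps")
else:
    print("no pure-strategy NE; a mixed equilibrium would be needed (e.g. via an LP)")
```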
Element free Galerkin formulation of composite beam with longitudinal slip
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal
2015-05-15
The behaviour between the two materials in a composite beam is assumed to be partially interacting when longitudinal slip at the interfacial surfaces is considered. While such problems are commonly analysed with mesh-based formulations, this study uses a meshless formulation, the Element Free Galerkin (EFG) method, for the numerical partial-interaction analysis of the beam. As a meshless formulation implies that the problem domain is discretised only by nodes, the EFG method is based on the Moving Least Squares (MLS) approach for the shape function formulation, with the weak form developed using a variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results after being verified against the analytical solution, thus signifying its applicability to partial interaction problems. Based on the numerical test results, the Cubic Spline and Quartic Spline weight functions yield better accuracy for the EFG formulation compared to the other proposed weight functions.
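The MLS shape functions and cubic spline weight referred to above can be reproduced compactly in 1D; the sketch below checks the partition-of-unity and linear-reproduction properties at an arbitrary evaluation point. Node spacing, support size, and basis order are assumptions made for illustration.

```python
# 1D MLS shape functions with the cubic spline weight: phi_i(x) = p(x)^T A(x)^{-1} B_i(x).
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)          # regular nodal distribution
dmax = 2.5 * (nodes[1] - nodes[0])         # support (influence) radius

def cubic_spline_weight(r):
    """Standard EFG cubic spline weight, r = |x - x_i| / dmax."""
    w = np.zeros_like(r)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.0)
    w[m1] = 2 / 3 - 4 * r[m1] ** 2 + 4 * r[m1] ** 3
    w[m2] = 4 / 3 - 4 * r[m2] + 4 * r[m2] ** 2 - 4 / 3 * r[m2] ** 3
    return w

def mls_shape_functions(x, basis_order=1):
    p = lambda s: np.array([s ** k for k in range(basis_order + 1)])   # monomial basis [1, x, ...]
    w = cubic_spline_weight(np.abs(x - nodes) / dmax)
    P = np.array([p(xi) for xi in nodes])        # (n_nodes, n_basis)
    A = P.T @ (w[:, None] * P)                   # moment matrix
    B = (w[:, None] * P).T                       # (n_basis, n_nodes)
    return p(x) @ np.linalg.solve(A, B)

x_eval = 0.37
phi = mls_shape_functions(x_eval)
print("partition of unity:", phi.sum())              # ~1
print("reproduces x:", phi @ nodes, "vs", x_eval)    # linear consistency
```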
Adaptive control of periodic systems
NASA Astrophysics Data System (ADS)
Tian, Zhiling
2009-12-01
Adaptive control is needed to cope with parametric uncertainty in dynamical systems. The adaptive control of LTI systems in both discrete and continuous time has been studied for four decades, and the results are currently used widely in many different fields. In recent years, interest has shifted to the adaptive control of time-varying systems. It is known that the adaptive control of arbitrarily rapidly time-varying systems is in general intractable, but systems with periodically time-varying parameters (LTP systems), which have much more structure, are amenable to mathematical analysis. Further, there is also a need for such control in practical problems which have arisen in industry during the past twenty years. This thesis is the first attempt to deal with the adaptive control of LTP systems. Adaptive control involves estimation of unknown parameters, adjusting the control parameters based on the estimates, and demonstrating that the overall system is stable. System-theoretic properties such as stability, controllability, and observability play an important role both in formulating the problems and in generating solutions for them. For LTI systems, these properties have been studied since the 1960s, and algebraic conditions that have to be satisfied to assure these properties are now well established. In the case of LTP systems, these properties can be expressed only in terms of transition matrices that are much more involved than those for LTI systems. Since adaptive control problems can be formulated only when these properties are well understood, it is not surprising that systematic efforts have not been made thus far to formulate and solve adaptive control problems that arise in LTP systems. Even in the case of LTI systems, it is well recognized that problems related to adaptive discrete-time systems are not as difficult as those that arise in continuous-time systems. This is amply evident in the solutions that were derived in the 1980s and 1990s for all the important problems. These differences are even more amplified in the LTP case; some problems in continuous time cannot even be formulated precisely. This thesis consequently focuses primarily on the adaptive identification and control of discrete-time systems, and derives most of the results that currently exist in the literature for LTI systems. Based on these investigations of discrete-time adaptive systems, attempts are made in the thesis to examine their continuous-time counterparts and discuss the principal difficulties encountered. The dissertation examines critically the system-theoretic properties of LTP systems in Chapter 2, and the mathematical framework provided for their analysis by Floquet theory in Chapter 3. Assuming that adaptive identification and control problems can be formulated precisely, a unified method of developing stable adaptive laws using error models is treated in Chapter 4. Chapter 5 presents a detailed study of adaptation in SISO discrete-time LTP systems and represents the core of the thesis. The important problems of identification, stabilization, regulation, and tracking of arbitrary signals are investigated, and practically implementable stable adaptive laws are derived. The dissertation concludes with a discussion of continuous-time adaptive control in Chapter 6 and discrete multivariable systems in Chapter 7. Directions for future research are indicated towards the end of the dissertation.
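As a minimal flavour of adaptive identification in the LTP setting discussed above, the sketch below estimates a periodically time-varying parameter vector of a toy regression-form plant with a normalized gradient law, keeping one estimate per phase of the period. The plant, gain, and regressor structure are invented for illustration and are not taken from the thesis.

```python
# Normalized-gradient identification of a periodically time-varying parameter vector.
import numpy as np

rng = np.random.default_rng(3)
N = 4                                       # parameter period
theta_true = np.array([[1.0 + 0.5 * np.cos(2 * np.pi * p / N),
                        -0.4 + 0.3 * np.sin(2 * np.pi * p / N)] for p in range(N)])

theta_hat = np.zeros((N, 2))                # one estimate per phase
gamma = 0.8                                 # adaptation gain (0 < gamma < 2)

phi = np.zeros(2)
errors = []
for k in range(4000):
    p = k % N
    y = theta_true[p] @ phi                                   # plant output (noise-free regression form)
    e = y - theta_hat[p] @ phi                                # prediction error
    theta_hat[p] += gamma * e * phi / (1.0 + phi @ phi)       # normalized gradient update
    errors.append(abs(e))
    u = rng.normal()                                          # persistently exciting input
    phi = np.array([u, phi[0]])                               # regressor [u(k), u(k-1)]

print("final parameter error:", np.abs(theta_hat - theta_true).max())
print("late-time mean |e|:", np.mean(errors[-200:]))
```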
Formulation and method for preparing gels comprising hydrous hafnium oxide
Collins, Jack L; Hunt, Rodney D; Montgomery, Frederick C
2013-08-06
Formulations useful for preparing hydrous hafnium oxide gels contain a metal salt including hafnium, an acid, an organic base, and a complexing agent. Methods for preparing gels containing hydrous hafnium oxide include heating a formulation to a temperature sufficient to induce gel formation, where the formulation contains a metal salt including hafnium, an acid, an organic base, and a complexing agent.
NASA Astrophysics Data System (ADS)
Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan
2017-07-01
In this work, a matrix form of the numerical algorithm for the spectral shift is presented, based on the novel nonequilibrium solvation model that is established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute the spectral shift numerically. Here, the spectral shifts of the absorption spectra of several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA), were investigated in different solvents with various polarities. The computed spectral shifts can explain the available experimental findings reasonably well. The contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to the spectral shift are discussed.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework for identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee for the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
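The decomposition idea stated above, a low-rank matrix plus a column-sparse attack matrix, can be prototyped with a short ADMM loop alternating singular-value thresholding and column-wise group shrinkage, as sketched below on synthetic data. The penalty weights, iteration count, and detection threshold are arbitrary choices; this is not the paper's algorithm nor its theoretical guarantee.

```python
# min ||L||_* + lam * ||C||_{2,1}  s.t.  M = L + C, solved with a simple ADMM loop.
import numpy as np

rng = np.random.default_rng(7)
n_t, n_ch, r = 120, 40, 3
M = rng.normal(size=(n_t, r)) @ rng.normal(size=(r, n_ch))          # low-rank PMU-like block
attack_cols = [5, 17, 30]
M[:, attack_cols] += rng.normal(scale=5.0, size=(n_t, len(attack_cols)))  # corrupted channels

lam, rho = 0.5, 1.0
L = np.zeros_like(M); C = np.zeros_like(M); Y = np.zeros_like(M)

for _ in range(300):
    # L-update: singular value thresholding
    U, s, Vt = np.linalg.svd(M - C + Y / rho, full_matrices=False)
    L = U @ np.diag(np.maximum(s - 1.0 / rho, 0.0)) @ Vt
    # C-update: column-wise group soft-thresholding
    R = M - L + Y / rho
    norms = np.linalg.norm(R, axis=0)
    C = R * np.maximum(1.0 - (lam / rho) / np.maximum(norms, 1e-12), 0.0)
    # dual update
    Y += rho * (M - L - C)

col_norms = np.linalg.norm(C, axis=0)
flagged = np.where(col_norms > 0.1 * np.median(np.linalg.norm(M, axis=0)))[0]
print("flagged channels:", flagged.tolist(), " (true attacked:", attack_cols, ")")
```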
Application of the R-matrix method to photoionization of molecules.
Tashiro, Motomichi
2010-04-07
The R-matrix method has been used for theoretical calculations of electron collisions with atoms and molecules for many years. The method was also formulated to treat the photoionization process; however, its application has been mostly limited to the photoionization of atoms. In this work, we implement the R-matrix method for the molecular photoionization problem based on the UK R-matrix codes. The method can be used for diatomic as well as polyatomic molecules, with a multiconfigurational description of the electronic states of both the target neutral molecule and the product molecular ion. Test calculations were performed for valence-electron photoionization of nitrogen (N(2)) and nitric oxide (NO) molecules. The calculated photoionization cross sections and asymmetry parameters agree reasonably well with the available experimental results, suggesting the usefulness of the method for molecular photoionization.
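For context, the asymmetry parameter referred to above is the β of the standard photoelectron angular distribution for linearly polarized light (a textbook relation, quoted here as background rather than taken from the paper):

$$
\frac{d\sigma}{d\Omega} \;=\; \frac{\sigma}{4\pi}\Bigl[\,1 + \beta\,P_2(\cos\theta)\,\Bigr],
\qquad P_2(x) = \tfrac{1}{2}\bigl(3x^{2}-1\bigr),
$$

where $\sigma$ is the integrated cross section and $\theta$ is the angle between the photoelectron momentum and the polarization axis.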
Research on carbon emission driving factors of China’s provincial construction industry
NASA Astrophysics Data System (ADS)
Shang, Mei; Dong, Rui; Fu, Yujie; Hao, Wentao
2018-03-01
As a pillar industry of the national economy, the construction industry causes environmental damage that cannot be ignored. In the context of low-carbon development, identifying the main driving factors of carbon emissions in the provincial construction industry is key for local governments formulating development strategies for construction. In this paper, based on the Kaya factor decomposition method, the effects of the carbon intensity of the energy structure, the energy intensity, and the construction output on the carbon emissions of the provincial construction industry are studied, and relevant suggestions for the low-carbon development of the provincial construction industry are proposed. The conclusions provide a theoretical basis for the early realization of low-carbon development in China's provincial construction industry.
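A Kaya-style factoring consistent with the drivers listed above (written in a generic form for illustration; the paper's exact index decomposition may differ) is

$$
C \;=\; \sum_{i}\frac{C_{i}}{E_{i}}\cdot\frac{E_{i}}{E}\cdot\frac{E}{Y}\cdot Y ,
$$

where $C_{i}$ and $E_{i}$ are the emissions and consumption of fuel $i$, $E$ is total energy use, and $Y$ is construction output, so the three ratios correspond to the carbon coefficients of individual fuels, the energy structure, and the energy intensity of the industry.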
Phoretic self-propulsion: a mesoscopic description of reaction dynamics that powers motion.
de Buyl, Pierre; Kapral, Raymond
2013-02-21
The fabrication of synthetic self-propelled particles and experimental investigations of their dynamics have stimulated interest in self-generated phoretic effects that propel nano- and micron-scale objects. Theoretical modeling of these phenomena is often based on a continuum description of the solvent for different phoretic propulsion mechanisms, including self-electrophoresis, self-diffusiophoresis and self-thermophoresis. The work in this paper considers the various types of catalytic chemical reactions at the motor surface and in the bulk fluid that come into play in mesoscopic descriptions of the dynamics. The formulation is illustrated by developing the mesoscopic reaction dynamics for the exothermic and dissociation reactions that are used to power motor motion. Results of simulations of the self-propelled dynamics of composite Janus particles driven by these mechanisms are presented.
Stability of anisotropic self-gravitating fluids
NASA Astrophysics Data System (ADS)
Ahmad, S.; Jami, A. Rehman; Mughal, M. Z.
2018-06-01
The aim of this paper is to study the stability as well as the existence of self-gravitating anisotropic fluids in the Λ-dominated era. Taking a cylindrically symmetric and static spacetime, we compute the corresponding equations of motion in the background of anisotropic fluid distributions. A realistic formulation of the energy-momentum tensor as well as a theoretical model of the scale factors are considered in order to describe some physical properties of the anisotropic fluids. To assess the stability of the compact star, we use Herrera's technique, which is based on the radial and transverse components of the speed of sound. Moreover, the behaviors of other physical quantities, such as the anisotropy, the matching conditions between the interior and exterior metrics, and the compactness of the compact structures, are also discussed.
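For context, the criterion referred to here is commonly stated in terms of the radial and transverse sound speeds, $v_{sr}^{2}=dp_{r}/d\rho$ and $v_{st}^{2}=dp_{t}/d\rho$ (a standard statement of the Herrera–Abreu condition, given as background rather than taken from the paper):

$$
0 \;\le\; \bigl| v_{st}^{2} - v_{sr}^{2} \bigr| \;\le\; 1,
$$

with regions where $-1 \le v_{st}^{2} - v_{sr}^{2} \le 0$ regarded as potentially stable (no cracking).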
Characterizing a Model of Coronal Heating and Solar Wind Acceleration Based on Wave Turbulence.
NASA Astrophysics Data System (ADS)
Downs, C.; Lionello, R.; Mikic, Z.; Linker, J.; Velli, M.
2014-12-01
Understanding the nature of coronal heating and solar wind acceleration is a key goal in solar and heliospheric research. While there have been many theoretical advances on both topics, including suggestions that they may be intimately related, the inherent scale coupling and complexity of these phenomena limit our ability to construct models that test them on a fundamental level for realistic solar conditions. At the same time, there is an ever-increasing impetus to improve our space-weather models, and incorporating treatments of these processes that capture their basic features while remaining tractable is an important goal. With this in mind, I will give an overview of our exploration of a wave-turbulence-driven (WTD) model for coronal heating and solar wind acceleration based on low-frequency Alfvénic turbulence. Here we attempt to bridge the gap between theory and practical modeling by exploring this model in 1D HD and multi-dimensional MHD contexts. The key questions that we explore are: What properties must the model possess to be a viable model for coronal heating? What is the influence of the magnetic field topology (open, closed, rapidly expanding)? And can we simultaneously capture coronal heating and solar wind acceleration with such a quasi-steady formulation? Our initial results suggest that a WTD-based formulation performs adequately for a variety of solar and heliospheric conditions, while significantly reducing the number of free parameters when compared to empirical heating and solar wind models. The challenges, applications, and future prospects of this type of approach will also be discussed.
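A phenomenological heating rate of the general type used in many wave-turbulence-driven models (quoted only to illustrate the kind of closure involved, not as the specific expression adopted by the authors) is

$$
Q \;\sim\; \rho\,\frac{|z_{-}|\,z_{+}^{2} + |z_{+}|\,z_{-}^{2}}{4\,\lambda_{\perp}},
$$

where $z_{\pm}$ are the Elsässer amplitudes of the outward- and inward-propagating Alfvénic fluctuations, $\rho$ is the mass density, and $\lambda_{\perp}$ is a perpendicular correlation length.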
Stochastic Online Learning in Dynamic Networks under Unknown Models
2016-08-02
Repeated Game with Incomplete Information, IEEE International Conference on Acoustics, Speech, and Signal Processing, 20-MAR-16, Shanghai, China. ... in a game-theoretic framework for the application of multi-seller dynamic pricing with unknown demand models. We formulated the problem as an ... infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning.
NASA Technical Reports Server (NTRS)
Sadler, S. G.
1972-01-01
A mathematical model and computer program were implemented to study the effects of main rotor free wake geometry on helicopter rotor blade air loads and response in steady maneuvers. Volume 1 (NASA CR-2110) contains the theoretical formulation and analysis of results. Volume 2 contains the computer program listing.
ERIC Educational Resources Information Center
van de Werfhorst, Herman G.
2011-01-01
A theoretical approach is formulated that connects various theories of why education has an effect on labour market outcomes with institutional settings in which such theories provide the most likely mechanism. Three groups of mechanisms are distinguished: education as an indicator of productive skills, as a positional good and as a means for…
ERIC Educational Resources Information Center
Bergee, Martin J.; Westfall, Claude R.
2005-01-01
This is the third study in a line of inquiry whose purpose has been to develop a theoretical model of the influence of selected extramusical variables on solo and small-ensemble festival ratings. Authors of the second of these (Bergee & McWhirter, 2005) had used binomial logistic regression as the basis for their model-formulation strategy. Their…
A supersonic, three-dimensional code for flow over blunt bodies: User's manual
NASA Technical Reports Server (NTRS)
Chaussee, D. S.; Mcmillan, O. J.
1980-01-01
A computer code is described which may be used to calculate the steady, supersonic, three-dimensional, inviscid flow over blunt bodies. The theoretical and numerical formulation of the problem is given (shock-capturing, downstream marching), including exposition of the boundary and initial conditions. The overall flow logic of the program, its usage, accuracy, and limitations are discussed.
Portent of Heine's Reciprocal Square Root Identity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohl, H W
Precise efforts in theoretical astrophysics are needed to fully understand the mechanisms that govern the structure, stability, dynamics, formation, and evolution of differentially rotating stars. Direct computation of the physical attributes of a star can be facilitated by the use of highly compact azimuthal and separation angle Fourier formulations of the Green's functions for the linear partial differential equations of mathematical physics.
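The identity in question is usually written as the Fourier expansion (the standard form of Heine's reciprocal square root identity, reproduced here for context):

$$
\frac{1}{\sqrt{z-\cos\psi}} \;=\; \frac{\sqrt{2}}{\pi}\sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(z)\, e^{im\psi},
$$

where $Q_{m-1/2}$ is the Legendre function of the second kind of odd-half-integer degree; truncations of this series give the compact azimuthal Fourier formulations of the Green's functions mentioned above.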
Objectives Stated for the Use of Literature at School: An Empirical Analysis, Part I.
ERIC Educational Resources Information Center
Klingberg, Gote; Agren, Bengt
This report presents a theoretical basis for literary education through goal analyses. The object of the analyses is to obtain clearer formulations of the subgoals of instruction with the help of literature, and to arrange them in logical sequence. Using 79 sources from 12 countries, an empirical study was made, and goal descriptions were…
ERIC Educational Resources Information Center
Miller, Thomas W., Ed.
Clinical theory and practice models are provided along with current concepts in diagnosis and treatment. Theoretical formulations, hypotheses, issues, and implications related to life stress measurement are addressed and applied to medical and mental health concerns. Contributions include: (1) "Stress Response and Adaptation in Children:…
DNA/RNA-based formulations for treatment of breast cancer.
Xie, Zhaolu; Zeng, Xianghui
2017-12-01
To develop a successful formulation for the gene therapy of breast cancer, an effective therapeutic nucleic acid and a proper delivery system are essential. Increased understanding of breast cancer, together with developments in biotechnology, materials science and nanotechnology, has provided a major impetus for the development of effective formulations for the gene therapy of breast cancer. Areas covered: We discuss DNA/RNA-based formulations that can inhibit the growth of breast cancer cells and control the progression of breast cancer. Targets for the gene therapy of breast cancer, DNA/RNA-based therapeutics and delivery systems are summarized, and examples of successful DNA/RNA-based formulations for breast cancer gene therapy are reviewed. Expert opinion: Several challenges remain in developing effective DNA/RNA-based formulations for the treatment of breast cancer. First, most of the currently utilized targets are not effective enough as monotherapy for breast cancer. Second, the requirements of a co-delivery system make the preparation of the formulation more complicated. Third, nanoparticles modified with tumor-targeting ligands can be less stable in circulation and in normal tissues. Lastly, immune responses against viral vectors are unfavorable for the gene therapy of breast cancer because of damage to the host and impaired therapeutic ability.
Novel microemulsion-based gel formulation of tazarotene for therapy of acne.
Patel, Mrunali Rashmin; Patel, Rashmin Bharatbhai; Parikh, Jolly R; Patel, Bharat G
2016-12-01
The objective of this study was to develop and evaluate a novel microemulsion-based gel formulation containing tazarotene for targeted topical therapy of acne. Pseudoternary phase diagrams were constructed to obtain the concentration ranges of oil, surfactant, and co-surfactant for microemulsion formation. The optimized microemulsion formulation containing 0.05% tazarotene was prepared by the spontaneous microemulsification method and consisted of 10% Labrafac CC, 15% of the mixed emulsifiers Labrasol-Cremophor-RH 40 (1:1), 15% Capmul MCM, and 60% distilled water (w/w) as the external phase. All plain and tazarotene-loaded microemulsions were clear and showed physicochemical parameters suitable for topical delivery and stability. The permeation profiles of tazarotene through rat skin from the optimized microemulsion formulation followed the Higuchi model for controlled permeation. The microemulsion-based gel was prepared by incorporating Carbopol®971P NF into the optimized microemulsion formulation having a suitable skin permeation rate and skin uptake. The microemulsion-based gel showed the desired physicochemical parameters and demonstrated an advantage over the marketed formulation in improving the skin tolerability of tazarotene, indicating its potential for improving topical delivery. The developed microemulsion-based gel may be a potential drug delivery vehicle for targeted topical delivery of tazarotene in the treatment of acne.
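The Higuchi model cited here describes permeation that is linear in the square root of time (textbook form, stated for clarity rather than drawn from the paper):

$$
Q(t) \;=\; k_{H}\,\sqrt{t},
$$

where $Q(t)$ is the cumulative amount of drug permeated per unit area and $k_{H}$ is the Higuchi rate constant obtained from the slope of $Q$ versus $\sqrt{t}$.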
Vanlaeys, Alison; Dubuisson, Florine; Seralini, Gilles-Eric; Travert, Carine
2018-06-04
Roundup and Glyphogan are glyphosate-based herbicides containing the same concentration of glyphosate and confidential formulants. Formulants are declared as inert diluents but some are more toxic than glyphosate, such as the family of polyethoxylated alkylamines (POEA). We tested glyphosate alone, glyphosate-based herbicide formulations and POEA on the immature mouse Sertoli cell line (TM4), at concentrations ranging from environmental to agricultural-use levels. Our results show that formulations of glyphosate-based herbicides induce TM4 mitochondrial dysfunction (like glyphosate, but to a lesser extent), disruption of cell detoxification systems, lipid droplet accumulation and mortality at sub-agricultural doses. Formulants, especially those present in Glyphogan, are more deleterious than glyphosate and thus should be considered as active principles of these pesticides. Lipid droplet accumulation after acute exposure to POEA suggests the rapid penetration and accumulation of formulants, leading to mortality after 24 h. As Sertoli cells are essential for testicular development and normal onset of spermatogenesis, disturbance of their function by glyphosate-based herbicides could contribute to disruption of reproductive function demonstrated in mammals exposed to these pesticides at a prepubertal stage of development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and the associated quantum-state phase-change phenomena. In support of this objective, the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal-conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, describes mostly formulations for specific problems. Volume 2 describes generalizations of those formulations.
Material Properties Governing Co-Current Flame Spread: The Effect of Air Entrainment
NASA Technical Reports Server (NTRS)
Coutin, Mickael; Rangwala, Ali S.; Torero, Jose L.; Buckley, Steven G.
2003-01-01
A study of the effects of lateral air entrainment on an upward spreading flame has been conducted. The fuel is a flat PMMA plate of constant length and thickness but variable width. Video images and surface temperatures have made it possible to establish the progression of the pyrolysis front and the flame stand-off distance. These measurements have been incorporated into a theoretical formulation to establish characteristic mass transfer numbers ("B" numbers). The mass transfer number is regarded as a material-related parameter that could be used to assess the potential of a material to sustain co-current flame spread. The experimental results show that the theoretical formulation fails to describe the heat exchange between the flame and the surface. The discrepancies seem to be associated with lateral air entrainment, which lifts the flame off the surface and leads to an overestimation of the local mass transfer number. Particle Image Velocimetry (PIV) measurements are in the process of being acquired. These measurements are intended to provide insight into the effect of air entrainment on the flame stand-off distance. A brief description of the methodology to be followed is presented here.
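For reference, a common textbook form of the mass transfer number used in co-current flame spread analyses (given as background; the paper's exact definition may include additional heat-loss terms) is

$$
B \;=\; \frac{Y_{O,\infty}\,\Delta h_{c}/\nu_{O} \;+\; c_{p}\,(T_{\infty}-T_{s})}{L_{v}},
$$

where $Y_{O,\infty}$ is the ambient oxygen mass fraction, $\Delta h_{c}$ the heat of combustion, $\nu_{O}$ the stoichiometric oxygen-to-fuel mass ratio, $c_{p}$ the gas specific heat, $T_{s}$ the surface (pyrolysis) temperature, and $L_{v}$ the effective heat of gasification.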
The basic traumatic situation in the analytical relationship.
Hartke, Raul
2005-04-01
The author attempts to develop a concept of psychic trauma which would comply with the nucleus of this Freudian notion, that is, an excess of excitations that cannot be processed by the mental apparatus, but which would also consider the functions and the crucial role of objects in the constitution of the psychism and in traumatic conditions, as well as taking into account the methodological positioning according to which the analytical relationship is the sole possible locus of observation, inference and intervention by the psychoanalyst. He considers as a basic or minimal traumatic psychoanalytical situation that in which a magnitude or quality of emotions exceeds the capacity for containment of the psychoanalytical pair, to the point of generating a period or area of dementalisation in the psyche of one or both of the participants, of requiring analytical work on the matter and promoting a significant positive or negative change in the relationship. Availing himself of Bion's theory about the alpha function and the metapsychological conceptions of Freud and Green concerning psychic representations, he presents two theoretical formulations relating to this traumatic situation, utilising them according to the 'altered focus' model proposed by Bion. He presents three clinical examples to illustrate the concept and the relevant theoretical formulations.
Effect of Amphiphiles on the Rheology of Triglyceride Networks
NASA Astrophysics Data System (ADS)
Seth, Jyoti
2014-11-01
Networks of aggregated crystallites form the structural backbone of many products from the food, cosmetic and pharmaceutical industries. Such materials are generally formulated by cooling a saturated solution to yield the desired solid fraction. Crystal nucleation and growth followed by aggregation leads to the formation of a space-percolating fractal network. It is understood that the microstructural hierarchy and particle-particle interactions determine material behavior during processing, storage and use. In this talk, the rheology of suspensions of triglycerides (TAGs, such as tristearin) will be explored. TAGs exhibit a rich assortment of polymorphs and form suspensions that are evidently sensitive to surface-modifying additives such as surfactants and polymers. Here, a theoretical framework will be presented for suspensions containing TAG crystals interacting via pairwise potentials. The work builds on existing models of fractal aggregates to understand the microstructure and its correlation with material rheology. The effect of amphiphilic additives is captured through variation of the particle-particle interactions. Theoretical predictions for the storage modulus will be compared against experimental observations and data from the literature, and microstructural predictions against microscopy. Such a theory may serve as a step towards predicting the short- and long-term behavior of aggregated suspensions formulated via crystallization.
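A widely used fractal scaling of the kind such frameworks build on (a Shih-et-al.-type strong-link result, quoted as an illustrative baseline rather than as the model presented in the talk) relates the storage modulus to the solid fraction as

$$
G' \;\propto\; \phi^{\,(d+x)/(d-D_{f})},
$$

where $d$ is the Euclidean dimension, $D_{f}$ the fractal dimension of the aggregates, and $x$ the fractal dimension of the aggregate backbone.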
Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane
2018-05-01
This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about the boundaries of a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criteria of meaningfulness, appropriateness, feasibility, and implementation. Questions about complex interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.
Repellent effect of microencapsulated essential oil in lotion formulation against mosquito bites.
Misni, Norashiqin; Nor, Zurainee Mohamed; Ahmad, Rohani
2017-01-01
Many essential oils have been reported as natural sources of insect repellents; however, due to high volatility, they present a low repellent effect. Microencapsulation makes it possible to control the volatility of essential oils and thereby extend the duration of repellency. In this study, the effectiveness of microencapsulated essential oils of Alpinia galanga, Citrus grandis and C. aurantifolia in lotion formulations was evaluated against mosquito bites. The essential oils and N,N-diethyl-3-methylbenzamide (DEET) were encapsulated using interfacial precipitation techniques before incorporation into a lotion base to form microencapsulated (ME) formulations. The pure essential oils and DEET were also prepared in the lotion base to produce non-encapsulated (NE) formulations. All the prepared formulations were assessed for their repellent activity against Culex quinquefasciatus under laboratory conditions. Field evaluations were also conducted at three different study sites in Peninsular Malaysia. In addition, Citriodiol® (Mosiguard®) and citronella-based repellents (KAPS®, MozAway® and BioZ Natural®) were included for comparison. Under laboratory conditions, the ME formulations of the essential oils showed no significant difference in the duration of repellent effect compared to microencapsulated DEET used at the highest concentration (20%), exhibiting >98% repellent effect for a duration of 4 h (p = 0.06). Under field conditions, these formulations demonstrated a repellent effect (100% for a duration of 3 h) comparable to that of the Citriodiol®-based repellent (Mosiguard®) (p = 0.07). In both test conditions, the ME formulations of the essential oils presented a longer duration of 100% repellent effect (between 1 and 2 h) than the NE formulations. The findings demonstrate that applying the microencapsulation technique during preparation of the formulations significantly increases the duration of the repellent effect of the essential oils, suggesting that ME formulations of essential oils have the potential to be commercialized as an alternative plant-based repellent against mosquitoes.