Sample records for case formulation approach

  1. Case formulation and management using pattern-based formulation (PBF) methodology: clinical case 1.

    PubMed

    Fernando, Irosh; Cohen, Martin

    2014-02-01

    A tool for psychiatric case formulation known as pattern-based formulation (PBF) has recently been introduced. This paper presents an application of this methodology to formulating and managing complex clinical cases. The symptomatology of the clinical presentation is parsed into individual clinical phenomena, which are interpreted by selecting explanatory models. The clinical presentation demonstrates how PBF can be used as a clinical tool that guides clinicians' thinking and takes a structured approach to managing multiple issues with a broad range of management strategies. In doing so, the paper also introduces a number of patterns related to the observed clinical phenomena that can be re-used as explanatory models when formulating other clinical cases. It is expected that this paper will assist clinicians, and particularly trainees, to better understand the PBF methodology and apply it to improve their formulation skills.

  2. Case study to illustrate an approach for detecting contamination and impurities in pesticide formulations.

    PubMed

    Karasali, Helen; Kasiotis, Konstantinos M; Machera, Kyriaki; Ambrus, Arpad

    2014-11-26

    Counterfeit pesticides threaten public health, food trade, and the environment. The present work draws attention to the importance of regular monitoring of impurities in formulated pesticide products. General screening revealed the presence of carbaryl as a contaminant in a copper oxychloride formulated product. In this paper, as a case study, a liquid chromatography diode array/mass spectrometry method developed for the general screening of pesticide products and for the quantitative determination of carbaryl is presented, together with its validation. The proposed testing strategy is considered suitable for use as a general approach for testing organic contaminants and impurities in solid pesticide formulations.

  3. Shared written case formulations and weight change in outpatient therapy for anorexia nervosa: a naturalistic single case series.

    PubMed

    Gladwin, Alice M; Evangeli, Michael

    2013-01-01

    The therapeutic effects of written shared case formulations are underexplored and have not been examined in anorexia nervosa. This study explored the relationship between (a) the delivery and (b) the quality of a shared written case formulation, and weight in outpatient psychological therapy for anorexia nervosa. A naturalistic single case series approach was used to examine the case notes of women who had attended a specialist eating disorders service over a 2-year period. The case notes of 15 adult women who had undergone outpatient psychological therapy for anorexia nervosa with a shared written case formulation component were reviewed. The impact of the quality of the case formulation on weekly weight was examined for the 14 clients whose case formulation was available. The nature of the relationship between the delivery of the written shared case formulation and weight was examined for all 15 clients. There was some evidence to support an association between delivery of the shared written case formulation and weight changes (both weight gain [five out of 15 clients] and weight loss [three out of 15 clients]) in individual cases. Higher case formulation quality was related to cases where weight change did not occur. The delivery of case formulations can be associated with important therapeutic change (both beneficial and potentially harmful) in anorexia nervosa. Future research into the causal mechanisms associated with sharing formulations will face the challenge of adopting strategies that allow for an in-depth exploration of complex therapy variables whilst overcoming methodological challenges. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Developing Emotion-Based Case Formulations: A Research-Informed Method.

    PubMed

    Pascual-Leone, Antonio; Kramer, Ueli

    2017-01-01

    New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change, providing a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Applying the Cultural Formulation Approach to Career Counseling with Latinas/os

    ERIC Educational Resources Information Center

    Flores, Lisa Y.; Ramos, Karina; Kanagui, Marlen

    2010-01-01

    In this article, the authors present two hypothetical cases, one of a Mexican American female college student and one of a Mexican immigrant adult male, and apply a culturally sensitive approach to career assessment and career counseling with each of these clients. Drawing from Leong, Hardin, and Gupta's cultural formulation approach (CFA) to…

  6. An Etiological Approach to Sexual Offender Assessment: CAse Formulation Incorporating Risk Assessment (CAFIRA).

    PubMed

    Craig, Leam A; Rettenberger, Martin

    2018-05-19

    Case formulations (CF) have been the cornerstone of effective practice in clinical psychology since the 1950s and now form one of the core competencies in clinical and forensic assessment. The use of CFs within forensic settings is becoming more relevant when working with offenders who have experienced significant trauma, suffered from personality disorder, and displayed sexually abusive behavior. Furthermore, most North American and European jurisdictions insist that expert witnesses adopt an idiosyncratic approach to risk assessment and consider the characteristics of the individual as part of a wider formulation of the problem behavior. This article focuses specifically on CF incorporating risk assessment procedures for sexual offenders. While empirical support for the use of risk analysis and formulation in managing offending behavior generally, and sexual offending behavior in particular, is limited, there is mounting evidence to suggest that CF can improve understanding of an individual's problem sexual behaviors. We argue that integrating risk formulations into the CF provides a conceptually robust link between the etiological development of the problem sexual behavior and the effective assessment and risk management of sexual offenders. As forensic treatment programs have increasingly moved toward strength-based approaches, in keeping with the Risk-Need-Responsivity principles of Andrews and Bonta (2004) and the Good Lives Model of offender rehabilitation of Ward and Stewart (Prof Psychol Res Pract 34:353-60, 2003), the use of CFs in the assessment, treatment, and management of sexual offenders is indispensable. We present an etiological framework for understanding risk in an individual sexual offender by integrating a case formulation model to include the use of (static, stable, and acute) actuarial and clinical risk assessment measures as well as protective risk factors, referred to as the CAse Formulation Incorporating Risk Assessment (CAFIRA) model.

  7. Formulation of Higher Education Institutional Strategy Using Operational Research Approaches

    ERIC Educational Resources Information Center

    Labib, Ashraf; Read, Martin; Gladstone-Millar, Charlotte; Tonge, Richard; Smith, David

    2014-01-01

    In this paper a framework is proposed for the formulation of a higher education institutional (HEI) strategy. This work provides a practical example, through a case study, to demonstrate how the proposed framework can be applied to the issue of formulation of HEI strategy. The proposed hybrid model is based on two operational research…

  8. Comparison of neurofuzzy logic and decision trees in discovering knowledge from experimental data of an immediate release tablet formulation.

    PubMed

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    Understanding of the cause-effect relationships between formulation ingredients, process conditions and product properties is essential for developing a quality product. However, the formulation knowledge is often hidden in experimental data and not easily interpretable. This study compares neurofuzzy logic and decision tree approaches in discovering hidden knowledge from an immediate release tablet formulation database relating formulation ingredients (silica aerogel, magnesium stearate, microcrystalline cellulose and sodium carboxymethylcellulose) and process variables (dwell time and compression force) to tablet properties (tensile strength, disintegration time, friability, capping and drug dissolution at various time intervals). Both approaches successfully generated useful knowledge in the form of either "if then" rules or decision trees. Although different strategies are employed by the two approaches in generating rules/trees, similar knowledge was discovered in most cases. However, as decision trees are not able to deal with continuous dependent variables, data discretisation procedures are generally required.
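
    To make the decision-tree side of such a comparison concrete, the sketch below fits a small tree on invented tablet data and prints its "if-then" rules. It is illustrative only (the study's database is not reproduced here), and, as the abstract notes, the continuous response must be discretised before a classification tree can use it.

      # Sketch: "if-then" rule discovery with a decision tree on invented
      # tablet formulation data (not the study's database). The continuous
      # tensile strength is binned first, since classification trees need
      # a discretised dependent variable.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 200
      X = np.column_stack([
          rng.uniform(0.0, 0.02, n),  # magnesium stearate fraction (invented)
          rng.uniform(5.0, 30.0, n),  # compression force, kN (invented)
      ])
      strength = 0.1 * X[:, 1] - 60.0 * X[:, 0] + rng.normal(0.0, 0.2, n)
      y = np.digitize(strength, bins=[1.0, 2.0])   # low / medium / high

      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(tree, feature_names=["mg_stearate", "force_kN"]))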

  9. Emotion-focused therapy in a case of anorexia nervosa.

    PubMed

    Dolhanty, Joanne; Greenberg, Leslie S

    2009-01-01

    An emotion-focused approach to the treatment of eating disorders and to case formulation is described in an individual with anorexia nervosa (AN). The basic theory of emotion-focused therapy (EFT), the steps of case formulation and an outline of the tasks and course of treatment of an individual recently hospitalized on an inpatient unit for eating disorders highlight key aspects of the approach. The transformation in this individual, in terms of gaining access to her internal experience, understanding and tolerating her emotions, and working through her core themes of insecure attachment and worthlessness, is described. Weight and scores on self-report measures at the outset of treatment and at 18 months are provided.

  10. Autonomous public organization policy: a case study for the health sector in Thailand.

    PubMed

    Rajataramya, B; Fried, B; van der Pütten, M; Pongpanich, S

    2009-09-01

    This paper describes factors affecting autonomous public organization (APO) policy agenda setting and policy formation through a comparison of the policy processes applied to one educational institute under the Ministry of Education and another under the Ministry of Public Health in Thailand. The study employs mixed methods, including a qualitative approach through documentary research, in-depth interviews, and participant observation. Factors that facilitated the formulation of the APO policy were: (1) awareness of need; (2) clarity of strategies; (3) leadership, advocacy, and strategic partnerships; (4) clear organizational identity; (5) a participatory approach to policy formulation; and (6) identification of a policy window. Factors that impeded the formulation of the APO policy were: (1) diverting political priorities; (2) ill-defined organizational identity; (3) fluctuating leadership direction; (4) inadequate participation of stakeholders; and (5) political instability. Although the findings cannot be generalized, this case study does offer benchmarking for those in search of ways to enhance processes of policy formulation.

  11. On the correct representation of bending and axial deformation in the absolute nodal coordinate formulation with an elastic line approach

    NASA Astrophysics Data System (ADS)

    Gerstmayr, Johannes; Irschik, Hans

    2008-12-01

    In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been sufficiently verified against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, more seriously, that the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large-deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be obtained when applying the proposed modifications. The numerical examples show very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages compared to classical beam elements regarding energy-momentum conservation.
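
    For orientation, the elastic-line strain energy at issue has the generic extensible-elastica form below (a standard textbook expression, stated here under that assumption rather than taken from the paper), for a planar centerline r(s) parametrized by the undeformed arc length s:

      U = \int_0^L \left[ \frac{EA}{2}\,\varepsilon^2 + \frac{EI}{2}\,K^2 \right] ds,
      \qquad \varepsilon = \lVert r'(s) \rVert - 1,

    where \varepsilon is the axial strain and K the bending measure; taking K to be the geometric curvature \kappa = |r' \times r''| / |r'|^3, i.e. a curvature-proportional bending strain, is what the abstract identifies as the source of the incorrect normal force.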

  12. Freeze drying formulation using microscale and design of experiment approaches: a case study using granulocyte colony-stimulating factor.

    PubMed

    Grant, Yitzchak; Matejtschuk, Paul; Bird, Christopher; Wadhwa, Meenu; Dalby, Paul A

    2012-04-01

    The lyophilization of proteins in microplates, to assess and optimise formulations rapidly, has been applied for the first time to a therapeutic protein, and in particular one that requires a cell-based biological assay, in order to demonstrate the broader usefulness of the approach. Factorial design of experiment methods were combined with lyophilization in microplates to identify optimum formulations that stabilised granulocyte colony-stimulating factor during freeze drying. An initial screen rapidly identified key excipients and potential interactions, and was followed by an optimisation experiment using a central composite face-centred design. Human serum albumin and Tween 20 had significant effects on maintaining protein stability. As previously, the optimum formulation was then freeze-dried in stoppered vials to verify that the microscale data are relevant to pilot scales. To validate the approach further, the selected formulation was also assessed for solid-state shelf-life through accelerated stability studies. This approach allows for a high-throughput assessment of excipient options early in product development, while also reducing costs in terms of time and quantity of materials required.
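
    A minimal sketch of the screening step described above: build a two-level fractional factorial design and rank main effects. HSA and Tween 20 come from the abstract; the other excipients, the coded levels, and the response values are invented for illustration.

      # Sketch: 2^(4-1) fractional factorial screen for a freeze-drying
      # formulation. HSA and Tween 20 follow the abstract; "sucrose",
      # "glycine", the levels, and the response are invented.
      import itertools
      import numpy as np

      factors = ["HSA", "Tween20", "sucrose", "glycine"]
      base = np.array(list(itertools.product([-1, 1], repeat=3)))
      # Defining relation D = ABC gives a resolution IV half-fraction.
      design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

      rng = np.random.default_rng(1)
      # Hypothetical recovered-activity response dominated by HSA, Tween 20.
      y = 70 + 8 * design[:, 0] + 5 * design[:, 1] + rng.normal(0, 1, 8)

      for name, col in zip(factors, design.T):
          effect = y[col == 1].mean() - y[col == -1].mean()
          print(f"{name:8s} main effect: {effect:+.1f}")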

  13. An Integrative Approach to Treatment-Resistant Obsessive-Compulsive Disorder.

    PubMed

    Woon, Luke Sy-Cherng; Kanapathy, Anita; Zakaria, Hazli; Alfonso, César A

    2017-01-01

    Obsessive-compulsive disorder (OCD) is a debilitating psychiatric disorder that often runs a chronic unremitting course. Treatment outcomes can be unsatisfactory despite the availability of various somatic and psychological therapies. Psychodynamic psychotherapy in combination with cognitive behavioral therapy (CBT) with exposure and response prevention (ERP) could help patients with treatment-resistant OCD achieve better outcomes. An integrative approach can help patients gain insight, strengthen the therapeutic alliance, improve treatment adherence, and provide symptomatic relief when other treatments seem insufficient or have failed. We describe the treatment process of a person with treatment-resistant OCD who received pharmacotherapy, concurrent CBT/ERP, and a brief course of psychodynamic psychotherapy. Case formulations from cognitive behavioral and psychodynamic perspectives are presented. The authors discuss the advantages of doing a psychodynamic assessment and formulation in treatment refractory cases and the wisdom of integrating psychotherapy interventions for OCD, as well as the unique clinical features of cases that warrant a multimodal treatment approach.

  14. Note on the ideal frame formulation

    NASA Astrophysics Data System (ADS)

    Lara, Martin

    2017-09-01

    An implementation of the ideal frame formulation of perturbed Keplerian motion is presented which only requires the integration of a differential system of dimension 7, contrary to the 8 variables traditionally integrated with this approach. The new formulation is based on the integration of a scaled version of the Eulerian set of redundant parameters and slightly improves runtime performance with respect to the 8-dimensional case while retaining comparable accuracy.

  15. The Pedagogy of Education Policy Formulation: Working from Policy Assets

    ERIC Educational Resources Information Center

    Sack, Richard; Marope, Mmantsetsa

    2007-01-01

    This article explores a "pedagogical" approach to education policy formulation in developing countries. This constitutes a process that shows promise in promoting the "ownership" necessary for sustainable policies and programs, especially when they rely on external financing. Based on case studies from 26 countries focused on "what works," the…

  16. Methodology of oral formulation selection in the pharmaceutical industry.

    PubMed

    Kuentz, Martin; Holm, René; Elder, David P

    2016-05-25

    Pharmaceutical formulations have to fulfil various requirements with respect to their intended use, either in the development phase or as a commercial product. New drug candidates with their specific properties confront the formulation scientist with industrial challenges for which a strategy is needed to cope with limited resources, stretched timelines, and regulatory requirements. This paper reviews different methodologies for selecting a suitable formulation approach for oral delivery. Only small-molecule drugs are considered, and the review is written from an industrial perspective. Specific cases are discussed, starting with an emphasis on poorly soluble compounds; the topics of chemically labile drugs, low-dose compounds, and modified release are then reviewed. Due to the broad scope of this work, the primary focus is on explaining basic concepts as well as recent trends. Different strategies to approach industrial formulation selection are discussed, including structured product development. Examples of such structured development aim to provide guidance to formulators, and finally, the recent topic of a manufacturing classification system is presented. It can be concluded that the field of oral formulation selection is particularly complex, presenting both multiple challenges and opportunities, so that industrial scientists have to employ tailored approaches to design formulations successfully. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Evidence-informed policy formulation and implementation: a comparative case study of two national policies for improving health and social care in Sweden.

    PubMed

    Strehlenert, H; Richter-Sundberg, L; Nyström, M E; Hasson, H

    2015-12-08

    Evidence has come to play a central role in health policymaking. However, policymakers tend to use other types of information besides research evidence. Most prior studies on evidence-informed policy have focused on the policy formulation phase without a systematic analysis of its implementation. It has been suggested that in order to fully understand the policy process, the analysis should include both policy formulation and implementation. The purpose of the study was to explore and compare two policies aiming to improve health and social care in Sweden and to empirically test a new conceptual model for evidence-informed policy formulation and implementation. Two concurrent national policies were studied during the entire policy process using a longitudinal, comparative case study approach. Data was collected through interviews, observations, and documents. A Conceptual Model for Evidence-Informed Policy Formulation and Implementation was developed based on prior frameworks for evidence-informed policymaking and policy dissemination and implementation. The conceptual model was used to organize and analyze the data. The policies differed regarding the use of evidence in the policy formulation and the extent to which the policy formulation and implementation phases overlapped. Similarities between the cases were an emphasis on capacity assessment, modified activities based on the assessment, and a highly active implementation approach relying on networks of stakeholders. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was empirically useful to organize the data. The policy actors' roles and functions were found to have a great influence on the choices of strategies and collaborators in all policy phases. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was found to be useful. However, it provided insufficient guidance for analyzing actors involved in the policy process, capacity-building strategies, and overlapping policy phases. A revised version of the model that includes these aspects is suggested.

  18. A new formulation of electromagnetic wave scattering using an on-surface radiation boundary condition approach

    NASA Technical Reports Server (NTRS)

    Kriegsmann, Gregory A.; Taflove, Allen; Umashankar, Koradar R.

    1987-01-01

    A new formulation of electromagnetic wave scattering by convex, two-dimensional conducting bodies is reported. This formulation, called the on-surface radiation condition (OSRC) approach, is based upon an expansion of the radiation condition applied directly on the surface of a scatterer. It is now shown that application of a suitable radiation condition directly on the surface of a convex conducting scatterer can lead to substantial simplification of the frequency-domain integral equation for the scattered field, which is reduced to just a line integral. For the transverse magnetic case, the integrand is known explicitly. For the transverse electric case, the integrand can be easily constructed by solving an ordinary differential equation around the scatterer surface contour. Examples are provided which show that OSRC yields computed near and far fields which approach the exact results for canonical shapes such as the circular cylinder, square cylinder, and strip. Electrical sizes for the examples are ka = 5 and ka = 10. The new OSRC formulation of scattering may provide a useful alternative to existing integral equation and uniform high-frequency approaches for convex cylinders larger than ka = 1. Structures with edges or corners can also be analyzed, although more work is needed to incorporate the physics of singular currents at these discontinuities. Convex dielectric structures can also be treated using OSRC.

  19. Comparison of methods for developing the dynamics of rigid-body systems

    NASA Technical Reports Server (NTRS)

    Ju, M. S.; Mansour, J. M.

    1989-01-01

    Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.

  20. A Practical Guide for the Formulation of Propositions in the Bayesian Approach to DNA Evidence Interpretation in an Adversarial Environment.

    PubMed

    Gittelson, Simone; Kalafut, Tim; Myers, Steven; Taylor, Duncan; Hicks, Tacha; Taroni, Franco; Evett, Ian W; Bright, Jo-Anne; Buckleton, John

    2016-01-01

    The interpretation of complex DNA profiles is facilitated by a Bayesian approach. This approach requires the development of a pair of propositions: one aligned to the prosecution case and one to the defense case. This note explores the issue of proposition setting in an adversarial environment through a series of examples. A set of guidelines generalizes how to formulate propositions when there is a single person of interest and when there are multiple individuals of interest. Additional explanations cover how to handle multiple defense propositions, relatives, and the transition from subsource-level to activity-level propositions. The propositions depend on the case information and the allegations of each of the parties. The prosecution proposition is usually known. The authors suggest selecting a defense proposition that is consistent with the defense's stance when it is known, and with a realistic defense when it is not. © 2015 American Academy of Forensic Sciences.
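
    The Bayesian machinery behind such proposition pairs reduces, at the evidence-evaluation stage, to a likelihood ratio. The toy sketch below shows only the arithmetic; all probabilities are invented and do not come from the paper.

      # Sketch: likelihood ratio for one pair of propositions (toy numbers).
      # Hp: prosecution proposition; Hd: defense proposition.
      p_e_given_hp = 0.8      # P(DNA evidence | Hp), invented
      p_e_given_hd = 0.001    # P(DNA evidence | Hd), invented
      lr = p_e_given_hp / p_e_given_hd

      prior_odds = 1 / 1000   # invented; set by the fact-finder, not the expert
      posterior_odds = lr * prior_odds
      print(f"LR = {lr:.0f}, posterior odds = {posterior_odds:.2f}")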

  1. Broadband Noise Predictions Based on a New Aeroacoustic Formulation

    NASA Technical Reports Server (NTRS)

    Casper, J.; Farassat, F.

    2002-01-01

    A new analytic result in acoustics called 'Formulation 1B,' proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far-field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is specified analytically from a result that is based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B, and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. The predicted results also agree very well with those of Paterson and Amiet, who used a frequency-domain approach. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  2. A finite element formulation for scattering from electrically large 2-dimensional structures

    NASA Technical Reports Server (NTRS)

    Ross, Daniel C.; Volakis, John L.

    1992-01-01

    A finite element formulation is given using the scattered field approach with a fictitious material absorber to truncate the mesh. The formulation includes the use of arbitrary approximation functions so that more accurate results can be achieved without any modification to the software. Additionally, non-polynomial approximation functions can be used, including complex approximation functions. The banded system that results is solved with an efficient sparse/banded iterative scheme and as a consequence, large structures can be analyzed. Results are given for simple cases to verify the formulation and also for large, complex geometries.

  3. A generalized volumetric dispersion model for a class of two-phase separation/reaction: finite difference solutions

    NASA Astrophysics Data System (ADS)

    Siripatana, Chairat; Thongpan, Hathaikarn; Promraksa, Arwut

    2017-03-01

    This article explores a volumetric approach to formulating differential equations for a class of engineering flow problems involving component transfer within or between two phases. In contrast to conventional formulations based on linear velocities, this work proposes a slightly different approach based on the volumetric flow rate, which is essentially constant in many industrial processes. In effect, many multi-dimensional flow problems found industrially can be simplified into multi-component or multi-phase, but one-dimensional, flow problems. The formulation is largely generic, covering counter-current, concurrent, and batch arrangements in fixed and fluidized beds. It is also intended for use in start-up, shut-down, control, and steady-state simulation. Since many realistic industrial operations are dynamic, with velocity and porosity varying with position, analytical solutions are rare and limited to only very simple cases. We therefore also provide a numerical solution using the Crank-Nicolson finite difference scheme. This solution is inherently stable, as tested against a few cases published in the literature. However, it is anticipated that, for unconfined flow or a non-constant flow rate, the traditional formulation should be applied.
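
    A minimal Crank-Nicolson sketch for the kind of 1-D advection-dispersion equation such volumetric models reduce to, c_t = D c_xx - u c_x; the grid, coefficients, and boundary values are illustrative, not the paper's.

      # Sketch: Crank-Nicolson for c_t = D*c_xx - u*c_x on [0, 1] with a
      # fixed inlet concentration (illustrative coefficients and grid).
      import numpy as np

      nx, nt = 101, 500
      dx, dt = 1.0 / (nx - 1), 1e-3
      D, u = 1e-3, 0.5
      r = D * dt / dx**2          # diffusion number
      a = u * dt / (2 * dx)       # advection number

      c = np.zeros(nx)
      c[0] = 1.0                  # inlet boundary value

      # (I - L/2) c_new = (I + L/2) c_old, L the discrete space operator.
      A = (np.diag(np.full(nx, 1 + r))
           + np.diag(np.full(nx - 1, -(r + a) / 2), -1)
           + np.diag(np.full(nx - 1, -(r - a) / 2), 1))
      B = 2 * np.eye(nx) - A      # equals I + L/2
      for M in (A, B):            # hold both boundary rows fixed
          M[0, :], M[-1, :] = 0.0, 0.0
          M[0, 0] = M[-1, -1] = 1.0

      for _ in range(nt):
          c = np.linalg.solve(A, B @ c)
      print(np.round(c[::20], 3))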

  4. Group theoretical formulation of free fall and projectile motion

    NASA Astrophysics Data System (ADS)

    Düztaş, Koray

    2018-07-01

    In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements φ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits with regard to their relation to the physical orbits of particles and to unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem through its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
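
    A small sketch of the group property described above: for constant acceleration a, the flow φ_t acts on phase-space points (x, v), and composition satisfies φ_s ∘ φ_t = φ_{s+t} (generic kinematics, not the paper's notation).

      # Sketch: constant-acceleration kinematics as a one-parameter group
      # on phase-space points (x, v); checks phi_s(phi_t(p)) == phi_{s+t}(p).
      a = 9.81  # m/s^2, free-fall example

      def phi(t, state):
          x, v = state
          return (x + v * t + 0.5 * a * t * t, v + a * t)

      p0 = (0.0, 2.0)
      lhs = phi(1.2, phi(0.7, p0))
      rhs = phi(1.9, p0)
      print(lhs, rhs)  # agree up to floating point: group closure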

  5. Antenna Performance Influenced by the Finite Extent and Conductivity of Ground Planes: A Collection of Reprints

    DTIC Science & Technology

    1990-09-01

    [Abstract garbled by OCR; only fragments are recoverable.] The problem is formulated in a cylindrical polar coordinate system (ρ, φ, z), in which the vector potential component A_z is given by an integral expression (equation 4a of the source). The formulation is noted to be invalid in the limiting case where a parameter approaches zero, since one terminal of the generator would then be disconnected. An appendix formulates an expression for the input impedance at the terminals of an antenna.

  6. Parent formulation at the Lagrangian level

    NASA Astrophysics Data System (ADS)

    Grigoriev, Maxim

    2011-07-01

    The recently proposed first-order parent formalism at the level of equations of motion is specialized to the case of Lagrangian systems. It is shown that for diffeomorphism-invariant theories the parent formulation takes the form of an AKSZ-type sigma model. The proposed formulation can also be seen as a Lagrangian version of the BV-BRST extension of the Vasiliev unfolded approach. We also discuss its possible interpretation as a multidimensional generalization of the Hamiltonian BFV-BRST formalism. The general construction is illustrated by examples of (parametrized) mechanics, the relativistic particle, Yang-Mills theory, and gravity.

  7. Attractor behaviour in multifield inflation

    NASA Astrophysics Data System (ADS)

    Carrilho, Pedro; Mulryne, David; Ronayne, John; Tenkanen, Tommi

    2018-06-01

    We study multifield inflation in scenarios where the fields are coupled non-minimally to gravity via $\xi_I (\phi^I)^n g^{\mu\nu} R_{\mu\nu}$, where $\xi_I$ are coupling constants, $\phi^I$ the fields driving inflation, $g^{\mu\nu}$ the space-time metric, $R_{\mu\nu}$ the Ricci tensor, and $n > 0$. We consider the so-called α-attractor models in two formulations of gravity: in the usual metric case where $R_{\mu\nu} = R_{\mu\nu}(g_{\mu\nu})$, and in the Palatini formulation where $R_{\mu\nu}$ is an independent variable. As the main result, we show that, regardless of the underlying theory of gravity, the field-space curvature in the Einstein frame has no influence on the inflationary dynamics in the limit of large $\xi_I$, and one effectively retains the single-field case. However, the gravity formulation does play an important role: in the metric case the result means that multifield models approach the single-field α-attractor limit, whereas in the Palatini case the attractor behaviour is lost also in the case of multifield inflation. We discuss what this means for distinguishing between different models of inflation.

  8. Threshold resummation S factor in QCD: The case of unequal masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solovtsova, O. P., E-mail: olsol@theor.jinr.r; Chernichenko, Yu. D., E-mail: chern@gstu.gomel.b

    A new relativistic Coulomb-like threshold resummation S factor in quantum chromodynamics is obtained. The analysis in question is performed within the quantum-field-theory quasipotential approach formulated in the relativistic configuration representation for the case of interaction between two relativistic particles that have unequal masses.

  9. Collaborative Textbook Selection: A Case Study Leading to Practical and Theoretical Considerations

    ERIC Educational Resources Information Center

    Czerwionka, Lori; Gorokhovsky, Bridget

    2015-01-01

    This case study developed a collaborative approach to the selection of a Spanish language textbook. The collaborative process consisted of six steps, detailed in this article: team building, generating evaluation criteria, formulating a meaningful rubric, selecting prospective textbooks, calculating rubric results, and reflectively reviewing…

  10. A three-dimensional meso-macroscopic model for Li-Ion intercalation batteries

    DOE PAGES

    Allu, S.; Kalnaus, S.; Simunovic, S.; ...

    2016-06-09

    In this study, we present a three-dimensional computational formulation for the electrode-electrolyte-electrode system of Li-ion batteries. The physical consistency between the electrical, thermal, and chemical equations is enforced at each time increment by driving the residual of the resulting coupled system of nonlinear equations to zero. The formulation utilizes a rigorous volume-averaging approach typical of multiphase formulations used in other fields and recently extended to the modeling of supercapacitors [1]. Unlike existing battery modeling methods, which use a segregated solution of the conservation equations and idealized geometries, our unified approach can model arbitrary battery and electrode configurations. The consistency of the multi-physics solution also allows for consideration of a wide array of initial conditions and load cases. The formulation accounts for spatio-temporal variations of material and state properties such as electrode/void volume fractions and anisotropic conductivities. The governing differential equations are discretized using the finite element method and solved using a nonlinearly consistent approach that provides robust stability and convergence. The new formulation was validated for standard Li-ion cells and compared against experiments. Finally, its scope and ability to capture spatio-temporal variations of potential and lithium distribution are demonstrated on a prototypical three-dimensional electrode problem.

  11. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research strategy difficult to pursue by standard brute force on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theory (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
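
    As a concrete reference point for the problem being formulated, the sketch below is the naive brute-force baseline: a vertex set C is an identifying code if every closed neighbourhood intersects C in a nonempty, pairwise-distinct set. The 6-cycle is an illustrative instance, not one of the paper's.

      # Sketch: brute-force minimum identifying code on a small graph.
      from itertools import combinations

      adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # 6-cycle
      closed = {v: adj[v] | {v} for v in adj}                  # N[v]

      def is_identifying(code):
          sigs = [frozenset(closed[v] & code) for v in adj]
          return all(sigs) and len(set(sigs)) == len(sigs)

      for k in range(1, len(adj) + 1):
          hit = next((set(c) for c in combinations(adj, k)
                      if is_identifying(set(c))), None)
          if hit:
              print(f"minimum identifying code, size {k}: {hit}")
              break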

  12. Understanding and Treating Health Anxiety: A Cognitive-Behavioral Approach

    ERIC Educational Resources Information Center

    Taylor, Steven

    2004-01-01

    Mrs. A. presents with a textbook case of hypochondriasis. An additional diagnosis of OCD does not enhance our understanding or treatment of her problems, and is not indicated according to "DSM-IV." Cognitive-behavior therapy (CBT) is effective in treating hypochondriasis, although it is necessary to devise a case formulation for each patient to…

  13. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.

    2008-01-01

    Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is needed to impose the boundary condition in acoustic scattering problems and is therefore a key ingredient in their solution. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation, and has a form in which the observer-time differentiation lies outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This avoids numerical differentiation with respect to the observer time and is therefore computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to rotor noise problems for two model rotors, and a purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.

  14. Evaluating the Properties of Poly(lactic-co-glycolic acid) Nanoparticle Formulations Encapsulating a Hydrophobic Drug by Using the Quality by Design Approach.

    PubMed

    Kozaki, Masato; Kobayashi, Shin-Ichiro; Goda, Yukihiro; Okuda, Haruhiro; Sakai-Kato, Kumiko

    2017-01-01

    We applied the Quality by Design (QbD) approach to the development of poly(lactic-co-glycolic acid) (PLGA) nanoparticle formulations encapsulating triamcinolone acetonide, and the critical process parameters (CPPs) were identified to clarify the correlations between critical quality attributes and CPPs. Quality risk management was performed by using an Ishikawa diagram and experiments with a fractional factorial design (ANOVA). The CPPs for particle size were PLGA concentration and rotation speed, and the CPP for relative drug loading efficiency was the poor solvent to good solvent volume ratio. By assessing the mutually related factors in the form of ratios, many factors could be efficiently considered in the risk assessment. We found a two-factor interaction between rotation speed and rate of addition of good solvent by using a fractional factorial design with resolution V. The system was then extended by using a central composite design, and the results obtained were visualized by using the response surface method to construct a design space. Our research represents a case study of the application of the QbD approach to pharmaceutical development, including formulation screening, by taking actual production factors into consideration. Our findings support the feasibility of using a similar approach to nanoparticle formulations under development. We could establish an efficient method of analyzing the CPPs of PLGA nanoparticles by using a QbD approach.
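
    A minimal sketch of the response-surface step mentioned above: fit a full quadratic model over a face-centred central composite layout for two of the CPPs named in the abstract (PLGA concentration and rotation speed, in coded units); the response values are invented.

      # Sketch: quadratic response-surface fit on a face-centred central
      # composite layout. x1 = PLGA concentration, x2 = rotation speed
      # (coded -1..1); the particle-size response below is invented.
      import numpy as np

      rng = np.random.default_rng(2)
      pts = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
      x1, x2 = pts[:, 0], pts[:, 1]
      y = 150 + 30 * x1 - 10 * x2 + 8 * x1**2 + rng.normal(0, 2, len(pts))

      # Design matrix for the full quadratic model, solved by least squares.
      X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      for term, c in zip(["1", "x1", "x2", "x1*x2", "x1^2", "x2^2"], coef):
          print(f"{term:6s} {c:+.1f}")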

  15. Formulation of the relativistic moment implicit particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noguchi, Koichi; Tronci, Cesare; Zuccaro, Gianluca

    2007-04-15

    A new formulation is presented for the implicit moment method applied to the time-dependent relativistic Vlasov-Maxwell system. The new approach is based on a specific formulation of the implicit moment method that allows us to retain the same formalism that is valid in the classical case, despite the formidable complication introduced by the nonlinear nature of the relativistic equations of motion. To demonstrate the validity of the new formulation, an implicit finite difference algorithm is developed to solve Maxwell's equations and the equations of motion. A number of benchmark problems are run: two-stream instability, ion acoustic wave damping, Weibel instability, and Poynting flux acceleration. The numerical results are all in agreement with analytical solutions.

  16. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun

    2007-01-01

    The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on the scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code, and are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.

  17. Projective formulation of Maggi's method for nonholonomic systems analysis

    NASA Astrophysics Data System (ADS)

    Blajer, Wojciech

    1992-04-01

    A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.

  18. General theory of multistage geminate reactions of isolated pairs of reactants. I. Kinetic equations.

    PubMed

    Doktorov, Alexander B; Kipriyanov, Alexey A

    2014-05-14

    A general matrix approach to the consideration of multistage geminate reactions of isolated pairs of reactants, depending on reactant mobility, is formulated on the basis of the concept of "effective" particles. Various elementary reactions (stages of the multistage reaction, including physicochemical processes of internal quantum state changes) proceeding with the participation of isolated pairs of reactants (or isolated reactants) are taken into account. The investigation is carried out in terms of a kinetic approach, implying the derivation of general (matrix) kinetic equations for the local and mean probabilities of finding any of the reaction species in the sample under study (or for local and mean concentrations). Recipes for the calculation of the kinetic coefficients of the equations for mean quantities, in terms of the relative coordinates of reactants, are formulated for the general case of inhomogeneous reacting systems. The important specific case of homogeneous reacting systems is also considered.

  19. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    $\ell_{2,1}$-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the $\ell_{2,1}$-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the $\ell_{2,1}$-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the $\ell_{2,1}$-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using $\ell_{2,1}$-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
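
    For context, the $\ell_{2,1}$ surrogate being replaced is typically handled with block soft-thresholding: its proximal operator shrinks each beam-angle group of fluence variables as a block. The sketch below shows that generic operator, not the paper's gFCP solver.

      # Sketch: proximal operator of lam * sum_g ||x_g||_2 (the l2,1 norm),
      # i.e. block soft-thresholding per beam-angle group of fluences.
      import numpy as np

      def prox_l21(x, groups, lam):
          out = np.zeros_like(x)
          for g in groups:                    # g: index array for one angle
              norm = np.linalg.norm(x[g])
              if norm > lam:                  # shrink the surviving block
                  out[g] = (1.0 - lam / norm) * x[g]
          return out                          # blocks with norm <= lam -> 0

      x = np.array([3.0, 4.0, 0.1, -0.1])
      groups = [np.array([0, 1]), np.array([2, 3])]
      print(prox_l21(x, groups, lam=1.0))     # [2.4, 3.2, 0.0, 0.0]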

  20. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    $\ell_{2,1}$-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the $\ell_{2,1}$-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the $\ell_{2,1}$-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the $\ell_{2,1}$-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using $\ell_{2,1}$-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  1. Toward Integrating Environmental and Economic Education: Lessons from the U.S. Acid Rain Program

    ERIC Educational Resources Information Center

    Ellerbrock, Michael J.; Regn, Ann M.

    2004-01-01

    This field report presents an actual case study which illustrates that the natural and social sciences, in this case ecology and economics, can and should be integrated in environmental education and the formulation of public policy. After outlining basic economic approaches for addressing environmental problems, we focus on the process and…

  2. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
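
    For orientation, the traditional conditional-logistic analysis mentioned above gives each matched case-crossover set a likelihood contribution of the form exp(x_case·β) / Σ_j exp(x_j·β). The sketch below evaluates one such contribution with invented exposures and coefficient.

      # Sketch: conditional logistic contribution for one case-crossover
      # set: the event day competes against its own referent days.
      import numpy as np

      beta = 0.05                            # invented log-risk per unit
      x_case = 42.0                          # exposure on the event day
      x_refs = np.array([30.0, 35.0, 28.0])  # exposures on referent days

      eta = beta * np.concatenate(([x_case], x_refs))
      lik = np.exp(eta[0]) / np.exp(eta).sum()
      print(f"conditional likelihood contribution: {lik:.3f}")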

  3. MONITORING, ASSESSMENT, AND ENVIRONMENTAL POLICY

    EPA Science Inventory

    This overview chapter examines the roles that environmental monitoring and assessment can play in the development of environmental policy. It takes a case study approach, focusing on the key roles played by monitoring data in policy formulation in acid deposition, stratospheric...

  4. Cosmic acceleration in the nonlocal approach to the cosmological constant problem

    NASA Astrophysics Data System (ADS)

    Oda, Ichiro

    2018-04-01

    We have recently constructed a manifestly local formulation of a nonlocal approach to the cosmological constant problem which can treat quantum effects from both matter and gravitational fields. In this formulation, it has been explicitly shown that the effective cosmological constant is radiatively stable even in the presence of gravitational loop effects. Since we are naturally led to add the R^2 term and the corresponding topological action to the original action, we make use of this formulation to account for the late-time acceleration of the expansion of the universe in the case of open universes with infinite space-time volume. We will see that when the "scalaron", which exists in R^2 gravity as an extra scalar field, has a tiny mass of order $\mathcal{O}(1\,\mathrm{meV})$, we can explain the current value of the cosmological constant in a consistent manner.
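
    Two standard relations (textbook facts, assumed here rather than drawn from the paper's own conventions) show why the meV scale is the relevant one:

      \rho_\Lambda \simeq (2 \times 10^{-3}\,\mathrm{eV})^4, \qquad
      f(R) = R + \frac{R^2}{6 m^2} \;\Rightarrow\; m_{\mathrm{scalaron}} = m,

    so a scalaron mass m of order a meV sits at the same energy scale as the observed dark energy density.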

  5. New results in the theory of dust grain alignment

    NASA Technical Reports Server (NTRS)

    Cugnon, Pierre

    1989-01-01

    Two complementary approaches are used in an attempt to propose an appropriate formulation of the solution to the problem of magnetic alignment of grains in diffuse and/or denser clouds, whatever the mechanism of rotational excitation may be. The interest of such a unified formulation is mainly that the same theoretical expression for polarization can be used everywhere, allowing easier comparisons between regions whose physical conditions are very different. The first approach consists of applying a Monte Carlo method to a limited number of representative cases, for which all the torques acting on the grain are taken into account: impulsive random torques due to direct collisions with gas atoms, to evaporation of atoms from the surface, and to exo-energetic recombinations forming hydrogen molecules followed by violent ejections from peculiar sites; and magnetic torques. Three characteristic times are associated with these torques: the collisional damping time; the time necessary to completely change the actual site configuration, which is closely tied to the correlation time of the suprathermal torque; and the magnetic damping time. The second approach starts from a heuristic point of view. It consists of a generalization of results (Cugnon, 1983; see also Purcell and Spitzer, 1971; Greenberg, 1978) obtained for thermal alignment to the suprathermal case. It appears that in two extreme cases the thermal formulation may be used after redefinition of the times and temperatures involved.

  6. Suggestions for Formulating Collaborative Remote Sensing Emergency Plan Based on Case Studies

    NASA Astrophysics Data System (ADS)

    Liu, B.; Wang, F.; Zheng, X.; Qi, M.

    2017-09-01

    With the rapid development of Remote Sensing (RS) technology, Remote Sensing Services for Emergency Monitoring (RSSEM) are playing an increasingly important role in the field of emergency management, where collaborative RS approaches (including Space-Air-Ground platforms) can provide decision-makers with quick access to detailed, real-time information about emergencies. However, there are still problems in the current mechanism of RSSEM, for example the inappropriate choice of collaborative RS approaches and the miscellaneous procedures. It is urgent to formulate a collaborative RS emergency plan for regulating the application of RS monitoring approaches, in order to be well prepared for emergency management. The main objective of our research is to create a good collaborative RS emergency plan. This paper is divided into four parts. Part I gives a brief introduction to the research background. Part II investigates four case studies to analyze the application of RS technologies under the guidance of available RS-related emergency plans, and then points out the existing problems in the mechanism of RSSEM. Part III proposes our suggestions for formulating the collaborative RS emergency plan, exploring countermeasures to the problems pointed out in Part II. The last part concludes the paper and discusses future work on the collaborative RS emergency plan.

  7. Cyclodextrin-water soluble polymer ternary complexes enhance the solubility and dissolution behaviour of poorly soluble drugs. Case example: itraconazole.

    PubMed

    Taupitz, Thomas; Dressman, Jennifer B; Buchanan, Charles M; Klein, Sandra

    2013-04-01

    The aim of the present series of experiments was to improve the solubility and dissolution/precipitation behaviour of a poorly soluble, weakly basic drug, using itraconazole as a case example. Binary inclusion complexes of itraconazole with two commonly used cyclodextrin derivatives and a recently introduced cyclodextrin derivative were prepared. Their solubility and dissolution behaviour was compared with that of the pure drug and the marketed formulation Sporanox®. Ternary complexes were prepared by addition of Soluplus®, a new highly water soluble polymer, during the formation of the itraconazole/cyclodextrin complex. A solid dispersion made of itraconazole and Soluplus® was also studied as a control. Solid state analysis was performed for all formulations and for pure itraconazole using powder X-ray diffraction (pXRD) and differential scanning calorimetry (DSC). Solubility tests indicated that all formulation approaches improved the aqueous solubility of itraconazole; the ternary complexes formed with hydroxypropyl-β-cyclodextrin (HP-β-CD) or hydroxybutenyl-β-cyclodextrin (HBen-β-CD) and Soluplus® proved to be the most favourable formulation approaches. Whereas the marketed formulation and the pure drug showed very poor dissolution, both of these ternary inclusion complexes resulted in fast and extensive release of itraconazole in all test media. Using the results of the dissolution experiments, a newly developed physiologically based pharmacokinetic (PBPK) in silico model was applied to compare the in vivo behaviour of Sporanox® with the predicted performance of the most promising ternary complexes from the in vitro studies. The PBPK modelling predicted that the bioavailability of itraconazole is likely to be increased after oral administration of ternary complex formulations, especially when itraconazole is formulated as a ternary complex comprising HP-β-CD or HBen-β-CD and Soluplus®. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Formulation of a danazol cocrystal with controlled supersaturation plays an essential role in improving bioavailability.

    PubMed

    Childs, Scott L; Kandi, Praveen; Lingireddy, Sreenivas Reddy

    2013-08-05

    Cocrystals have become an established and adopted approach for creating crystalline solids with improved physical properties, but incorporating cocrystals into enabling pre-clinical formulations suitable for animal dosing has received limited attention. The dominant approach to in vivo evaluation of cocrystals has focused on deliberately excluding additional formulation in favor of "neat" aqueous suspensions of cocrystals or loading neat cocrystal material into capsules. However, this study demonstrates that, in order to take advantage of the improved solubility of a 1:1 danazol:vanillin cocrystal, a suitable formulation was required. The neat aqueous suspension of the danazol:vanillin cocrystal gave a modest in vivo improvement, with a 1.7 times higher area under the curve compared to the poorly soluble crystal form of danazol dosed under identical conditions, but the formulated aqueous suspension containing 1% vitamin E-TPGS (TPGS) and 2% Klucel LF Pharm hydroxypropylcellulose improved the bioavailability of the cocrystal by over 10 times compared to the poorly soluble danazol polymorph. In vitro powder dissolution data obtained under non-sink biorelevant conditions correlate with in vivo data in rats following 20 mg/kg doses of danazol. In the case of the danazol:vanillin cocrystal, using a combination of cocrystal, solubilizer, and precipitation inhibitor in a designed supersaturating drug delivery system resulted in a dramatic improvement in the bioavailability. When suspensions of neat cocrystal material fail to deliver the anticipated bioavailability increase, a supersaturating formulation may be able to create the conditions required for the increased cocrystal solubility to be translated into improved in vivo absorption at levels competitive with existing formulation approaches used to overcome solubility-limited bioavailability.

  9. Probabilistically Perfect Cloning of Two Pure States: Geometric Approach.

    PubMed

    Yerokhin, V; Shehu, A; Feldman, E; Bagan, E; Bergou, J A

    2016-05-20

    We solve the long-standing problem of making n perfect clones from m copies of one of two known pure states with minimum failure probability in the general case where the known states have arbitrary a priori probabilities. The solution emerges from a geometric formulation of the problem. This formulation reveals that cloning converges to state discrimination followed by state preparation as the number of clones goes to infinity. The convergence exhibits a phenomenon analogous to a second-order symmetry-breaking phase transition.

  10. Community Creation by Means of a Social Media Paradigm

    ERIC Educational Resources Information Center

    Fernando, Isuru

    2010-01-01

    Purpose: The purpose of this paper is to present a case study from which a framework for the purposeful building of knowledge communities by means of social media is formulated. Design/methodology/approach: The approach first takes the form of a literature review. Based on a review of literature as well as on various data sets and surveys within…

  11. Feynman-like rules for calculating n-point correlators of the primordial curvature perturbation

    NASA Astrophysics Data System (ADS)

    Valenzuela-Toledo, César A.; Rodríguez, Yeinzon; Beltrán Almeida, Juan P.

    2011-10-01

    A diagrammatic approach to calculate n-point correlators of the primordial curvature perturbation ζ was developed a few years ago following the spirit of the Feynman rules in Quantum Field Theory. The methodology is very useful and time-saving, as is the case with the Feynman rules in the particle physics context, but, unfortunately, it is not very well known in the cosmology community. In the present work, we extend this approach to include not only scalar field perturbations as the generators of ζ, but also vector field perturbations. The purpose is twofold: first, we would like the diagrammatic approach (which we call the Feynman-like rules) to become widespread among the cosmology community; second, we intend to give an easy tool to formulate any correlator of ζ for those cases that involve vector field perturbations and that, therefore, may generate prolonged stages of anisotropic expansion and/or important levels of statistical anisotropy. Indeed, the usual way of formulating such correlators, using Wick's theorem, can become very cluttered and time-consuming.
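
    The clutter of the direct Wick-theorem route is easy to see by brute force (a generic illustration, not the paper's diagrammatic rules): for a Gaussian field, a 2n-point correlator is a sum over all (2n-1)!! pairings, which the sketch below enumerates.

        # Brute-force Wick contractions: enumerate all perfect matchings of
        # the field labels; the count grows as (2n-1)!!, hence the clutter.
        def wick_pairings(labels):
            """Yield all perfect matchings (Wick contractions) of the labels."""
            if not labels:
                yield []
                return
            first, rest = labels[0], labels[1:]
            for i, partner in enumerate(rest):
                remaining = rest[:i] + rest[i + 1:]
                for tail in wick_pairings(remaining):
                    yield [(first, partner)] + tail

        fields = ["z1", "z2", "z3", "z4", "z5", "z6"]   # a 6-point correlator
        pairings = list(wick_pairings(fields))
        print(len(pairings))        # 15 = 5!! contractions already for n = 6
        for p in pairings[:3]:
            print(p)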

  12. Conceptual Design of Low-Boom Aircraft with Flight Trim Requirement

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Geiselhart, Karl A.; Fenbert, James W.

    2014-01-01

    A new low-boom target generation approach is presented which allows the introduction of a trim requirement during the early conceptual design of supersonic aircraft. The formulation provides an approximation of the center of pressure for a presumed aircraft configuration with a reversed equivalent area matching a low-boom equivalent area target. The center of pressure is approximated from a surrogate lift distribution that is based on the lift component of the classical equivalent area. The assumptions of the formulation are verified to be sufficiently accurate for a supersonic aircraft of high fineness ratio through three case studies. The first two quantify and verify the accuracy and the sensitivity of the surrogate center of pressure corresponding to shape deformation of lifting components. The third verification case shows the capability of the approach to achieve a trim state while maintaining the low-boom characteristics of a previously untrimmed configuration. Finally, the new low-boom target generation approach is demonstrated through the early conceptual design of a demonstrator concept that is low-boom feasible, trimmed, and stable in cruise.
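
    A minimal sketch of the surrogate center-of-pressure idea (the form of l(x) here is hypothetical, not NASA's): the center of pressure is approximated as the lift-weighted centroid of a longitudinal lift distribution derived from the lift component of the equivalent area, x_cp = ∫x·l(x)dx / ∫l(x)dx.

        import numpy as np

        x = np.linspace(0.0, 30.0, 301)          # effective length, arbitrary units
        lift = np.sin(np.pi * x / 30.0) ** 2     # hypothetical surrogate lift distribution
        dx = x[1] - x[0]

        L_total = np.sum(lift) * dx              # total surrogate lift
        x_cp = np.sum(x * lift) * dx / L_total   # lift-weighted centroid
        print(f"center of pressure at x = {x_cp:.2f}")   # 15.00 for this symmetric l(x)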

  13. Approaches to assessment in time-limited Mentalization-Based Therapy for Children (MBT-C)

    PubMed Central

    Muller, Nicole; Midgley, Nick

    2015-01-01

    In this article we describe our clinical approach to assessment, formulation and the identification of a therapeutic focus in the context of time-limited Mentalization-Based Treatment for Children (MBT-C) aged between 6 and 12. Rather than seeing the capacity to mentalize as a global construct, we set out an approach to assessing the developmental ‘building blocks’ of the capacity to mentalize the self and others, including the capacity for attention regulation, emotion regulation, and explicit mentalization. Assessing the child’s strengths and vulnerabilities in each of these domains provides a more nuanced picture of the child’s mentalizing capacities and difficulties, and can provide a useful approach to case formulation. The article sets out an approach to assessment that includes a consideration of mentalizing strengths and difficulties in both the child and the parents, and shows how this can be used to help develop a mutually agreed treatment focus. A clinical vignette illustrates the approach taken to assessment and connects it to routine clinical practice. PMID:26283994

  14. Model‐Informed Development and Registration of a Once‐Daily Regimen of Extended‐Release Tofacitinib

    PubMed Central

    Lamba, M; Hutmacher, MM; Furst, DE; Dikranian, A; Dowty, ME; Conrado, D; Stock, T; Nduaka, C; Cook, J

    2017-01-01

    Extended‐release (XR) formulations enable less frequent dosing vs. conventional (e.g., immediate release (IR)) formulations. Regulatory registration of such formulations typically requires pharmacokinetic (PK) and clinical efficacy data. Here we illustrate a model‐informed, exposure–response (E‐R) approach to translate controlled trial data from one formulation to another without a phase III trial, using a tofacitinib case study. Tofacitinib is an oral Janus kinase (JAK) inhibitor for the treatment of rheumatoid arthritis (RA). E‐R analyses were conducted using validated clinical endpoints from phase II dose–response and nonclinical dose fractionation studies of the IR formulation. Consistent with the delay in clinical response dynamics relative to PK, average concentration was established as the relevant PK parameter for tofacitinib efficacy and supported pharmacodynamic similarity. These evaluations, alongside demonstrated equivalence in total systemic exposure between IR and XR formulations, provided the basis for the regulatory approval of tofacitinib XR once daily by the US Food and Drug Administration. PMID:27859030
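
    The role of average concentration as the efficacy driver is what permits the formulation bridge; a toy calculation (made-up numbers, not the tofacitinib data) shows the logic: a once-daily XR dose matching the total daily AUC of the twice-daily IR regimen yields the same Cavg, hence a similar predicted response.

        # Hypothetical AUC values, illustrating Cavg = AUC_tau / tau only.
        auc_ir_12h = 120.0                   # ng*h/mL per 12 h IR dosing interval
        cavg_ir = (2 * auc_ir_12h) / 24.0    # two IR doses per day

        auc_xr_24h = 240.0                   # XR designed for equal total daily exposure
        cavg_xr = auc_xr_24h / 24.0

        print(cavg_ir, cavg_xr)              # equal Cavg -> similar predicted response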

  15. Unification of some advection schemes in two dimensions

    NASA Technical Reports Server (NTRS)

    Sidilkover, D.; Roe, P. L.

    1995-01-01

    The relationship between two approaches towards construction of genuinely two-dimensional upwind advection schemes is established. One of these approaches is of the control volume type applicable on structured cartesian meshes. It resulted in the compact high resolution schemes capable of maintaining second order accuracy in both homogeneous and inhomogeneous cases. Another one is the fluctuation splitting approach, which is well suited for triangular (and possibly) unstructured meshes. Understanding the relationship between these two approaches allows us to formulate here a new fluctuation splitting high resolution (i.e. possible use of artificial compression, while maintaining positivity property) scheme. This scheme is shown to be linearity preserving in inhomogeneous as well as homogeneous cases.

  16. Enhancement of the efficacy of therapeutic proteins by formulation with PEGylated liposomes; a case of FVIII, FVIIa and G-CSF.

    PubMed

    Yatuv, Rivka; Robinson, Micah; Dayan, Inbal; Baru, Moshe

    2010-02-01

    Improving the pharmacodynamics of protein drugs has the potential to improve the care and the quality of life of patients suffering from a variety of diseases. Four approaches to improve protein drugs are described: PEGylation, amino acid substitution, fusion to carrier proteins and encapsulation. A new platform technology based on the binding of proteins/peptides to the outer surface of PEGylated liposomes (PEGLip) is then presented. Binding of proteins to PEGLip is non-covalent, highly specific and dependent on an amino acid consensus sequence within the proteins. Association of proteins with PEGLip results in substantial enhancement of the pharmacodynamic properties of proteins following administration. This has been demonstrated in preclinical studies and clinical trials with coagulation factors VIII and VIIa. It has also been demonstrated in preclinical studies with granulocyte colony-stimulating factor. A mechanism is presented that explains the improvements in hemostatic efficacy of PEGLip-formulated coagulation factors VIII and VIIa. The reader will gain an understanding of the advantages and disadvantages of each of the approaches discussed. PEGLip formulation is an important new approach to improve the pharmacodynamics of protein drugs. This approach may be applied to further therapeutic proteins in the future.

  17. Microenvironmental pH-modification to improve dissolution behavior and oral absorption for drugs with pH-dependent solubility.

    PubMed

    Taniguchi, Chika; Kawabata, Yohei; Wada, Koichi; Yamada, Shizuo; Onoue, Satomi

    2014-04-01

    Drug release and oral absorption of drugs with pH-dependent solubility are influenced by the conditions in the gastrointestinal tract. In some cases, poor oral absorption has been observed for these drugs, causing insufficient drug efficacy. The pH-modification of a formulation could be a promising approach to overcome the poor oral absorption of drugs with pH-dependent solubility. The present review aims to summarize the pH-modifier approach and strategic analyses of microenvironmental pH for formulation design and development. We also provide literature- and patent-based examples of the application of pH-modification technology to solid dosage forms. For the pH-modification approach, the microenvironmental pH at the diffusion area can be altered by dissolving pH-modifying excipients in the formulation. The modulation of the microenvironmental pH could improve dissolution behavior of drugs with pH-dependent solubility, possibly leading to better oral absorption. According to this concept, the modulated level of microenvironmental pH and its duration can be key factors for improvement in drug dissolution. The measurement of microenvironmental pH and release of pH-modifier would provide theoretical insight for the selection of an appropriate pH-modifier and optimization of the formulation.
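
    The leverage a pH-modifier provides can be quantified with the standard Henderson-Hasselbalch solubility relation for a monoprotic weak base (a textbook relation, not taken from this review; the drug parameters below are hypothetical): total solubility S(pH) = S0·(1 + 10^(pKa - pH)) rises steeply as the microenvironmental pH drops below the pKa.

        def weak_base_solubility(pH, pKa, S0):
            """Total solubility; S0 is the intrinsic solubility of the free base."""
            return S0 * (1.0 + 10.0 ** (pKa - pH))

        pKa, S0 = 6.5, 0.01   # hypothetical drug: pKa 6.5, intrinsic solubility in mg/mL
        for pH in (3.0, 5.0, 6.5, 7.5):
            print(f"pH {pH}: S = {weak_base_solubility(pH, pKa, S0):.3f} mg/mL")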

  18. The SEURAT-1 approach towards animal free human safety assessment.

    PubMed

    Gocht, Tilman; Berggren, Elisabet; Ahr, Hans Jürgen; Cotgreave, Ian; Cronin, Mark T D; Daston, George; Hardy, Barry; Heinzle, Elmar; Hescheler, Jürgen; Knight, Derek J; Mahony, Catherine; Peschanski, Marc; Schwarz, Michael; Thomas, Russell S; Verfaillie, Catherine; White, Andrew; Whelan, Maurice

    2015-01-01

    SEURAT-1 is a European public-private research consortium that is working towards animal-free testing of chemical compounds and the highest level of consumer protection. A research strategy was formulated based on the guiding principle to adopt a toxicological mode-of-action framework to describe how any substance may adversely affect human health. The proof of the initiative will be in demonstrating the applicability of the concepts on which SEURAT-1 is built on three levels: (i) theoretical prototypes for adverse outcome pathways are formulated based on knowledge already available in the scientific literature on investigating the toxicological modes of action leading to adverse outcomes (addressing mainly liver toxicity); (ii) adverse outcome pathway descriptions are used as a guide for the formulation of case studies to further elucidate the theoretical model and to develop integrated testing strategies for the prediction of certain toxicological effects (i.e., those related to the adverse outcome pathway descriptions); (iii) further case studies target the application of knowledge gained within SEURAT-1 in the context of safety assessment. The ultimate goal would be to perform ab initio predictions based on a complete understanding of toxicological mechanisms. In the near term, it is more realistic that data from innovative testing methods will support read-across arguments. Both scenarios are addressed with case studies for improved safety assessment. A conceptual framework for a rational integrated assessment strategy emerged from designing the case studies and is discussed in the context of international developments focusing on alternative approaches for evaluating chemicals using the new 21st century tools for toxicity testing.

  19. A comprehensive approach to reactive power scheduling in restructured power systems

    NASA Astrophysics Data System (ADS)

    Shukla, Meera

    Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making the system more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through installation of properly sized and located reactive sources and their optimal scheduling. In the case of vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. In the various scenarios of restructured power systems, however, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposes a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique is based on the property of system connectivity. This is followed by the development of an optimal reactive power dispatch formulation that facilitates fixed inter-area tie-line reactive power exchange. This formulation uses a Line Flow-Based (LFB) model of power flow analysis and determines the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. System loadability, losses, generation and the cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.

  20. Quality by design case study 1: Design of 5-fluorouracil loaded lipid nanoparticles by the W/O/W double emulsion - Solvent evaporation method.

    PubMed

    Amasya, Gulin; Badilli, Ulya; Aksu, Buket; Tarimci, Nilufer

    2016-03-10

    Quality by Design (QbD) is a systematic approach involving the design and development of all production processes to achieve a final product with predetermined quality: one works within a design space determined by the critical formulation and process parameters, and verification of the quality of the final product is then no longer necessary. In the current study, the QbD approach was used in the preparation of lipid nanoparticle formulations to improve skin penetration of 5-fluorouracil, a widely used compound for treating non-melanoma skin cancer. 5-Fluorouracil-loaded lipid nanoparticles were prepared by the W/O/W double emulsion - solvent evaporation method. Artificial neural network software was used to evaluate the data obtained from the lipid nanoparticle formulations, to establish the design space, and to optimize the formulations. Two different artificial neural network models were developed. The limit values of the design space of the inputs and outputs obtained by both models were found to lie within the knowledge space. The optimal formulations recommended by the models were prepared and the critical quality attributes of those formulations were assigned. The experimental results remained within the design space limit values. Consequently, optimal formulations with the critical quality attributes required to achieve the Quality Target Product Profile were successfully obtained within the design space by following the QbD steps. Copyright © 2016 Elsevier B.V. All rights reserved.
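
    A minimal sketch of the ANN-based design-space workflow (hypothetical factors, responses and specification limits; not the study's data or software): train a network mapping formulation parameters to a critical quality attribute, then keep the input settings whose predictions stay in specification.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Hypothetical DoE: inputs = [lipid %, surfactant %], output = particle size (nm)
        X = rng.uniform([2.0, 0.5], [10.0, 3.0], size=(40, 2))
        y = 80 + 15 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 3, 40)   # toy response

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X, y)

        # Scan the knowledge space; keep settings whose predicted size is in spec.
        grid = np.array([[l, s] for l in np.linspace(2, 10, 25)
                                for s in np.linspace(0.5, 3, 25)])
        pred = model.predict(grid)
        design_space = grid[(pred >= 100) & (pred <= 150)]
        print(f"{len(design_space)} of {len(grid)} candidate settings are in spec")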

  1. Symmetry and conservation laws in semiclassical wave packet dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohsawa, Tomoki, E-mail: tomoki@utdallas.edu

    2015-03-15

    We formulate symmetries in semiclassical Gaussian wave packet dynamics and find the corresponding conserved quantities, particularly the semiclassical angular momentum, via Noether's theorem. We consider two slightly different formulations of Gaussian wave packet dynamics; one is based on earlier works of Heller and Hagedorn, and the other on the symplectic-geometric approach by Lubich and others. In either case, we reveal the symplectic and Hamiltonian nature of the dynamics and formulate natural symmetry group actions in the setting to derive the corresponding conserved quantities (momentum maps). The semiclassical angular momentum inherits the essential properties of the classical angular momentum as well as corresponding naturally to the quantum picture.

  2. Experience with an extended-release opioid formulation designed to reduce abuse liability in a community-based pain management clinic

    PubMed Central

    Rubino, Daniel

    2011-01-01

    Context With the growing public health concern over rising rates of opioid abuse, physicians have a responsibility to incorporate safeguards into their practice to minimize the potential for opioid misuse, abuse, and diversion. Patient-specific treatment regimens should include steps to monitor treatment success with regard to optimal pain management as well as inappropriate use of opioids and other substances. Opioid formulations designed to be less attractive for abuse are also being developed. While future studies are needed to determine the impact of such formulations in addressing the issue of opioid misuse in the community as a whole, the experience of practitioners who have utilized these formulations can highlight the practical steps to incorporate such formulations into the everyday patient-care setting. Purpose The purpose of this report is to describe experience in managing patients with chronic, moderate-to-severe pain using morphine sulfate and naltrexone hydrochloride extended release capsules (MS-sNT) (EMBEDA®, King Pharmaceuticals® Inc, Bristol, TN, which was acquired by Pfizer Inc, New York, NY, in March 2011), a formulation designed with features to deter abuse/misuse, in a community-based pain management clinic. Case presentations Case reports demonstrating a clinical management plan for assessment, initial interview procedures, explanation/discussion of proposed therapies, patients’ treatment goals, conversion to MS-sNT, and titration and treatment outcomes are provided. Results The management approach yielded successful outcomes including pain relief, improved quality of life, treatment satisfaction, and patient acceptance of a formulation designed to deter abuse/misuse. Discussion The cases presented demonstrate that the communication accompanying complete pretreatment assessment, goal-setting and expectations, and attention to individual patient needs can enable optimization of pain-related outcomes, resulting in improved quality of life for patients and fostering patient acceptance of formulations designed to help address opioid abuse/misuse issues in the community at large. PMID:22069367

  3. Design principles of water sensitive in settlement area on the river banks

    NASA Astrophysics Data System (ADS)

    Ryanti, E.; Hasriyanti, N.; Utami, W. D.

    2018-03-01

    This research formulates the principles for designing the settlement area along the Kapuas River in Pontianak using the water sensitive urban design (WSUD) approach for densely populated settlement areas. A case study approach is used, focusing on a dense settlement area located on the banks of the river, with three techniques: a literature study to identify the aspects to be considered and the components to be set in the design; descriptive analysis within a rationalistic paradigm to identify the characteristics of settlements on the river banks with consideration of WSUD elements; and formulation of the principles for designing water-sensitive settlement areas. This research is important because the water management system in the existing riverside settlements of Pontianak has not yet been adequately addressed. The research therefore pursues several objectives: identifying the characteristics of the riverside settlement area based on water-sensitive design aspects, so that the existing problems are framed in relation to the community's need for infrastructure in the settlement environment; and formulating and developing appropriate technology guidelines for integrated water management systems in riverside settlement areas, together with design techniques for water-sensitive settlements (WSUD).

  4. Mechanics of cantilever beam: Implementation and comparison of FEM and MLPG approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trobec, Roman

    2016-06-08

    Two weak-form solution approaches for partial differential equations, the well-known mesh-based finite element method (FEM) and the newer meshless local Petrov-Galerkin (MLPG) method, are described and compared on a standard test case, the mechanics of a cantilever beam. The implementation, solution accuracy and calculation complexity are addressed for both approaches. We found that FEM is superior by most standard criteria, but MLPG has some advantages because of the flexibility that results from its general formulation.
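
    The cantilever beam test case is easy to reproduce on the FEM side (our own minimal sketch, not the paper's code): Hermite beam elements for the Euler-Bernoulli cantilever under a tip load, checked against the analytic tip deflection P·L^3/(3·E·I).

        import numpy as np

        E, I, L, P = 210e9, 1e-6, 2.0, 1000.0     # steel-like beam, tip load in N
        n_el = 8
        le = L / n_el

        # 4x4 element stiffness for DOFs [w_i, theta_i, w_j, theta_j]
        k = (E * I / le**3) * np.array([
            [ 12,      6*le,    -12,      6*le   ],
            [  6*le,   4*le**2,  -6*le,   2*le**2],
            [-12,     -6*le,     12,     -6*le   ],
            [  6*le,   2*le**2,  -6*le,   4*le**2],
        ])

        n_dof = 2 * (n_el + 1)
        K = np.zeros((n_dof, n_dof))
        for e in range(n_el):
            idx = slice(2 * e, 2 * e + 4)         # overlapping nodal DOFs
            K[idx, idx] += k

        F = np.zeros(n_dof)
        F[-2] = P                                  # transverse load at the free end

        free = np.arange(2, n_dof)                 # clamp w and theta at node 0
        u = np.zeros(n_dof)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

        print(u[-2], P * L**3 / (3 * E * I))       # FEM tip deflection vs analytic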

  5. A dynamic feedback-control toll pricing methodology : a case study on Interstate 95 managed lanes.

    DOT National Transportation Integrated Search

    2013-06-01

    Recently, congestion pricing has emerged as a cost-effective and efficient strategy to mitigate the congestion problem on freeways. This study develops a feedback-control based dynamic toll approach to formulate and solve for optimal tolls. The study com...
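
    The abstract does not give the controller itself, but the flavor of a feedback-control toll can be sketched with a simple integral law and a made-up logistic demand response (all parameters hypothetical): raise the toll while managed-lane density is above target, lower it otherwise.

        import math

        def demand_share(toll, toll_ref=2.0, slope=1.5):
            """Fraction of drivers choosing the managed lane (hypothetical model)."""
            return 1.0 / (1.0 + math.exp(slope * (toll - toll_ref)))

        rho_target, K_I = 30.0, 0.05     # density target (veh/km/lane); integral gain
        toll, rho = 1.0, 45.0
        for step in range(60):
            inflow = 60.0 * demand_share(toll)                 # demand at current toll
            rho = 0.7 * rho + 0.3 * inflow                     # crude first-order lane dynamics
            toll = max(0.25, toll + K_I * (rho - rho_target))  # integral feedback on density
        print(f"toll ${toll:.2f}, density {rho:.1f} veh/km/lane")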

  6. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities; to accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  7. Fluid-Structure Interaction Simulation of Prosthetic Aortic Valves: Comparison between Immersed Boundary and Arbitrary Lagrangian-Eulerian Techniques for the Mesh Representation

    PubMed Central

    Iannaccone, Francesco; Degroote, Joris; Vierendeels, Jan; Segers, Patrick

    2016-01-01

    In recent years the role of FSI (fluid-structure interaction) simulations in the analysis of the fluid mechanics of heart valves has become more and more important, as they are able to capture the interaction between the blood and both the surrounding biological tissues and the valve itself. When setting up an FSI simulation, several choices have to be made to select the most suitable approach for the case of interest: in particular, to simulate flexible-leaflet cardiac valves, the type of discretization of the fluid domain is crucial; it can be described with an ALE (Arbitrary Lagrangian-Eulerian) or an Eulerian formulation. The majority of the reported 3D heart valve FSI simulations are performed with the Eulerian formulation, which allows large deformations of the domains without compromising the quality of the fluid grid. Nevertheless, it is known that the ALE-FSI approach guarantees more accurate results at the interface between the solid and the fluid. The goal of this paper is to describe the same aortic valve model in the two cases, comparing the performance of an ALE-based FSI solution and an Eulerian-based FSI approach. After a first simplified 2D case, the aortic geometry was considered in a full 3D set-up. The model was kept as similar as possible in the two settings, to better compare the simulations' outcomes. Although for the 2D case the differences were insubstantial, in our experience the performance of a full 3D ALE-FSI simulation was significantly limited by the technical problems and requirements inherent to the ALE formulation, mainly related to the mesh motion and deformation of the fluid domain. As a secondary outcome of this work, it is important to point out that the choice of the solver also influenced the reliability of the final results. PMID:27128798

  8. Development of an abiraterone acetate formulation with improved oral bioavailability guided by absorption modeling based on in vitro dissolution and permeability measurements.

    PubMed

    Solymosi, Tamás; Ötvös, Zsolt; Angi, Réka; Ordasi, Betti; Jordán, Tamás; Semsey, Sándor; Molnár, László; Ránky, Soma; Filipcsei, Genovéva; Heltovics, Gábor; Glavinas, Hristos

    2017-10-30

    Particle size reduction of drug crystals in the presence of surfactants (often called "top-down" production methods) is a standard approach used in the pharmaceutical industry to improve the bioavailability of poorly soluble drugs. Based on the mathematical model used to predict the fraction of dose absorbed, this formulation approach is successful when dissolution rate is the main rate-limiting factor of oral absorption. In cases where compound solubility is also a major factor, this approach might not result in an adequate improvement in bioavailability. Abiraterone acetate is poorly water soluble, which is believed to be responsible for its very low bioavailability in the fasted state and its significant positive food effect. In this work, we have successfully used in vitro dissolution, solubility and permeability measurements in biorelevant media to describe the dissolution characteristics of different abiraterone acetate formulations. Mathematical modeling of the fraction of dose absorbed indicated that reducing the particle size of the drug cannot be expected to result in significant improvement in bioavailability in the fasted state. In the fed state, the same formulation approach can result in nearly complete absorption of the dose, thereby further increasing the food effect. Using a "bottom-up" formulation method we improved both the dissolution rate and the apparent solubility of the compound. In beagle dog studies, this resulted in a >10-fold increase in bioavailability in the fasted state when compared to the marketed drug and the elimination of the food effect. Calculated values of the fraction of dose absorbed were in agreement with the observed relative bioavailability values in beagle dogs. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.

  10. The analysis of delays in simulator digital computing systems. Volume 1: Formulation of an analysis approach using a central example simulator model

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.

    1980-01-01

    The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.

  11. On the identifiability of inertia parameters of planar Multi-Body Space Systems

    NASA Astrophysics Data System (ADS)

    Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2018-04-01

    This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called Lagrange formulation to develop a form of the equations of motion that is linear with respect to the system inertia parameters (IPs). Having developed a specific form of the regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, the explicit forms of the SLMBSS identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with variable-configuration joints allows all IPs to be identified by comparing two successive identification outcomes. This feature paves the way to designing a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.
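
    The core mechanism, equations of motion rewritten linearly in the inertia parameters so that a regressor matrix carries all the state dependence, can be sketched on a 1-DOF stand-in (our toy, not the paper's SLMBSS model): tau = Y(q, qd, qdd)·theta, with theta recovered by least squares and identifiability read off the singular values of Y.

        import numpy as np

        rng = np.random.default_rng(2)
        theta_true = np.array([0.8, 0.15, 0.5])       # [inertia, damping, m*g*l]

        t = np.linspace(0, 10, 500)
        q = np.sin(t) + 0.3 * np.sin(3.1 * t)         # a persistently exciting trajectory
        qd = np.gradient(q, t)
        qdd = np.gradient(qd, t)

        Y = np.column_stack([qdd, qd, np.sin(q)])     # regressor, linear in theta
        tau = Y @ theta_true + rng.normal(0, 0.01, len(t))   # noisy "measurements"

        theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
        print("estimate:", np.round(theta_hat, 3))
        print("singular values of Y:", np.round(np.linalg.svd(Y, compute_uv=False), 2))
        # Full column rank (no near-zero singular values) => all three IPs identifiable.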

  12. Unpacking policy formulation and industry influence: the case of the draft control of marketing of alcoholic beverages bill in South Africa.

    PubMed

    Bertscher, Adam; London, Leslie; Orgill, Marsha

    2018-06-21

    Alcohol is a major contributor to the Non-Communicable Disease burden in South Africa. In 2000, 7.1% of all deaths and 7% of total disability-adjusted life years were ascribed to alcohol-related harm in the country. Regulations proposed to restrict alcohol advertising in South Africa present an evidence-based upstream intervention. Research on policy formulation in low- and middle-income countries is limited. This study aims to describe and explore the policy formulation process of the 2013 draft Control of Marketing of Alcoholic Beverages Bill in South Africa between March 2011 and May 2017. Recognising the centrality of affected actors in policy-making processes, the study focused on the alcohol industry as a central actor affected by the policy, to understand how they, together with other actors, may influence the policy formulation process. A qualitative case study approach was used, involving a stakeholder mapping, 10 in-depth interviews, and review of approximately 240 documents. A policy formulation conceptual framework was successfully applied as a lens to describe a complex policy formulation process. Key factors shaping policy formulation included: (1) competing and shared values: different stakeholders promote conflicting ideals for policymaking; (2) inter-department jostling: different government departments seek to protect their own functions, hindering policy development; (3) stakeholder consultation in democratic policymaking: policy formulation requires consultations even with those opposed to regulation; and (4) battle for evidence: evidence is used strategically by all parties to shape perceptions and leverage positions. This research (1) contributes to building an integrated body of knowledge on policy formulation in low- and middle-income countries; (2) shows that achieving policy coherence across government departments poses a major challenge to achieving effective health policy formulation and (3) shows that networks of actors with commercial and financial interests use diverse strategies to influence policy formulation processes to avoid regulation.

  13. Blueprint for the development of low carbon society scenarios for Asian regions- case study of Iskandar Malaysia

    NASA Astrophysics Data System (ADS)

    Ho, C. S.; Matsuoka, Y.; Chau, L. W.; Teh, B. T.; Simson, J. J.; Gomi, K.

    2013-06-01

    The Malaysian government aims to achieve a 40% reduction in carbon emission intensity by the year 2020, using 2005 as the base year. Several mitigation and adaptation strategies addressing environmental and climate change are formulated at the national, regional and local levels to mitigate greenhouse gas emissions. This paper aims to examine local and regional resilient policy actions to reduce greenhouse gases using the empirical case of Iskandar Malaysia. This case is selected because Iskandar Malaysia is one of the fastest-developing economic corridor regions in Malaysia. In this study, a low carbon society blueprint is initiated to guide the rapid development of this economic corridor towards low carbon green growth. The blueprint provides a sustainable green growth roadmap with 12 major actions for the region. It was developed through a bottom-up approach in which stakeholder discussions were carried out to allow local communities' participation in the plan formulation.

  14. Combinatorial approaches to gene recognition.

    PubMed

    Roytberg, M A; Astakhova, T V; Gelfand, M S

    1997-01-01

    Recognition of genes via exon assembly approaches leads naturally to the use of dynamic programming. We consider the general graph-theoretical formulation of the exon assembly problem and analyze in detail some specific variants: multicriterial optimization in the case of non-linear gene-scoring functions; context-dependent schemes for scoring exons and related procedures for exon filtering; and highly specific recognition of arbitrary gene segments, oligonucleotide probes and polymerase chain reaction (PCR) primers.
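
    The dynamic programming at the heart of exon assembly can be illustrated with generic exon chaining (a simplified stand-in, not the paper's multicriterial scheme): among candidate exons with coordinates and scores, find the highest-scoring chain of mutually compatible (non-overlapping, ordered) exons.

        def best_exon_chain(exons):
            """exons: list of (start, end, score); returns (best_score, chain)."""
            exons = sorted(exons, key=lambda e: e[1])   # sort by end coordinate
            n = len(exons)
            best = [0.0] * n                            # best chain score ending at i
            prev = [-1] * n
            for i, (s, e, sc) in enumerate(exons):
                best[i] = sc
                for j in range(i):                      # best compatible predecessor
                    if exons[j][1] < s and best[j] + sc > best[i]:
                        best[i] = best[j] + sc
                        prev[i] = j
            i = max(range(n), key=lambda k: best[k])    # backtrack the winning chain
            chain = []
            while i != -1:
                chain.append(exons[i])
                i = prev[i]
            return max(best), chain[::-1]

        candidates = [(10, 50, 3.2), (40, 90, 2.1), (60, 120, 4.0), (130, 180, 1.5)]
        score, chain = best_exon_chain(candidates)
        print(score, chain)   # 8.7 from exons (10,50), (60,120), (130,180)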

  15. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
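
    The recipe, an ensemble of model runs, an emulator fitted to them, and MCMC run against the emulator instead of the expensive model, can be sketched on a toy one-parameter problem (our illustration; the density functional theory case is far larger):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)

        def expensive_model(theta):          # stand-in for eta(theta)
            return np.sin(theta) + 0.5 * theta

        # Ensemble of model runs at a handful of design points
        design = np.linspace(0.0, 4.0, 9).reshape(-1, 1)
        runs = expensive_model(design).ravel()
        emulator = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(design, runs)

        y_obs, sigma = expensive_model(2.2) + 0.05, 0.1    # one noisy "measurement"

        def log_post(theta):
            if not 0.0 <= theta <= 4.0:      # flat prior on [0, 4]
                return -np.inf
            pred = emulator.predict(np.array([[theta]]))[0]
            return -0.5 * ((y_obs - pred) / sigma) ** 2

        theta, samples = 1.0, []
        lp = log_post(theta)
        for _ in range(5000):                # Metropolis using only the emulator
            prop = theta + rng.normal(0, 0.3)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        print(f"posterior mean theta ~ {np.mean(samples[1000:]):.2f}")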

  16. An Extrusion Spheronization Approach to Enable a High Drug Load Formulation of a Poorly Soluble Drug with a Low Melting Surfactant.

    PubMed

    Tatavarti, Aditya; Kesisoglou, Filippos

    2015-11-01

    Vitamin E tocopherol polyethylene glycol succinate (TPGS) is a non-ionic surface active agent, known to enhance the bioavailability of lipophilic compounds via wettability, solubility, and in some cases permeability enhancement. MK-0536 is an anti-retroviral drug with poor wettability and solubility and a high dose. Based on pharmacokinetic studies in dogs and humans, use of vitamin E TPGS in oral solid formulations of MK-0536 provides desired PK characteristics. The use of vitamin E TPGS, however, in solid dosage forms is limited because of the processing challenges resulting from its waxy nature and low melting temperature (∼37°C). The current study, for the first time, demonstrates the use of an alternative low pressure extrusion and spheronization approach to enable high loadings of the poorly soluble, poorly compactable drug and relatively high levels of vitamin E TPGS. This approach not only aided in mitigating processing challenges arising from most high energy process steps such as milling, compression, and coating, but also enabled a higher drug load formulation that provided superior bioperformance relative to a conventional high shear wet granulated formulation. An encapsulated dosage form consisting of pellets prepared by extrusion spheronization with 75% (w/w) MK-0536 and 10% (w/w) vitamin E TPGS was developed. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  17. Aeroelastic coupling of geometrically nonlinear structures and linear unsteady aerodynamics: Two formulations

    NASA Astrophysics Data System (ADS)

    Demasi, L.; Livne, E.

    2009-07-01

    Two different time domain formulations of integrating commonly used frequency-domain unsteady aerodynamic models based on a modal approach with full order finite element models for structures with geometric nonlinearities are presented. Both approaches are tailored to flight vehicle configurations where geometric stiffness effects are important but where deformations are moderate, flow is attached, and linear unsteady aerodynamic modeling is adequate, such as low aspect ratio wings or joined-wing and strut-braced wings at small to moderate angles of attack. Results obtained using the two approaches are compared using both planar and non-planar wing configurations. Sub-critical and post-flutter speeds are considered. It is demonstrated that the two methods lead to the same steady solution for the sub-critical case after the transients subside. It is also shown that the two methods predict the amplitude and frequency of limit cycle oscillation (when present) with the same accuracy.

  18. The determination of the elastodynamic fields of an ellipsoidal inhomogeneity

    NASA Technical Reports Server (NTRS)

    Fu, L. S.; Mura, T.

    1983-01-01

    The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.

  19. REGIONAL ASSESSMENT OF FISH HEALTH: A PROTOTYPE METHODOLOGY AND CASE STUDY FOR THE ALBEMARLE-PAMLICO RIVER BASIN, NORTH CAROLINA

    EPA Science Inventory

    BASE (Basin-Scale Assessments for Sustainable Ecosystems) is a research program developed by the Ecosystems Research Division of the National Exposure Research Laboratory to explore and formulate approaches for assessing the sustainability of ecological resources within watershed...

  20. Developing Internationalisation Strategies, University of Winchester, UK

    ERIC Educational Resources Information Center

    Neale, Richard Hugh; Spark, Alasdair; Carter, Joy

    2018-01-01

    Purpose: Internationalisation has been a theme in UK higher education for a decade or more. The review of this paper, a practice-based case study, is to find how Winchester formulated two successive internationalisation strategies. Design/methodology/approach: The strategies were developed using a research-oriented method: grounded in the…

  1. Illustrative Case Using the RISK21 Roadmap and Matrix: Prioritization for Evaluation of Chemicals Found in Drinking Water

    EPA Science Inventory

    The HESI-led RISK21 effort has developed a framework supporting the use of twenty first century technology in obtaining and using information for chemical risk assessment. This framework represents a problem formulation-based, exposure-driven, tiered data acquisition approach tha...

  2. Functional renormalization-group approaches, one-particle (irreducible) reducible with respect to local Green’s functions, with dynamical mean-field theory as a starting point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katanin, A. A., E-mail: katanin@mail.ru

    We consider formulations of the functional renormalization-group (fRG) flow for correlated electronic systems with the dynamical mean-field theory as a starting point. We classify the corresponding renormalization-group schemes into those neglecting one-particle irreducible six-point vertices (with respect to the local Green's functions) and those neglecting one-particle reducible six-point vertices. The former class is represented by the recently introduced DMF^2RG approach [31], but also by the scale-dependent generalization of the one-particle irreducible representation (with respect to local Green's functions, 1PI-LGF) of the generating functional [20]. The second class is represented by the fRG flow within the dual fermion approach [16, 32]. We compare formulations of the fRG approach in each of these cases and suggest their further application to study 2D systems within the Hubbard model.

  3. A weight-of-evidence approach to identify nanomaterials in consumer products: a case study of nanoparticles in commercial sunscreens.

    PubMed

    Cuddy, Michael F; Poda, Aimee R; Moser, Robert D; Weiss, Charles A; Cairns, Carolyn; Steevens, Jeffery A

    2016-01-01

    Nanoscale ingredients in commercial products represent a point of emerging environmental concern due to recent findings that correlate toxicity with small particle size. A weight-of-evidence (WOE) approach based upon multiple lines of evidence (LOE) is developed here to assess nanomaterials as they exist in consumer product formulations, providing a qualitative assessment regarding the presence of nanomaterials, along with a baseline estimate of nanoparticle concentration if nanomaterials do exist. Electron microscopy, analytical separations, and X-ray detection methods were used to identify and characterize nanomaterials in sunscreen formulations. The WOE/LOE approach as applied to four commercial sunscreen products indicated that all four contained at least 10% dispersed primary particles having at least one dimension <100 nm in size. Analytical analyses confirmed that these constituents were comprised of zinc oxide (ZnO) or titanium dioxide (TiO2). The screening approaches developed herein offer a streamlined, facile means to identify potentially hazardous nanomaterial constituents with minimal abrasive processing of the raw material.

  4. The utility of case formulation in treatment decision making; the effect of experience and expertise.

    PubMed

    Dudley, Robert; Ingham, Barry; Sowerby, Katy; Freeston, Mark

    2015-09-01

    We examined whether case formulation guides the endorsement of appropriate treatment strategies. We also considered whether experience and training lead to more effective treatment decisions. To examine these questions, two related studies were conducted, both of which used a novel paradigm involving clinically relevant decision-making tasks with multiple sources of information. Study one examined how clinicians utilised a pre-constructed CBT case formulation to plan treatment. Study two utilised a clinician-generated formulation to further examine the process of formulation development and the impact on treatment planning. Both studies considered the effect of therapist experience. Both studies indicated that clinicians used the case formulation to select treatment choices that were highly matched to the case as described in the vignette. However, differences between expert and novice clinicians were only demonstrated when clinicians developed their own formulations of the case material. When they developed their own formulations, the experts' formulations were more parsimonious and internally consistent and contained fewer errors, and the experts were less swayed by irrelevant treatment options. The nature of the experimental task, involving ratings of the suitability of possible treatment options suggested for the case, limits the interpretation that formulation directs the development or generation of the clinician's treatment plan. In study two, the task may still have limited the capacity to demonstrate further differences between expert and novice therapists. Formulation helps guide certain aspects of effective treatment decision making. When asked to generate a formulation, clinicians with greater experience and expertise do this more effectively. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  5. Case formulation in clinical practice: Associations with psychological mindedness, attachment and burnout in staff working with people experiencing psychosis.

    PubMed

    Hartley, Samantha; Jovanoska, Jelena; Roberts, Susan; Burden, Nicolas; Berry, Katherine

    2016-06-01

    Case formulation can impact on therapeutic relationships, staff understanding and outcomes, which might be particularly important when working with complex mental health problems such as psychosis. However, the evidence base is equivocal and there is insufficient understanding of the staff-related factors that influence effective psychological case formulation. This study investigated the influence of staff characteristics (both professional and personal) on case formulation skill. This was a cross-sectional study, with all of the measures collected at the same time point. Fifty staff members working on inpatient wards with individuals experiencing psychosis were recruited. Measures included independently rated case formulation skill and psychological mindedness (the ability to draw together aspects of thoughts, feelings and actions), both in relation to hypothetical cases. Self-report questionnaires assessed psychological mindedness, attachment styles, symptoms of burnout and professional qualifications. The preliminary analyses indicated that case formulation skill was associated with higher psychological mindedness (both self-reported and independently rated) and lower levels of avoidant attachment styles. Simultaneous-entry multiple regression demonstrated that the only independent predictor of case formulation skill was independently rated psychological mindedness. These findings highlight the factors that contribute to staff's ability to case formulate and the possibility for services to develop psychological mindedness and case formulation skills through formal training, alongside fostering a psychologically minded working environment. Case formulation skill is positively associated with the personal ability (or inclination) to draw together aspects of experience in a psychological manner (i.e., psychological mindedness). It might also be important to consider avoidant attachment tendencies in relation to formulation skills. The sample was relatively small and drawn from a limited number of services, which might reduce the generalizability of the findings. Psychological mindedness might not be captured adequately by self-report tools, and services may wish to employ more novel ways of assessing this important skill in staff groups (such as the speech sample used in the current study). © 2015 The British Psychological Society.

  6. Genetically modified crops and aquatic ecosystems: considerations for environmental risk assessment and non-target organism testing.

    PubMed

    Carstens, Keri; Anderson, Jennifer; Bachman, Pamela; De Schrijver, Adinda; Dively, Galen; Federici, Brian; Hamer, Mick; Gielkens, Marco; Jensen, Peter; Lamp, William; Rauschen, Stefan; Ridley, Geoff; Romeis, Jörg; Waggoner, Annabel

    2012-08-01

    Environmental risk assessments (ERA) support regulatory decisions for the commercial cultivation of genetically modified (GM) crops. The ERA for terrestrial agroecosystems is well-developed, whereas guidance for ERA of GM crops in aquatic ecosystems is not as well-defined. The purpose of this document is to demonstrate how comprehensive problem formulation can be used to develop a conceptual model and to identify potential exposure pathways, using Bacillus thuringiensis (Bt) maize as a case study. Within problem formulation, the insecticidal trait, the crop, the receiving environment, and protection goals were characterized, and a conceptual model was developed to identify routes through which aquatic organisms may be exposed to insecticidal proteins in maize tissue. Following a tiered approach for exposure assessment, worst-case exposures were estimated using standardized models, and factors mitigating exposure were described. Based on exposure estimates, shredders were identified as the functional group most likely to be exposed to insecticidal proteins. However, even using worst-case assumptions, the exposure of shredders to Bt maize was low and studies supporting the current risk assessments were deemed adequate. Determining if early tier toxicity studies are necessary to inform the risk assessment for a specific GM crop should be done on a case by case basis, and should be guided by thorough problem formulation and exposure assessment. The processes used to develop the Bt maize case study are intended to serve as a model for performing risk assessments on future traits and crops.

  7. Field-based optimal-design of an electric motor: a new sensitivity formulation

    NASA Astrophysics Data System (ADS)

    Barba, Paolo Di; Mognaschi, Maria Evelina; Lowther, David Alister; Wiak, Sławomir

    2017-12-01

    In this paper, a new approach to robust optimal design is proposed. The idea is to consider the sensitivity by means of two auxiliary criteria A and D, related to the magnitude and isotropy of the sensitivity, respectively. The optimal design of a switched-reluctance motor is considered as a case study: since the case study exhibits two design criteria, the relevant Pareto front is approximated by means of evolutionary computing.

  8. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  9. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.
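
    As a hedged illustration of the online parameter-estimation module, a generic EKF step that treats the model parameters as the state can be sketched as follows; the function names, the random-walk parameter model, and the noise covariances Q and R are assumptions, and this is not the paper's exact ANN-EKFMPC formulation (all inputs are numpy arrays).

        import numpy as np

        def ekf_param_step(theta, P, y, h, H, Q, R):
            # One EKF step treating the model parameters theta as the state
            # (random-walk model), corrected against the new measurement y.
            # h(theta): predicted output; H(theta): output Jacobian.
            P = P + Q                                  # predict covariance
            Hk = H(theta)
            S = Hk @ P @ Hk.T + R                      # innovation covariance
            K = P @ Hk.T @ np.linalg.inv(S)            # Kalman gain
            theta = theta + K @ (y - h(theta))         # parameter correction
            P = (np.eye(theta.size) - K @ Hk) @ P
            return theta, P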

  10. Robust operative diagnosis as problem solving in a hypothesis space

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.

    1988-01-01

This paper describes an approach that formulates diagnosis of physical systems in operation as problem solving in a hypothesis space. Such a formulation increases robustness by: (1) incremental hypothesis construction via dynamic inputs, (2) reasoning at a higher level of abstraction to construct hypotheses, and (3) partitioning the space by grouping fault hypotheses according to the type of physical system representation and problem-solving techniques used in their construction. It was implemented for a turbofan engine and hydraulic subsystem. Evaluation of the implementation on eight actual aircraft accident cases involving engine faults provided very promising results.

  11. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method by Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application for the case of short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
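
    For orientation, the classical fine-resolution zero-crossing estimate that the paper modifies can be sketched as follows: for a near-Gaussian velocity signal, Rice's theorem links the zero-crossing density N_L to the Taylor microscale via lambda = 1/(pi N_L), and the dissipation rate follows from epsilon = 15 nu u'^2 / lambda^2. This is the original method, not the cutoff-corrected estimators proposed here; the viscosity default and the use of Taylor's frozen-turbulence hypothesis are assumptions.

        import numpy as np

        def tke_dissipation_zero_crossing(u, fs, U, nu=1.5e-5):
            # Classical zero-crossing estimate of the TKE dissipation rate.
            # u: velocity fluctuations [m/s], fs: sampling rate [Hz],
            # U: mean advection speed [m/s] (Taylor's hypothesis),
            # nu: kinematic viscosity [m^2/s]. Valid only for signals that
            # resolve the dissipative scales.
            u = u - np.mean(u)
            crossings = np.sum(u[:-1] * u[1:] < 0)   # sign changes
            record_length = len(u) / fs * U          # meters traversed
            N_L = crossings / record_length          # crossings per meter
            taylor_scale = 1.0 / (np.pi * N_L)       # Rice's theorem
            return 15.0 * nu * np.var(u) / taylor_scale**2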

  12. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.
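
    A minimal sketch of a rule-guided mutation operator in the spirit described here; the rules mapping (attribute index to preferred settings), the mutation rate, and the discrete attribute encoding are illustrative assumptions, not the authors' operators.

        import random

        def guided_mutation(chromosome, rules, n_levels=5, rate=0.1):
            # Mutation biased by mined rules: `rules` maps an attribute
            # index to its preferred settings; genes without a rule are
            # reset uniformly at random among the assumed n_levels values.
            child = list(chromosome)
            for i in range(len(child)):
                if random.random() < rate:
                    if i in rules:
                        child[i] = random.choice(list(rules[i]))
                    else:
                        child[i] = random.randrange(n_levels)
            return child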

  13. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  14. A Geometric Puzzle That Leads To Fibonacci Sequences.

    ERIC Educational Resources Information Center

    Rulf, Benjamin

    1998-01-01

    Illustrates how mathematicians work and do mathematical research through the use of a puzzle. Demonstrates how general rules, then theorems develop from special cases. This approach may be used as a research project in high school classrooms or math club settings with the teacher helping to formulate questions, set goals, and avoid becoming…

  15. Principles for Public Funding of Workplace Learning. A Review To Identify Models of Workplace Learning & Funding Principles.

    ERIC Educational Resources Information Center

    Hawke, Geof; Mawer, Giselle; Connole, Helen; Solomon, Nicky

    Models of workplace learning and principles for funding workplace learning in Australia were identified through case studies and a literature review. A diverse array of workplace-based approaches to delivering nationally recognized qualifications were identified. The following were among the nine funding proposals formulated: (1) funding…

  16. Nonlattice simulation for supersymmetric gauge theories in one dimension.

    PubMed

    Hanada, Masanori; Nishimura, Jun; Takeuchi, Shingo

    2007-10-19

Lattice simulation of supersymmetric gauge theories is not straightforward. In some cases the lack of manifest supersymmetry just necessitates cumbersome fine-tuning, but in the worst cases the chiral and/or Majorana nature of fermions makes it difficult to even formulate an appropriate lattice theory. We propose circumventing all these problems inherent in the lattice approach by adopting a nonlattice approach for one-dimensional supersymmetric gauge theories, which are important in the string or M theory context. In particular, our method can be used to investigate the gauge-gravity duality from first principles, and to simulate M theory based on the matrix theory conjecture.

  17. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
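
    A minimal sketch of the greedy incremental idea for a single shell: repeatedly add the candidate direction whose minimal (antipodally symmetric) angle to the already chosen set is largest. The candidate-pool construction below is an assumption, and this is only the greedy variant, not the MILP or gradient-descent formulations developed in the paper.

        import numpy as np

        def greedy_spherical_code(candidates, k):
            # Greedily pick k unit vectors so the minimal pairwise angle
            # stays large; +/-v count as one direction (antipodal symmetry).
            chosen = [candidates[0]]
            for _ in range(1, k):
                best, best_sep = None, -1.0
                for c in candidates:
                    sep = min(np.arccos(min(1.0, abs(np.dot(c, p))))
                              for p in chosen)
                    if sep > best_sep:
                        best, best_sep = c, sep
                chosen.append(best)
            return np.array(chosen)

        # Example pool: random directions projected onto the unit sphere.
        pool = np.random.randn(500, 3)
        pool /= np.linalg.norm(pool, axis=1, keepdims=True)
        scheme = greedy_spherical_code(pool, 30)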

  18. Vaccine instability in the cold chain: mechanisms, analysis and formulation strategies.

    PubMed

    Kumru, Ozan S; Joshi, Sangeeta B; Smith, Dawn E; Middaugh, C Russell; Prusik, Ted; Volkin, David B

    2014-09-01

Instability of vaccines often emerges as a key challenge during clinical development (lab to clinic) as well as commercial distribution (factory to patient). To yield stable, efficacious vaccine dosage forms for human use, successful formulation strategies must address a combination of interrelated topics including stabilization of antigens, selection of appropriate adjuvants, and development of stability-indicating analytical methods. This review covers key concepts in understanding the causes and mechanisms of vaccine instability including (1) the complex and delicate nature of antigen structures (e.g., viruses, proteins, carbohydrates, protein-carbohydrate conjugates, etc.), (2) use of adjuvants to further enhance immune responses, (3) development of physicochemical and biological assays to assess vaccine integrity and potency, and (4) stabilization strategies to protect vaccine antigens and adjuvants (and their interactions) during storage. Despite these challenges, vaccines can usually be sufficiently stabilized for use as medicines through a combination of formulation approaches combined with maintenance of an efficient cold chain (manufacturing, distribution, storage and administration). Several illustrative case studies are described regarding mechanisms of vaccine instability along with formulation approaches for stabilization within the vaccine cold chain. These include live, attenuated (measles, polio) and inactivated (influenza, polio) viral vaccines as well as recombinant protein (hepatitis B) vaccines. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. An Integrated Neuroscience Perspective on Formulation and Treatment Planning for Posttraumatic Stress Disorder: An Educational Review.

    PubMed

    Ross, David A; Arbuckle, Melissa R; Travis, Michael J; Dwyer, Jennifer B; van Schalkwyk, Gerrit I; Ressler, Kerry J

    2017-04-01

Posttraumatic stress disorder (PTSD) is a common psychiatric illness, increasingly in the public spotlight in the United States due to its prevalence in the soldiers returning from combat in Iraq and Afghanistan. This educational review presents a contemporary approach for how to incorporate a modern neuroscience perspective into an integrative case formulation. The article is organized around key neuroscience "themes" most relevant for PTSD. Within each theme, the article highlights how seemingly diverse biological, psychological, and social perspectives all intersect with our current understanding of neuroscience. Any contemporary neuroscience formulation of PTSD should include an understanding of fear conditioning, dysregulated circuits, memory reconsolidation, epigenetics, and genetic factors. Fear conditioning and other elements of basic learning theory offer a framework for understanding how traumatic events can lead to a range of behaviors associated with PTSD. A circuit dysregulation framework focuses more broadly on aberrant network connectivity, including between the prefrontal cortex and limbic structures. In the process of memory reconsolidation, it is now clear that every time a memory is reactivated it becomes momentarily labile, with implications for the genesis, maintenance, and treatment of PTSD. Epigenetic changes secondary to various experiences, especially early in life, can have long-term effects, including on the regulation of the hypothalamic-pituitary-adrenal axis, thereby affecting an individual's ability to regulate the stress response. Genetic factors are surprisingly relevant: PTSD has been shown to be highly heritable despite being definitionally linked to specific experiences. The relevance of each of these themes to current clinical practice and its potential to transform future care are discussed. Together, these perspectives contribute to an integrative, neuroscience-informed approach to case formulation and treatment planning. This may help to bridge the gap between the traditionally distinct viewpoints of clinicians and researchers.

  20. An algorithmic approach to the brain biopsy--part I.

    PubMed

    Kleinschmidt-DeMasters, B K; Prayson, Richard A

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part I, we assist the resident in learning how to handle brain biopsies in general and how to distinguish nonneoplastic lesions that mimic tumors from true neoplasms.

  1. An algorithmic approach to the brain biopsy--part II.

    PubMed

    Prayson, Richard A; Kleinschmidt-DeMasters, B K

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.

  2. Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations

    DOE PAGES

    Chavez, Brittany K.; Agarabi, Cyrus D.; Read, Erik K.; ...

    2016-01-01

Formulating appropriate storage conditions for biopharmaceutical proteins is essential for ensuring their stability and thereby their purity, potency, and safety over their shelf-life. Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging worst-case antibody formulation. The stability of the antibody in each buffer formulation was assessed by UV/VIS absorbance at 280 nm and 410 nm and size-exclusion high-performance liquid chromatography (SEC) to determine overall solubility, opalescence, and aggregate formation, respectively. Upon preliminary testing, acetate was eliminated as a potential storage buffer due to significant visible precipitate formation. An additional 2^4 full factorial DoE was performed that combined the stabilizing effect of arginine with the buffering capacity of histidine. From this final DoE, an optimized formulation of 200 mM arginine, 50 mM histidine, and 100 mM NaCl at a pH of 6.5 was identified to substantially improve stability under long-term storage conditions and after multiple freeze/thaw cycles. Therefore, our data highlight the power of DoE-based formulation screening approaches even for challenging monoclonal antibody molecules.
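
    For readers unfamiliar with the notation, a 2^4 full factorial design simply enumerates every combination of two levels for four factors, giving 16 runs. A minimal sketch; the factor names and low/high levels are illustrative placeholders (highs borrowed from the reported optimum), not the study's actual design ranges.

        from itertools import product

        factors = {
            "arginine_mM":  (0, 200),
            "histidine_mM": (0, 50),
            "NaCl_mM":      (0, 100),
            "pH":           (5.5, 6.5),
        }
        # Every low/high combination of the four factors: 2**4 = 16 runs.
        runs = [dict(zip(factors, levels))
                for levels in product(*factors.values())]
        print(len(runs))  # 16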

  3. Towards a Unified Quark-Hadron-Matter Equation of State for Applications in Astrophysics and Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Bastian, Niels-Uwe; Blaschke, David; Fischer, Tobias; Röpke, Gerd

    2018-05-01

We outline an approach to a unified equation of state for quark-hadron matter on the basis of a $\Phi$-derivable approach to the generalized Beth-Uhlenbeck equation of state for a cluster decomposition of thermodynamic quantities like the density. To this end we summarize the cluster virial expansion for nuclear matter and demonstrate the equivalence of the Green's function approach and the $\Phi$-derivable formulation. As an example, the formation and dissociation of deuterons in nuclear matter is discussed. We formulate the cluster $\Phi$-derivable approach to quark-hadron matter which allows one to take into account the specifics of chiral symmetry restoration and deconfinement in triggering the Mott dissociation of hadrons. This approach unifies the description of a strongly coupled quark-gluon plasma with that of a medium-modified hadron resonance gas, both of which are contained as limiting cases. The developed formalism shall replace the common two-phase approach to the description of the deconfinement and chiral phase transition that requires a phase transition construction between separately developed equations of state for hadronic and quark matter phases. Applications to the phenomenology of heavy-ion collisions and astrophysics are outlined.

  4. IARC use of oxidative stress as key mode of action characteristic for facilitating cancer classification: Glyphosate case example illustrating a lack of robustness in interpretative implementation.

    PubMed

    Bus, James S

    2017-06-01

    The International Agency for Research on Cancer (IARC) has formulated 10 key characteristics of human carcinogens to incorporate mechanistic data into cancer hazard classifications. The analysis used glyphosate as a case example to examine the robustness of IARC's determination of oxidative stress as "strong" evidence supporting a plausible cancer mechanism in humans. The IARC analysis primarily relied on 14 human/mammalian studies; 19 non-mammalian studies were uninformative of human cancer given the broad spectrum of test species and extensive use of formulations and aquatic testing. The mammalian studies had substantial experimental limitations for informing cancer mechanism including use of: single doses and time points; cytotoxic/toxic test doses; tissues not identified as potential cancer targets; glyphosate formulations or mixtures; technically limited oxidative stress biomarkers. The doses were many orders of magnitude higher than human exposures determined in human biomonitoring studies. The glyphosate case example reveals that the IARC evaluation fell substantially short of "strong" supporting evidence of oxidative stress as a plausible human cancer mechanism, and suggests that other IARC monographs relying on the 10 key characteristics approach should be similarly examined for a lack of robust data integration fundamental to reasonable mode of action evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Evaluation and Management of Traumatic Diaphragmatic Injuries: A Practice Management Guideline from the Eastern Association for the Surgery of Trauma.

    PubMed

    McDonald, Amy A; Robinson, Bryce R H; Alarcon, Louis; Bosarge, Patrick L; Dorion, Heath; Haut, Elliott R; Juern, Jeremy; Madbak, Firas; Reddy, Srinivas; Weiss, Patricia; Como, John J

    2018-04-02

Traumatic diaphragm injuries (TDI) pose both diagnostic and therapeutic challenges in both the acute and chronic phases. There are no published practice management guidelines to date for TDI. We aim to formulate a practice management guideline for TDI using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology. The working group formulated five Patient, Intervention, Comparator, Outcome (PICO) questions regarding the following topics: 1) diagnostic approach (laparoscopy vs. computed tomography); 2) non-operative management of penetrating right-sided injuries; 3) surgical approach (abdominal or thoracic) for acute TDI, including 4) the use of laparoscopy; and 5) surgical approach (abdominal or thoracic) for delayed TDI. A systematic review was undertaken and last updated December 2016. RevMan 5 (Cochrane Collaboration) and GRADEpro (GRADE Working Group) software were utilized. Recommendations were voted on by working group members. Consensus was obtained for each recommendation. A total of 56 articles were utilized to formulate the recommendations. Most studies were retrospective case series with variable reporting of outcomes measures and outcomes frequently not stratified to intervention or comparator. The overall quality of the evidence was very low for all PICOs. Therefore, only conditional recommendations could be made. Recommendations were made in favor of laparoscopy over CT for diagnosis, non-operative vs. operative approach for right-sided penetrating injuries, abdominal vs. thoracic approach for acute TDI, and laparoscopy (with the appropriate skill set and resources) vs. open approach for isolated TDI. No recommendation could be made for the preferred operative approach for delayed TDI. Very low-quality evidence precluded any strong recommendations. Further study of the diagnostic and therapeutic approaches to TDI is warranted. Guideline LEVEL OF EVIDENCE: 4.

  6. The role of psychiatrists in diagnosing conversion disorder: a mixed-methods analysis.

    PubMed

    Kanaan, Richard A; Armstrong, David; Wessely, Simon

    2016-01-01

    Since DSM-5 removed the requirement for a psychosocial formulation, neurologists have been able to make the diagnosis of conversion disorder without psychiatric input. We sought to examine whether neurologists and specialist psychiatrists concurred with this approach. We used mixed methods, first surveying all the neurologists in the UK and then interviewing the neuropsychiatrists in a large UK region on the role of psychiatrists in diagnosing conversion disorder. Of the surveyed neurologists, 76% did not think that psychiatrists were essential for the diagnosis and 71% thought that psychiatrists did not even consider conversion disorder when referred a case. The neuropsychiatrists who were interviewed held complex models of conversion disorder. They believed all cases could be explained psychosocially in theory, but the nature of the diagnostic encounter often prevented it in practice; all felt that psychosocial formulation could be very helpful and some felt that it was essential to diagnosis. Although neurologists do not think psychiatrists are required for diagnosing conversion disorder, specialist psychiatrists disagree, at least in some cases.

  7. Efficacy and Physicochemical Evaluation of an Optimized Semisolid Formulation of Povidone Iodine Proposed by Extreme Vertices Statistical Design; a Practical Approach

    PubMed Central

    Lotfipour, Farzaneh; Valizadeh, Hadi; Shademan, Shahin; Monajjemzadeh, Farnaz

    2015-01-01

One of the most significant issues in the pharmaceutical industry, prior to commercialization of a pharmaceutical preparation, is the "preformulation" stage. However, far too little attention has been paid to verification of software-assisted statistical designs in preformulation studies. The main aim of this study was to report a step-by-step preformulation approach for a semisolid preparation based on a statistical mixture design and to verify the predictions made by the software with an in-vitro efficacy bioassay test. An extreme vertices mixture design (4 factors, 4 levels) was applied for preformulation of a semisolid povidone iodine preparation as a water-removable ointment using different polyethylene glycols. Software-assisted (Minitab) analysis was then performed using four practically assessed response values: available iodine, viscosity (N index and yield value), and water absorption capacity. Subsequently, mixture analysis was performed and, finally, an optimized formulation was proposed. The efficacy of this formulation was bioassayed using in-vitro microbial tests, and MIC values were calculated for Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans. Results indicated acceptable conformity of the measured responses. Thus, it can be concluded that the proposed design had adequate power to predict the responses in practice. Stability studies proved no significant change during the one-year study for the optimized formulation. Efficacy was acceptable for all tested species, and in the case of Staphylococcus aureus the prepared semisolid formulation was even more effective. PMID:26664368

  8. Lagrangian formulation of the general relativistic Poynting-Robertson effect

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Battista, Emmanuele; Falanga, Maurizio

    2018-04-01

We propose the Lagrangian formulation for describing the motion of a test particle in a general relativistic, stationary, and axially symmetric spacetime. The test particle is also affected by a radiation field, modeled as a coherent flux of photons traveling along the null geodesics of the background spacetime, including the general relativistic Poynting-Robertson effect. The innovative part of this work is to prove the existence of the potential linked to the dissipative action caused by the Poynting-Robertson effect in general relativity, with the help of an integrating factor depending on the energy of the system. Generally, such kinds of inverse problems involving dissipative effects might not admit a Lagrangian formulation; in general relativity especially, there are no examples of such attempts in the literature so far. We reduce this general relativistic Lagrangian formulation to the classic case in the weak-field limit. This approach facilitates further studies in improving the treatment of the radiation field, and it contains, for example, some implications for a deeper comprehension of gravitational waves.
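
    As a classical, non-relativistic illustration of the integrating-factor idea invoked here, the damped oscillator $\ddot{x} + \gamma\dot{x} + \omega^{2}x = 0$ is known to admit the Caldirola-Kanai Lagrangian

    $$L(x,\dot{x},t) = e^{\gamma t}\left(\tfrac{1}{2}\dot{x}^{2} - \tfrac{1}{2}\omega^{2}x^{2}\right),$$

    whose Euler-Lagrange equation reproduces the damped motion, the prefactor $e^{\gamma t}$ acting as the integrating factor. This textbook example is offered for orientation only; it is not the authors' construction, in which the integrating factor depends on the energy of the system rather than explicitly on time.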

  9. The artificial membrane insert system as predictive tool for formulation performance evaluation.

    PubMed

    Berben, Philippe; Brouwers, Joachim; Augustijns, Patrick

    2018-02-15

    In view of the increasing interest of pharmaceutical companies for cell- and tissue-free models to implement permeation into formulation testing, this study explored the capability of an artificial membrane insert system (AMI-system) as predictive tool to evaluate the performance of absorption-enabling formulations. Firstly, to explore the usefulness of the AMI-system in supersaturation assessment, permeation was monitored after induction of different degrees of loviride supersaturation. Secondly, to explore the usefulness of the AMI-system in formulation evaluation, a two-stage dissolution test was performed prior to permeation assessment. Different case examples were selected based on the availability of in vivo (intraluminal and systemic) data: (i) a suspension of posaconazole (Noxafil ® ), (ii) a cyclodextrin-based formulation of itraconazole (Sporanox ® ), and (iii) a micronized (Lipanthyl ® ) and nanosized (Lipanthylnano ® ) formulation of fenofibrate. The obtained results demonstrate that the AMI-system is able to capture the impact of loviride supersaturation on permeation. Furthermore, the AMI-system correctly predicted the effects of (i) formulation pH on posaconazole absorption, (ii) dilution on cyclodextrin-based itraconazole absorption, and (iii) food intake on fenofibrate absorption. Based on the applied in vivo/in vitro approach, the AMI-system combined with simple dissolution testing appears to be a time- and cost-effective tool for the early-stage evaluation of absorption-enabling formulations. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Rap Music Literacy: A Case Study of Millennial Audience Reception to Rap Lyrics Depicting Independent Women

    ERIC Educational Resources Information Center

    Moody-Ramirez, Mia; Scott, Lakia M.

    2015-01-01

    Using a feminist lens and a constructivist approach as the theoretical framework, we used rap lyrics and videos to help college students explore mass media's representation of the "independent" Black woman and the concept of "independence" in general. Students must be able to formulate their own concept of independence to…

  11. How Was the Activity? A Visualization Support for a Case of Location-Based Learning Design

    ERIC Educational Resources Information Center

    Melero, Javier; Hernández-Leo, Davinia; Sun, Jing; Santos, Patricia; Blat, Josep

    2015-01-01

    Over the last few years, the use of mobile technologies has brought the formulation of location-based learning approaches shaping new or enhanced educational activities. Involving teachers in the design of these activities is important because the designs need to be aligned with the requirements of the specific educational settings. Yet analysing…

  12. Quality by design approach for developing chitosan-Ca-alginate microspheres for colon delivery of celecoxib-hydroxypropyl-β-cyclodextrin-PVP complex.

    PubMed

    Mennini, N; Furlanetto, S; Cirri, M; Mura, P

    2012-01-01

    The aim of the present work was to develop a new multiparticulate system, designed for colon-specific delivery of celecoxib for both systemic (in chronotherapic treatment of arthritis) and local (in prophylaxis of colon carcinogenesis) therapy. The system simultaneously benefits from ternary complexation with hydroxypropyl-β-cyclodextrin and PVP (polyvinylpyrrolidone), to increase drug solubility, and vectorization in chitosan-Ca-alginate microspheres, to exploit the colon-specific carrier properties of these polymers. Statistical experimental design was employed to investigate the combined effect of four formulation variables, i.e., % of alginate, CaCl₂, and chitosan and time of cross-linking on microsphere entrapment efficiency (EE%) and drug amount released after 4h in colonic medium, considered as the responses to be optimized. Design of experiment was used in the context of Quality by Design, which requires a multivariate approach for understanding the multifactorial relationships among formulation parameters. Doehlert design allowed for defining a design space, which revealed that variations of the considered factors had in most cases an opposite influence on the responses. Desirability function was used to attain simultaneous optimization of both responses. The desired goals were achieved for both systemic and local use of celecoxib. Experimental values obtained from the optimized formulations were in both cases very close to the predicted values, thus confirming the validity of the generated mathematical model. These results demonstrated the effectiveness of the proposed jointed use of drug cyclodextrin complexation and chitosan-Ca-alginate microsphere vectorization, as well as the usefulness of the multivariate approach for the preparation of colon-targeted celecoxib microspheres with optimized properties. Copyright © 2011 Elsevier B.V. All rights reserved.
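
    A hedged sketch of the desirability-function idea used for the simultaneous optimization of both responses; the Derringer-Suich larger-is-better form below, and the numerical targets, are illustrative assumptions rather than the study's actual settings.

        import numpy as np

        def desirability_max(y, low, target, s=1.0):
            # Larger-is-better desirability: 0 at/below `low`, 1 at `target`.
            return float(np.clip((y - low) / (target - low), 0.0, 1.0)) ** s

        def overall_desirability(ds):
            # Geometric mean combines the individual desirabilities.
            return float(np.prod(ds)) ** (1.0 / len(ds))

        # Illustrative numbers only: entrapment efficiency (%) and
        # 4-h colonic release (%) scored against placeholder targets.
        D = overall_desirability([desirability_max(78, 50, 90),
                                  desirability_max(65, 40, 80)])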

  13. Flow force and torque on submerged bodies in lattice-Boltzmann methods via momentum exchange.

    PubMed

    Giovacchini, Juan P; Ortiz, Omar E

    2015-12-01

    We review the momentum exchange method to compute the flow force and torque on a submerged body in lattice-Boltzmann methods by presenting an alternative derivation. Our derivation does not depend on a particular implementation of the boundary conditions at the body surface, and it relies on general principles. After the introduction of the momentum exchange method in lattice-Boltzmann methods, some formulations were introduced to compute the fluid force on static and moving bodies. These formulations were introduced in a rather intuitive, ad hoc way. In our derivation, we recover the proposals most frequently used, in some cases with minor corrections, gaining some insight into the two most used formulations. At the end, we present some numerical tests to compare different approaches on a well-known benchmark test that support the correctness of the formulas derived.
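
    In its most widely used bounce-back variant (a common form given for orientation, not necessarily the corrected formulas derived in this paper), the momentum-exchange force on the body is assembled link by link:

    $$\boldsymbol{F} = \sum_{\boldsymbol{x}_f}\sum_{i\,\in\,\text{cut links}} \left[f_i(\boldsymbol{x}_f,t) + f_{\bar{i}}(\boldsymbol{x}_f,t)\right]\boldsymbol{c}_i,$$

    where the sums run over fluid nodes $\boldsymbol{x}_f$ adjacent to the surface and directions $i$ whose link crosses it, $\bar{i}$ denotes the direction opposite to $i$ (so $\boldsymbol{c}_{\bar{i}} = -\boldsymbol{c}_i$), and the torque follows by weighting each link contribution with its lever arm about the reference point. Whether pre- or post-collision populations enter, and at which time level, is precisely the kind of implementation detail the general derivation reviewed here is meant to settle.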

  14. The exponentiated Hencky energy: anisotropic extension and case studies

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; von Hoegen, Markus; Neff, Patrizio

    2017-10-01

In this paper we propose an anisotropic extension of the isotropic exponentiated Hencky energy, based on logarithmic strain invariants. Unlike other elastic formulations, the isotropic exponentiated Hencky elastic energy has been derived solely on differential geometric grounds, involving the geodesic distance of the deformation gradient $\boldsymbol{F}$ to the group of rotations. We formally extend this approach towards anisotropy by defining additional anisotropic logarithmic strain invariants with the help of suitable structural tensors and consider our findings for selected case studies.

  15. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  16. Diffuse-Interface Methods in Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Anderson, D. M.; McFadden, G. B.; Wheeler, A. A.

    1997-01-01

    The authors review the development of diffuse-interface models of hydrodynamics and their application to a wide variety of interfacial phenomena. The authors discuss the issues involved in formulating diffuse-interface models for single-component and binary fluids. Recent applications and computations using these models are discussed in each case. Further, the authors address issues including sharp-interface analyses that relate these models to the classical free-boundary problem, related computational approaches to describe interfacial phenomena, and related approaches describing fully-miscible fluids.

  17. Post Pareto optimization-A case

    NASA Astrophysics Data System (ADS)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.
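
    A minimal sketch of extracting the non-dominated (Pareto) set from a table of simulated performance vectors, assuming every criterion is posed as minimization; the function name and conventions are illustrative.

        import numpy as np

        def pareto_front(points):
            # Return the non-dominated rows of `points`; a row p is
            # dominated if some other row is <= p in every criterion
            # and strictly < in at least one.
            pts = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) &
                                   np.any(pts < p, axis=1))
                if not dominated:
                    keep.append(i)
            return pts[keep]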

  18. A stitch in time saves nine? A repeated cross-sectional case study on the implementation of the intersectoral community approach Youth At a Healthy Weight.

    PubMed

    van der Kleij, Rianne M J J; Crone, Mathilde R; Paulussen, Theo G W M; van de Gaar, Vivan M; Reis, Ria

    2015-10-08

The implementation of programs complex in design, such as the intersectoral community approach Youth At a Healthy Weight (JOGG), often deviates from their application as intended. There is limited knowledge of their implementation processes, making it difficult to formulate sound implementation strategies. For two years, we performed a repeated cross-sectional case study on the implementation of a JOGG fruit and water campaign targeting children aged 0-12. Semi-structured observations, interviews, field notes and professionals' log entries were used to evaluate the implementation process. Data were analyzed via a framework approach; within-case and cross-case displays were formulated and key determinants identified. Principles from Qualitative Comparative Analysis (QCA) were used to identify causal configurations of determinants per sector and implementation phase. Implementation completeness differed, but was highest in the educational and health care sectors, and higher for key than additional activities. Determinants and causal configurations of determinants were mostly sector- and implementation-phase specific. High campaign ownership and possibilities for campaign adaptation were most frequently mentioned as facilitators. A lack of reinforcement strategies, low priority for campaign use and incompatibility of own goals with campaign goals were most often indicated as barriers. We advise multiple 'stitches in time': tailoring implementation strategies to specific implementation phases and sectors using both the results from this study and a mutual adaptation strategy in which professionals are involved in the development of implementation strategies. The results of this study show that the implementation process of IACOs is complex and that sustainable implementation is difficult to achieve. Moreover, this study reveals that the implementation process is influenced by predominantly sector- and implementation-phase-specific (causal configurations of) determinants.

  19. Milk as a medium for pediatric formulations: Experimental findings and regulatory aspects.

    PubMed

    Soulele, Konstantina; Macheras, Panos

    2015-08-15

In the case of pediatric medicinal products, the selection of an appropriate and palatable liquid dosage form can make the difference between treatment success and failure. Since the recent adoption of Pediatric Regulations in the U.S. and E.U., there is a greater demand for age-appropriate medicines for children. Extended research on the use of milk in drug administration in the pediatric population has shown the multiple benefits of its use. Milk exhibits great solubilizing, gastroprotective and taste-masking properties, which are very important characteristics in the case of insoluble, irritating and bitter-tasting active compounds. Milk-based formulations rely on a novel, simple and user-friendly approach for the delivery of ionized and unionized lipophilic drugs. In parallel, they can provide critical nutritive elements and a wide range of biologically active peptides, very important elements especially for pediatric patients. Copyright © 2015. Published by Elsevier B.V.

  20. Thiolated chitosans: design and in vivo evaluation of a mucoadhesive buccal peptide drug delivery system.

    PubMed

    Langoth, Nina; Kahlbacher, Hermann; Schöffmann, Gudrun; Schmerold, Ivo; Schuh, Maximilian; Franz, Sonja; Kurka, Peter; Bernkop-Schnürch, Andreas

    2006-03-01

    Intravenous application of pituitary adenylate cyclase-activating polypeptide (PACAP) has been identified as a promising strategy for the treatment of type 2 diabetes. To generate a more applicable formulation, it was the aim of this study to develop a sustained buccal delivery system for this promising therapeutic peptide. 2-Iminothiolane was covalently bound to chitosan to improve the mucoadhesive and permeation-enhancing properties of chitosan used as drug carrier matrix. The resulting chitosan-4-thiobutylamidine conjugate was homogenized with the enzyme inhibitor and permeation mediator glutathione (gamma-Glu-Cys-Gly), Brij 35, and PACAP (formulation A). The mixture was lyophilized and compressed into flat-faced discs (18 mm in diameter). One formulation was additionally coated on one side with palm wax (formulation B). Tablets consisting of unmodified chitosan and PACAP (formulation C) or of unmodified chitosan, Brij 35, and PACAP (formulation D) served as controls. Bioavailability studies were performed in pigs by buccal administration of these test formulations. Blood samples were analyzed via an ELISA method. Formulations A and B led to an absolute bioavailability of 1%, whereas PACAP did not reach the systemic circulation when administered via formulations C and D. Moreover, in the case of formulations A and B, a continuously raised plasma level of the peptide drug being in the therapeutic range could be maintained over the whole period of application (6 h). Formulations A and B were removed by moderate force from the buccal mucosa after 6 h, whereas formulations C and D detached from the mucosa 4 h after application. The study reveals this novel mucoadhesive delivery system to be a promising approach for buccal delivery of PACAP.

  1. Case Formulation in Psychotherapy: Revitalizing Its Usefulness as a Clinical Tool

    ERIC Educational Resources Information Center

    Sim, Kang; Gwee, Kok Peng; Bateman, Anthony

    2005-01-01

    Objective: Case formulation has been recognized to be a useful conceptual and clinical tool in psychotherapy as diagnosis itself does not focus on the underlying causes of a patient's problems. Case formulation can fill the gap between diagnosis and treatment, with the potential to provide insights into the integrative, explanatory, prescriptive,…

  2. Run-up parameterization and beach vulnerability assessment on a barrier island: a downscaling approach

    NASA Astrophysics Data System (ADS)

    Medellín, G.; Brinkkemper, J. A.; Torres-Freyermuth, A.; Appendini, C. M.; Mendoza, E. T.; Salles, P.

    2016-01-01

    We present a downscaling approach for the study of wave-induced extreme water levels at a location on a barrier island in Yucatán (Mexico). Wave information from a 30-year wave hindcast is validated with in situ measurements at 8 m water depth. The maximum dissimilarity algorithm is employed for the selection of 600 representative cases, encompassing different combinations of wave characteristics and tidal level. The selected cases are propagated from 8 m water depth to the shore using the coupling of a third-generation wave model and a phase-resolving non-hydrostatic nonlinear shallow-water equation model. Extreme wave run-up, R2%, is estimated for the simulated cases and can be further employed to reconstruct the 30-year time series using an interpolation algorithm. Downscaling results show run-up saturation during more energetic wave conditions and modulation owing to tides. The latter suggests that the R2% can be parameterized using a hyperbolic-like formulation with dependency on both wave height and tidal level. The new parametric formulation is in agreement with the downscaling results (r2 = 0.78), allowing a fast calculation of wave-induced extreme water levels at this location. Finally, an assessment of beach vulnerability to wave-induced extreme water levels is conducted at the study area by employing the two approaches (reconstruction/parameterization) and a storm impact scale. The 30-year extreme water level hindcast allows the calculation of beach vulnerability as a function of return periods. It is shown that the downscaling-derived parameterization provides reasonable results as compared with the numerical approach. This methodology can be extended to other locations and can be further improved by incorporating the storm surge contributions to the extreme water level.
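
    A minimal sketch of the greedy maximum-dissimilarity selection step used to pick representative cases, assuming Euclidean distance on normalized sea-state parameters and a farthest-from-centroid seed; the actual algorithm details in the study may differ.

        import numpy as np

        def max_dissimilarity_select(data, n_cases):
            # Greedy maximum-dissimilarity selection: data is an (N, d)
            # array of normalized sea-state parameters. Seed with the point
            # farthest from the centroid, then repeatedly add the point
            # farthest from its nearest already-selected case.
            seed = int(np.argmax(np.linalg.norm(data - data.mean(0), axis=1)))
            idx = [seed]
            for _ in range(1, n_cases):
                d_min = np.min(np.linalg.norm(
                    data[:, None, :] - data[idx][None, :, :], axis=2), axis=1)
                idx.append(int(np.argmax(d_min)))
            return np.array(idx)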

  3. Design of optimal groundwater remediation systems under flexible environmental-standard constraints.

    PubMed

    Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing

    2015-01-01

In developing optimal groundwater remediation strategies, limited effort has been exerted to address the uncertainty in environmental quality standards. When such uncertainty is not considered, either overly optimistic or overly pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. This approach not only prevents the formulation of overly optimistic and overly pessimistic optimization strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. The approach may therefore be expected to be significantly more acceptable to decision makers than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a reduced effect on the optimal remediation strategy.
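
    One common way to encode such a flexible standard is a fuzzy constraint with a linear membership function; the following is a hedged sketch with assumed notation, not necessarily the study's exact formulation:

    $$\lambda(c) = \begin{cases} 1, & c \le c_s \\ \dfrac{c_{\max} - c}{c_{\max} - c_s}, & c_s < c < c_{\max} \\ 0, & c \ge c_{\max} \end{cases}$$

    Here $c$ is the peak contaminant concentration, $c_s$ the strict environmental standard, and $c_{\max}$ the maximum tolerable relaxation; maximizing $\lambda$ jointly with the cost objective yields a satisfaction level of the kind reported.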

  4. Advanced development of the boundary element method for elastic and inelastic thermal stress analysis. Ph.D. Thesis, 1987 Final Report

    NASA Technical Reports Server (NTRS)

    Henry, Donald P., Jr.

    1991-01-01

The focus of this dissertation is on advanced development of the boundary element method for elastic and inelastic thermal stress analysis. New formulations for the treatment of body forces and nonlinear effects are derived. These formulations, which are based on particular integral theory, eliminate the need for volume integrals or extra surface integrals to account for these effects. The formulations are presented for axisymmetric, two-, and three-dimensional analysis. Also in this dissertation, two-dimensional and axisymmetric formulations for elastic and inelastic, inhomogeneous stress analysis are introduced. The derivations account for inhomogeneities due to spatially dependent material parameters and thermally induced inhomogeneities. The nonlinear formulations of the present work are based on an incremental initial-stress approach. Two inelastic solution algorithms are implemented: an iterative approach and a variable-stiffness approach. The von Mises yield criterion with variable hardening and the associated flow rules are adopted in these algorithms. All formulations are implemented in a general purpose, multi-region computer code with the capability of local definition of boundary conditions. Quadratic, isoparametric shape functions are used to model the geometry and field variables of the boundary (and domain) of the problem. The multi-region implementation permits a body to be modeled in substructured parts, thus dramatically reducing the cost of analysis. Furthermore, it allows a body consisting of regions of different (homogeneous) material to be studied. To test the program, results obtained for simple test cases are checked against their analytic solutions. Thereafter, a range of problems of practical interest are analyzed. In addition to displacement and traction loads, problems with body forces due to self-weight, centrifugal, and thermal loads are considered.
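
    For reference, the von Mises criterion with hardening and its associated flow rule, as used in such incremental algorithms, are conventionally written (standard notation assumed here) as

    $$f(\boldsymbol{\sigma}, \bar{\varepsilon}^{p}) = \sqrt{3 J_2(\boldsymbol{s})} - \sigma_y(\bar{\varepsilon}^{p}) \le 0, \qquad J_2 = \tfrac{1}{2}\,\boldsymbol{s}:\boldsymbol{s}, \qquad \dot{\boldsymbol{\varepsilon}}^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}},$$

    where $\boldsymbol{s}$ is the deviatoric stress and $\sigma_y(\bar{\varepsilon}^{p})$ the hardening-dependent yield stress; the incremental initial-stress approach iterates on the stress correction implied by this condition at each load step.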

  5. Misleading inferences from discretization of empty spacetime: Snyder-noncommutativity case study

    NASA Astrophysics Data System (ADS)

    Amelino-Camelia, Giovanni; Astuti, Valerio

    2015-06-01

Alternative approaches to the study of the quantum gravity problem handle the role of spacetime very differently. Some are focusing on the analysis of one or another novel formulation of "empty spacetime", postponing to later stages the introduction of particles and fields, while other approaches assume that spacetime should only be an emergent entity. We here argue that recent progress in the covariant formulation of quantum mechanics suggests that empty spacetime is not physically meaningful. We illustrate our general thesis in the specific context of the noncommutative Snyder spacetime, which is also of some intrinsic interest, since hundreds of studies were devoted to its analysis. We show that empty Snyder spacetime, described in terms of a suitable kinematical Hilbert space, is discrete, but this is only a formal artifact: the discreteness leaves no trace on the observable properties of particles on the physical Hilbert space.

  6. An inverse model for a free-boundary problem with a contact line: Steady case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, Oleg; Protas, Bartosz

    2009-07-20

This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.

  7. Numerical study of time domain analogy applied to noise prediction from rotating blades

    NASA Astrophysics Data System (ADS)

    Fedala, D.; Kouidri, S.; Rey, R.

    2009-04-01

Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. The formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to the parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach for quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, $\rho_0 c_0^2$, is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case. Tam's test case has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.

  8. Human exploration mission studies

    NASA Technical Reports Server (NTRS)

    Cataldo, Robert L.

    1989-01-01

The Office of Exploration has established a process whereby all NASA field centers and other NASA Headquarters offices participate in the formulation and analysis of a wide range of mission strategies. These strategies were developed into specific scenarios or candidate case studies. The case studies provided a systematic approach to analyzing each mission element. First, each case study must address several major themes and rationales: national pride and international prestige, advancement of scientific knowledge, a catalyst for technology, economic benefits, space enterprise, international cooperation, and education and excellence. Second, the set of candidate case studies is formulated to encompass the technology requirement limits in the life sciences, launch capabilities, space transfer, automation and robotics in space operations, power, and propulsion. The first set of reference case studies identifies three major strategies: human expeditions, science outposts, and evolutionary expansion. During the past year, four case studies were examined to explore these strategies. The expeditionary missions include the Human Expedition to Phobos and Human Expedition to Mars case studies. The Lunar Observatory and Lunar Outpost to Early Mars Evolution case studies examined the latter two strategies. This set of case studies established the framework to perform detailed mission analysis and system engineering to define a host of concepts and requirements for various space systems and advanced technologies. The details of each mission are described and, specifically, the results affecting the advanced technologies required to accomplish each mission scenario are presented.

  9. Klein's Plan B in the Early Teaching of Analysis: Two Theoretical Cases of Exploring Mathematical Links

    ERIC Educational Resources Information Center

    Kondratieva, Margo; Winsløw, Carl

    2018-01-01

    We present a theoretical approach to the problem of the transition from Calculus to Analysis within the undergraduate mathematics curriculum. First, we formulate this problem using the anthropological theory of the didactic, in particular the notion of praxeology, along with a possible solution related to Klein's "Plan B": here,…

  10. Novel fluorinated surfactants tentatively identified in firefighters using liquid chromatography quadrupole time-of-flight tandem mass spectrometry and a case-control approach.

    PubMed

    Rotander, Anna; Kärrman, Anna; Toms, Leisa-Maree L; Kay, Margaret; Mueller, Jochen F; Gómez Ramos, María José

    2015-02-17

    Fluorinated surfactant-based aqueous film-forming foams (AFFFs) are made up of per- and polyfluorinated alkyl substances (PFAS) and are used to extinguish fires involving highly flammable liquids. The use of perfluorooctanesulfonic acid (PFOS) and other perfluoroalkyl acids (PFAAs) in some AFFF formulations has been linked to substantial environmental contamination. Recent studies have identified a large number of novel and infrequently reported fluorinated surfactants in different AFFF formulations. In this study, a strategy based on a case-control approach using quadrupole time-of-flight tandem mass spectrometry (QTOF-MS/MS) and advanced statistical methods has been used to extract and identify known and unknown PFAS in human serum associated with AFFF-exposed firefighters. Two target sulfonic acids [PFOS and perfluorohexanesulfonic acid (PFHxS)], three non-target acids [perfluoropentanesulfonic acid (PFPeS), perfluoroheptanesulfonic acid (PFHpS), and perfluorononanesulfonic acid (PFNS)], and four unknown sulfonic acids (Cl-PFOS, ketone-PFOS, ether-PFHxS, and Cl-PFHxS) were exclusively or significantly more frequently detected at higher levels in firefighters compared to controls. The application of this strategy has allowed for identification of previously unreported fluorinated chemicals in a timely and cost-efficient way.

  11. [The requirements of standard and conditions of interchangeability of medical articles].

    PubMed

    Men'shikov, V V; Lukicheva, T I

    2013-11-01

    The article considers the possibility of applying specific approaches to evaluating the interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating the requirements of standards addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under particular conditions of transportation and storage. The validity of the requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques, and reference laboratories. This approach is supported by data from the evaluation of testing systems for measuring levels of thyrotropic hormone, thyroid hormones, and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent kits to apply certified reference materials when developing the manufacture of kits for a large list of analytes.

  12. Surface integral analogy approaches for predicting noise from 3D high-lift low-noise wings

    NASA Astrophysics Data System (ADS)

    Yao, Hua-Dong; Davidson, Lars; Eriksson, Lars-Erik; Peng, Shia-Hui; Grundestam, Olof; Eliasson, Peter E.

    2014-06-01

    Three surface integral approaches of the acoustic analogies are studied to predict the noise from three conceptual configurations of three-dimensional high-lift low-noise wings. The approaches are the Kirchhoff method, the Ffowcs Williams and Hawkings (FW-H) method with a permeable integral surface, and the Curle method, which is known as a special case of the FW-H method. The first two approaches are used to compute the noise generated by the core flow region where the energetic structures exist. The last approach is adopted to predict the noise specifically from the pressure perturbation on the wall. A new way to construct the integral surface that encloses the core region is proposed for the first two methods. Considering the local properties of the flow around the complex object (the actual wing with high-lift devices), the integral surface is constructed based on the vorticity so as to follow the flow structures. The surface location is discussed for the Kirchhoff method and the FW-H method because a common surface is used for both. The noise from the core flow region is studied on the basis of the dependent integral quantities, which are indicated by the Kirchhoff formulation and by the FW-H formulation. The role of each wall component in the noise contribution is analyzed using the Curle formulation. Effects of the volume integral terms of Lighthill's stress tensors on the noise prediction are then evaluated by comparing the results of the Curle method with the other two methods.

  13. Application of the BCS biowaiver approach to assessing bioequivalence of orally disintegrating tablets with immediate release formulations.

    PubMed

    Ono, Asami; Sugano, Kiyohiko

    2014-11-20

    The aim of this study was to compare the dissolution profiles of oral disintegrating tablets (ODTs) and immediate release (IR) formulations in order to experimentally validate the regulatory biowaiver scheme (BWS) for biopharmaceutical classification system (BCS) class III drugs. We examined six drugs that show clinical bioequivalence between the ODTs and IR formulations: taltirelin, olopatadine, droxidopa, famotidine, fexofenadine, and hydrochlorothiazide. The dissolution profiles of these drugs were evaluated using the compendial paddle apparatus at pH 1.2 and 6.8. Taltirelin and olopatadine showed very rapid dissolution and met the dissolution criteria in the BWS, whereas droxidopa, famotidine, fexofenadine, and hydrochlorothiazide did not. Furthermore, in the case of famotidine, fexofenadine, and hydrochlorothiazide, the ODTs and IR formulations showed dissimilar dissolution profiles. The dose-to-solubility ratio (D:S) of these drugs was larger than that of the other drugs. The results of this study suggest that extension of the BCS-BWS to ODTs and IR formulations of BCS class III drugs is appropriate. Furthermore, for BCS class III drugs with relatively high D:S, clinical bioequivalence would be achievable even when two formulations showed different dissolution profiles in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.
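
    For reference, dissolution profile similarity in regulatory practice is commonly judged with the f2 similarity factor; the abstract does not name its criterion, so the formula is quoted here as the standard benchmark rather than as the authors' stated method:

      \[
        f_2 = 50 \, \log_{10}\!\left\{ \left[ 1 + \frac{1}{n} \sum_{t=1}^{n} \left( R_t - T_t \right)^2 \right]^{-1/2} \times 100 \right\}
      \]

    where R_t and T_t are the cumulative percentages dissolved of the reference and test formulations at sampling time t, n is the number of time points, and f2 >= 50 is conventionally read as "similar" profiles.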

  14. A Cultural Formulation Approach to Career Assessment and Career Counseling with Asian American Clients

    ERIC Educational Resources Information Center

    Leong, Frederick T. L.; Hardin, Erin E.; Gupta, Arpana

    2010-01-01

    The current article applies the cultural formulation approach to career assessment and career counseling specifically to Asian American clients. The approach is illustrated using the "Diagnostic and Statistical Manual of Mental Disorders" fourth edition ("DSM-IV") Outline for Cultural Formulations, which consists of the following five…

  15. PLAN-TA9-2443(U), Rev. B Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Geoffrey Wayne

    2016-03-16

    This document identifies the scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to workers and the public when processing this waste. Such parameters will be based on the worst-case kinetic parameters derived from APTAC measurements, as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.

  16. Geometry of modified release formulations during dissolution--influence on performance of dosage forms with diclofenac sodium.

    PubMed

    Dorożyński, Przemysław; Kulinowski, Piotr; Jamróz, Witold; Juszczyk, Ewelina

    2014-12-30

    The objectives of the work were: to present a magnetic resonance imaging (MRI) and fractal-analysis-based approach to comparing dosage forms of different composition and structure, and to assess the influence of compositional factors (i.e., matrix type, excipients, etc.) on the properties and performance of the dosage form during drug dissolution. The work presents the first attempt to compare MRI data obtained for tablet formulations of different composition and characterized by distinct differences in hydration and drug dissolution mechanisms. The main difficulty in such a case stems from differences in hydration behavior and tablet geometry, i.e., swelling, cracking, capping, etc. A novel approach to the characterization of matrix systems, i.e., quantification of changes in the geometrical complexity of the matrix shape during drug dissolution, has been developed. Using three commercial modified-release tablet formulations with diclofenac sodium, we present a method of parameterizing their geometrical complexity on the basis of fractal analysis. The main result of the study is the correlation between hydrating tablet behavior and drug dissolution: the increase in geometrical complexity, expressed as fractal dimension, relates to increased variability of drug dissolution results. Copyright © 2014 Elsevier B.V. All rights reserved.
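
    The fractal-dimension parameterization can be pictured with a box-counting estimate on a segmented image. The sketch below is a generic illustration under that assumption (the paper's exact procedure may differ), with a random binary array standing in for an MRI slice.

      # Hedged sketch: box-counting estimate of the fractal dimension of a
      # binary image, e.g. a segmented MRI cross-section of a hydrating tablet.
      import numpy as np

      def box_count(img, k):
          # count k-by-k boxes containing at least one foreground pixel
          s = img.shape[0] // k * k
          blocks = img[:s, :s].reshape(s // k, k, s // k, k)
          return np.count_nonzero(blocks.any(axis=(1, 3)))

      rng = np.random.default_rng(1)
      img = rng.random((256, 256)) > 0.7        # stand-in for a segmented slice
      sizes = np.array([2, 4, 8, 16, 32])
      counts = np.array([box_count(img, k) for k in sizes])
      # slope of log(count) versus log(1/size) estimates the fractal dimension
      dim = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
      print(f"estimated fractal dimension: {dim:.2f}")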

  17. Using exposure bands for rapid decision making in the ...

    EPA Pesticide Factsheets

    The ILSI Health and Environmental Sciences Institute (HESI) Risk Assessment in the 21st Century (RISK21) project was initiated to address and catalyze improvements in human health risk assessment. RISK21 is a problem formulation-based conceptual roadmap and risk matrix visualization tool, facilitating transparent evaluation of both hazard and exposure components. The RISK21 roadmap is exposure-driven, i.e., exposure is used as the second step (after problem formulation) to define and focus the assessment. This paper describes the exposure tiers of the RISK21 matrix and the approaches for adapting readily available information to more quickly inform exposure at a screening level. In particular, exposure look-up tables developed from available exposure tools (the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Targeted Risk Assessment (TRA) for worker exposure, the ECETOC TRA and the European Solvents Industry Group (ESIG) Generic Exposure Scenario (GES) Risk and Exposure Tool (EGRET) for consumer exposure, and USEtox for indirect exposure to humans via the environment) were tested in a hypothetical mosquito bed netting case study. A detailed WHO risk assessment for a similar mosquito net use served as a benchmark for the performance of the RISK21 approach. The case study demonstrated that the screening methodologies provided suitable, conservative exposure estimates for risk assessment. The results of this effort showed that the RISK21 approach is useful f
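
    The look-up-table idea can be sketched as a simple keyed table of conservative defaults. Everything below (bands, routes, values) is invented for illustration and does not reproduce RISK21, ECETOC TRA, EGRET, or USEtox numbers.

      # Hypothetical sketch of a screening-level exposure look-up in the spirit
      # of a tiered approach; all bands and values are invented placeholders.
      EXPOSURE_BANDS_MG_PER_KG_DAY = {
          ("consumer", "dermal"): 0.05,
          ("worker", "dermal"): 0.10,
          ("indirect", "oral"): 0.001,
      }

      def screening_exposure(population, route):
          """Return a conservative default exposure for a population/route pair."""
          try:
              return EXPOSURE_BANDS_MG_PER_KG_DAY[(population, route)]
          except KeyError:
              raise ValueError("no band defined; refine with a higher-tier tool")

      print(screening_exposure("consumer", "dermal"))   # screening-level estimate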

  18. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies in the literature have been concerned with the hollow sphere or spheroid (confocal) problems with an orthotropic Hill-type matrix. They were developed in the framework of the limit analysis kinematical approach using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of a non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code that is better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results with the analytical upper bounds of Benzerga and Besson (2001) and of Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).

  19. Geometric approach to nuclear pasta phases

    NASA Astrophysics Data System (ADS)

    Kubis, Sebastian; Wójcik, Włodzimierz

    2016-12-01

    Using variational methods and differential geometry in the framework of the liquid drop model, we formulate appropriate equilibrium equations for pasta phases with imposed periodicity. An extension of the Young-Laplace equation to the case of a charged fluid is obtained. The β equilibrium and the virial theorem are also generalized. All equations are presented in gauge-invariant form. For the first time, a stability analysis of pasta shapes is carried out. The proper stability condition, in the form of a generalized Jacobi equation, is derived. The presented formalism is tested on particular cases.

  20. Elliptic Relaxation of a Tensor Representation for the Redistribution Terms in a Reynolds Stress Turbulence Model

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Gatski, T. B.

    2002-01-01

    A formulation to include the effects of wall proximity in a second-moment closure model that utilizes a tensor representation for the redistribution terms in the Reynolds stress equations is presented. The wall-proximity effects are modeled through an elliptic relaxation process of the tensor expansion coefficients that properly accounts for both correlation length and time scales as the wall is approached. Direct numerical simulation data and Reynolds stress solutions using a full differential approach are compared for the case of fully developed channel flow.

  1. Minimally Invasive Dentistry: A Conservative Approach to Smile Makeover.

    PubMed

    Rosenberg, Jeffrey M

    2017-01-01

    The concept of minimally invasive dentistry is based on preserving tooth structure, especially enamel. A conservative method to treat discolored teeth that have diastemas is a freehand additive technique using composite resin. While selecting the correct shade of resin can be challenging, newer composite resin formulations are being developed with optical properties that enable the material to more effectively blend into the dentition. This case report describes the use of conservative approaches and materials to treat discolored, unevenly spaced teeth and restore harmony and balance to a patient's smile.

  2. A realist evaluation of the management of a well- performing regional hospital in Ghana

    PubMed Central

    2010-01-01

    Background Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. Methods We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. Results We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. Conclusion This case suggests that a well-balanced HRM bundle can stimulate organisational commitment of health workers. Such practices can be implemented even with narrow decision spaces. Realist evaluation provides an appropriate approach to increase the usefulness of case studies to managers and policymakers. PMID:20100330

  3. A realist evaluation of the management of a well-performing regional hospital in Ghana.

    PubMed

    Marchal, Bruno; Dedzo, McDamien; Kegels, Guy

    2010-01-25

    Realist evaluation offers an interesting approach to evaluation of interventions in complex settings, but has been little applied in health care. We report on a realist case study of a well performing hospital in Ghana and show how such a realist evaluation design can help to overcome the limited external validity of a traditional case study. We developed a realist evaluation framework for hypothesis formulation, data collection, data analysis and synthesis of the findings. Focusing on the role of human resource management in hospital performance, we formulated our hypothesis around the high commitment management concept. Mixed methods were used in data collection, including individual and group interviews, observations and document reviews. We found that the human resource management approach (the actual intervention) included induction of new staff, training and personal development, good communication and information sharing, and decentralised decision-making. We identified 3 additional practices: ensuring optimal physical working conditions, access to top managers and managers' involvement on the work floor. Teamwork, recognition and trust emerged as key elements of the organisational climate. Interviewees reported high levels of organisational commitment. The analysis unearthed perceived organisational support and reciprocity as underlying mechanisms that link the management practices with commitment. Methodologically, we found that realist evaluation can be fruitfully used to develop detailed case studies that analyse how management interventions work and in which conditions. Analysing the links between intervention, mechanism and outcome increases the explaining power, while identification of essential context elements improves the usefulness of the findings for decision-makers in other settings (external validity). We also identified a number of practical difficulties and priorities for further methodological development. This case suggests that a well-balanced HRM bundle can stimulate organisational commitment of health workers. Such practices can be implemented even with narrow decision spaces. Realist evaluation provides an appropriate approach to increase the usefulness of case studies to managers and policymakers.

  4. The role of psychiatrists in diagnosing conversion disorder: a mixed-methods analysis

    PubMed Central

    Kanaan, Richard A; Armstrong, David; Wessely, Simon

    2016-01-01

    Objective Since DSM-5 removed the requirement for a psychosocial formulation, neurologists have been able to make the diagnosis of conversion disorder without psychiatric input. We sought to examine whether neurologists and specialist psychiatrists concurred with this approach. Design We used mixed methods, first surveying all the neurologists in the UK and then interviewing the neuropsychiatrists in a large UK region on the role of psychiatrists in diagnosing conversion disorder. Results Of the surveyed neurologists, 76% did not think that psychiatrists were essential for the diagnosis and 71% thought that psychiatrists did not even consider conversion disorder when referred a case. The neuropsychiatrists who were interviewed held complex models of conversion disorder. They believed all cases could be explained psychosocially in theory, but the nature of the diagnostic encounter often prevented it in practice; all felt that psychosocial formulation could be very helpful and some felt that it was essential to diagnosis. Conclusion Although neurologists do not think psychiatrists are required for diagnosing conversion disorder, specialist psychiatrists disagree, at least in some cases. PMID:27274253

  5. The Handy Approach - Quick Integrated Person Centred Support Preparation.

    PubMed

    Risi, Liliana; Brown, Juliette; Sugarhood, Paul; Depala, Babalal; Olowosoyo, Abi; Tomu, Cynthia; Gonzalez, Lorena; Munoz-Cobo, Maloles; Adekunle, Oladimeji; Ogwal, Okumu; Evans, Eirlys; Shah, Amar

    2017-01-01

    Cost-effective care requires comprehensive person-centred formulation of solutions. The East London NHS Foundation Trust Community Health Services in Newham have piloted models of Integrated Care called 'Virtual Wards', which aim to keep people living with multiple long-term conditions well at home by minimising system complexity. These Virtual Wards comprise Interdisciplinary Teams (IDTs) with a General Practitioner (GP) seconded to provide leadership. Historically, assessments have been dominated by biomedical approaches, with disability emphasised over personal aspirations and ability. New professional skills are needed to organise information from diverse approaches into a common framework that can enable agreed goals of care to be delivered collaboratively. From June 2014 to January 2016 we aimed to improve the documentation of person-centred goals of care in 100% of our assessments. Change ideas were tested and team development addressed to improve documentation of aspirations for care for people being referred and, if achieved, then to test ideas to improve coproduction of care. Change ideas included: Enhanced Clinical Supervision (ECS) by a GP with additional expert skills; Flash Teaching (FT), defined as a five-minute weekly discussion on topics generated from the case-mix to develop a shared understanding of Integrated Care; Structured Formulation using a novel, quick, integrated assessment framework called the Handy Approach (HA), with the hand as a memory prompt to bring the personal together with the mental, social, and physical domains; and finally, a focus on 'Team Primacy' (mutual regard within the team) to embed behaviour change. 181 cases were tracked, and documentation of personal aspirations for care by case showed: ECS 0/21 (0%); FT 5/50 (10%); ECS/FT plus the HA 35/83 (42%); Team Primacy plus ECS/FT/HA 27/27 (100%). By January 2016, prompted by using the Handy Approach in a highly functional team, all members of the IDT consistently documented personal aspirations.

  6. Challenges and strategies to facilitate formulation development of pediatric drug products: Safety qualification of excipients.

    PubMed

    Buckley, Lorrene A; Salunke, Smita; Thompson, Karen; Baer, Gerri; Fegley, Darren; Turner, Mark A

    2018-02-05

    A public workshop entitled "Challenges and strategies to facilitate formulation development of pediatric drug products" focused on current status and gaps as well as recommendations for risk-based strategies to support the development of pediatric age-appropriate drug products. Representatives from industry, academia, and regulatory agencies discussed the issues within plenary, panel, and case-study breakout sessions. By enabling practical and meaningful discussion between scientists representing the diversity of involved disciplines (formulators, nonclinical scientists, clinicians, and regulators) and geographies (eg, US, EU), the Excipients Safety workshop session was successful in providing specific and key recommendations for defining paths forward. Leveraging orthogonal sources of data (eg, food industry, agro science), collaborative data sharing, and increased awareness of existing sources such as the Safety and Toxicity of Excipients for Paediatrics (STEP) database will be important to address the gap in excipients knowledge needed for risk assessment. The importance of defining risk-based approaches to safety assessments for excipients vital to pediatric formulations was emphasized, as was the need for meaningful stakeholder (eg, patient, caregiver) engagement. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
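
    In linear-algebra terms, the exact condensation of a substructure's interior unknowns onto its boundary is a Schur complement of the interior block. The dense-matrix sketch below is a generic illustration of that step (names assumed; it is not the paper's BEA implementation).

      # Generic sketch: condense interior (i) unknowns onto boundary (b)
      # unknowns via the Schur complement, then recover the interior solution.
      import numpy as np

      rng = np.random.default_rng(2)
      n_i, n_b = 6, 3
      A = rng.random((n_i + n_b, n_i + n_b)) + 5.0 * np.eye(n_i + n_b)
      f = rng.random(n_i + n_b)

      A_ii, A_ib = A[:n_i, :n_i], A[:n_i, n_i:]
      A_bi, A_bb = A[n_i:, :n_i], A[n_i:, n_i:]
      f_i, f_b = f[:n_i], f[n_i:]

      # condensed system: (A_bb - A_bi A_ii^{-1} A_ib) x_b = f_b - A_bi A_ii^{-1} f_i
      S = A_bb - A_bi @ np.linalg.solve(A_ii, A_ib)
      g = f_b - A_bi @ np.linalg.solve(A_ii, f_i)
      x_b = np.linalg.solve(S, g)
      x_i = np.linalg.solve(A_ii, f_i - A_ib @ x_b)   # interior recovery

      assert np.allclose(A @ np.concatenate([x_i, x_b]), f)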

  8. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection onto the non-conservative variables is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
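
    A minimal sketch of the two ingredients named above follows. The constants are invented, and the entrainment rate is written as buoyancy over squared vertical velocity, the dimensionally consistent reading (units of 1/m) of the buoyancy-to-velocity ratio; neither detail should be read as the scheme's calibrated form.

      # Hedged sketch: surface mass-flux closure proportional to w*, and a dry
      # updraft entrainment rate ~ buoyancy / (vertical velocity)^2.
      C_M, C_EPS = 0.065, 0.55          # illustrative constants, not calibrated

      def surface_mass_flux(rho0, w_star):
          """Updraft mass flux near the surface, M0 = C_M * rho0 * w* (sketch)."""
          return C_M * rho0 * w_star

      def dry_entrainment(buoyancy, w_up):
          """Fractional entrainment rate in 1/m (sketch), floored at zero."""
          return max(C_EPS * buoyancy / max(w_up ** 2, 1e-6), 0.0)

      print(surface_mass_flux(1.2, 1.5), dry_entrainment(0.02, 1.0))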

  9. A developmental, biopsychosocial model for the treatment of children with gender identity disorder.

    PubMed

    Zucker, Kenneth J; Wood, Hayley; Singh, Devita; Bradley, Susan J

    2012-01-01

    This article provides a summary of the therapeutic model and approach used in the Gender Identity Service at the Centre for Addiction and Mental Health in Toronto. The authors describe their assessment protocol, describe their current multifactorial case formulation model, including a strong emphasis on developmental factors, and provide clinical examples of how the model is used in the treatment.

  10. The Situations Bank, a Tool for Curriculum Design Focused on Daily Realities: The Case of the Reform in Niger

    ERIC Educational Resources Information Center

    Charland, Patrick; Cyr, Stéphane

    2013-01-01

    In the context of the curriculum reform in Niger, the authors describe the process of developing a situations bank which focusses on everyday life situations in Niger. The bank plays a central role in the formulation of new study programmes guided by the so-called "situated" approach. The authors also describe various issues that arose…

  11. The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates

    NASA Astrophysics Data System (ADS)

    D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier

    2018-01-01

    This paper extends the variable-kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF) to composite plates including piezoelectric plies. Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate) and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with the domain approximation expressed by an orthogonal basis spanned by Legendre polynomials. The proposed computational tool can represent electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment shows to what extent the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the result.

  12. A tutorial for developing a topical cream formulation based on the Quality by Design approach.

    PubMed

    Simões, Ana; Veiga, Francisco; Vitorino, Carla; Figueiras, Ana

    2018-06-20

    The pharmaceutical industry has entered a new era, as there is growing interest in increasing the quality standards of dosage forms through the implementation of more structured development and manufacturing approaches. For many decades, the manufacturing of drug products was controlled by a regulatory framework that guaranteed the quality of the final product through a fixed process and exhaustive testing. Limitations related to the Quality by Test (QbT) system have been widely acknowledged. The emergence of Quality by Design (QbD) as a systematic and risk-based approach introduced a new quality concept based on a good understanding of how raw materials and process parameters influence the final quality profile. Although the QbD system has been recognized as a revolutionary approach to product development and manufacturing, its full implementation in the pharmaceutical field is still limited. This is particularly evident in the case of semisolid complex formulation development. The present review aims at establishing a practical QbD framework to describe all stages comprised in the pharmaceutical development of a conventional cream in a comprehensible manner. Copyright © 2018. Published by Elsevier Inc.

  13. An adaptive reconstruction for Lagrangian, direct-forcing, immersed-boundary methods

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Vanella, Marcos; Balaras, Elias

    2017-12-01

    Lagrangian, direct-forcing, immersed boundary (IB) methods have been receiving increased attention due to their robustness in complex fluid-structure interaction problems. They are very sensitive, however, to the selection of the Lagrangian grid, which is typically used to define a solid or flexible body immersed in a fluid flow. In the present work we propose a cost-efficient solution to this problem without compromising accuracy. Central to our approach is the use of isoparametric mapping to bridge the relative resolution requirements of Lagrangian IB and Eulerian grids. With this approach, the density of surface Lagrangian markers, which is essential to properly enforce boundary conditions, is adapted dynamically based on the characteristics of the underlying Eulerian grid. The markers are not stored and the Lagrangian data structure is not modified. The proposed scheme is implemented in the framework of a moving least squares reconstruction formulation, but it can be adapted to any Lagrangian, direct-forcing formulation. The accuracy and robustness of the approach is demonstrated in a variety of test cases of increasing complexity.
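
    The adaptivity idea can be sketched for a single triangular surface element: markers are generated through the element's isoparametric (here, barycentric) map, with the subdivision level chosen from the local Eulerian grid spacing. The refinement rule and names below are assumptions, not the paper's algorithm.

      # Hedged sketch: populate a triangle with markers via its barycentric map,
      # refining until marker spacing roughly matches the Eulerian spacing h.
      import numpy as np

      def markers_on_triangle(verts, h):
          verts = np.asarray(verts, dtype=float)
          edge = max(np.linalg.norm(verts[i] - verts[j])
                     for i, j in ((0, 1), (1, 2), (2, 0)))
          n = max(int(np.ceil(edge / h)), 1)        # subdivisions per edge
          pts = []
          for i in range(n + 1):                    # barycentric lattice
              for j in range(n + 1 - i):
                  l1, l2 = i / n, j / n
                  pts.append(l1 * verts[0] + l2 * verts[1]
                             + (1.0 - l1 - l2) * verts[2])
          return np.array(pts)

      tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
      print(len(markers_on_triangle(tri, 0.1)))     # finer grid -> more markers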

  14. A hybrid formulation for the numerical simulation of condensed phase explosives

    NASA Astrophysics Data System (ADS)

    Michael, L.; Nikiforakis, N.

    2016-07-01

    In this article we present a new formulation and an associated numerical algorithm, for the simulation of combustion and transition to detonation of condensed-phase commercial- and military-grade explosives, which are confined by (or in general interacting with one or more) compliant inert materials. Examples include confined rate-stick problems and interaction of shock waves with gas cavities or solid particles in explosives. This formulation is based on an augmented Euler approach to account for the mixture of the explosive and its products, and a multi-phase diffuse interface approach to solve for the immiscible interaction between the mixture and the inert materials, so it is in essence a hybrid (augmented Euler and multi-phase) model. As such, it has many of the desirable features of the two approaches and, critically for our applications of interest, it provides the accurate recovery of temperature fields across all components. Moreover, it conveys a lot more physical information than augmented Euler, without the complexity of full multi-phase Baer-Nunziato-type models or the lack of robustness of augmented Euler models in the presence of more than two components. The model can sustain large density differences across material interfaces without the presence of spurious oscillations in velocity and pressure, and it can accommodate realistic equations of state and arbitrary (pressure- or temperature-based) reaction-rate laws. Under certain conditions, we show that the formulation reduces to well-known augmented Euler or multi-phase models, which have been extensively validated and used in practice. The full hybrid model and its reduced forms are validated against problems with exact (or independently-verified numerical) solutions and evaluated for robustness for rate-stick and shock-induced cavity collapse case-studies.

  15. Towards a Neurodevelopmental Model of Clinical Case Formulation

    PubMed Central

    Solomon, Marjorie; Hessl, David; Chiu, Sufen; Olsen, Emily; Hendren, Robert

    2009-01-01

    Rapid advances in molecular genetics and neuroimaging over the last 10-20 years have been a catalyst for research in neurobiology, developmental psychopathology, and translational neuroscience. Methods of study in psychiatry, previously described as “slow maturing,” now are becoming sufficiently sophisticated to more effectively investigate the biology of higher mental processes. Despite these technological advances, the recognition that psychiatric disorders are disorders of neurodevelopment, and the importance of case formulation to clinical practice, a neurodevelopmental model of case formulation has not yet been articulated. The goals of this manuscript, which is organized as a clinical case conference, are to begin to articulate a neurodevelopmental model of case formulation, to illustrate its value, and finally to explore how clinical psychiatric practice might evolve in the future if this model were employed. PMID:19248925

  16. ‘Patient-Centered Care’ for Complex Patients with Type 2 Diabetes Mellitus—Analysis of Two Cases

    PubMed Central

    Hackel, Jennifer M.

    2013-01-01

    Purpose This paper serves to apply and compare aspects of person centered care and recent consensus guidelines to two cases of older adults with poorly controlled diabetes in the context of relatively similar multimorbidity. Methods After review of the literature regarding the shift from guidelines promoting tight control in diabetes management to individualized person centered care, as well as newer treatment approaches emerging in diabetes care, the newer guidelines and potential treatment approaches are applied to the cases. Results By delving into the clinical, behavioral, social, cultural and economic aspects of the two cases in applying the new guidelines, divergent care goals are reached for the cases. Conclusions Primary care practitioners must be vigilant in providing individualized diabetes treatment where multiple chronic illnesses increase the complexity of care. While two older adults with multimorbidity may appear at first to have similar care goals, their unique preferences and support systems, as well as their risks and benefits from tight control, must be carefully weighed in formulating the best approach. Newer pharmaceutical agents hold promise for improving the possibilities for better glycemic control with less self-care burden and risk of hypoglycemia. PMID:24250240

  17. Investigating an approach to the alliance based on interpersonal defense theory.

    PubMed

    Westerman, Michael A; Muran, J Christopher

    2017-09-01

    Notwithstanding consistent findings of significant relationships between the alliance and outcome, questions remain to be answered about the relatively small magnitude of those correlations, the mechanisms underlying the association, and how to conceptualize the alliance construct. We conducted a preliminary study of an approach to the alliance based on interpersonal defense theory, which is an interpersonal reconceptualization of defense processes, to investigate the promise of this alternative approach as a way to address the outstanding issues. We employed qualitative, theory-building case study methodology, closely examining alliance processes at four time points in the treatment of a case in terms of a case formulation based on interpersonal defense theory. The results suggested that our approach made it possible to recognize key processes in the alliance and that it helps explain how the alliance influences outcome. Our analyses also provided a rich set of concrete illustrations of the alliance phenomena identified by the theory. The findings suggest that an approach to the alliance based on interpersonal defense theory holds promise. However, although the qualitative method we employed has advantages, it also has limitations. We offer suggestions about how future qualitative and quantitative investigations could build on this study.

  18. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  19. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  20. A note on the uniqueness of 2D elastostatic problems formulated by different types of potential functions

    NASA Astrophysics Data System (ADS)

    Guerrero, José Luis Morales; Vidal, Manuel Cánovas; Nicolás, José Andrés Moreno; López, Francisco Alhama

    2018-05-01

    New additional conditions required for the uniqueness of 2D elastostatic problems formulated in terms of potential functions for the derived Papkovich-Neuber representations are studied. Two cases are considered, each formulated by the scalar potential function plus one of the non-zero rectangular components of the vector potential function. For these formulations, in addition to the original (physical) boundary conditions, two new additional conditions are required. In addition, for the complete Papkovich-Neuber formulation, expressed by the scalar potential plus two components of the vector potential, the additional conditions established previously for the three-dimensional case in a z-convex domain can be applied. To show the usefulness of these new conditions in a numerical scheme, two applications are numerically solved by the network method for the three cases of potential formulations.
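
    For context, the Papkovich-Neuber representation expresses the elastostatic displacement field through a harmonic scalar potential and a harmonic vector potential; the standard form is quoted here for reference:

      \[
        2\mu\,\mathbf{u} = 4(1-\nu)\,\boldsymbol{\Psi} - \nabla\!\left(\phi + \mathbf{r}\cdot\boldsymbol{\Psi}\right),
        \qquad \nabla^{2}\phi = 0, \qquad \nabla^{2}\boldsymbol{\Psi} = \mathbf{0},
      \]

    where \mu is the shear modulus, \nu Poisson's ratio, and \mathbf{r} the position vector. The 2D formulations above retain \phi plus a single non-zero rectangular component of \boldsymbol{\Psi}, for which the abstract notes two additional conditions are required.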

  1. How Health in All Policies are developed and implemented in a developing country? A case study of a HiAP initiative in Iran.

    PubMed

    Khayatzadeh-Mahani, Akram; Sedoghi, Zeynab; Mehrolhassani, Mohammad Hossein; Yazdi-Feyzabadi, Vahid

    2016-12-01

    Population health is influenced by many factors beyond the control of the health system, which should be addressed by other sectors through inter-sectoral collaboration (ISC). Countries have adopted diverse initiatives to operationalize ISC for health, such as the establishment of Councils of Health and Food Security (CHFSs) and the development of provincial Health Master Plans (HMPs) in Iran. The literature, however, provides meager information on how these initiatives have moved onto the top policy agenda, how and by whom they have been formulated, and what factors enable or inhibit their implementation. In addressing these knowledge gaps, we employed a qualitative case study approach incorporating mixed methods: in-depth interviews and a textual analysis of policy documents. Iran founded the Supreme Council of Health and Food Security (SCHFS) at the national level in 2006, followed by provincial and district CHFSs, to ensure political commitment to ISC for health and Health in All Policies (HiAP). In 2009, the SCHFS mandated all provincial CHFSs across the country to develop provincial HMPs to operationalize the HiAP approach, and Kerman was among the first provinces to respond to this call. We selected the Kerman province HMP as a case study to investigate the research questions raised in this study. The study revealed two types of leverage that played a crucial role in the agenda setting, formulation, and implementation of the HMP: politics (political commitment) and policy entrepreneurs. The multiple streams model was found to be informative for thinking about the different stages of a policy cycle, including agenda setting, policy formulation, and policy implementation. It was also found to be a useful framework for analyzing HiAP initiatives, as these policies do not smoothly and readily reach the policy agenda. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Matrix cracking in laminated composites under monotonic and cyclic loadings

    NASA Technical Reports Server (NTRS)

    Allen, David H.; Lee, Jong-Won

    1991-01-01

    An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive relation is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation of a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, a number of experimental data sets and well-known models are compared with the present study to validate the practical applicability of the ISV approach.

  3. Case formulation and the therapeutic relationship: the role of therapist self-reflection and self-revelation.

    PubMed

    Tufekcioglu, Sumru; Muran, J Christopher

    2015-05-01

    This article examines the role of the therapist's self-reflection and self-revelation in case formulation. We believe that a collaboratively constructed case formulation must always be considered in the context of an evolving therapeutic relationship. Further, self-reflection and self-revelation on the part of the therapist are critical for a more elaborate and nuanced case formulation and for understanding the patient. This highlights the importance of attunement to the here and now and the evolving therapeutic relationship. From this attunement, the therapist's self-reflection and self-revelation can emerge further, which can lead to the patient's personal growth and increased self-other awareness. To illustrate our point, we present an integrative, relational model in the case of a patient who has been in treatment. © 2015 Wiley Periodicals, Inc.

  4. Micromechanics of transformation fields in ageing linear viscoelastic composites: effects of phase dissolution or precipitation

    NASA Astrophysics Data System (ADS)

    Honorio, Tulio

    2017-11-01

    Transformation fields, in an affine formulation characterizing mechanical behavior, describe a variety of physical phenomena regardless of their origin. Different composites, notably geomaterials, present a viscoelastic behavior which is, in some cases of industrial interest, ageing, i.e., it evolves with time independently of the time of loading. Here, a general formulation of the micromechanics of prestressed or prestrained composites in Ageing Linear Viscoelasticity (ALV) is presented. Emphasis is put on the estimation of effective transformation fields in ALV. The result generalizes Ageing Linear Thermo- and Poro-Viscoelasticity, and it can be used in approaches coping with a phase transformation. Additionally, the results are extended to the case of locally transforming materials due to non-coupled dissolution and/or precipitation of a given (elastic or viscoelastic) phase. Estimations for locally transforming composites can be made with respect to different morphologies. As an application, estimations of the coefficient of thermal expansion of a hydrating alite paste are presented.

  5. The Quality of Psychotherapy Case Formulations: A Comparison of Expert, Experienced, and Novice Cognitive-Behavioral and Psychodynamic Therapists

    ERIC Educational Resources Information Center

    Eells, Tracy D.; Lombart, Kenneth G.; Kendjelic, Edward M.; Turner, L. Carolyn; Lucas, Cynthia P.

    2005-01-01

    Sixty-five expert, experienced, and novice cognitive-behavioral and psychodynamic psychotherapists provided "think aloud" case formulations in response to 6 standardized patient vignettes varying in disorder and prototypicality. The 390 formulations were reliably transcribed, segmented into idea units, content coded, and rated on multiple…

  6. Dynamic coupling of subsurface and seepage flows solved within a regularized partition formulation

    NASA Astrophysics Data System (ADS)

    Marçais, J.; de Dreuzy, J.-R.; Erhel, J.

    2017-11-01

    Hillslope response to precipitation is characterized by sharp transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Locally, the transition between these two regimes is triggered by soil saturation. Here we develop an integrative approach to simultaneously solve the subsurface flow, locate the potential fully saturated areas, and deduce the generated saturation excess overland flow. This approach combines the different dynamics and transitions in a single partition formulation using discontinuous functions. We propose to regularize the system of partial differential equations and to use classic spatial and temporal discretization schemes. We illustrate our methodology on the 1D hillslope storage Boussinesq equations (Troch et al., 2003). We first validate the numerical scheme on previous numerical experiments without saturation excess overland flow. Then we apply our model to a test case with dynamic transitions from purely subsurface flow dynamics to simultaneous surface and subsurface flows. Our results show that the discretization respects mass balance both locally and globally and converges when the mesh or time step is refined. Moreover, the regularization parameter can be taken small enough to ensure accuracy without suffering from numerical artefacts. Applied to several hundred realistic hillslope cases taken from the western side of France (Brittany), the developed method appears to be robust and efficient.
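
    The regularization step can be pictured by smoothing the discontinuous saturation switch over a small width. The ramp below is an assumed shape for illustration; the paper's regularization may differ.

      # Hedged sketch: a smooth stand-in for the discontinuous switch between
      # subsurface-only flow and saturation-excess overland flow.
      import numpy as np

      def saturation_switch(storage, capacity, eps=1e-3):
          """~0 while unsaturated, ~1 once storage reaches local capacity."""
          x = (storage - capacity) / (eps * capacity)
          return 0.5 * (1.0 + np.tanh(x))

      def overland_flux(recharge, storage, capacity):
          # recharge rejected as overland flow where the soil column is full
          return saturation_switch(storage, capacity) * recharge

      print(overland_flux(2e-8, 0.99, 1.0), overland_flux(2e-8, 1.0, 1.0))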

  7. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
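
    The distributed solve can be sketched with a two-agent ADMM consensus loop in which each operator minimizes a private cost while agreeing on a shared coupling variable (say, total pump power). The quadratic costs below are toy stand-ins, not the OWPF objectives.

      # Hedged sketch: ADMM consensus between a "water" and a "power" agent
      # with toy quadratic costs a*(x - b)^2 and a shared variable z.
      def argmin_quadratic(a, b, z, u, rho):
          # closed-form minimizer of a*(x-b)^2 + (rho/2)*(x - z + u)^2
          return (2.0 * a * b + rho * (z - u)) / (2.0 * a + rho)

      rho, z = 1.0, 0.0
      u_w = u_p = 0.0
      for _ in range(100):
          x_w = argmin_quadratic(1.0, 3.0, z, u_w, rho)   # water operator
          x_p = argmin_quadratic(2.0, 1.0, z, u_p, rho)   # power operator
          z = 0.5 * (x_w + u_w + x_p + u_p)               # consensus average
          u_w += x_w - z                                   # dual updates
          u_p += x_p - z

      print(round(z, 3))   # converges to the weighted compromise 5/3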

  8. The role of control groups in mutagenicity studies: matching biological and statistical relevance.

    PubMed

    Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane

    2003-06-01

    The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.
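
    The direct safety approach can be sketched as a one-sided test in which hazard is the null hypothesis: H0: mu_t - mu_neg >= delta (hazard) against H1: mu_t - mu_neg < delta (safety), with delta a fraction of the negative-control mean. The data and threshold below are simulated placeholders.

      # Hedged sketch: demonstrate safety at level alpha by rejecting the
      # hazard null H0: mu_t - mu_neg >= delta in favor of H1: ... < delta.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      neg = rng.normal(10.0, 2.0, 20)       # negative control (e.g. mutant counts)
      trt = rng.normal(10.5, 2.0, 20)       # treatment group
      delta = 0.5 * neg.mean()              # illustrative safety threshold

      # shift the treatment sample by delta, then test one-sided "less"
      t_stat, p_safe = stats.ttest_ind(trt - delta, neg, alternative="less")
      print(f"one-sided p = {p_safe:.4f}; safe at alpha=0.05: {p_safe < 0.05}")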

  9. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  10. Hyper-resolution hydrological modeling: Completeness of Formulation, Appropriateness of Discretization, and Physical Limits of Predictability

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.

    2017-12-01

    High performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.

  11. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    NASA Astrophysics Data System (ADS)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work aims to propose a new technique that considerably helps enhance the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal" or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations with the associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center-of-mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
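
    Since the formulation yields a model linear in the unknown mass and inertia terms, a standard recursive least-squares (RLS) loop of the kind mentioned above can be sketched as follows; the regressors and parameter values here are random stand-ins, not the ASR dynamics.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=1.0):
            """One RLS step for a measurement y = phi @ theta + noise."""
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)       # gain
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam               # covariance update
            return theta, P

        rng = np.random.default_rng(1)
        true_theta = np.array([50.0, 2.0, 1.5])         # e.g., mass and inertia terms
        theta, P = np.zeros(3), 1e3 * np.eye(3)
        for _ in range(200):
            phi = rng.normal(size=3)                    # regressor from the dynamics
            y = phi @ true_theta + 0.01 * rng.normal()  # measured generalized force
            theta, P = rls_update(theta, P, phi, y)
        print(theta)                                    # converges toward true_theta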

  12. Congenital dacryocystocele.

    PubMed

    Harris, G J; DiClementi, D

    1982-11-01

    Four cases of congenital lacrimal sac distention were managed in an initially conservative manner to further elucidate the natural history of the condition and to formulate a more systematic approach to its treatment. In three cases, the abnormality resolved without nasolacrimal duct probing, with no adverse sequelae. In one case, dacryocystitis caused by Serratia marcescens, corneal astigmatism, and severe canthal distortion prompted surgical intervention. The management of individual cases of dacryocystocele should be influenced by the presence of inflammation, the virulence of any infecting organisms, the induction of astigmatism and anisometropia, and the degree of canthal distortion. Dacryocystocele appears to be a more specific term for lacrimal sac distention than either amniotocele or mucocele, and is not restricted to only one source of its fluid contents.

  13. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
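
    The control-allocation step can be illustrated as a bound-constrained least-squares problem: find effector deflections u that reproduce a commanded moment d under position limits. The sketch below uses SciPy's lsq_linear with a random effectiveness matrix as a stand-in for the X-33 data, and a generic solver in place of the paper's fixed-point iteration.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(2)
        B = rng.normal(size=(3, 5))          # hypothetical effectiveness matrix
        d = np.array([0.5, -0.2, 0.1])       # commanded roll/pitch/yaw moments

        limit = np.radians(20.0)             # illustrative actuator position limits
        res = lsq_linear(B, d, bounds=(-limit * np.ones(5), limit * np.ones(5)))
        print(res.x, np.linalg.norm(B @ res.x - d))   # deflections and residual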

  14. Reconfigurable Flight Control Designs With Application to the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zhenglu

    1999-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.

  15. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function as a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms the state-of-the-art alignment methods both in terms of biological and topological quality of the alignments.
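
    Tensor eigenvalue problems of the kind the surrogate objective leads to can be attacked with a shifted higher-order power iteration; the toy below applies such an iteration to a small random symmetric 3-mode tensor (a generic illustration, not the TAME implementation).

        import numpy as np

        rng = np.random.default_rng(3)
        n = 6
        T = rng.random((n, n, n))
        # Symmetrize over all index permutations.
        T = sum(T.transpose(p) for p in
                [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6

        x = np.ones(n) / np.sqrt(n)
        shift = 1.0                                  # positive shift aids convergence
        for _ in range(100):
            y = np.einsum('ijk,j,k->i', T, x, x) + shift * x   # apply T x^2, shift
            x = y / np.linalg.norm(y)                          # renormalize

        lam = np.einsum('ijk,i,j,k->', T, x, x, x)   # tensor "eigenvalue"
        print(lam)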

  16. Pathways of paracetamol absorption from layered excipient suppositories: artificial intelligence approach.

    PubMed

    Belic, A; Grabnar, I; Karba, R; Mrhar, A

    2003-01-01

    When studying paracetamol availability after rectal administration, differences between slower- and faster-release suppositories were discovered. A modelling and simulation approach based on compartment models was used to explore these differences. A study of paracetamol release from layered excipient suppositories shows that many different mechanisms are involved in the drug's pharmacokinetics. There are also a large number of articles, each dealing with only one or a few of these mechanisms. However, little information is available on how the mechanisms interact in the organism and thus govern the pharmacokinetics of the drug, which means that a systemic view is missing from the expert knowledge. In the case of paracetamol rectal availability, the use of a partially fuzzified model allowed a systemic combination of all the mechanisms described in the literature with the measured data. In spite of non-identifiability, the model revealed patterns that explain the differences in bioavailability between the two suppository formulations. Results of modelling and simulation show that "in vivo" there is practically no difference in cumulative release profiles between the two formulations. However, due to the higher content of mono-diglycerides in the slower-release formulation, the extent of absorption is augmented both by the absorption-enhancing effect of mono-diglycerides and by the liver bypass mechanism via diminished viscosity.

  17. Comparison of the cohesion-adhesion balance approach to colloidal probe atomic force microscopy and the measurement of Hansen partial solubility parameters by inverse gas chromatography for the prediction of dry powder inhalation performance.

    PubMed

    Jones, Matthew D; Buckton, Graham

    2016-07-25

    The abilities of the cohesive-adhesive balance approach to atomic force microscopy (AFM) and the measurement of Hansen partial solubility parameters by inverse gas chromatography (IGC) to predict the performance of carrier-based dry powder inhaler (DPI) formulations were compared. Five model drugs (beclometasone dipropionate, budesonide, salbutamol sulphate, terbutaline sulphate and triamcinolone acetonide) and three model carriers (erythritol, α-lactose monohydrate and d-mannitol) were chosen, giving fifteen drug-carrier combinations. Comparison of the AFM and IGC interparticulate adhesion data suggested that they did not produce equivalent results. Comparison of the AFM data with the in vitro fine particle delivery of appropriate DPI formulations normalised to account for particle size differences revealed a previously observed pattern for the AFM measurements, with a slightly cohesive AFM CAB ratio being associated with the highest fine particle fraction. However, no consistent relationship between formulation performance and the IGC data was observed. The results as a whole highlight the complexity of the many interacting variables that can affect the behaviour of DPIs and suggest that the prediction of their performance from a single measurement is unlikely to be successful in every case. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Drug utilization review of potassium chloride injection formulations available in a private hospital in kuching, sarawak, malaysia.

    PubMed

    Melissa, Mohammad Hirman; Azmi, Sarriff

    2013-07-01

    The concentrated potassium chloride injection is a high-alert medication and replacing it with a pre-mixed formulation can reduce the risks associated with its use. The aim of this study was to determine the clinical characteristics of patients receiving different potassium chloride formulations available at a private institution. The study also assessed the effectiveness and safety of pre-mixed formulations in the correction of hypokalaemia. This was a retrospective observational study consisting of 296 cases using concentrated and pre-mixed potassium chloride injections in 2011 in a private hospital in Kuching, Sarawak, Malaysia. There were 135 (45.6%) cases that received concentrated potassium chloride, and 161 (54.4%) cases that received pre-mixed formulations. The patients' clinical characteristics that were significantly related to the utilization of the different formulations were diagnosis (P < 0.001), potassium serum blood concentration (P < 0.05), and fluid overload risk (P < 0.05). The difference observed for the cases that achieved or maintained normokalaemia was not statistically significant (P = 0.172). Infusion-related adverse effects were seen more frequently with pre-mixed than with concentrated formulations (6.8% versus 2.2%, P < 0.05). This study provides insight into the utilization of potassium chloride injections at this specific institution. The results support current recommendations to use pre-mixed formulations whenever possible.

  19. Driving for successful change processes in healthcare by putting staff at the wheel.

    PubMed

    Erlingsdottir, Gudbjörg; Ersson, Anders; Borell, Jonas; Rydenfält, Christofer

    2018-03-19

    Purpose: The purpose of this paper is to describe five salient factors that emerge in two successful change processes in healthcare. Organizational changes in healthcare are often characterized by problems and solutions that have been formulated by higher levels of management. This top-down management approach has not been well received by the professional community. As a result, improvement processes are frequently abandoned, resulting in disrupted and dysfunctional organizations. This paper presents two successful change processes where managerial leadership was used to coach the change processes by distributing mandates and resources. After being managerially initiated, both processes were driven by local agency, decisions, planning and engagement. Design/methodology/approach: The data in the paper derive from two qualitative case studies. Data were collected through in-depth interviews, observations and document studies. The cases are presented as process descriptions covering the different phases of the change processes. The focus in the studies is on the roles and interactions of the actors involved, the type of leadership and the distribution of agency. Findings: Five factors emerged as paramount to the successful change processes in the two cases: local ownership of problems; a coached process where management initiates the change process and the problem recognition, and then lets the staff define the problems, formulate solutions and drive necessary changes; distributed leadership directed at enabling and supporting the staff's intentions and long-term self-leadership; mutually formulated norms and values that serve as a unifying force for the staff; and generous time allocation and planning, which allows the process to take time, and creates room for reevaluation. The authors also noted that in both cases, reorganization into multi-professional teams lent stability and endurance to the completed changes. Originality/value: The research shows how management can initiate and support successful change processes that are staff driven and characterized by local agency, decisions, planning and engagement. Empirical descriptions of successful change processes are rare, which is why the description of such processes in this research increases the value of the paper.

  20. Accelerating Vaccine Formulation Development Using Design of Experiment Stability Studies.

    PubMed

    Ahl, Patrick L; Mensch, Christopher; Hu, Binghua; Pixley, Heidi; Zhang, Lan; Dieter, Lance; Russell, Ryann; Smith, William J; Przysiecki, Craig; Kosinski, Mike; Blue, Jeffrey T

    2016-10-01

    Vaccine drug product thermal stability often depends on formulation input factors and how they interact. Scientific understanding and professional experience typically allows vaccine formulators to accurately predict the thermal stability output based on formulation input factors such as pH, ionic strength, and excipients. Thermal stability predictions, however, are not enough for regulators. Stability claims must be supported by experimental data. The Quality by Design approach of Design of Experiment (DoE) is well suited to describe formulation outputs such as thermal stability in terms of formulation input factors. A DoE approach, particularly at elevated temperatures that induce accelerated degradation, can provide empirical understanding of how vaccine formulation input factors and interactions affect vaccine stability output performance. This is possible even when clear scientific understanding of a particular formulation's stability mechanisms is lacking. A DoE approach was used in an accelerated 37 °C stability study of an aluminum adjuvant Neisseria meningitidis serogroup B vaccine. Formulation stability differences were identified after only 15 days into the study. We believe this study demonstrates the power of combining DoE methodology with accelerated stress stability studies to accelerate and improve vaccine formulation development programs particularly during the preformulation stage. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
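
    A minimal sketch of the kind of two-level DoE analysis described above, using simulated (not the paper's) data: a 2^3 factorial in coded formulation factors, fit with main effects and two-factor interactions by least squares.

        import numpy as np
        from itertools import product

        # Coded (-1/+1) levels for three hypothetical factors: pH, ionic strength,
        # and excipient level.
        X = np.array(list(product([-1, 1], repeat=3)), dtype=float)
        rng = np.random.default_rng(4)
        # Simulated stability response with a pH x excipient interaction.
        y = (80 - 3*X[:, 0] + 1.5*X[:, 1] + 4*X[:, 2]
             - 2*X[:, 0]*X[:, 2] + rng.normal(0, 0.5, len(X)))

        # Model matrix: intercept, main effects, all two-factor interactions.
        M = np.column_stack([np.ones(len(X)), X,
                             X[:, 0]*X[:, 1], X[:, 0]*X[:, 2], X[:, 1]*X[:, 2]])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        print(np.round(coef, 2))   # recovers the simulated effects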

  1. Bézier B̄ projection

    NASA Astrophysics Data System (ADS)

    Miao, Di; Borden, Michael J.; Scott, Michael A.; Thomas, Derek C.

    2018-06-01

    In this paper we demonstrate the use of Bézier projection to alleviate locking phenomena in structural mechanics applications of isogeometric analysis. Interpreting the well-known B̄ projection in two different ways, we develop two formulations for locking problems in beams and nearly incompressible elastic solids. One formulation leads to a sparse symmetric system and the other leads to a sparse non-symmetric system. To demonstrate the utility of Bézier projection for both geometric and material locking phenomena we focus on transverse shear locking in Timoshenko beams and volumetric locking in nearly incompressible linear elasticity, although the approach can be applied generally to other types of locking phenomena as well. Bézier projection is a local projection technique with optimal approximation properties, which in many cases produces solutions that are comparable to global L2 projection. In the context of B̄ methods, the use of Bézier projection produces sparse stiffness matrices with only a slight increase in bandwidth when compared to standard displacement-based methods. Of particular importance is that the approach is applicable to any spline representation that can be written in Bézier form, such as NURBS, T-splines, LR-splines, etc. We discuss in detail how to integrate this approach into an existing finite element framework with minimal disruption through the use of Bézier extraction operators and a newly introduced dual basis for the Bézier projection operator. We then demonstrate the behavior of the two proposed formulations through several challenging benchmark problems.

  2. On Displacement Height, from Classical to Practical Formulation: Stress, Turbulent Transport and Vorticity Considerations

    NASA Astrophysics Data System (ADS)

    Sogachev, Andrey; Kelly, Mark

    2016-03-01

    Displacement height (d) is an important parameter in the simple modelling of wind speed and vertical fluxes above vegetative canopies, such as forests. Here we show that, aside from implicit definition through a (displaced) logarithmic profile, accepted formulations for d do not consistently predict flow properties above a forest. Turbulent transport can affect the displacement height, and is an integral part of what is called the roughness sublayer. We develop a more general approach for estimation of d, through production of turbulent kinetic energy and turbulent transport, and show how previous stress-based formulations for displacement height can be seen as simplified cases of a more general definition including turbulent transport. Further, we also give a simplified and practical form for d that is in agreement with the general approach, exploiting the concept of vortex thickness scale from mixing-layer theory. We assess the new and previous displacement height formulations by using flow statistics derived from the atmospheric boundary-layer Reynolds-averaged Navier-Stokes model SCADIS as well as from wind-tunnel observations, for different vegetation types and flow regimes in neutral conditions. The new formulations tend to produce smaller d than stress-based forms, falling closer to the classic logarithmically-defined displacement height. The new, more generally defined, displacement height appears to be more compatible with profiles of components of the turbulent kinetic energy budget, accounting for the combined effects of turbulent transport and shear production. The Coriolis force also plays a role, introducing wind-speed dependence into the behaviour of the roughness sublayer; this affects the turbulent transport, shear production, stress, and wind speed, as well as the displacement height, depending on the character of the forest. We further show how our practical ('mixing-layer') form for d matches the new turbulence-based relation, as well as correspondence to previous (stress-based) formulations.
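
    For orientation, the classical stress-based definition that the authors generalize takes d as the mean level of momentum absorption, i.e., the centroid of the drag profile; in LaTeX, with tau(z) the kinematic momentum flux and h the canopy height (a standard form, not the paper's exact notation):

        d = \frac{\int_0^h z \, \frac{\partial \tau}{\partial z} \, dz}
                 {\int_0^h \frac{\partial \tau}{\partial z} \, dz}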

  3. Introduction: Applying Clinical Psychological Science to Practice.

    PubMed

    Cha, Christine B; DiVasto, Katherine A

    2017-05-01

    Mental illness is a prevalent and extraordinarily complex phenomenon. Psychologists have developed distinct approaches toward understanding and treating mental illness, rooted in divergent epistemology. This introduction to the Special Issue on Clinical Psychological Science and Practice provides a brief overview of the scientist-practitioner gap, and explores one step (of many) toward bridging this divide. Seven compelling case illustrations featured in this Special Issue apply empirical findings to case formulation, treatment selection, and assessment across complex and varied clinical presentations. This issue thereby demonstrates the feasibility of integrating research and clinical expertise in mental healthcare. © 2017 Wiley Periodicals, Inc.

  4. Conjectures on the relations of linking and causality in causally simple spacetimes

    NASA Astrophysics Data System (ADS)

    Chernov, Vladimir

    2018-05-01

    We formulate the generalization of the Legendrian Low conjecture of Natario and Tod (proved earlier by Nemirovski and myself) to the case of causally simple spacetimes, and we prove a weakened version of the corresponding statement. In all known examples, a causally simple spacetime can be conformally embedded as an open subset into some globally hyperbolic spacetime, and its space of light rays is an open submanifold of the space of light rays of the ambient spacetime. If this is always the case, this provides an approach to solving the conjectures relating causality and linking in causally simple spacetimes.

  5. The path integral on the pseudosphere

    NASA Astrophysics Data System (ADS)

    Grosche, C.; Steiner, F.

    1988-02-01

    A rigorous path integral treatment for the d-dimensional pseudosphere Λ^(d-1), a Riemannian manifold of constant negative curvature, is presented. The path integral formulation is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined on midpoints. The time-dependent and energy-dependent Feynman kernels take different forms in the even- and odd-dimensional cases. The special case of the three-dimensional pseudosphere, which is analytically equivalent to the Poincaré upper half plane, the Poincaré disc, and the hyperbolic strip, is discussed in detail, including the energy spectrum and the normalised wave-functions.
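
    For context, the hyperbolic geometry underlying the special case above can be summarized by the Poincaré upper half-plane metric (a standard fact, not quoted from the paper):

        ds^2 = \frac{dx^2 + dy^2}{y^2}, \qquad y > 0,

    which has constant Gaussian curvature K = -1.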

  6. The thermal response of HMX-TATB charges

    NASA Astrophysics Data System (ADS)

    Drake, R. C.

    2017-01-01

    One approach to achieving charge safety and performance requirements is to prepare formulations containing two (or more) explosives. The intention of this approach is that by judicious choice of explosives and binder the formulation will have the desirable features of the constituent materials. HMX and TATB have very different properties. In an attempt to achieve a formulation which has the safety and performance characteristics of TATB and HMX, respectively, a range of formulations were prepared. The thermal response of the formulations were measured in the One-Dimensional Time To Explosion (ODTX) configuration and compared to those of formulations containing only HMX and TATB. The response of the mixed formulations was found to be largely determined by the HMX component with the binder making a small contribution. A formulation with a Kel-F 800 binder had a much higher critical temperature than would have been expected based on the critical temperatures of formulations with HTPB-IPDI as the binder.

  7. Physiologically Based Absorption Modeling to Impact Biopharmaceutics and Formulation Strategies in Drug Development-Industry Case Studies.

    PubMed

    Kesisoglou, Filippos; Chung, John; van Asperen, Judith; Heimbach, Tycho

    2016-09-01

    In recent years, there has been a significant increase in use of physiologically based pharmacokinetic models in drug development and regulatory applications. Although most of the published examples have focused on aspects such as first-in-human (FIH) dose predictions or drug-drug interactions, several publications have highlighted the application of these models in the biopharmaceutics field and their use to inform formulation development. In this report, we present 5 case studies of use of such models in this biopharmaceutics/formulation space across different pharmaceutical companies. The case studies cover different aspects of biopharmaceutics or formulation questions including (1) prediction of absorption prior to FIH studies; (2) optimization of formulation and dissolution method post-FIH data; (3) early exploration of a modified-release formulation; (4) addressing bridging questions for late-stage formulation changes; and (5) prediction of pharmacokinetics in the fed state for a Biopharmaceutics Classification System class I drug with fasted state data. The discussion of the case studies focuses on how such models can facilitate decisions and biopharmaceutic understanding of drug candidates and the opportunities for increased use and acceptance of such models in drug development and regulatory interactions. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  8. 9th International Symposium on the Biosafety of Genetically Modified Organisms. Session II: Identifying and defining hazards and potential consequences I: Concepts for problem formulation and non-target risk assessment.

    PubMed

    Bigler, Franz

    2006-01-01

    The scientific organizers of the symposium put much emphasis on the identification and definition of hazard and the potential consequences thereof and three full sessions with a total of 13 presentations encompassing a wide range of related themes were planned for this topic. Unfortunately, one talk had to be cancelled because of illness of the speaker (BM Khadi, India). Some presentations covered conceptual approaches for environmental risk assessment (ERA) of GM plants (problem formulation in the risk assessment framework, familiarity approach, tiered and methodological frameworks, non-target risk assessment) and the use of models in assessing invasiveness and weediness of GM plants. Other presentations highlighted the lessons learned for future ERA from case studies and commercialized GM crops, and from monitoring of unintended releases to the environment. When the moderators of the three sessions came together after the presentations to align their summaries, there was an obvious need to restructure the 12 presentations in a way that allowed for a consistent summarizing discussion. The following new organization of the 12 talks was chosen: (1) Concepts for problem formulation and non-target risk assessment, (2) Modeling as a tool for predicting invasiveness of GM plants, (3) Case-studies of ERA of large-scale release, (4) Lessons learned for ERA from a commercialized GM plant, (5) Monitoring of unintended release of Bt maize in Mexico. The new thematic structure facilitates a more in-depth discussion of the presentations related to a specific topic, and the conclusions to be drawn are thus more consistent. Each moderator agreed to take responsibility for summarizing one or more themes and to prepare the respective report.

  9. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using the Monte Carlo simulation based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
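
    A crude Monte Carlo proxy for the hosting-capacity estimation step is sketched below with invented numbers: sample PV size and placement, flag overvoltage using a linearized voltage sensitivity, and report the level where violations begin. Real studies run full power flows (e.g., with a distribution-system solver) rather than this one-line voltage model.

        import numpy as np

        rng = np.random.default_rng(5)
        n_buses, n_trials, v_limit = 30, 5000, 1.05
        sens = rng.uniform(0.0005, 0.002, n_buses)   # hypothetical dV/dP (pu per kW)

        violating = []
        for _ in range(n_trials):
            size_kw = rng.uniform(0.0, 200.0)                   # total PV in this trial
            buses = rng.choice(n_buses, size=3, replace=False)  # three PV sites
            v = 1.0 + sens[buses] * (size_kw / 3)               # proxy voltage rise
            if np.any(v > v_limit):
                violating.append(size_kw)

        print(f"violations begin near {min(violating):.0f} kW of PV")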

  10. Mixture experiment methods in the development and optimization of microemulsion formulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.

    2011-06-25

    Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro) and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
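
    A rough sketch of the mixture-experiment idea with simulated numbers (the component ranges and response surface are invented, not the study's data): candidate blends on a constrained simplex are scored, and a Scheffé-type quadratic mixture model, which has no intercept, is fit by least squares.

        import numpy as np
        from itertools import product

        # Constrained mixture region: water (w), oil (o), surfactant/cosurfactant (s),
        # with w + o + s = 1. All constraints below are illustrative.
        grid = np.arange(0.05, 0.95, 0.05)
        pts = np.array([(w, o, 1 - w - o) for w, o in product(grid, grid)
                        if 0.05 <= 1 - w - o <= 0.85 and o <= 0.30])

        rng = np.random.default_rng(6)
        w, o, s = pts.T
        # Simulated transmittance response (stand-in for measured %T at 550 nm).
        y = 60*w + 20*o + 90*s + 40*w*s - 30*o*s + rng.normal(0, 1, len(pts))

        # Scheffe quadratic mixture model: blending terms plus binary interactions.
        M = np.column_stack([w, o, s, w*o, w*s, o*s])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        print(np.round(pts[(M @ coef).argmax()], 2))   # blend maximizing fitted %T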

  11. Model-Based Heterogeneous Data Fusion for Reliable Force Estimation in Dynamic Structures under Uncertainties

    PubMed Central

    Khodabandeloo, Babak; Melvin, Dyan; Jo, Hongki

    2017-01-01

    Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires the dynamic model and the measured responses of the structure at only a few locations. However, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach to reconstruct dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with the AKF is investigated. The method for incorporating multi-sensor measurements into the AKF is formulated, and the formulation is then implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well. PMID:29149088
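
    A compact sketch of the augmented-state idea for a single-degree-of-freedom oscillator with hypothetical parameters: the unknown force is appended to the state as a random walk, and one displacement-like (strain) row plus one acceleration row form the heterogeneous measurement matrix.

        import numpy as np

        # SDOF oscillator; the unknown force f is appended to the state [x, v, f]
        # as a random walk, so a standard Kalman filter can estimate it.
        m, c, k, dt = 1.0, 0.4, 50.0, 0.01
        A = np.eye(3) + dt * np.array([[0.0, 1.0, 0.0],
                                       [-k/m, -c/m, 1/m],
                                       [0.0, 0.0, 0.0]])
        H = np.array([[1.0, 0.0, 0.0],          # strain-like (displacement) row
                      [-k/m, -c/m, 1/m]])       # acceleration row
        Q = np.diag([1e-8, 1e-8, 1e-2])         # process noise; force walk dominates
        R = np.diag([1e-6, 1e-3])               # measurement noise

        rng = np.random.default_rng(7)
        x_true, x_est, P = np.zeros(3), np.zeros(3), np.eye(3)
        for t in range(2000):
            x_true[2] = 2.0 * np.sin(2*np.pi*0.5*t*dt)   # "unknown" applied force
            x_true[:2] = (A @ x_true)[:2]
            z = H @ x_true + rng.multivariate_normal([0.0, 0.0], R)
            x_est, P = A @ x_est, A @ P @ A.T + Q        # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x_est = x_est + K @ (z - H @ x_est)          # update
            P = (np.eye(3) - K @ H) @ P

        print(x_est[2], x_true[2])   # estimated vs. true force at the last step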

  12. Critical attributes of transdermal drug delivery system (TDDS)--a generic product development review.

    PubMed

    Ruby, P K; Pathak, Shriram M; Aggarwal, Deepika

    2014-11-01

    Bioequivalence testing of transdermal drug delivery systems (TDDS) has always been a subject of high concern for generic companies due to the formulation complexity and the fact that these systems are sensitive to even minor manufacturing differences, and hence should be clearly qualified in terms of quality, safety and efficacy. In recent times, bioequivalence testing of transdermal patches has gained global attention, and many regulatory authorities worldwide have issued recommendations to set a specific framework for demonstrating equivalence between two products. These current regulatory procedures demand a complete characterization of the generic formulation in terms of its physicochemical sameness, pharmacokinetic disposition, residual content and/or skin irritation/sensitization testing with respect to the reference formulation. This paper intends to highlight critical in vitro tests for assessing the therapeutic equivalence of products and also outlines their valuable applications in generic product success. Understanding these critical in vitro parameters can help decode complex bioequivalence outcomes, directing generic companies to optimize the formulation design in reduced time intervals. It is difficult to summarize a common platform that covers all possible transdermal products; hence, a few case studies based on this approach have been presented in this review.

  13. Mitigation of epidemics in contact networks through optimal contact adaptation

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209
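
    As a toy illustration of the bang-bang heuristic only (hypothetical parameters, and a well-mixed SIR rather than a contact network): contact weights drop to a preserved minimum whenever prevalence exceeds a trigger.

        # Euler-integrated SIR with bang-bang contact adaptation.
        beta, gamma, dt = 0.4, 0.1, 0.1
        w_min, trigger = 0.4, 0.05        # preserved minimum contact level, trigger
        S, I, R = 0.99, 0.01, 0.0
        total_cases = I

        for _ in range(2000):
            w = w_min if I > trigger else 1.0     # bang-bang control of contacts
            new_inf = w * beta * S * I * dt
            rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - rec, R + rec
            total_cases += new_inf

        print(f"final attack rate ~ {total_cases:.3f}")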

  14. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are most effectively applied to the contact weights.

  15. Traction-free vibrations of finite trigonal elastic cylinders.

    PubMed

    Heyliger, Paul R; Johnson, Ward L

    2003-04-01

    The unrestrained, traction-free vibrations of finite elastic cylinders with trigonal material symmetry are studied using two approaches, based on the Ritz method, which formulate the weak form of the equations of motion in cylindrical and rectangular coordinates. Elements of group theory are used to divide approximation functions into orthogonal subsets, thus reducing the size of the computational problem and classifying the general symmetries of the vibrational modes. Results for the special case of an isotropic cylinder are presented and compared with values published by other researchers. For the isotropic case, the relative accuracy of the formulations in cylindrical and rectangular coordinates can be evaluated, because exact analytical solutions are known for the torsional modes. The calculation in cylindrical coordinates is found to be more accurate for a given number of terms in the series approximation functions. For a representative trigonal material, langatate, calculations of the resonant frequencies and the sensitivity of the frequencies to each of the elastic constants are presented. The dependence on geometry (ratio of length to diameter) is briefly explored. The special case of a transversely isotropic cylinder (with the elastic stiffness C14 equal to zero) is also considered.
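
    In standard form (not specific to this paper), the Ritz procedure reduces the weak form of such free-vibration problems to a generalized eigenvalue problem over the chosen approximation functions phi_i:

        (\mathbf{K} - \omega^2 \mathbf{M})\,\mathbf{q} = \mathbf{0}, \qquad
        K_{ij} = \int_V \boldsymbol{\varepsilon}(\boldsymbol{\phi}_i) : \mathbf{C} : \boldsymbol{\varepsilon}(\boldsymbol{\phi}_j)\, dV, \qquad
        M_{ij} = \int_V \rho\, \boldsymbol{\phi}_i \cdot \boldsymbol{\phi}_j \, dV

    Group-theoretic splitting of the approximation functions into orthogonal symmetry classes block-diagonalizes K and M, which is what reduces the computational size and labels the modes by symmetry.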

  16. Towards a methodology to formulate sustainable diets for livestock: accounting for environmental impact in diet formulation.

    PubMed

    Mackenzie, S G; Leinonen, I; Ferguson, N; Kyriazakis, I

    2016-05-28

    The objective of this study was to develop a novel methodology that enables pig diets to be formulated explicitly for environmental impact objectives using a Life Cycle Assessment (LCA) approach. To achieve this, the following methodological issues had to be addressed: (1) account for environmental impacts caused by both ingredient choice and nutrient excretion, (2) formulate diets for multiple environmental impact objectives and (3) allow flexibility to identify the optimal nutritional composition for each environmental impact objective. An LCA model based on Canadian pig farms was integrated into a diet formulation tool to compare the use of different ingredients in Eastern and Western Canada. By allowing the feed energy content to vary, it was possible to identify the optimum energy density for different environmental impact objectives, while accounting for the expected effect of energy density on feed intake. A least-cost diet was compared with diets formulated to minimise the following objectives: non-renewable resource use, acidification potential, eutrophication potential, global warming potential and a combined environmental impact score (using these four categories). The resulting environmental impacts were compared using parallel Monte Carlo simulations to account for shared uncertainty. When optimising diets to minimise a single environmental impact category, reductions in the said category were observed in all cases. However, this was at the expense of increasing the impact in other categories and higher dietary costs. The methodology can identify nutritional strategies to minimise environmental impacts, such as increasing the nutritional density of the diets, compared with the least-cost formulation.
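
    The formulation step itself is a linear program; the sketch below trades off cost against a combined environmental impact score with invented ingredient data (the study's LCA model and nutrient specifications are far richer).

        import numpy as np
        from scipy.optimize import linprog

        # Columns: three hypothetical ingredients (e.g., corn, soybean meal, canola meal).
        cost   = np.array([0.12, 0.35, 0.25])    # $/kg
        impact = np.array([0.6, 1.8, 0.9])       # combined LCA score per kg
        energy = np.array([14.0, 13.5, 12.0])    # MJ/kg
        lysine = np.array([0.25, 2.8, 1.9])      # percent

        alpha = 0.5   # weight trading off cost against environmental impact
        c = (1 - alpha) * cost + alpha * impact

        # Per kg of feed: fractions sum to 1; energy and lysine meet minimums.
        res = linprog(c,
                      A_ub=-np.vstack([energy, lysine]), b_ub=-np.array([13.2, 1.0]),
                      A_eq=np.ones((1, 3)), b_eq=[1.0],
                      bounds=[(0, 1)] * 3)
        print(np.round(res.x, 3))   # ingredient fractions of the optimized diet

    Sweeping alpha from 0 to 1 traces out the cost-versus-impact tradeoff the study explores with its multiple environmental objectives.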

  17. Drug nanoparticles: formulating poorly water-soluble compounds.

    PubMed

    Merisko-Liversidge, Elaine M; Liversidge, Gary G

    2008-01-01

    More than 40% of compounds identified through combinatorial screening programs are poorly soluble in water. These molecules are difficult to formulate using conventional approaches and are associated with innumerable formulation-related performance issues. Formulating these compounds as pure drug nanoparticles is one of the newer drug-delivery strategies applied to this class of molecules. Nanoparticle dispersions are stable and have a mean diameter of less than 1 micron. The formulations consist of water, drug, and one or more generally regarded as safe excipients. These liquid dispersions exhibit an acceptable shelf-life and can be postprocessed into various types of solid dosage forms. Drug nanoparticles have been shown to improve bioavailability and enhance drug exposure for oral and parenteral dosage forms. Suitable formulations for the most commonly used routes of administration can be identified with milligram quantities of drug substance, providing the discovery scientist with an alternate avenue for screening and identifying superior analogs. For the toxicologist, the approach provides a means for dose escalation using a formulation that is commercially viable. In the past few years, formulating poorly water-soluble compounds using a nanoparticulate approach has evolved from a conception to a realization whose versatility and applicability are just beginning to be realized.

  18. Comparison of two transonic noise prediction formulations using the aircraft noise prediction program

    NASA Technical Reports Server (NTRS)

    Spence, Peter L.

    1987-01-01

    This paper addresses recently completed work on using Farassat's Formulation 3 noise prediction code with the Aircraft Noise Prediction Program (ANOPP). Software was written to link aerodynamic loading generated by the Propeller Loading (PLD) module within ANOPP with formulation 3. Included are results of comparisons between Formulation 3 with ANOPP's existing noise prediction modules, Subsonic Propeller Noise (SPN) and Transonic Propeller Noise (TPN). Four case studies are investigated. Results of the comparison studies show excellent agreement for the subsonic cases. Differences found in the comparisons made under transonic conditions are strictly numerical and can be explained by the way in which the time derivative is calculated in Formulation 3. Also included is a section on how to execute Formulation 3 with ANOPP.

  19. On the convergence of the coupled-wave approach for lamellar diffraction gratings

    NASA Technical Reports Server (NTRS)

    Li, Lifeng; Haggans, Charles W.

    1992-01-01

    Among the many existing rigorous methods for analyzing diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out because of its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than that of other methods. In addition, its predictions have been experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much more slowly than the modal approach of Botten et al.; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) it is manifested by the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave approach and the modal approach.

  20. Two volume integral equations for the inhomogeneous and anisotropic forward problem in electroencephalography

    NASA Astrophysics Data System (ADS)

    Rahmouni, Lyes; Mitharwal, Rajendra; Andriulli, Francesco P.

    2017-11-01

    This work presents two new volume integral equations for the Electroencephalography (EEG) forward problem which, unlike the standard integral approaches in this domain, can handle heterogeneities and anisotropies of the head/brain conductivity profiles. The new formulations translate to the quasi-static regime some volume integral equation strategies that have been successfully applied to high frequency electromagnetic scattering problems. This has been obtained by extending, to the volume case, the two classical surface integral formulations used in EEG imaging and by introducing an extra surface equation, in addition to the volume ones, to properly handle boundary conditions. Numerical results corroborate the theoretical treatment, showing the competitiveness of our new schemes over existing techniques and qualifying them as a valid alternative to differential equation based methods.
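
    For reference, the quasi-static forward problem these integral equations reformulate is usually stated as the conductivity Poisson equation (standard EEG forward model, not quoted from the paper):

        \nabla \cdot (\sigma \nabla u) = \nabla \cdot \mathbf{J}^p \quad \text{in } \Omega,
        \qquad \mathbf{n} \cdot (\sigma \nabla u) = 0 \quad \text{on } \partial\Omega,

    where sigma is the (possibly heterogeneous, anisotropic) conductivity tensor, u the electric potential, and J^p the primary source current.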

  1. Resource allocation in road infrastructure using ANP priorities with ZOGP formulation-A case study

    NASA Astrophysics Data System (ADS)

    Alias, Suriana; Adna, Norfarziah; Soid, Siti Khuzaimah; Kardri, Mahani

    2013-09-01

    Road Infrastructure (RI) project evaluation and selection is concerned with the allocation of scarce organizational resources. In this paper, an improved RI project selection methodology is suggested that reflects interdependencies among evaluation criteria and candidate projects. The Fuzzy Delphi Method (FDM) is used to elicit expert group opinion and to determine the degree of interdependence among the alternative projects. In order to provide a systematic approach to setting priorities among multiple criteria and trading off among objectives, the Analytic Network Process (ANP) is applied prior to the Zero-One Goal Programming (ZOGP) formulation. Specifically, this paper demonstrates how to combine FDM and ANP with ZOGP through a real-world RI empirical example on an ongoing decision-making project in Johor, Malaysia.
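
    The ZOGP selection step can be sketched as a tiny zero-one program; the priorities and costs below are invented, and brute-force enumeration stands in for a goal-programming solver.

        from itertools import product

        priority = [0.28, 0.22, 0.20, 0.17, 0.13]   # hypothetical ANP-derived weights
        cost = [40, 25, 30, 20, 15]                 # hypothetical project costs
        budget = 70

        feasible = (x for x in product([0, 1], repeat=5)
                    if sum(c * xi for c, xi in zip(cost, x)) <= budget)
        best = max(feasible, key=lambda x: sum(p * xi for p, xi in zip(priority, x)))
        print(best)   # 0/1 selection vector of funded projects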

  2. Understanding Through Use: Elderly's Value Identification in a Service Experience.

    PubMed

    Lindenfalk, Bertil; Vimarlund, Vivian

    2017-01-01

    This paper uses a qualitative approach, specifically narrative analysis, to contextualize users' formulation of an understanding of a personalized meal planning service within the ambient assisted living domain. The focus is on how users, in this case elderly people over 65, formed an understanding of the service and on what they found valuable in using it, based on that understanding. The results indicate how users compare their initial understanding to the experienced understanding formed during usage, and how this affects their assessment of the value of specific service aspects. The paper not only provides valuable insight into contextualizing aspects of health and wellness services, but also into aspects of importance for implementation, by showing how value aspects of services from a user perspective should be considered during these processes.

  3. Solutions to Three-Dimensional Thin-Layer Navier-Stokes Equations in Rotating Coordinates for Flow Through Turbomachinery

    NASA Technical Reports Server (NTRS)

    Ghosh, Amrit Raj

    1996-01-01

    The viscous, Navier-Stokes solver for turbomachinery applications, MSUTC, has been modified to include the rotating frame formulation. The three-dimensional thin-layer Navier-Stokes equations have been cast in a rotating Cartesian frame, enabling the freezing of grid motion. This also allows the flow-field associated with an isolated rotor to be viewed as a steady-state problem. Consequently, local time stepping can be used to accelerate convergence. The formulation is validated by running NASA's Rotor 67 as the test case. Results are compared between the rotating frame code and the absolute frame code. The use of the rotating frame approach greatly enhances the performance of the code with respect to savings in computing time, without degradation of the solution.
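
    For context, casting the momentum equations in a rotating frame introduces the standard Coriolis and centrifugal source terms (textbook form, not the solver's exact discretization):

        \frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2 \mathbf{u}
        \;-\; 2\,\boldsymbol{\Omega} \times \mathbf{u}
        \;-\; \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{r})

    With the grid fixed in the rotor frame and Omega constant, the isolated-rotor flow becomes steady and local time stepping applies.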

  4. A New Time Domain Formulation for Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Casper, J.; Farassat, F.

    2002-01-01

    A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  5. A New Time Domain Formulation for Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Casper, Jay H.; Farassat, Fereidoun

    2002-01-01

    A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.

  6. MARGINS: Toward a novel science plan

    NASA Astrophysics Data System (ADS)

    Mutter, John C.

    A science plan to study continental margins has been in the works for the past 3 years, with almost 200 Earth scientists from a wide variety of disciplines gathering at meetings and workshops. Most geological hazards and resources are found at continental margins, yet our understanding of the processes that shape the margins is meager.In formulating this MARGINS research initiative, fundamental issues concerning our understanding of basic Earth-forming processes have arisen. It is clear that a business-as-usual approach will not solve the class of problems defined by the MARGINS program; the solutions demand approaches different from those used in the past. In many cases, a different class of experiment will be required, one that is well beyond the capability of individual principal investigators to undertake on their own. In most cases, broadly based interdisciplinary studies will be needed.

  7. Written case formulations in the treatment of anorexia nervosa: Evidence for therapeutic benefits.

    PubMed

    Allen, Karina L; O'Hara, Caitlin B; Bartholdy, Savani; Renwick, Beth; Keyes, Alexandra; Lose, Anna; Kenyon, Martha; DeJong, Hannah; Broadbent, Hannah; Loomes, Rachel; McClelland, Jessica; Serpell, Lucy; Richards, Lorna; Johnson-Sabine, Eric; Boughton, Nicky; Whitehead, Linette; Treasure, Janet; Wade, Tracey; Schmidt, Ulrike

    2016-09-01

    Case formulation is a core component of many psychotherapies and formulation letters may provide an opportunity to enhance the therapeutic alliance and improve treatment outcomes. This study aimed to determine if formulation letters predict treatment satisfaction, session attendance, and symptom reductions in anorexia nervosa (AN). It was hypothesized that higher quality formulation letters would predict greater treatment satisfaction, a greater number of attended sessions, and greater improvement in eating disorder symptoms. Patients were adult outpatients with AN (n = 46) who received Maudsley Anorexia Nervosa Treatment for Adults (MANTRA) in the context of a clinical trial. A Case Formulation Rating Scheme was used to rate letters for adherence to the MANTRA model and use of a collaborative, reflective, affirming stance. Analyses included linear regression and mixed models. Formulation letters that paid attention to the development of the AN predicted greater treatment acceptability ratings (p = 0.002). More reflective and respectful letters predicted greater reductions in Eating Disorder Examination scores (p = 0.003). Results highlight the potential significance of a particular style of written formulation as part of treatment for AN. Future research should examine applicability to other psychiatric disorders. (Int J Eat Disord 2016; 49:874-882). © 2016 Wiley Periodicals, Inc.

  8. Use of Evidence-Based Decision-Making in Comprehensive Dental Treatment of a Patient with Meth Mouth-A Case Report.

    PubMed

    Al Hazzani, Saad A

    2017-06-01

    This case report illustrates the use of evidence-based practice in formulating a comprehensive dental treatment plan for a patient who presented with signs of oral health debilitation accompanying methamphetamine (MA) abuse, known as "meth mouth", with the goal of providing dental care practitioners in Saudi Arabia with an insight into the global problem of MA abuse and its impact on oral health. This report documents the case of a 22-year-old male patient who reported to the clinic with rampant caries caused by MA abuse and exacerbated by poor oral hygiene and a smoking habit. The treatment plan for the present case was formulated along the lines of the evidence-based dentistry approach. A clinical question was composed based on the Problem, Intervention, Comparison, and Outcome format to identify past studies and case reports on meth mouth. A standard search was conducted on PubMed Central. Standard guidelines on the treatment of meth mouth were extracted from the Web site of the American Dental Association. A total of 2 systematic reviews, 7 review articles, 4 epidemiologic studies, 5 case reports, and 1 American Dental Association guideline were found. Accelerated dental decay leading to rampant caries in young and middle-aged adults is a characteristic oral finding in MA abusers. The most important factor that affects the prognosis of dental care is complete cessation of MA use by the patient. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A Comparison of Local and Global Formulations of Thermodynamics

    ERIC Educational Resources Information Center

    DeVoe, Howard

    2013-01-01

    Several educators have advocated teaching thermodynamics using a "global" approach in place of the conventional "local" approach. This article uses four examples of experiments to illustrate the two formulations and the definitions of heat and work associated with them. Advantages and disadvantages of both approaches are…

  10. Refining search terms for nanotechnology

    NASA Astrophysics Data System (ADS)

    Porter, Alan L.; Youtie, Jan; Shapira, Philip; Schoeneck, David J.

    2008-05-01

    The ability to delineate the boundaries of an emerging technology is central to obtaining an understanding of the technology's research paths and commercialization prospects. Nowhere is this more relevant than in the case of nanotechnology (hereafter identified as "nano") given its current rapid growth and multidisciplinary nature. (Under the rubric of nanotechnology, we also include nanoscience and nanoengineering.) Past efforts have utilized several strategies, including simple term search for the prefix nano, complex lexical and citation-based approaches, and bootstrapping techniques. This research introduces a modularized Boolean approach to defining nanotechnology which has been applied to several research and patenting databases. We explain our approach to downloading and cleaning data, and report initial results. Comparisons of this approach with other nanotechnology search formulations are presented. Implications for search strategy development and profiling of the nanotechnology field are discussed.

  11. Mixture experiment methods in the development and optimization of microemulsion formulations.

    PubMed

    Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P

    2011-06-25

    Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
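
    Purely as an illustration of the mixture-experiment workflow this record describes, the sketch below fits a quadratic Scheffé polynomial to a hypothetical 13-run, three-component design and scans the constrained region for blends predicted to stay clear. All design points, responses, and the 90% transmittance threshold are invented placeholders, not the study's data.

    ```python
    import numpy as np

    # Hypothetical 13-run mixture design: fractions of aqueous, oil and
    # surfactant/cosurfactant (Smix) phases; each row sums to 1.
    X = np.array([
        [0.10, 0.05, 0.85], [0.20, 0.05, 0.75], [0.10, 0.15, 0.75],
        [0.30, 0.05, 0.65], [0.20, 0.15, 0.65], [0.10, 0.25, 0.65],
        [0.40, 0.05, 0.55], [0.30, 0.15, 0.55], [0.20, 0.25, 0.55],
        [0.15, 0.10, 0.75], [0.25, 0.10, 0.65], [0.15, 0.20, 0.65],
        [0.20, 0.12, 0.68]])
    y = np.array([98, 95, 92, 88, 90, 80, 70, 75, 60, 96, 89, 83, 91.0])  # %T, invented

    def scheffe_quadratic(X):
        """Design matrix of the quadratic Scheffe mixture polynomial
        y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j (no intercept term)."""
        k = X.shape[1]
        cols = [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)

    # Scan the constrained simplex and keep blends predicted to stay clear
    # (transmittance above an arbitrary 90% stability threshold).
    stable = []
    for aq in np.arange(0.05, 0.45, 0.01):
        for oil in np.arange(0.05, 0.30, 0.01):
            smix = 1.0 - aq - oil
            if smix < 0.50:
                continue
            pred = (scheffe_quadratic(np.array([[aq, oil, smix]])) @ coef)[0]
            if pred >= 90.0:
                stable.append((round(aq, 2), round(oil, 2), round(smix, 2)))
    print(f"{len(stable)} predicted-stable blends, e.g. {stable[:2]}")
    ```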

  12. Building more realistic reservoir optimization models using data mining - A case study of Shelbyville Reservoir

    NASA Astrophysics Data System (ADS)

    Hejazi, Mohamad I.; Cai, Ximing

    2011-06-01

    In this paper, we promote a novel approach to develop reservoir operation routines by learning from historical hydrologic information and reservoir operations. The proposed framework involves a knowledge discovery step to learn the real drivers of reservoir decision making and to subsequently build a more realistic (enhanced) model formulation using stochastic dynamic programming (SDP). The enhanced SDP model is compared to two classic SDP formulations using Lake Shelbyville, a reservoir on the Kaskaskia River in Illinois, as a case study. From a data mining procedure with monthly data, the past month's inflow (Qt-1), current month's inflow (Qt), past month's release (Rt-1), and past month's Palmer drought severity index (PDSIt-1) are identified as important state variables in the enhanced SDP model for Shelbyville Reservoir. When compared to a weekly enhanced SDP model of the same case study, a different set of state variables and constraints is extracted; thus, different time scales for the model require different information. We demonstrate that adding state variables improves the solution by shifting the Pareto front as expected, while using new constraints and the correct objective function can significantly reduce the difference between derived policies and historical practices. The study indicates that the monthly enhanced SDP model resembles historical records more closely and yet provides lower expected average annual costs than either of the two classic formulations (25.4% and 4.5% reductions, respectively). The weekly enhanced SDP model is compared to the monthly enhanced SDP, and the comparison shows that acquiring the correct temporal scale is crucial to modeling reservoir operation for particular objectives.
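
    The following minimal sketch shows the kind of value iteration behind an SDP release policy with an augmented information state (the previous period's inflow class), in the spirit of the enhanced formulation above. The storage grid, inflow classes, transition matrix, and cost function are all hypothetical placeholders, not Shelbyville data.

    ```python
    import numpy as np

    S = np.linspace(0.0, 100.0, 21)          # storage grid (volume units)
    Q = np.array([5.0, 15.0, 30.0])          # inflow classes: low / med / high
    P = np.array([[0.6, 0.3, 0.1],           # P[i, j] = Pr(Q_t = Q[j] | Q_{t-1} = Q[i])
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])
    R = np.linspace(0.0, 40.0, 9)            # candidate releases
    TARGET, GAMMA = 20.0, 0.95               # demand target, discount factor

    V = np.zeros((S.size, Q.size))           # value over (storage, prev-inflow class)
    for sweep in range(200):                 # value iteration to (near) convergence
        V_new = np.full_like(V, np.inf)
        for si, s in enumerate(S):
            for qi in range(Q.size):
                for r in R:
                    expected = 0.0
                    for qj, q in enumerate(Q):      # next inflow given Q_{t-1}
                        s_next = np.clip(s + q - r, S[0], S[-1])
                        sj = int(np.abs(S - s_next).argmin())
                        expected += P[qi, qj] * V[sj, qj]
                    cost = (r - TARGET) ** 2        # penalize missing the target
                    V_new[si, qi] = min(V_new[si, qi], cost + GAMMA * expected)
        V = V_new
    print("value at mid storage, medium prior inflow:", round(float(V[10, 1]), 2))
    ```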

  13. Acute liver injury associated with a newer formulation of the herbal weight loss supplement Hydroxycut.

    PubMed

    Araujo, James L; Worman, Howard J

    2015-05-06

    Despite the widespread use of herbal and dietary supplements (HDS), serious cases of hepatotoxicity have been reported. The popular herbal weight loss supplement, Hydroxycut, has previously been implicated in acute liver injury. Since its introduction, Hydroxycut has undergone successive transformations in its formulation; yet, cases of liver injury have remained an ongoing problem. We report a case of a 41-year-old Hispanic man who developed acute hepatocellular liver injury with associated nausea, vomiting, jaundice, fatigue and asterixis attributed to the use of a newer formulation of Hydroxycut, SX-7 Clean Sensory. The patient required hospitalisation and improved with supportive therapy. Despite successive transformations in its formulation, potential liver injury appears to remain an ongoing problem with Hydroxycut. Our case illustrates the importance of obtaining a thorough medication history, including HDS, regardless of new or reformulated product marketing efforts. 2015 BMJ Publishing Group Ltd.

  14. Acute liver injury associated with a newer formulation of the herbal weight loss supplement Hydroxycut

    PubMed Central

    Worman, Howard J

    2015-01-01

    Despite the widespread use of herbal and dietary supplements (HDS), serious cases of hepatotoxicity have been reported. The popular herbal weight loss supplement, Hydroxycut, has previously been implicated in acute liver injury. Since its introduction, Hydroxycut has undergone successive transformations in its formulation; yet, cases of liver injury have remained an ongoing problem. We report a case of a 41-year-old Hispanic man who developed acute hepatocellular liver injury with associated nausea, vomiting, jaundice, fatigue and asterixis attributed to the use of a newer formulation of Hydroxycut, SX-7 Clean Sensory. The patient required hospitalisation and improved with supportive therapy. Despite successive transformations in its formulation, potential liver injury appears to remain an ongoing problem with Hydroxycut. Our case illustrates the importance of obtaining a thorough medication history, including HDS, regardless of new or reformulated product marketing efforts. PMID:25948859

  15. On physical optics for calculating scattering from coated bodies

    NASA Technical Reports Server (NTRS)

    Baldauf, J.; Lee, S. W.; Ling, H.; Chou, R.

    1989-01-01

    The familiar physical optics (PO) approximation is no longer valid when a perfectly conducting scatterer is coated with dielectric material. This paper reviews several possible PO formulations. Comparison of the PO formulations with the moment method solution based on the impedance boundary condition for the case of the coated cone-sphere shows that a PO formulation using both electric and magnetic currents consistently gives the best numerical results. Comparisons of the exact moment method with the PO formulations using the impedance boundary condition and the PO formulation using the Fresnel reflection coefficient for the case of scattering from the cone-ellipsoid demonstrate that the Fresnel reflection coefficient gives the best numerical results in general.

  16. New Approach for the Development of Improved Traditional Medicine: Case of a Preparation of an Oral Hypoglycemic Medicine from Laportea ovalifolia (Schumach. & Thonn.) Chew. (Urticaceae).

    PubMed

    Tsabang, Nolé; Kadjob, Stella; Mballa, Rose N; Yedjou, Clement G; Nnanga, Nga; Donfagsiteli, Néhémie T; Tchinda, Alembert T; Agbor, Gabriel A; Ntsama, Claudine; Tchounwou, Paul B

    2015-08-01

    A majority of Africans rely on traditional medicine as the primary form of health care. Yet most traditional medicine products have a short shelf life, especially water-based formulations such as macerations, infusions and decoctions. Indeed, many of these water extracts become unfit for human consumption after five to seven days of conservation, either because of the degradation or toxicity of active components and/or the growth of pathogenic organisms. The purpose of this study was to describe and apply a new approach for the development of an improved traditional medicine (ITM) that is cheap, very efficient, non-toxic, and easy to produce, and that can be conserved for a longer time without a significant loss of activity. Hence, Laportea ovalifolia was selected from an ethnobotanical prospection in all regions of Cameroon, and was used to prepare an oral hypoglycemic product. This preparation required 9 steps focused on the characterization of the plant species, and the standardization of the ethnopharmacological preparation by a multidisciplinary team of scientists with expertise in botany, ecology, pharmacognosy and pharmacology. As a result, four galenic formulations of hypoglycemic medications were produced. The relationship between these four formulations is as follows: one spoonful of oral suspension (10 ml) = one sachet of powder = 2 tablets = 3 capsules. Hence, our research provides new insight into a drug discovery approach that could alleviate the major problems affecting traditional medicine and enhance its effectiveness in addressing health care in developing and undeveloped countries.

  17. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
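
    A toy version of the hp-marking decision described above is sketched below: elements with large error and low local regularity are flagged for h-refinement, while smooth elements with large error are flagged for p-enrichment. The error and smoothness indicators are hypothetical stand-ins for real a posteriori estimators.

    ```python
    # Toy hp-adaptivity marking rule: h-refine where the solution has low
    # regularity (e.g. near shocks), p-enrich where it is smooth, keep
    # elements whose error indicator is already below tolerance.
    def hp_mark(elements, err_tol=1e-3, smooth_tol=0.8):
        """elements: list of dicts with 'id', 'error', and 'smoothness' in [0, 1]."""
        actions = {}
        for e in elements:
            if e["error"] <= err_tol:
                actions[e["id"]] = "keep"
            elif e["smoothness"] >= smooth_tol:
                actions[e["id"]] = "p-enrich"    # exponential accuracy where smooth
            else:
                actions[e["id"]] = "h-refine"    # resolve low-regularity regions
        return actions

    mesh = [{"id": 0, "error": 5e-2, "smoothness": 0.95},   # smooth, large error
            {"id": 1, "error": 8e-2, "smoothness": 0.30},   # near a discontinuity
            {"id": 2, "error": 1e-4, "smoothness": 0.90}]   # already resolved
    print(hp_mark(mesh))  # {0: 'p-enrich', 1: 'h-refine', 2: 'keep'}
    ```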

  18. Visualizing complex processes using a cognitive-mapping tool to support the learning of clinical reasoning.

    PubMed

    Wu, Bian; Wang, Minhong; Grotzer, Tina A; Liu, Jun; Johnson, Janice M

    2016-08-22

    Practical experience with clinical cases has played an important role in supporting the learning of clinical reasoning. However, learning through practical experience involves complex processes difficult to be captured by students. This study aimed to examine the effects of a computer-based cognitive-mapping approach that helps students to externalize the reasoning process and the knowledge underlying the reasoning process when they work with clinical cases. A comparison between the cognitive-mapping approach and the verbal-text approach was made by analyzing their effects on learning outcomes. Fifty-two third-year or higher students from two medical schools participated in the study. Students in the experimental group used the computer-based cognitive-mapping approach, while the control group used the verbal-text approach, to make sense of their thinking and actions when they worked with four simulated cases over 4 weeks. For each case, students in both groups reported their reasoning process (involving data capture, hypothesis formulation, and reasoning with justifications) and the underlying knowledge (involving identified concepts and the relationships between the concepts) using the given approach. The learning products (cognitive maps or verbal text) revealed that students in the cognitive-mapping group outperformed those in the verbal-text group in the reasoning process, but not in making sense of the knowledge underlying the reasoning process. No significant differences were found in a knowledge posttest between the two groups. The computer-based cognitive-mapping approach has shown a promising advantage over the verbal-text approach in improving students' reasoning performance. Further studies are needed to examine the effects of the cognitive-mapping approach in improving the construction of subject-matter knowledge on the basis of practical experience.

  19. Evaluating exposure and potential effects on honeybee brood (Apis mellifera) development using glyphosate as an example

    PubMed Central

    Thompson, Helen M; Levine, Steven L; Doering, Janine; Norman, Steve; Manson, Philip; Sutton, Peter; von Mérey, Georg

    2014-01-01

    This study aimed to develop an approach to evaluate potential effects of plant protection products on honeybee brood with colonies at realistic worst-case exposure rates. The approach comprised 2 stages. In the first stage, honeybee colonies were exposed to a commercial formulation of glyphosate applied to flowering Phacelia tanacetifolia with glyphosate residues quantified in relevant matrices (pollen and nectar) collected by foraging bees on days 1, 2, 3, 4, and 7 postapplication and glyphosate levels in larvae were measured on days 4 and 7. Glyphosate levels in pollen were approximately 10 times higher than in nectar and glyphosate demonstrated rapid decline in both matrices. Residue data along with foraging rates and food requirements of the colony were then used to set dose rates in the effects study. In the second stage, the toxicity of technical glyphosate to developing honeybee larvae and pupae, and residues in larvae, were then determined by feeding treated sucrose directly to honeybee colonies at dose rates that reflect worst-case exposure scenarios. There were no significant effects from glyphosate observed in brood survival, development, and mean pupal weight. Additionally, there were no biologically significant levels of adult mortality observed in any glyphosate treatment group. Significant effects were observed only in the fenoxycarb toxic reference group and included increased brood mortality and a decline in the numbers of bees and brood. Mean glyphosate residues in larvae were comparable at 4 days after spray application in the exposure study and also following dosing at a level calculated from the mean measured levels in pollen and nectar, showing the applicability and robustness of the approach for dose setting with honeybee brood studies. This study has developed a versatile and predictive approach for use in higher tier honeybee toxicity studies. It can be used to realistically quantify exposure of colonies to pesticides to allow the appropriate dose rates to be determined, based on realistic worst-case residues in pollen and nectar and estimated intake by the colony, as shown by the residue analysis. Previous studies have used the standard methodology developed primarily to identify pesticides with insect-growth disrupting properties of pesticide formulations, which are less reliant on identifying realistic exposure scenarios. However, this adaptation of the method can be used to determine dose–response effects of colony level exposure to pesticides with a wide range of properties. This approach would limit the number of replicated tunnel or field-scale studies that need to be undertaken to assess effects on honeybee brood and may be of particular benefit where residues in pollen and nectar are crop- and/or formulation-specific, such as systemic seed treatments and granular applications. Integr Environ Assess Manag 2014;10:463–470. PMID:24616275

  20. Evaluating exposure and potential effects on honeybee brood (Apis mellifera) development using glyphosate as an example.

    PubMed

    Thompson, Helen M; Levine, Steven L; Doering, Janine; Norman, Steve; Manson, Philip; Sutton, Peter; von Mérey, Georg

    2014-07-01

    This study aimed to develop an approach to evaluate potential effects of plant protection products on honeybee brood with colonies at realistic worst-case exposure rates. The approach comprised 2 stages. In the first stage, honeybee colonies were exposed to a commercial formulation of glyphosate applied to flowering Phacelia tanacetifolia with glyphosate residues quantified in relevant matrices (pollen and nectar) collected by foraging bees on days 1, 2, 3, 4, and 7 postapplication and glyphosate levels in larvae were measured on days 4 and 7. Glyphosate levels in pollen were approximately 10 times higher than in nectar and glyphosate demonstrated rapid decline in both matrices. Residue data along with foraging rates and food requirements of the colony were then used to set dose rates in the effects study. In the second stage, the toxicity of technical glyphosate to developing honeybee larvae and pupae, and residues in larvae, were then determined by feeding treated sucrose directly to honeybee colonies at dose rates that reflect worst-case exposure scenarios. There were no significant effects from glyphosate observed in brood survival, development, and mean pupal weight. Additionally, there were no biologically significant levels of adult mortality observed in any glyphosate treatment group. Significant effects were observed only in the fenoxycarb toxic reference group and included increased brood mortality and a decline in the numbers of bees and brood. Mean glyphosate residues in larvae were comparable at 4 days after spray application in the exposure study and also following dosing at a level calculated from the mean measured levels in pollen and nectar, showing the applicability and robustness of the approach for dose setting with honeybee brood studies. This study has developed a versatile and predictive approach for use in higher tier honeybee toxicity studies. It can be used to realistically quantify exposure of colonies to pesticides to allow the appropriate dose rates to be determined, based on realistic worst-case residues in pollen and nectar and estimated intake by the colony, as shown by the residue analysis. Previous studies have used the standard methodology developed primarily to identify pesticides with insect-growth disrupting properties of pesticide formulations, which are less reliant on identifying realistic exposure scenarios. However, this adaptation of the method can be used to determine dose-response effects of colony level exposure to pesticides with a wide range of properties. This approach would limit the number of replicated tunnel or field-scale studies that need to be undertaken to assess effects on honeybee brood and may be of particular benefit where residues in pollen and nectar are crop- and/or formulation-specific, such as systemic seed treatments and granular applications. © 2014 The Authors. Integrated Environmental Assessment and Management Published by SETAC.
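
    The dose-setting arithmetic underlying the second stage can be illustrated with a back-of-envelope calculation: measured residues in pollen and nectar are combined with an assumed colony intake to yield a daily colony-level dose and a corresponding concentration in treated sucrose. All numbers below are hypothetical placeholders, not values from the study.

    ```python
    # Combine residue measurements with assumed colony food intake to set a
    # worst-case daily dose for a colony-level feeding study. Every value
    # here is an illustrative assumption.
    residue_pollen_mg_per_kg = 100.0     # residue in pollen (mg a.i./kg)
    residue_nectar_mg_per_kg = 10.0      # residue in nectar (~10x lower, as observed)
    colony_pollen_intake_kg_day = 0.10   # assumed colony pollen consumption
    colony_nectar_intake_kg_day = 0.50   # assumed colony nectar consumption

    daily_dose_mg = (residue_pollen_mg_per_kg * colony_pollen_intake_kg_day
                     + residue_nectar_mg_per_kg * colony_nectar_intake_kg_day)

    # Dose delivered via treated sucrose: concentration needed in the feed.
    feed_kg_day = 0.50                   # sucrose solution fed per day (kg)
    feed_conc_mg_per_kg = daily_dose_mg / feed_kg_day
    print(f"daily colony dose: {daily_dose_mg:.1f} mg; "
          f"feed concentration: {feed_conc_mg_per_kg:.1f} mg/kg")
    ```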

  1. Quality by design in formulation and process development for a freeze-dried, small molecule parenteral product: a case study.

    PubMed

    Mockus, Linas N; Paul, Timothy W; Pease, Nathan A; Harper, Nancy J; Basu, Prabir K; Oslos, Elizabeth A; Sacha, Gregory A; Kuu, Wei Y; Hardwick, Lisa M; Karty, Jacquelyn J; Pikal, Michael J; Hee, Eun; Khan, Mansoor A; Nail, Steven L

    2011-01-01

    A case study has been developed to illustrate one way of incorporating a Quality by Design approach into formulation and process development for a small molecule, freeze-dried parenteral product. Sodium ethacrynate was chosen as the model compound. Principal degradation products of sodium ethacrynate result from hydrolysis of the unsaturated ketone in aqueous solution, and dimer formation from a Diels-Alder condensation in the freeze-dried solid state. When the drug crystallizes in a frozen solution, the eutectic melting temperature is above -5°C. Crystallization in the frozen system is affected by pH in the range of pH 6-8 and buffer concentration in the range of 5-50 mM, where higher pH and lower buffer concentration favor crystallization. The physical state of the drug is critical to solid state stability, given the relative instability of the amorphous drug. Stability was shown to vary considerably over the ranges of pH and buffer concentration examined, and vial-to-vial variability in degree of crystallinity is a potential concern. The formulation design space was constructed in terms of pH and drug concentration, assuming a constant 5 mM concentration of buffer. The process design space was constructed to take into account limitations on the process imposed by the product and by equipment capability.

  2. Abuse-deterrent formulations: part 1 - development of a formulation-based classification system.

    PubMed

    Mastropietro, David J; Omidian, Hossein

    2015-02-01

    Strategies have been implemented to decrease the large proportion of individuals misusing abusable prescription medications. Abuse-deterrent formulations (ADFs) have grown to incorporate many different technologies that still lack a systematic naming and organizational nomenclature. Without a proper classification system, it has been challenging to properly identify ADFs, study and determine common traits or characteristics, and simplify communication within the field. This article introduces a classification system for all ADF approaches and examines the physical, chemical and pharmacological characteristics of a formulation by placing them into primary, secondary and tertiary categories. Primary approaches block tampering done directly to the product. Secondary approaches work in vivo after the product is administered. Tertiary approaches use materials that discourage abuse but do not stop tampering. Part 2 of this article discusses proprietary technologies, patents and products utilizing primary approaches. Over the past few decades, drug products have used opioid antagonists and aversive agents primarily to discourage overuse and injection. However, innovation in formulation development has introduced products capable of deterring multiple forms of tampering and abuse. Often, this is accomplished using known excipients and manufacturing methods that are repurposed to prevent crushing, extraction and syringeability.

  3. Implementing participatory decision making in forest planning.

    PubMed

    Ananda, Jayanath

    2007-04-01

    Forest policy decisions are often a source of debate, conflict, and tension in many countries. The debate over forest land-use decisions often hinges on disagreements about societal values related to forest resource use. Disagreements on social value positions are fought out repeatedly at local, regional, national, and international levels at an enormous social cost. Forest policy problems have some inherent characteristics that make them more difficult to deal with. On the one hand, forest policy decisions involve uncertainty, long time scales, and complex natural systems and processes. On the other hand, such decisions encompass social, political, and cultural systems that are evolving in response to forces such as globalization. Until recently, forest policy was heavily influenced by the scientific community and various economic models of optimal resource use. However, growing environmental awareness and acceptance of participatory democracy models in policy formulation have forced the public authorities to introduce new participatory mechanisms to manage forest resources. Most often, the efforts to include the public in policy formulation can be described using the lower rungs of Arnstein's public participation typology. This paper presents an approach that incorporates stakeholder preferences into forest land-use policy using the Analytic Hierarchy Process (AHP). An illustrative case of regional forest-policy formulation in Australia is used to demonstrate the approach. It is contended that applying the AHP in the policy process could considerably enhance the transparency of the participatory process and public acceptance of policy decisions.
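
    For readers unfamiliar with the AHP mechanics, the sketch below derives priority weights from a single pairwise comparison matrix via the principal eigenvector and checks judgment consistency. The criteria and judgments are hypothetical, not those elicited in the Australian case.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix over three forest land-use
    # criteria (e.g. conservation, recreation, timber); A[i, j] is the judged
    # importance of criterion i relative to criterion j on Saaty's 1-9 scale.
    A = np.array([[1.0, 3.0, 5.0],
                  [1 / 3, 1.0, 2.0],
                  [1 / 5, 1 / 2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                               # normalized priority weights

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
    RI = 0.58                                  # Saaty's random index for n = 3
    print("priority weights:", np.round(w, 3),
          " consistency ratio:", round(ci / RI, 3))
    ```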

  4. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.

  5. A PetriNet-Based Approach for Supporting Traceability in Cyber-Physical Manufacturing Systems

    PubMed Central

    Huang, Jiwei; Zhu, Yeping; Cheng, Bo; Lin, Chuang; Chen, Junliang

    2016-01-01

    With the growing popularity of complex dynamic activities in manufacturing processes, traceability of the entire life of every product has drawn significant attention especially for food, clinical materials, and similar items. This paper studies the traceability issue in cyber-physical manufacturing systems from a theoretical viewpoint. Petri net models are generalized for formulating dynamic manufacturing processes, based on which a detailed approach for enabling traceability analysis is presented. Models as well as algorithms are carefully designed, which can trace back the lifecycle of a possibly contaminated item. A practical prototype system for supporting traceability is designed, and a real-life case study of a quality control system for bee products is presented to validate the effectiveness of the approach. PMID:26999141

  6. A PetriNet-Based Approach for Supporting Traceability in Cyber-Physical Manufacturing Systems.

    PubMed

    Huang, Jiwei; Zhu, Yeping; Cheng, Bo; Lin, Chuang; Chen, Junliang

    2016-03-17

    With the growing popularity of complex dynamic activities in manufacturing processes, traceability of the entire life of every product has drawn significant attention especially for food, clinical materials, and similar items. This paper studies the traceability issue in cyber-physical manufacturing systems from a theoretical viewpoint. Petri net models are generalized for formulating dynamic manufacturing processes, based on which a detailed approach for enabling traceability analysis is presented. Models as well as algorithms are carefully designed, which can trace back the lifecycle of a possibly contaminated item. A practical prototype system for supporting traceability is designed, and a real-life case study of a quality control system for bee products is presented to validate the effectiveness of the approach.
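
    A minimal sketch of the backward-tracing idea is given below: transitions consume input lots and produce output items, and tracing a suspect item walks the net against the direction of firing. The lot and transition names are invented for illustration and do not come from the bee-product system described.

    ```python
    # A Petri-net-like record of a manufacturing run: each transition consumes
    # input places (lots) and produces output places (intermediate/final items).
    transitions = {
        "blend": {"in": ["honey_lot_A", "honey_lot_B"], "out": ["blend_1"]},
        "fill":  {"in": ["blend_1", "jar_lot_7"],       "out": ["batch_42"]},
        "label": {"in": ["batch_42", "label_roll_3"],   "out": ["sku_9001"]},
    }

    def trace_back(item, transitions):
        """Return every upstream place that could have contaminated `item`."""
        upstream, frontier = set(), {item}
        while frontier:
            current = frontier.pop()
            for t in transitions.values():
                if current in t["out"]:
                    fresh = set(t["in"]) - upstream
                    upstream |= fresh
                    frontier |= fresh
        return upstream

    print(sorted(trace_back("sku_9001", transitions)))
    # ['batch_42', 'blend_1', 'honey_lot_A', 'honey_lot_B', 'jar_lot_7', 'label_roll_3']
    ```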

  7. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
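
    The partial-pooling idea can be illustrated with a hand-rolled random-intercept shrinkage estimator: site means are pulled toward the grand mean in proportion to the ratio of between-site to within-site variance. The data below are synthetic, and the moment-based variance estimates are a deliberate simplification of a full multilevel fit.

    ```python
    import numpy as np

    # Synthetic grouped data: 8 sites with different true means (between-site
    # variance) and noisy observations with very different sample sizes.
    rng = np.random.default_rng(1)
    data = {s: rng.normal(m, 4.0, size=int(rng.integers(3, 30)))
            for s, m in enumerate(rng.normal(10.0, 2.0, size=8))}

    grand = np.mean(np.concatenate(list(data.values())))
    sigma2 = np.mean([np.var(y, ddof=1) for y in data.values()])     # within-site
    site_means = np.array([y.mean() for y in data.values()])
    avg_sampling_var = np.mean([sigma2 / len(y) for y in data.values()])
    tau2 = max(np.var(site_means, ddof=1) - avg_sampling_var, 1e-9)  # between-site

    for s, y in data.items():
        # Shrinkage: small, noisy sites are pulled strongly toward the grand mean.
        b = tau2 / (tau2 + sigma2 / len(y))
        pooled = b * y.mean() + (1 - b) * grand
        print(f"site {s}: n={len(y):2d}  raw mean={y.mean():6.2f}  pooled={pooled:6.2f}")
    ```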

  8. Formulation approaches to pediatric oral drug delivery: benefits and limitations of current platforms

    PubMed Central

    Lopez, Felipe L; Ernest, Terry B; Tuleu, Catherine; Gul, Mine Orlu

    2015-01-01

    Introduction: Most conventional drug delivery systems are not acceptable for pediatric patients as they differ in their developmental status and dosing requirements from other subsets of the population. Technology platforms are required to aid the development of age-appropriate medicines to maximize patient acceptability while maintaining safety, efficacy, accessibility and affordability. Areas covered: The current approaches and novel developments in the field of age-appropriate drug delivery for pediatric patients are critically discussed including patient-centric formulations, administration devices and packaging systems. Expert opinion: Despite the incentives provided by recent regulatory modifications and the efforts of formulation scientists, there is still a need for implementation of pharmaceutical technologies that enable the manufacture of licensed age-appropriate formulations. Harmonization of endeavors from regulators, industry and academia by sharing learning associated with data obtained from pediatric investigation plans, product development pathways and scientific projects would be the way forward to speed up bench-to-market age appropriate formulation development. A collaborative approach will benefit not only pediatrics, but other patient populations such as geriatrics would also benefit from an accelerated patient-centric approach to drug delivery. PMID:26165848

  9. A comprehensive linear programming tool to optimize formulations of ready-to-use therapeutic foods: an application to Ethiopia.

    PubMed

    Ryan, Kelsey N; Adams, Katherine P; Vosti, Stephen A; Ordiz, M Isabel; Cimo, Elizabeth D; Manary, Mark J

    2014-12-01

    Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed international and national crop and animal food databases was used to create a global and local candidate ingredient database. The database included information about each ingredient regarding nutrient composition, ingredient category, regional availability, and food safety, processing, and price. An LP tool was then designed to compose novel RUTF formulations. For the example case of Ethiopia, the objective was to minimize the ingredient cost of RUTF; the decision variables were ingredient weights and the extent of use of locally available ingredients, and the constraints were nutritional and product-quality related. Of the new RUTF formulations found by the LP tool for Ethiopia, 32 were predicted to be feasible for creating a paste, and these were prepared in the laboratory. Palatable final formulations contained a variety of ingredients, including fish, different dairy powders, and various seeds, grains, and legumes. Nearly all of the macronutrient values calculated by the LP tool differed by <10% from results produced by laboratory analyses, but the LP tool consistently underestimated total energy. The LP tool can be used to develop new RUTF formulations that make more use of locally available ingredients. This tool has the potential to lead to production of a variety of low-cost RUTF formulations that meet international standards and thereby potentially allow more children to be treated for SAM. © 2014 American Society for Nutrition.
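
    A toy version of such an LP formulation is sketched below using scipy.optimize.linprog: ingredient fractions are chosen to minimize cost subject to nutrient lower bounds and a total-weight constraint. The ingredient list, prices, compositions, and nutrient bounds are invented and far simpler than the paper's constraint set.

    ```python
    from scipy.optimize import linprog

    # Hypothetical ingredients: cost ($ per 100 g) and composition (per 100 g);
    # x gives each ingredient's fraction of 100 g of final product.
    #           peanut  milk_powder  sugar  veg_oil
    cost =    [0.30,   0.50,        0.10,  0.20]
    protein = [25.0,   26.0,        0.0,   0.0]    # g protein / 100 g
    energy =  [570.0,  500.0,       400.0, 880.0]  # kcal / 100 g

    # linprog minimizes c @ x subject to A_ub @ x <= b_ub and A_eq @ x == b_eq,
    # so nutrient *minimums* are encoded by negating both sides.
    A_ub = [[-v for v in protein], [-v for v in energy]]
    b_ub = [-13.0, -520.0]            # >= 13 g protein and >= 520 kcal per 100 g
    A_eq = [[1.0, 1.0, 1.0, 1.0]]     # fractions sum to 1 (100 g of product)
    b_eq = [1.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * 4)
    print("optimal blend (fractions):",
          res.x.round(3) if res.success else res.message)
    ```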

  10. A General Formulation for Robust and Efficient Integration of Finite Differences and Phase Unwrapping on Sparse Multidimensional Domains

    NASA Astrophysics Data System (ADS)

    Costantini, Mario; Malvarosa, Fabio; Minati, Federico

    2010-03-01

    Phase unwrapping and integration of finite differences are key problems in several technical fields. In SAR interferometry and in differential and persistent scatterer interferometry, digital elevation models and displacement measurements can be obtained after unambiguously determining the phase values and reconstructing the mean velocities and elevations of the observed targets, which can be performed by integrating differential estimates of these quantities (finite differences between neighboring points). In this paper we propose a general formulation for robust and efficient integration of finite differences and phase unwrapping, which includes standard techniques as sub-cases. The proposed approach allows obtaining more reliable and accurate solutions by exploiting redundant differential estimates (not only between nearest neighboring points) and multi-dimensional information (e.g. multi-temporal, multi-frequency, multi-baseline observations), or external data (e.g. GPS measurements). The proposed approach requires the solution of linear or quadratic programming problems, for which computationally efficient algorithms exist. The validation tests performed on real SAR data confirm the validity of the method, which was integrated in our production chain and successfully used in large-scale production.
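
    The core integration step can be illustrated as a redundant least-squares problem: each observation is a noisy difference between two points, including non-nearest-neighbor pairs, and per-point values are recovered up to a constant. This linear least-squares sketch is a simplification of the linear/quadratic programming formulation the paper proposes.

    ```python
    import numpy as np

    # Recover per-point values from redundant, noisy finite differences
    # v_j - v_i by solving an overdetermined linear system.
    rng = np.random.default_rng(0)
    n = 6
    truth = rng.normal(0.0, 5.0, n)                 # e.g. elevations or velocities
    pairs = [(i, j) for i in range(n) for j in range(n) if i < j]

    A = np.zeros((len(pairs), n))
    b = np.empty(len(pairs))
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = -1.0, 1.0            # encodes v_j - v_i
        b[row] = truth[j] - truth[i] + rng.normal(0.0, 0.1)

    # Differences only determine values up to a constant, so pin v_0 = 0
    # by dropping its column before solving in the least-squares sense.
    sol, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)
    est = np.concatenate([[0.0], sol])
    print("max abs error:", np.abs(est - (truth - truth[0])).max())
    ```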

  11. A continuum mechanics constitutive framework for transverse isotropic soft tissues

    NASA Astrophysics Data System (ADS)

    Garcia-Gonzalez, D.; Jérusalem, A.; Garzon-Hernandez, S.; Zaera, R.; Arias, A.

    2018-03-01

    In this work, a continuum constitutive framework for the mechanical modelling of soft tissues that incorporates strain rate and temperature dependencies as well as the transverse isotropy arising from fibres embedded into a soft matrix is developed. The constitutive formulation is based on a Helmholtz free energy function decoupled into the contribution of a viscous-hyperelastic matrix and the contribution of fibres introducing dispersion dependent transverse isotropy. The proposed framework considers finite deformation kinematics, is thermodynamically consistent and allows for the particularisation of the energy potentials and flow equations of each constitutive branch. In this regard, the approach developed herein provides the basis on which specific constitutive models can be potentially formulated for a wide variety of soft tissues. To illustrate this versatility, the constitutive framework is particularised here for animal and human white matter and skin, for which constitutive models are provided. In both cases, different energy functions are considered: Neo-Hookean, Gent and Ogden. Finally, the ability of the approach at capturing the experimental behaviour of the two soft tissues is confirmed.
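
    As a rough illustration of the decoupled-energy idea, the sketch below evaluates a Neo-Hookean matrix term plus an exponential fibre term of HGO type acting on the fibre-stretch invariant I4 = a·C·a. The specific potential, parameters, and loading are illustrative assumptions, not the paper's particularized models (which also include viscous and thermal effects).

    ```python
    import numpy as np

    mu, k1, k2 = 1.0, 0.5, 1.0          # matrix shear modulus, fibre stiffnesses
    a0 = np.array([1.0, 0.0, 0.0])      # reference fibre direction

    def energy(F):
        """Decoupled strain energy: isotropic matrix + tension-only fibres."""
        C = F.T @ F                                   # right Cauchy-Green tensor
        I1, I4 = np.trace(C), a0 @ C @ a0             # invariants
        W_matrix = 0.5 * mu * (I1 - 3.0)              # Neo-Hookean (incompressible)
        W_fibre = 0.0
        if I4 > 1.0:                                  # fibres resist tension only
            W_fibre = k1 / (2 * k2) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
        return W_matrix + W_fibre

    # Isochoric uniaxial stretch along the fibres: diag(lam, 1/sqrt(lam), ...).
    for lam in (1.0, 1.1, 1.2):
        F = np.diag([lam, lam ** -0.5, lam ** -0.5])
        print(f"lambda = {lam:.1f}  W = {energy(F):.4f}")
    ```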

  12. Numerical solution of a conspicuous consumption model with constant control delay

    PubMed Central

    Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea

    2011-01-01

    We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871

  13. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  14. Passive vibration control: a structure–immittance approach

    PubMed Central

    Zhang, Sara Ying; Neild, Simon A.

    2017-01-01

    Linear passive vibration absorbers, such as tuned mass dampers, often contain springs, dampers and masses, although recently there has been a growing trend to employ or supplement the mass elements with inerters. When considering possible configurations with these elements broadly, two approaches are normally used: one structure-based and one immittance-based. Both approaches have their advantages and disadvantages. In this paper, a new approach is proposed: the structure–immittance approach. Using this approach, a full set of possible series–parallel networks with predetermined numbers of each element type can be represented by structural immittances, obtained via a proposed general formulation process. Using the structural immittances, both the ability to investigate a class of absorber possibilities together (advantage of the immittance-based approach), and the ability to control the complexity, topology and element values in resulting absorber configurations (advantages of the structure-based approach) are provided at the same time. The advantages of the proposed approach are demonstrated through two case studies on building vibration suppression and automotive suspension design, respectively. PMID:28588407

  15. Passive vibration control: a structure-immittance approach.

    PubMed

    Zhang, Sara Ying; Jiang, Jason Zheng; Neild, Simon A

    2017-05-01

    Linear passive vibration absorbers, such as tuned mass dampers, often contain springs, dampers and masses, although recently there has been a growing trend to employ or supplement the mass elements with inerters. When considering possible configurations with these elements broadly, two approaches are normally used: one structure-based and one immittance-based. Both approaches have their advantages and disadvantages. In this paper, a new approach is proposed: the structure-immittance approach. Using this approach, a full set of possible series-parallel networks with predetermined numbers of each element type can be represented by structural immittances, obtained via a proposed general formulation process. Using the structural immittances, both the ability to investigate a class of absorber possibilities together (advantage of the immittance-based approach), and the ability to control the complexity, topology and element values in resulting absorber configurations (advantages of the structure-based approach) are provided at the same time. The advantages of the proposed approach are demonstrated through two case studies on building vibration suppression and automotive suspension design, respectively.

  16. Passive vibration control: a structure-immittance approach

    NASA Astrophysics Data System (ADS)

    Zhang, Sara Ying; Jiang, Jason Zheng; Neild, Simon A.

    2017-05-01

    Linear passive vibration absorbers, such as tuned mass dampers, often contain springs, dampers and masses, although recently there has been a growing trend to employ or supplement the mass elements with inerters. When considering possible configurations with these elements broadly, two approaches are normally used: one structure-based and one immittance-based. Both approaches have their advantages and disadvantages. In this paper, a new approach is proposed: the structure-immittance approach. Using this approach, a full set of possible series-parallel networks with predetermined numbers of each element type can be represented by structural immittances, obtained via a proposed general formulation process. Using the structural immittances, both the ability to investigate a class of absorber possibilities together (advantage of the immittance-based approach), and the ability to control the complexity, topology and element values in resulting absorber configurations (advantages of the structure-based approach) are provided at the same time. The advantages of the proposed approach are demonstrated through two case studies on building vibration suppression and automotive suspension design, respectively.
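
    A small numeric illustration of the immittance view is given below: mechanical one-ports are combined through the usual series/parallel rules on their force-velocity admittances, and one candidate series-parallel layout is evaluated across frequencies. The layout and element values are invented, and this evaluates a single network rather than the full class a structural immittance covers.

    ```python
    import math

    # Force-velocity admittances Y(s) of the mechanical one-ports:
    # spring k/s, damper c, inerter b*s. Parallel admittances add; series
    # admittances combine reciprocally.
    def spring(k): return lambda s: k / s
    def damper(c): return lambda s: c
    def inerter(b): return lambda s: b * s

    def parallel(*ys): return lambda s: sum(y(s) for y in ys)
    def series(*ys): return lambda s: 1.0 / sum(1.0 / y(s) for y in ys)

    # One candidate series-parallel network: a damper in parallel with a
    # spring-inerter series branch (all values hypothetical).
    Y = parallel(damper(300.0), series(spring(2.0e4), inerter(50.0)))
    for f_hz in (0.5, 2.0, 8.0):
        s = 1j * 2.0 * math.pi * f_hz
        print(f"f = {f_hz:4.1f} Hz   |Y(s)| = {abs(Y(s)):9.2f} N*s/m")
    ```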

  17. Glass Property Models and Constraints for Estimating the Glass to be Produced at Hanford by Implementing Current Advanced Glass Formulation Efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.

    2013-07-01

    Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminarymore » in nature and will change in the coming years. Plus the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to try to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimate of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties that has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.« less

  18. On a viable first-order formulation of relativistic viscous fluids and its applications to cosmology

    NASA Astrophysics Data System (ADS)

    Disconzi, Marcelo M.; Kephart, Thomas W.; Scherrer, Robert J.

    We consider a first-order formulation of relativistic fluids with bulk viscosity based on a stress-energy tensor introduced by Lichnerowicz. Choosing a barotropic equation-of-state, we show that this theory satisfies basic physical requirements and, under the further assumption of vanishing vorticity, that the equations of motion are causal, both in the case of a fixed background and when the equations are coupled to Einstein's equations. Furthermore, Lichnerowicz's proposal does not fit into the general framework of first-order theories studied by Hiscock and Lindblom, and hence their instability results do not apply. These conclusions apply to the full-fledged nonlinear theory, without any equilibrium or near equilibrium assumptions. Similarities and differences between the approach explored here and other theories of relativistic viscosity, including the Mueller-Israel-Stewart formulation, are addressed. Cosmological models based on the Lichnerowicz stress-energy tensor are studied. As the topic of (relativistic) viscous fluids is also of interest outside the general relativity and cosmology communities, such as, for instance, in applications involving heavy-ion collisions, we make our presentation largely self-contained.

  19. Correlation between Gini index and mobility in a stochastic kinetic model of economic exchange

    NASA Astrophysics Data System (ADS)

    Bertotti, Maria Letizia; Chattopadhyay, Amit K.; Modanese, Giovanni

    Starting from a class of stochastically driven kinetic models of economic exchange, here we present results highlighting the correlation of the Gini inequality index with the social mobility rate, close to dynamical equilibrium. Except for the "canonical-additive case", our numerical results consistently indicate negative values of the correlation coefficient, in agreement with empirical evidence. This confirms that growing inequality is not conducive to social mobility which then requires an "external source" to sustain its dynamics. On the other hand, the sign of the correlation between inequality and total income in the canonical ensemble depends on the way wealth enters or leaves the system. At a technical level, the approach involves a generalization of a stochastic dynamical system formulation, that further paves the way for a probabilistic formulation of perturbed economic exchange models.
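
    A minimal simulation in the same spirit is sketched below: pairs of agents repeatedly pool the non-saved part of their wealth and split it at random, after which a Gini index and a crude rank-shift mobility measure are computed. The exchange rule and parameters are generic placeholders rather than the specific stochastically driven model class analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, LAM, STEPS = 500, 0.5, 20000       # agents, saving propensity, exchanges
    w = np.ones(N)

    def gini(x):
        xs = np.sort(x)
        i = np.arange(1, xs.size + 1)
        return 2 * np.sum(i * xs) / (xs.size * xs.sum()) - (xs.size + 1) / xs.size

    ranks_before = np.argsort(np.argsort(w))   # all tied initially
    for _ in range(STEPS):
        i, j = rng.choice(N, size=2, replace=False)
        pot = (1 - LAM) * (w[i] + w[j])        # tradable part of joint wealth
        eps = rng.random()                     # random split of the pot
        w[i] = LAM * w[i] + eps * pot
        w[j] = LAM * w[j] + (1 - eps) * pot
    ranks_after = np.argsort(np.argsort(w))
    mobility = np.mean(np.abs(ranks_after - ranks_before)) / N
    print(f"Gini = {gini(w):.3f}, mean relative rank shift = {mobility:.3f}")
    ```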

  20. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, Gary F.; Banerjee, Prasanta K.; Dunn, Michael G.

    1988-01-01

    Significant progress was made toward the goal of developing a general purpose boundary element method for hot fluid-structure interaction. For the solid phase, a boundary-only formulation was developed and implemented for uncoupled transient thermoelasticity in two dimensions. The elimination of volume discretization not only drastically reduces required modeling effort, but also permits unconstrained variation of the through-the-thickness temperature distribution. Meanwhile, for the fluids, fundamental solutions were derived for transient incompressible and compressible flow in the absence of the convective terms. Boundary element formulations were developed and described. For the incompressible case, the necessary kernal functions, under transient and steady-state conditions, were derived and fully implemented into a general purpose, multi-region boundary element code. Several examples were examined to study the suitability and convergence characteristics of the various algorithms.

  1. Transient analysis of a thermal storage unit involving a phase change material

    NASA Technical Reports Server (NTRS)

    Griggs, E. I.; Pitts, D. R.; Humphries, W. R.

    1974-01-01

    The transient response of a single cell of a typical phase change material type thermal capacitor has been modeled using numerical conductive heat transfer techniques. The cell consists of a base plate, an insulated top, and two vertical walls (fins) forming a two-dimensional cavity filled with a phase change material. Both explicit and implicit numerical formulations are outlined. A mixed explicit-implicit scheme which treats the fin implicitly while treating the phase change material explicitly is discussed. A banded-matrix algorithm is used to reduce computer storage requirements for the implicit approach while retaining a relatively fine grid. All formulations are presented in dimensionless form, thereby enabling application to geometrically similar problems. Typical parametric results are presented graphically for the case of melting with constant heat input to the base of the cell.
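
    The explicit treatment of the phase change material can be illustrated with a one-dimensional enthalpy-method sketch: conduction updates nodal enthalpy, and temperature follows from the enthalpy through the melting relation (isothermal melting at T = 0 with latent heat L). The dimensionless grid, step sizes, and boundary conditions below are illustrative assumptions, not the cell model of the paper.

    ```python
    import numpy as np

    nx, dx, dt, L = 51, 1.0 / 50, 1e-4, 1.0   # grid, steps, latent heat
    H = np.full(nx, -0.5)                     # initially solid, below melting point

    def temp(H):
        T = np.where(H < 0.0, H, 0.0)         # solid branch / melting plateau
        return np.where(H > L, H - L, T)      # liquid branch after latent heat

    for step in range(5000):                  # explicit update (dt/dx^2 = 0.25)
        T = temp(H)
        T[0] = 1.0                            # heated base (Dirichlet condition)
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        lap[-1] = 2.0 * (T[-2] - T[-1]) / dx**2   # insulated far end (mirror node)
        H[1:] += dt * lap[1:]                 # enthalpy update, boundary node fixed

    # First interior node still strictly below the melting point.
    front = 1 + int(np.argmax(temp(H)[1:] < 0.0))
    print(f"melt/warm front after {5000 * dt:.1f} time units is near node {front}")
    ```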

  2. Diagnosis and hypnotic treatment of an unusual case of hysterical amnesia.

    PubMed

    Iglesias, Alex; Iglesias, Adam

    2009-10-01

    This article reports on the use of hypnosis to facilitate the diagnostic process and the treatment of an unusual case of adult psychogenic amnesia. An Iraqi citizen living in the U.S. developed an atypical case of Dissociative Amnesia, Systematized type, following an automobile collision. The amnesia presented with features encompassing complete loss of the patient's native language. Dissociation theory, as a conceptualization of hysterical reactions, was employed as the basis for the formulation of this case. The differential diagnosis was facilitated by the Hypnotic Diagnostic Interview for Hysterical Disorders (HDIHD) Adult Form, an interview tool specifically designed for cases such as this. Treatment consisted exclusively of ego strengthening and time projection approaches in hypnosis. It was hypothesized that, as the coping capacities became more viable, the dissociative symptoms would remit. After 6 weekly visits the patient regained complete command of his native language. Follow-up at 6 months indicated that the patient remained free of symptoms.

  3. Development and application of a biorelevant dissolution method using USP apparatus 4 in early phase formulation development.

    PubMed

    Fang, Jiang B; Robertson, Vivian K; Rawat, Archana; Flick, Tawnya; Tang, Zhe J; Cauchon, Nina S; McElvain, James S

    2010-10-04

    Dissolution testing is frequently used to determine the rate and extent at which a drug is released from a dosage form, and it plays many important roles throughout drug product development. However, the traditional dissolution approach often emphasizes its application in quality control testing and usually strives to obtain 100% drug release. As a result, dissolution methods are not necessarily biorelevant and meaningful application of traditional dissolution methods in the early phases of drug product development can be very limited. This article will describe the development of a biorelevant in vitro dissolution method using USP apparatus 4, biorelevant media, and real-time online UV analysis. Several case studies in the areas of formulation selection, lot-to-lot variability, and food effect will be presented to demonstrate the application of this method in early phase formulation development. This biorelevant dissolution method using USP apparatus 4 provides a valuable tool to predict certain aspects of the in vivo drug release. It can be used to facilitate the formulation development/selection for pharmacokinetic (PK) and clinical studies. It may also potentially be used to minimize the number of PK studies, and to aid in the design of more efficient PK and clinical studies.

  4. Screening vaccine formulations for biological activity using fresh human whole blood

    PubMed Central

    Brookes, Roger H; Hakimi, Jalil; Ha, Yukyung; Aboutorabian, Sepideh; Ausar, Salvador F; Hasija, Manvi; Smith, Steven G; Todryk, Stephen M; Dockrell, Hazel M; Rahman, Nausheen

    2014-01-01

    Understanding the relevant biological activity of any pharmaceutical formulation destined for human use is crucial. For vaccine-based formulations, activity must reflect the expected immune response, while for non-vaccine therapeutic agents, such as monoclonal antibodies, a lack of immune response to the formulation is desired. During early formulation development, various biochemical and biophysical characteristics can be monitored in a high-throughput screening (HTS) format. However, it remains impractical and arguably unethical to screen samples in this way for immunological functionality in animal models. Furthermore, data for immunological functionality lag formulation design by months, making it cumbersome to relate back to formulations in real-time. It is also likely that animal testing may not accurately reflect the response in humans. For a more effective formulation screen, a human whole blood (hWB) approach can be used to assess immunological functionality. The functional activity relates directly to the human immune response to a complete formulation (adjuvant/antigen) and includes adjuvant response, antigen response, adjuvant-modulated antigen response, stability, and potentially safety. The following commentary discusses the hWB approach as a valuable new tool to de-risk manufacture, formulation design, and clinical progression. PMID:24401565

  5. Tiered application of the neutral red release and EpiOcular™ assays for evaluating the eye irritation potential of agrochemical formulations.

    PubMed

    Settivari, Raja S; Amado, Ricardo Acosta; Corvaro, Marco; Visconti, Nicolo R; Kan, Lynn; Carney, Edward W; Boverhof, Darrell R; Gehen, Sean C

    2016-11-01

    Agrochemical formulations have been underrepresented in validation efforts for implementing alternative eye irritation approaches but represent a significant opportunity to reduce animal testing. This study assesses the utility of the neutral red release (NRR) assay and the EpiOcular™ (EO) assay for predicting the eye irritation potential of 64 agrochemical formulations relative to Draize data. In the NRR, formulations with an NRR50 value ≤50 mg/mL were categorized as UN GHS Cat 1 and those >250 mg/mL were classified as UN GHS Not Classified (NC). The accuracy, sensitivity, and specificity were 78%, 85%, and 76% for identifying UN GHS Cat 1 formulations and 73%, 85%, and 61% for identifying NC formulations, respectively. Specificity was poor for formulations with NRR50 >50 to ≤250 mg/mL. The EO (ET-40 method) was explored to differentiate formulations that were UN GHS Cat 1/2 from those that were UN GHS NC, yielding an accuracy, sensitivity, and specificity of 65%, 58%, and 75% for identifying UN GHS NC formulations. To improve overall performance, the assays were implemented in a tiered approach in which the NRR was run as the first tier, followed by the EO. The tiered approach yielded improved accuracy (75%) and balanced sensitivity (73%) and specificity (77%) for distinguishing between irritating and non-irritating agrochemical formulations. Copyright © 2016 Elsevier Inc. All rights reserved.
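
    The tiered decision logic described above can be made concrete with a short sketch. The NRR50 cut-offs (≤50 and >250 mg/mL) are taken from the abstract; the ET-40 cut-off is a placeholder, since the published value is not stated here.

    ```python
    def classify_nrr(nrr50_mg_per_ml):
        """First tier: neutral red release (NRR) assay cut-offs from the study."""
        if nrr50_mg_per_ml <= 50:
            return "UN GHS Cat 1"
        if nrr50_mg_per_ml > 250:
            return "UN GHS Not Classified"
        return None  # indeterminate band (>50 to <=250): defer to second tier

    def classify_tiered(nrr50_mg_per_ml, eo_et40_min, et40_cutoff=60.0):
        """Tiered strategy: NRR first, EpiOcular ET-40 second.
        et40_cutoff is a hypothetical threshold, not the published one."""
        first = classify_nrr(nrr50_mg_per_ml)
        if first is not None:
            return first
        if eo_et40_min > et40_cutoff:  # slow tissue-viability decay: mild
            return "UN GHS Not Classified"
        return "UN GHS Cat 1/2"

    print(classify_tiered(30, 15))    # Cat 1 straight from the NRR tier
    print(classify_tiered(120, 90))   # indeterminate NRR, resolved by EO
    ```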

  6. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the capabilities and limitations of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows multiple, easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  7. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

    Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention in civil engineering. Based on principles of probability and statistics, a reliability approach provides a rational basis for analyzing the randomness in loads and their effects on structures. This paper presents a novel approach that combines SHM data with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system. In this study, the reliability of the steel girder of the cable-stayed bridge is expressed directly as a failure probability instead of the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural response, a formulation of the failure probability is deduced. Then, as the main factor in the formulation, the probability density function (PDF) of the strain at sensor locations is evaluated from the monitoring data and verified. The Donghai Bridge is taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results are discussed. Finally, the sensitivity and accuracy of the novel approach are compared with those of the first-order reliability method (FORM).
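
    Under the stated independence assumption, the failure probability in formulations of this kind is conventionally written as a convolution of the resistance distribution with the monitored load-effect PDF. The identity below is the standard structural-reliability form, shown for orientation; the paper's exact notation may differ.

    ```latex
    \[
      P_f \;=\; \Pr(R \le S) \;=\; \int_{-\infty}^{\infty} F_R(s)\, f_S(s)\, \mathrm{d}s
    \]
    % f_S: PDF of the monitored strain (load effect) at a sensor location
    % F_R: CDF of the resistance, assumed independent of the response
    ```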

  8. A dual potential formulation of the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Gegg, S. G.; Pletcher, R. H.; Steger, J. L.

    1989-01-01

    A dual potential formulation for numerically solving the Navier-Stokes equations is developed and presented. The velocity field is decomposed using a scalar and vector potential. Vorticity and dilatation are used as the dependent variables in the momentum equations. Test cases in two dimensions verify the capability to solve flows using approximations from potential flow to full Navier-Stokes simulations. A three-dimensional incompressible flow formulation is also described. An interesting feature of this approach to solving the Navier-Stokes equations is the decomposition of the velocity field into a rotational part (vector potential) and an irrotational part (scalar potential). The Helmholtz decomposition theorem allows this splitting of the velocity field. This approach has had only limited use since it increases the number of dependent variables in the solution. However, it has often been used for incompressible flows, where the solution scheme is known to be fast and accurate. This research extends the use of this method to fully compressible Navier-Stokes simulations by using the dilatation variable along with vorticity. A time-accurate, iterative algorithm is used for the uncoupled solution of the governing equations. Several levels of flow approximation are available within the framework of this method: potential flow, Euler, and full Navier-Stokes solutions are all possible using the dual potential formulation. Solution efficiency can be enhanced in a straightforward way. For some flows, the vorticity and/or dilatation may be negligible in certain regions (e.g., far from a viscous boundary in an external flow); the calculation of these variables can then be dropped there to improve solution speed. Also, efficient Poisson solvers are available for the potentials. The relative merits of non-primitive variables versus primitive variables for solution of the Navier-Stokes equations are also discussed.
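
    The splitting underlying the method is the Helmholtz decomposition of the velocity field; in the notation assumed here (not necessarily the authors'), with scalar potential $\phi$ and vector potential $\boldsymbol{\Psi}$:

    ```latex
    \[
      \mathbf{u} = \nabla\phi + \nabla\times\boldsymbol{\Psi}, \qquad
      \nabla\cdot\mathbf{u} = \nabla^2\phi \;\;(\text{dilatation}), \qquad
      \boldsymbol{\omega} = \nabla\times\mathbf{u} = -\nabla^2\boldsymbol{\Psi}
      \;\;(\text{vorticity, gauge } \nabla\cdot\boldsymbol{\Psi}=0)
    \]
    % With this gauge both potentials satisfy Poisson equations, which is
    % why efficient Poisson solvers apply directly to the potentials.
    ```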

  9. Physical stabilization of low-molecular-weight amorphous drugs in the solid state: a material science approach.

    PubMed

    Qi, Sheng; McAuley, William J; Yang, Ziyi; Tipduangta, Pratchaya

    2014-07-01

    Use of the amorphous state is considered to be one of the most effective approaches for improving the dissolution and subsequent oral bioavailability of poorly water-soluble drugs. However, as the amorphous state is much less physically stable than its crystalline counterpart, stabilization of amorphous drugs in a solid dosage form presents a major challenge to formulators. The currently used approaches for stabilizing amorphous drugs are discussed in this article with respect to their preparation, mechanism of stabilization and limitations. In order to realize the potential of amorphous formulations, significant efforts are required to enable the prediction of formulation performance. This will facilitate the development of computational tools that can inform a rapid and rational formulation development process for amorphous drugs.

  10. Determination of the temperature field of shell structures

    NASA Astrophysics Data System (ADS)

    Rodionov, N. G.

    1986-10-01

    A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharif, M., E-mail: msharif.math@pu.edu.pk; Nawazish, I., E-mail: iqranawazish07@gmail.com

    We attempt to find exact solutions of the Bianchi I model in f(R) gravity using the Noether symmetry approach. For this purpose, we take a perfect fluid and formulate conserved quantities for the power-law f(R) model. We discuss some cosmological parameters for the resulting solution, which are responsible for the expanding behavior of the universe. We also explore the Noether gauge symmetry and the corresponding conserved quantity. It is concluded that symmetry generators as well as conserved quantities exist in all cases, and the behavior of the cosmological parameters is consistent with recent observational data.

  12. Surface electrical properties experiment. Part 2: Theory of radio-frequency interferometry in geophysical subsurface probing

    NASA Technical Reports Server (NTRS)

    Kong, J. A.; Tsang, L.

    1974-01-01

    The radiation fields due to a horizontal electric dipole laid on the surface of a stratified medium were calculated using a geometrical optics approximation, a modal approach, and direct numerical integration. The solutions were obtained from the reflection coefficient formulation and written in integral forms. The calculated interference patterns are compared in terms of the usefulness of the methods used to obtain them. Scattering effects are also discussed and all numerical results for anisotropic and isotropic cases are presented.

  13. Vakonomic Constraints in Higher-Order Classical Field Theory

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.

    2010-07-01

    We propose a differential-geometric setting for the dynamics of a higher-order field theory, based on the Skinner and Rusk formalism for mechanics. This approach incorporates aspects of both the Lagrangian and the Hamiltonian descriptions, since the field equations are formulated using the Lagrangian on a higher-order jet bundle and the canonical multisymplectic form on its affine dual. The result is a unique and global intrinsic description of the dynamics. The case of vakonomic constraints is also studied within this formalism.

  14. Nutrition meets heredity: a case of RNA-mediated transmission of acquired characters.

    PubMed

    Rassoulzadegan, Minoo; Cuzin, François

    2018-04-01

    RNA-based inheritance provides a reasonable hypothesis to explain the multigenerational maintenance of the disease in the progeny of either a male or female parent suffering from the metabolic syndrome (obesity and type 2 diabetes) induced by an abnormal diet. Although it is still difficult to formulate a complete rational mechanism, the study of inheritance is a most direct way to learn about the epigenetic control of gene expression, and we summarise our current approach along this line.

  15. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  16. Defining level A IVIVC dissolution specifications based on individual in vitro dissolution profiles of a controlled release formulation.

    PubMed

    González-García, I; García-Arieta, A; Merino-Sanjuan, M; Mangas-Sanjuan, V; Bermejo, M

    2018-07-01

    Regulatory guidelines recommend that, when a level A IVIVC is established, dissolution specifications should be established using averaged data, and the maximum difference in AUC and Cmax between the reference and test formulations cannot be greater than 20%. However, averaging data entails a loss of information and may bias the results. The objective of the current work is to present a new approach to establishing dissolution specifications using individual profiles (individual approach) instead of averaged data (classical approach). Different scenarios were established based on the relationship between the in vitro and in vivo dissolution rate coefficients using a level A IVIVC of a controlled release formulation. Then, in order to compare this new approach with the classical one, six additional batches were simulated. For each batch, 1000 simulations of a dissolution assay were run. Cmax ratios between the reference formulation and each batch were calculated, showing that the individual approach was more sensitive and better able to detect differences between the reference and the batch formulation than the classical approach. Additionally, the new methodology yields wider dissolution specification limits than the classical approach while still ensuring that any tablet from a new batch would generate in vivo profiles whose AUC and Cmax ratios remain within the 0.8-1.25 range, taking into account the in vitro and in vivo variability of the new batches developed. Copyright © 2018 Elsevier B.V. All rights reserved.
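
    The batch-comparison step lends itself to a compact simulation sketch. Everything here is illustrative: the dissolution rate constants, the variability level, and the assumption of a linear level A IVIVC mapping the in vitro rate to Cmax are all invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_cmax_ratios(kd_ref, kd_batch, cv=0.10, n_sim=1000):
        """Draw in vitro dissolution rate constants with lognormal
        variability and map them to Cmax through a hypothetical linear
        level A IVIVC (the proportionality constant cancels in the ratio)."""
        ref = kd_ref * rng.lognormal(0.0, cv, n_sim)
        batch = kd_batch * rng.lognormal(0.0, cv, n_sim)
        return batch / ref

    ratios = simulate_cmax_ratios(kd_ref=0.12, kd_batch=0.10)
    within = np.mean((ratios >= 0.8) & (ratios <= 1.25))
    print(f"simulated assays with Cmax ratio inside 0.8-1.25: {within:.1%}")
    ```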

  17. The study of marketed and experimental formulation approaches enabling site-specific delivery of mesalamine in patients with inflammatory bowel disease.

    PubMed

    Kadiyala, Irina; Jacobs, Dylan

    2014-04-01

    This patent review focuses exclusively on the oral delivery of mesalamine (5-ASA) and excludes oral mesalamine prodrug and rectal delivery formulations. The formulation strategies of marketed formulations (Apriso(®), Asacol(®), Lialda(®) and Pentasa(®)) and non-marketed formulations are reviewed and explained by decoding the formulation specifics that enable site-specific delivery for the treatment of inflammatory bowel disease.

  18. Imbedded-Fracture Formulation of THMC Processes in Fractured Media

    NASA Astrophysics Data System (ADS)

    Yeh, G. T.; Tsai, C. H.; Sung, R.

    2016-12-01

    Fractured media consist of porous materials and fracture networks. There are four approaches to mathematically formulating THMC (Thermal-Hydrology-Mechanics-Chemistry) process models in such systems: (1) Equivalent Porous Media, (2) Dual Porosity or Dual Continuum, (3) Heterogeneous Media, and (4) Discrete Fracture Network. The first approach cannot explicitly explore the interactions between porous materials and fracture networks. The second approach introduces too many extra parameters (namely, exchange coefficients) between the two media. The third approach may make the problems too stiff because the degree of material heterogeneity may be too great. The fourth approach ignores the interaction between porous materials and fracture networks. This talk presents an alternative approach in which fracture networks are modeled with a lower dimension than the surrounding porous materials. Theoretical derivation of the mathematical formulations will be given. An example will be illustrated to show the feasibility of this approach.

  19. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the subtriangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient, i.e., tangent to the plane containing the source element.

  1. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a sequel to the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, while the natural coordinate formulation (NCF) and the absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken: first, a Riemann solver is used to cope with the fluid compressibility; second, virtual SPH particles are defined to model the dynamic interaction between the fluid and the multibody system; and third, boundary conditions of periodic inflow and outflow are imposed to reduce the number of SPH particles involved in the computation. Afterwards, a parallel computation strategy based on the graphics processing unit (GPU) is proposed to detect neighboring SPH particles and to solve the dynamic equations of the SPH particles, improving computational efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.

  2. Co-evolving prisoner's dilemma: Performance indicators and analytic approaches

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.

    2017-02-01

    Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), non-cooperative strategy has an edge and dominates in a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to include more spatial correlation into a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
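
    The two indicators are easy to state in code, and a toy example shows why they favour different strategies. The payoff matrix below uses the usual T > R > P > S ordering with illustrative values, not the paper's parameters.

    ```python
    import numpy as np

    # Payoff to the row player: C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def indicators(strategy, neighbor_strategies):
        """Case A: average payoff per opponent.  Case B: total payoff."""
        payoffs = [PAYOFF[(strategy, s)] for s in neighbor_strategies]
        return np.mean(payoffs), np.sum(payoffs)

    # A well-connected cooperator loses on the per-opponent indicator but
    # wins on total payoff, hinting at why Case B favours cooperation.
    print(indicators("C", ["C"] * 7 + ["D"]))  # (2.625, 21)
    print(indicators("D", ["C", "C"]))         # (5.0, 10)
    ```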

  3. Strategy Formulation in Small Enterprises: A Developmental Approach.

    ERIC Educational Resources Information Center

    Paton, Robert; Brownlie, Douglas

    1991-01-01

    The Small Company European Analysis Technique is a diagnostic tool that small businesses can use to analyze market opportunities in preparation for 1992. The approach uses small group consensus building as in the Delphi technique and brainstorming to formulate a strategic plan. (SK)

  4. Dual algebraic formulation of differential GPS

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Dur, S.

    2003-05-01

    A new approach to differential GPS is presented. The corresponding theoretical framework calls on elementary concepts of algebraic graph theory. The notion of double difference, which is related to that of closure in the sense of Kirchhoff, is revisited in this context. The Moore-Penrose pseudo-inverse of the closure operator plays a key role in the corresponding dual formulation. This approach, which is very attractive from a conceptual point of view, sheds a new light on the Teunissen formulation.
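
    For orientation, the double difference referred to above is conventionally defined, for receivers $a, b$ and satellites $p, q$, as below (standard GNSS notation, not necessarily the paper's):

    ```latex
    \[
      \nabla\Delta\,\phi_{ab}^{pq}
        \;=\; \left(\phi_a^{p} - \phi_b^{p}\right) - \left(\phi_a^{q} - \phi_b^{q}\right)
    \]
    % Receiver clock errors cancel within each single difference;
    % satellite clock errors cancel between the two single differences.
    ```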

  5. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, nonlinear dynamics and constraints.

  6. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method, in conjunction with a finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  7. Multi Sensor Fusion Using Fitness Adaptive Differential Evolution

    NASA Astrophysics Data System (ADS)

    Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam

    The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random, and it gives better results in the case of optimal allocation of sensors. Its performance is compared with that of an evolutionary algorithm based on the coordination generalized particle model (C-GPM).
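
    A minimal generation of classic DE/rand/1/bin conveys the mechanics on which FiADE builds; the fitness-adaptive scaling that gives FiADE its name is not reproduced here, so a fixed scale factor stands in for it, and the sphere objective is a stand-in for a sensor-allocation fitness.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def de_step(pop, fitness, f_scale=0.8, cr=0.9):
        """One DE/rand/1/bin generation with greedy selection."""
        n, d = pop.shape
        new_pop = pop.copy()
        for i in range(n):
            others = [j for j in range(n) if j != i]
            r1, r2, r3 = rng.choice(others, size=3, replace=False)
            mutant = pop[r1] + f_scale * (pop[r2] - pop[r3])
            cross = rng.random(d) < cr
            cross[rng.integers(d)] = True      # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            if fitness(trial) < fitness(pop[i]):
                new_pop[i] = trial
        return new_pop

    sphere = lambda x: float(np.sum(x ** 2))   # toy minimization objective
    pop = rng.uniform(-5, 5, (20, 4))
    for _ in range(100):
        pop = de_step(pop, sphere)
    print("best fitness:", min(map(sphere, pop)))
    ```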

  8. Understanding the Depth and Richness of the Cultural Context in Career Counseling through the Cultural Formulation Approach (CFA)

    ERIC Educational Resources Information Center

    Heppner, Mary J.; Fu, Chu-Chun

    2010-01-01

    In this article, the authors discuss the Cultural Formulation Approach (CFA) proposed by Leong and his colleagues, and the strong and insightful applications of the approach offered by Leong, Arthur, Juntunen, Byars-Winston, and Flores. They think this model has phenomenal possibilities in providing a methodology for counselors to be able to…

  9. Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Lee, Sangyup

    2015-06-01

    The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and uses a nonholonomic modeling approach to systematically couple the models developed at the different scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.

  10. Screening Vaccine Formulations in Fresh Human Whole Blood.

    PubMed

    Hakimi, Jalil; Aboutorabian, Sepideh; To, Frederick; Ausar, Salvador F; Rahman, Nausheen; Brookes, Roger H

    2017-01-01

    Monitoring the immunological functionality of vaccine formulations is critical for vaccine development. While the traditional approach using established animal models has been relatively effective, the use of animals is costly and cumbersome, and animal models are not always reflective of a human response. The development of a human-based approach would be a major step forward in understanding how vaccine formulations might behave in humans. Here, we describe a platform methodology using fresh human whole blood (hWB) to monitor adjuvant-modulated, antigen-specific responses to vaccine formulations, which is amenable to analysis by standard immunoassays as well as a variety of other analytical techniques.

  11. Dynamically Reconfigurable Approach to Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalie M.; Lewis, Robert Michael

    2003-01-01

    The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formulations in a research environment, but also the flexible use of formulations in a production design environment.

  12. Analytical slave-spin mean-field approach to orbital selective Mott insulators

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Kotliar, Gabriel

    2017-09-01

    We use the slave-spin mean-field approach to study particle-hole symmetric one- and two-band Hubbard models in the presence of Hund's coupling interaction. By analytical analysis of the Hamiltonian, we show that the locking of the two orbitals vs orbital selective Mott transition can be formulated within a Landau-Ginzburg framework. By applying the slave-spin mean field to impurity problems, we are able to make a correspondence between impurity and lattice. We also consider the stability of the orbital selective Mott phase to the hybridization between the orbitals and study the limitations of the slave-spin method for treating interorbital tunnelings in the case of multiorbital Bethe lattices with particle-hole symmetry.

  13. Prediction of helicopter rotor noise in hover

    NASA Astrophysics Data System (ADS)

    Kusyumov, A. N.; Mikhailov, S. A.; Garipova, L. I.; Batrakov, A. S.; Barakos, G.

    2015-05-01

    Two mathematical models are used in this work to estimate the acoustics of a hovering main rotor. The first model is based on the Ffowcs Williams-Hawkings equations using the formulation of Farassat. An analytical approach is followed for this model to determine the thickness and loading noise contributions of the rotor blade in hover. The second approach uses URANS and RANS CFD solutions and is based on numerical solution of the Ffowcs Williams-Hawkings equations. The test cases correspond to a model rotor available at the KNRTU-KAI aerodynamics laboratory. The laboratory is equipped with a system of acoustic measurements, and comparisons between predictions and measurements are attempted as part of this work.

  14. Statistical Interior Tomography

    PubMed Central

    Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.

    2011-01-01

    This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by a Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in a maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function, with an initial image obtained from direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low-count data. PMID:21233044
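
    The objective being minimized typically has the generic Poisson-likelihood-plus-TV form below; the exact weighting and constraints used in the paper may differ.

    ```latex
    \[
      \hat{f} \;=\; \arg\min_{f \ge 0}\;
        \sum_{i} \Big[ (Af)_i - y_i \ln (Af)_i \Big] \;+\; \lambda\, \mathrm{TV}(f)
    \]
    % A: truncated projection operator;  y: measured counts;
    % TV(f): total variation of the image;  lambda: regularization weight.
    ```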

  15. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  16. An Application of the Difference Potentials Method to Solving External Problems in CFD

    NASA Technical Reports Server (NTRS)

    Ryaben'kii, Victor S.; Tsynkov, Semyon V.

    1997-01-01

    Numerical solution of infinite-domain boundary-value problems requires special techniques to make the problem tractable on a computer. Indeed, the problem must be discretized in such a way that the computer operates with only a finite amount of information. Therefore, the original infinite-domain formulation must be altered and/or augmented so that, on one hand, the solution is not changed (or changed only slightly) and, on the other hand, a finite discrete formulation becomes available. One widely used approach to constructing such discretizations consists of truncating the unbounded original domain and then setting artificial boundary conditions (ABCs) at the newly formed external boundary. The role of the ABCs is to close the truncated problem and at the same time to ensure that the solution found inside the finite computational domain is maximally close to (in the ideal case, exactly the same as) the corresponding fragment of the original infinite-domain solution. The proper treatment of artificial boundaries may have a profound impact on the overall quality and performance of numerical algorithms; this is corroborated by numerous computational experiments and especially concerns the area of CFD, in which external problems present a wide class of practically important formulations. In this paper, we review work done over recent years on constructing highly accurate nonlocal ABCs for the calculation of compressible external flows. The approach is based on implementation of generalized potentials and pseudodifferential boundary projection operators analogous to those first proposed by Calderon. The difference potentials method (DPM) of Ryaben'kii is used for the effective computation of the generalized potentials and projections. The resulting ABCs clearly outperform existing methods from the standpoints of accuracy and robustness, in many cases noticeably speed up multigrid convergence, and at the same time are quite comparable to other methods from the standpoints of geometric universality and simplicity of implementation.

  17. From research to evidence-informed decision making: a systematic approach

    PubMed Central

    Poot, Charlotte C; van der Kleij, Rianne M; Brakema, Evelyn A; Vermond, Debbie; Williams, Siân; Cragg, Liza; van den Broek, Jos M; Chavannes, Niels H

    2018-01-01

    Background: Knowledge creation forms an integral part of the knowledge-to-action framework aimed at bridging the gap between research and evidence-informed decision making. Although principles of science communication, data visualisation and user-centred design largely impact the effectiveness of communication, their role in knowledge creation is still limited. Hence, this article aims to provide researchers with a systematic approach to how knowledge creation can be put into practice. Methods: A systematic two-phased approach towards knowledge creation was formulated and executed. First, during a preparation phase, the purpose and audience of the knowledge were defined. Subsequently, a developmental phase determined how the content is ‘said’ (language) and communicated (channel). This developmental phase proceeded via two pathways: a translational cycle and a design cycle, during which core translational and design components were incorporated. The entire approach was demonstrated by a case study. Results: The case study demonstrated how the phases in this systematic approach can be operationalised. It furthermore illustrated how created knowledge can be delivered. Conclusion: The proposed approach offers researchers a systematic, practical and easy-to-implement tool to facilitate effective knowledge creation towards decision-makers in healthcare. Through the integration of core components of knowledge creation, evidence-informed decision making will ultimately be optimised. PMID:29538728

  18. Case Formulation in TADS CBT

    ERIC Educational Resources Information Center

    Rogers, Gregory M.; Reinecke, Mark A.; Curry, John F.

    2005-01-01

    For the Treatment for Adolescents With Depression Study (TADS), a cognitive-behavioral therapy (CBT) manual was developed with the aim of balancing standardization and flexibility. In this article, we describe the manual's case formulation procedures, which served as one major mechanism of flexibility in TADS CBT. We first describe the essential…

  19. Application of Pharmacokinetics and Pharmacodynamics in Product Life Cycle Management. A Case Study with a Carbidopa-Levodopa Extended-Release Formulation.

    PubMed

    Modi, Nishit B

    2017-05-01

    Increasing costs in discovering and developing new molecular entities and the continuing debate on limited company pipelines mean that pharmaceutical companies are under significant pressure to maximize the value of approved products. Life cycle management in the context of drug development comprises activities to maximize the effective life of a product. Life cycle approaches can involve new formulations, new routes of delivery, new indications or expansion of the population for whom the product is indicated, or development of combination products. Life cycle management may provide an opportunity to improve upon the current product through enhanced efficacy or reduced side effects and could expand the therapeutic market for the product. Successful life cycle management may include the potential for superior efficacy, improved tolerability, or a better prescriber or patient acceptance. Unlike generic products where bioequivalence to an innovator product may be sufficient for drug approval, life cycle management typically requires a series of studies to characterize the value of the product. This review summarizes key considerations in identifying product candidates that may be suitable for life cycle management and discusses the application of pharmacokinetics and pharmacodynamics in developing new products using a life cycle management approach. Examples and a case study to illustrate how pharmacokinetics and pharmacodynamics contributed to the selection of dosing regimens, demonstration of an improved therapeutic effect, or regulatory approval of an improved product label are presented.

  20. A PDF projection method: A pressure algorithm for stand-alone transported PDFs

    NASA Astrophysics Data System (ADS)

    Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich

    2015-03-01

    In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.
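
    In schematic, constant-density form the projection step follows the classical fractional-step pattern: an intermediate mean velocity $\bar{\mathbf u}^{*}$ from the modelled advection-diffusion-reaction stage is corrected by a mean pressure obtained from a Poisson equation. The paper's variable-density, low-Mach treatment is more elaborate, but the skeleton is:

    ```latex
    \[
      \nabla^2 \bar p \;=\; \frac{\rho}{\Delta t}\, \nabla\cdot\bar{\mathbf u}^{*},
      \qquad
      \bar{\mathbf u}^{\,n+1} \;=\; \bar{\mathbf u}^{*} - \frac{\Delta t}{\rho}\,\nabla \bar p,
      \qquad
      \nabla\cdot\bar{\mathbf u}^{\,n+1} = 0
    \]
    % The Poisson equation is solved on the Eulerian grid; the resulting
    % mean pressure is interpolated to each Lagrangian particle.
    ```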

  1. Implementing Participatory Decision Making in Forest Planning

    NASA Astrophysics Data System (ADS)

    Ananda, Jayanath

    2007-04-01

    Forest policy decisions are often a source of debate, conflict, and tension in many countries. The debate over forest land-use decisions often hinges on disagreements about societal values related to forest resource use. Disagreements on social value positions are fought out repeatedly at local, regional, national, and international levels at an enormous social cost. Forest policy problems have some inherent characteristics that make them more difficult to deal with. On the one hand, forest policy decisions involve uncertainty, long time scales, and complex natural systems and processes. On the other hand, such decisions encompass social, political, and cultural systems that are evolving in response to forces such as globalization. Until recently, forest policy was heavily influenced by the scientific community and various economic models of optimal resource use. However, growing environmental awareness and acceptance of participatory democracy models in policy formulation have forced public authorities to introduce new participatory mechanisms to manage forest resources. Most often, the efforts to include the public in policy formulation can be described using the lower rungs of Arnstein’s public participation typology. This paper presents an approach that incorporates stakeholder preferences into forest land-use policy using the Analytic Hierarchy Process (AHP). An illustrative case of regional forest-policy formulation in Australia is used to demonstrate the approach. It is contended that applying the AHP in the policy process could considerably enhance the transparency of the participatory process and the public acceptance of policy decisions.
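
    The AHP step that turns stakeholder judgments into priority weights is compact enough to sketch. The pairwise matrix below encodes hypothetical judgments over three invented forest land-use criteria; the eigenvector method and Saaty's consistency ratio are the standard AHP machinery rather than anything specific to this study.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Principal-eigenvector priority weights plus Saaty's
        consistency ratio (CR); CR < 0.1 is conventionally acceptable."""
        A = np.asarray(pairwise, dtype=float)
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)   # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
        return w, ci / ri

    # Hypothetical judgments: conservation vs timber production vs recreation
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    w, cr = ahp_weights(A)
    print("criterion weights:", np.round(w, 3), " CR:", round(cr, 3))
    ```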

  2. Field redefinitions and Plebanski formalism for GR

    NASA Astrophysics Data System (ADS)

    Krasnov, Kirill

    2018-07-01

    We point out that there exists a family of transformations acting on BF-type Lagrangians of gravity, with Lagrangians related by such a transformation corresponding to classically equivalent theories. A transformation of this type corresponds to a particular field redefinition. We discuss both the chiral and non-chiral cases. In the chiral case there is a one-parameter, and in the non-chiral case a two-parameter family of such transformations. In the chiral setup, we use these transformations to give an alternative derivation of the chiral BF plus potential formulation of general relativity that was proposed recently. In the non-chiral case, we show that there is a new BF plus potential type formulation of GR. We also make some remarks on the non-chiral pure connection formulation.

  3. State-Space Formulation for Circuit Analysis

    ERIC Educational Resources Information Center

    Martinez-Marin, T.

    2010-01-01

    This paper presents a new state-space approach for temporal analysis of electrical circuits. The method systematically obtains the state-space formulation of nondegenerate linear networks without using concepts of topology. It employs nodal/mesh systematic analysis to reduce the number of undesired variables. This approach helps students to…

  4. A New Approach to Strategy Formulation: Opening the Black Box.

    ERIC Educational Resources Information Center

    Boyd, Lynn; Gupta, Mahesh; Sussman, Lyle

    2001-01-01

    An approach to teaching business strategy formulation uses the thinking process tools of the theory of constraints: current reality tree for situational analysis, evaporating cloud and future reality tree to identify change outcomes, and prerequisite tree and transition tree to identify implementation strategies. (SK)

  5. Tackling regional health inequalities in France by resource allocation: a case for complementary instrumental and process-based approaches?

    PubMed

    Bellanger, Martine M; Jourdain, Alain

    2004-01-01

    This article aims to evaluate the results of two different approaches underlying attempts to reduce health inequalities in France. In the 'instrumental' approach, resource allocation is based on an indicator to assess the well-being or the quality of life associated with healthcare provision, the argument being that additional resources would respond to needs that could then be treated quickly and efficiently. This governs the distribution of regional hospital budgets. In the second approach, health professionals and users in a given region are involved in a consensus process to define the priorities to be included in programme formulation. This 'procedural' approach is employed in the case of the regional health programmes, and the evaluation of its results runs parallel with an analysis of the process using Rawlsian principles, whereas the first approach is based on the classical economic model. At this stage, a pragmatic analysis provides a partial assessment of the impact of the two types of approaches, based both on the comparison of regional hospital budgets during the period 1992-2003 (calculated using a 'RAWP [resource allocation working party]-like' formula) and on the evolution of regional health policies through the evaluation of programmes for the prevention of suicide, alcohol-related diseases and cancers; the second approach appears to have a greater effect on the reduction of regional inequalities.

  6. Fusion and Gaussian mixture based classifiers for SONAR data

    NASA Astrophysics Data System (ADS)

    Kotari, Vikas; Chang, KC

    2011-06-01

    Underwater mines are inexpensive and highly effective weapons that are difficult to detect and classify. Detection and classification of underwater mines is therefore essential for the safety of naval vessels and necessitates highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from a single source, whereas data fusion is known to increase the accuracy of detection and classification. In this paper, we formulate a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for the classification of underwater mines. The emphasis is on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California Irvine (UCI) repository. The performance of both the GMM-based and the fusion-based classifiers demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
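
    A per-class Gaussian mixture classifier of the kind described can be sketched in a few lines. Synthetic arrays stand in for the UCI sonar data (60 features, two classes); the mixture size and covariance type are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 60)),    # class 0: "rock"
                   rng.normal(0.6, 1.2, (100, 60))])   # class 1: "mine"
    y = np.array([0] * 100 + [1] * 100)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

    # Fit one mixture per class; classify by maximum class log-likelihood.
    models = {c: GaussianMixture(n_components=2, covariance_type="diag",
                                 random_state=1).fit(Xtr[ytr == c])
              for c in (0, 1)}
    scores = np.column_stack([models[c].score_samples(Xte) for c in (0, 1)])
    print("accuracy:", (scores.argmax(axis=1) == yte).mean())
    ```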

  7. An alternative Biot's displacement formulation for porous materials.

    PubMed

    Dazel, Olivier; Brouard, Bruno; Depollier, Claude; Griffiths, Stéphane

    2007-06-01

    This paper proposes an alternative displacement formulation of Biot's linear model for poroelastic materials. Its advantage is a simplification of the formalism without making any additional assumptions. The main difference between the method proposed in this paper and the original one is the choice of the generalized coordinates. In the present approach, the generalized coordinates are chosen in order to simplify the expression of the strain energy, which is expressed as the sum of two decoupled terms. Hence, new equations of motion are obtained whose elastic forces are decoupled. The simplification of the formalism is extended to Biot and Willis thought experiments, and simpler expressions of the parameters of the three Biot waves are also provided. A rigorous derivation of equivalent and limp models is then proposed. It is finally shown that, for the particular case of sound-absorbing materials, additional simplifications of the formalism can be obtained.

  8. Therapeutic surfactant-stripped frozen micelles

    NASA Astrophysics Data System (ADS)

    Zhang, Yumiao; Song, Wentao; Geng, Jumin; Chitgupi, Upendra; Unsal, Hande; Federizon, Jasmin; Rzayev, Javid; Sukumaran, Dinesh K.; Alexandridis, Paschalis; Lovell, Jonathan F.

    2016-05-01

    Injectable hydrophobic drugs are typically dissolved in surfactants and non-aqueous solvents which can induce negative side-effects. Alternatives like 'top-down' fine milling of excipient-free injectable drug suspensions are not yet clinically viable and 'bottom-up' self-assembled delivery systems usually substitute one solubilizing excipient for another, bringing new issues to consider. Here, we show that Pluronic (Poloxamer) block copolymers are amenable to low-temperature processing to strip away all free and loosely bound surfactant, leaving behind concentrated, kinetically frozen drug micelles containing minimal solubilizing excipient. This approach was validated for phylloquinone, cyclosporine, testosterone undecanoate, cabazitaxel and seven other bioactive molecules, achieving sizes between 45 and 160 nm and drug to solubilizer molar ratios 2-3 orders of magnitude higher than current formulations. Hypertonic saline or co-loaded cargo was found to prevent aggregation in some cases. Use of surfactant-stripped micelles avoided potential risks associated with other injectable formulations. Mechanistic insights are elucidated and therapeutic dose responses are demonstrated.

  9. Teaching Cultural Competence to Psychiatry Residents: Seven Core Concepts and Their Implications for Therapeutic Technique.

    PubMed

    Pena, Jose M; Manguno-Mire, Gina; Kinzie, Erik; Johnson, Janet E

    2016-04-01

    The authors describe the Tulane Model for teaching cultural competence to psychiatry residents in order to outline an innovative approach to curricula development in academic psychiatry. The authors focus on the didactic experience that takes place during the first and second postgraduate years and present seven core concepts that should inform the emerging clinician's thinking in the formulation of every clinical case. The authors discuss the correspondence between each core concept and the Outline for Cultural Formulation, introduced in Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV and updated in DSM-5. The authors illustrate how each of the core concepts is utilized as a guideline for teaching residents a process for eliciting culturally relevant information from their patients and their personal histories and how to apply that knowledge in the assessment and treatment of patients in clinical settings.

  10. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model (Misra & Pullin, 1997).

  11. Classical phase space and Hadamard states in the BRST formalism for gauge field theories on curved spacetime

    NASA Astrophysics Data System (ADS)

    Wrochna, Michał; Zahn, Jochen

    We investigate linearized gauge theories on globally hyperbolic spacetimes in the BRST formalism. A consistent definition of the classical phase space and of its Cauchy surface analogue is proposed. We prove that it is isomorphic to the phase space in the ‘subsidiary condition’ approach of Hack and Schenkel in the case of Maxwell, Yang-Mills, and Rarita-Schwinger fields. Defining Hadamard states in the BRST formalism in a standard way, their existence in the Maxwell and Yang-Mills case is concluded from known results in the subsidiary condition (or Gupta-Bleuler) formalism. Within our framework, we also formulate criteria for non-degeneracy of the phase space in terms of BRST cohomology and discuss special cases. These include an example in the Yang-Mills case, where degeneracy is not related to a non-trivial topology of the Cauchy surface.

  12. Comprehensive preoperative staging system for endoscopic single and multicorridor approaches to juvenile nasal angiofibromas

    PubMed Central

    Janakiram, Trichy N.; Sharma, Shilpee B.; Kasper, Ekkehard; Deshmukh, Onkar; Cherian, Iype

    2017-01-01

    Background: Juvenile nasal angiofibroma (JNA) is a benign lesion with high vascularity and a propensity for bone erosion leading to skull base invasion and intracranial extension. It is known to involve multiple compartments, which are often surgically difficult to access. With evolution in surgical expertise and technical innovations, endoscopic and endoscopic-assisted management has become the preferred choice of surgical management. Over the last four decades, various staging systems have been proposed, which are largely based on the extent of nasal angiofibroma. However, no clear guidelines exist for stage-appropriate surgical management. In this study, we aim to formulate a novel staging system based on the analysis of high-quality preoperative imaging and propose detailed surgical guidelines related to disease stage as observed in 242 primary cases of JNA. Methods: A retrospective analysis of the case records of 242 primary JNA cases was performed at our center. Patients were staged according to various existing staging systems as well as our own new staging system, and outcome variables were compared with respect to intraoperative blood loss, multiple staged operations, and tumor recurrence. Operative records were studied and precise endoscopic surgical guidelines were formulated for each stage. Results: Intraoperative blood loss correlated best, and statistically significantly, with stage in the newly proposed Janakiram staging system compared with the existing staging systems. Staged operations were performed in a total of 7/242 patients, and there was a significant association between the requirement of a staged operation and tumor extent (Fisher's exact test, P < 0.001). Tumor recurrence was seen in 22 cases, and the pterygoid wedge was initially found to be the most frequent site of recurrence. As the extent of resection improved with better surgical technique over time, recurrences were found only in the superior orbital fissure, around the internal carotid artery, and in the middle cranial fossa. Conclusion: This new Janakiram staging system is based on preoperative imaging data from one of the largest JNA case series reported thus far. The respective guidelines reliably stratify patients into treatment groups with definite surgical approaches and predict outcome. Improved surgical approaches in the modern endoscopic era have redefined JNA management with improved outcome. This study shows the importance of precise presurgical imaging and the choice of the most suitable surgical approach in reducing morbidity and mortality in JNA surgery. PMID:28540121

  13. Locating an imaging radar in Canada for identifying spaceborne objects

    NASA Astrophysics Data System (ADS)

    Schick, William G.

    1992-12-01

    This research presents a study of the maximal coverage p-median facility location problem as applied to the location of an imaging radar in Canada for imaging spaceborne objects. The classical mathematical formulation of the maximal coverage p-median problem is converted into network-flow with side constraint formulations that are developed using a scaled-down version of the imaging radar location problem. Two types of network-flow with side constraint formulations are developed: a network using side constraints that simulates the gains in a generalized network, and a network resembling a multi-commodity flow problem that uses side constraints to force flow along identical arcs. These small formulations are expanded to encompass a case study using 12 candidate radar sites and 48 satellites divided into three states. SAS/OR PROC NETFLOW was used to solve the network-flow with side constraint formulations. The case study shows the potential of both formulations, although the simulated gains formulation encountered singular matrix computational difficulties as a result of the very organized nature of its side constraint matrix. The multi-commodity flow formulation, when combined with equi-distribution of flow constraints, provided solutions for various values of p, the number of facilities to be selected.
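
    For orientation, the classical p-median problem that the study converts into network-flow form can be written as a small integer program. The sketch below is a minimal illustration using the open-source PuLP library rather than the SAS/OR solver used in the study; the sites, objects, and distances are invented.

        import pulp

        # Illustrative data: 3 candidate radar sites, 4 spaceborne objects (hypothetical).
        sites = ["s1", "s2", "s3"]
        objs = ["o1", "o2", "o3", "o4"]
        d = {("s1", "o1"): 2, ("s1", "o2"): 7, ("s1", "o3"): 5, ("s1", "o4"): 9,
             ("s2", "o1"): 6, ("s2", "o2"): 3, ("s2", "o3"): 4, ("s2", "o4"): 8,
             ("s3", "o1"): 8, ("s3", "o2"): 5, ("s3", "o3"): 2, ("s3", "o4"): 3}
        p = 1  # number of facilities (radars) to select

        prob = pulp.LpProblem("p_median", pulp.LpMinimize)
        y = pulp.LpVariable.dicts("open", sites, cat="Binary")
        x = pulp.LpVariable.dicts("assign", (sites, objs), cat="Binary")
        # minimize total assignment "distance" (coverage cost)
        prob += pulp.lpSum(d[s, o] * x[s][o] for s in sites for o in objs)
        for o in objs:                      # every object is served by exactly one site
            prob += pulp.lpSum(x[s][o] for s in sites) == 1
        for s in sites:                     # assignment allowed only to opened sites
            for o in objs:
                prob += x[s][o] <= y[s]
        prob += pulp.lpSum(y[s] for s in sites) == p
        prob.solve()
        print([s for s in sites if y[s].value() == 1])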

  14. Geo-Statistical Approach to Estimating Asteroid Exploration Parameters

    NASA Technical Reports Server (NTRS)

    Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles

    2011-01-01

    NASA's vision for space exploration calls for a human visit to a near-Earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper, two approaches to the formulation and scheduling of human exploration activities are compared, given uncertain information regarding the asteroid prior to the visit. In the first approach, a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach, or baseline plan, where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, the number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
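
    The abstract does not give the underlying probability model, so as a generic illustration of the first approach, the sketch below Monte Carlo samples hypothetical per-site and per-sample times to estimate whether an exploration plan fits a duration budget. All distributions, parameters, and the budget are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, samples_per_site, n_trials = 5, 4, 100_000
        # hypothetical time models (hours): per-site traverse/setup, per-sample handling
        site_time = rng.lognormal(mean=np.log(6.0), sigma=0.4, size=(n_trials, n_sites))
        sample_time = rng.lognormal(mean=np.log(1.5), sigma=0.3,
                                    size=(n_trials, n_sites, samples_per_site))
        total = site_time.sum(axis=1) + sample_time.sum(axis=(1, 2))

        budget = 80.0  # assumed total surface-operations budget, hours
        print("P(plan fits budget) =", (total <= budget).mean())
        print("duration needed for 95% confidence =", np.quantile(total, 0.95))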

  15. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    PubMed Central

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity in spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time to the nearest hospital foremost, and considered quality of care, including shorter waiting times, a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed “two-step optimization for spatial accessibility improvement (2SO4SAI).” The first step is to find the best locations for siting new facilities by emphasizing accessibility as proximity to the nearest facilities, with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and the complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China. PMID:28484707
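
    The accessibility measure described, a supply-demand match ratio combined with spatial interaction, is in the spirit of the two-step floating catchment area (2SFCA) family. A minimal sketch follows, with a simple binary catchment standing in for whatever impedance function the authors used; the populations, capacities, and travel times are illustrative.

        import numpy as np

        def accessibility_2sfca(supply, demand, dist, d0):
            # Step 1: supply-to-demand match ratio R[j] within each facility's catchment.
            # Step 2: accessibility A[i] sums the ratios of facilities reachable from i.
            W = (dist <= d0).astype(float)        # binary catchment; a decay kernel also works
            R = supply / (W.T @ demand + 1e-12)   # match ratio per facility
            return W @ R                          # accessibility per demand location

        demand = np.array([1200.0, 800.0, 1500.0])     # population at 3 demand sites
        supply = np.array([10.0, 6.0])                 # e.g., beds at 2 facilities
        dist = np.array([[5.0, 20.0],
                         [12.0, 8.0],
                         [25.0, 10.0]])                # travel times (minutes)
        print(accessibility_2sfca(supply, demand, dist, d0=15.0))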

  16. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China.

    PubMed

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong; Wang, Fahui

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity in spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time to the nearest hospital foremost, and considered quality of care, including shorter waiting times, a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement (2SO4SAI)." The first step is to find the best locations for siting new facilities by emphasizing accessibility as proximity to the nearest facilities, with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and the complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.

  17. Mismatch removal via coherent spatial relations

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen

    2014-07-01

    We propose a method for removing mismatches from given putative point correspondences in image pairs based on "coherent spatial relations." Under a Bayesian framework, we formulate our approach as a maximum likelihood problem and solve for a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating whether it is an inlier or an outlier, and alternately estimates the inlier set and recovers the coherent spatial relation. It can handle not only image pairs with rigid motions but also image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate splines as models for the rigid and nonrigid cases, respectively. The mismatches can be successfully removed via the coherent spatial relations after the EM algorithm converges. Quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods: it is not affected by low initial correct-match percentages, and it is robust to most geometric transformations, including large viewing angles, image rotation, and affine transformation.
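
    As a rough illustration of the latent-variable EM scheme described, the sketch below alternates between fitting a spatial relation and updating inlier posteriors, modeling inlier residuals as Gaussian and outliers as uniform with density 1/a. For brevity it fits an affine map rather than the paper's two-view geometry or thin-plate spline models; the parameter a and all initializations are illustrative choices.

        import numpy as np

        def em_mismatch_removal(X, Y, a=10.0, n_iter=50):
            """X[i], Y[i]: 2-D points of the i-th putative correspondence."""
            n = len(X)
            Xh = np.hstack([X, np.ones((n, 1))])   # homogeneous coordinates
            p = np.full(n, 0.9)                    # initial inlier responsibilities
            gamma, sigma2, A = 0.9, 1.0, None
            for _ in range(n_iter):
                # M-step: responsibility-weighted least-squares affine fit,
                # then update noise variance and mixing weight.
                sw = np.sqrt(p)[:, None]
                A, *_ = np.linalg.lstsq(Xh * sw, Y * sw, rcond=None)
                r2 = ((Y - Xh @ A) ** 2).sum(axis=1)      # squared 2-D residuals
                sigma2 = max((p @ r2) / (2.0 * p.sum() + 1e-12), 1e-9)
                gamma = p.mean()
                # E-step: posterior that each correspondence is an inlier
                # (Gaussian inlier residuals vs. uniform outliers of density 1/a).
                g = np.exp(-r2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
                p = gamma * g / (gamma * g + (1.0 - gamma) / a)
            return p > 0.5, A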

  18. Level set formulation of two-dimensional Lagrangian vortex detection methods

    NASA Astrophysics Data System (ADS)

    Hadjighasem, Alireza; Haller, George

    2016-10-01

    We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.
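
    Schematically (our notation, not the authors'), both formulations encode a candidate vortex boundary as the zero set of a level function \phi and evolve \phi by gradient descent on the chosen energy,

        \frac{\partial \phi}{\partial \tau} \;=\; -\frac{\delta E[\phi]}{\delta \phi}, \qquad \text{boundary} \;=\; \{\mathbf{x} : \phi(\mathbf{x},\tau)=0\},

    which is why an a priori unknown number of closed boundaries, including their merging and splitting, is handled without re-parameterization.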

  19. Historical development of administration architecture in Malaysia (15th-21st century)

    NASA Astrophysics Data System (ADS)

    Mohidin, H. H. B.; Ismail, A. S.

    2014-02-01

    The main purpose of this paper is to document the development of state administration buildings in Malaysia before and after the independence era, in relation to the evolution of Malaysia's political, social and economic history. A multiple case study approach [19] is applied, referring to six prominent case studies that exemplify state administrative buildings from various phases of Malaysian history, from the 14th century to the 21st century. Since this paper formulates new ways to approach and describe state administrative building design and the factors that influence it, it uses an interpretivist paradigm and semiotics as its methodological approach to study the relationship between building design and contextual elements. This paper therefore offers new insights that not only add to knowledge in this field by widening and strengthening the understanding of state administrative architecture in Malaysia, but are also valuable for a range of associated fields, including architectural semiotics and nonverbal communication, because the paper reveals a deep understanding of the built form and material environment operating as a sign in a cultural and social context.

  20. Real-time energy-saving metro train rescheduling with primary delay identification

    PubMed Central

    Li, Keping; Schonfeld, Paul

    2018-01-01

    This paper aims to reschedule metro trains online in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are analyzed and identified with a critical path algorithm. In the second stage, a hybrid genetic algorithm is designed to optimize the schedule, with the delay identification results as input. Then, based on infrastructure data for Beijing Subway Line 4 in China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the hybrid genetic algorithm is low enough for online rescheduling. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471
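
    As a toy version of the first-stage delay identification, the sketch below computes the longest (critical) path in an event-activity DAG by topological order; the event network, durations, and the mapping from the critical path to primary versus flow-on delays are simplified stand-ins for the paper's train state graph, not its exact algorithm.

        from collections import defaultdict, deque

        def critical_path(duration, arcs):
            """Longest path in a DAG via Kahn's topological sort.
            duration: {event: activity time}; arcs: iterable of (u, v) precedences."""
            succ, indeg = defaultdict(list), {n: 0 for n in duration}
            for u, v in arcs:
                succ[u].append(v)
                indeg[v] += 1
            dist = {n: duration[n] for n in duration}   # longest completion time ending at n
            pred = {n: None for n in duration}
            q = deque(n for n in duration if indeg[n] == 0)
            while q:
                u = q.popleft()
                for v in succ[u]:
                    if dist[u] + duration[v] > dist[v]:
                        dist[v], pred[v] = dist[u] + duration[v], u
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        q.append(v)
            end = max(dist, key=dist.get)
            path = []
            while end is not None:
                path.append(end)
                end = pred[end]
            return path[::-1], max(dist.values())

        dur = {"dep_A": 2.0, "run_AB": 4.0, "dwell_B": 1.0, "run_BC": 3.0}
        arcs = [("dep_A", "run_AB"), ("run_AB", "dwell_B"), ("dwell_B", "run_BC")]
        print(critical_path(dur, arcs))  # earliest delayed event on this path ~ primary delay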

  1. Variational formulation of macroparticle models for electromagnetic plasma simulations

    DOE PAGES

    Stamm, Alexander B.; Shadwick, Bradley A.; Evstatiev, Evstati G.

    2014-06-01

    A variational method is used to derive a self-consistent macroparticle model for relativistic electromagnetic kinetic plasma simulations. Extending earlier work, discretization of the electromagnetic Low Lagrangian is performed via a reduction of the phase-space distribution function onto a collection of finite-sized macroparticles of arbitrary shape and discretization of field quantities onto a spatial grid. This approach may be used with lab frame coordinates or moving window coordinates; the latter can greatly improve computational efficiency for studying some types of laser-plasma interactions. The primary advantage of the variational approach is the preservation of Lagrangian symmetries, which in our case leads to energy conservation and thus avoids difficulties with grid heating. In addition, this approach decouples particle size from grid spacing and relaxes restrictions on particle shape, leading to low numerical noise. The variational approach also guarantees consistent approximations in the equations of motion and is amenable to higher order methods in both space and time. We restrict our attention to the 1.5-D case (one coordinate and two momenta). Lastly, simulations are performed with the new models and demonstrate energy conservation and low noise.

  2. Diagnostic Criteria in Clinical Settings: DSM-IV and Cultural Competence

    ERIC Educational Resources Information Center

    Christensen, Michelle

    2001-01-01

    Historically, the Diagnostic and Statistical Manual of Mental Disorders (DSM) gave little attention to cultural variations in mental disorder. DSM-IV includes a cultural case formulation outline. The current paper presents a case formulation of an American Indian client who presented with depressive symptoms and a history of substance dependence.…

  3. On the energy integral formulation of gravitational potential differences from satellite-to-satellite tracking

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Shang, K.; Jekeli, C.; Shum, C. K.

    2015-04-01

    Two approaches have been formulated to compute the gravitational potential difference using low-low satellite-to-satellite tracking data based on the energy integral: one in the geocentric inertial reference system, and the other in the terrestrial reference system. The focus of this work is on the approach in the geocentric inertial reference system, where a potential rotation term appears in addition to the potential term. In former formulations, the contribution of the time-variable components of the gravitational potential to the potential term was included, but their contribution to the potential rotation term was neglected. In this work, an improvement to the former formulations is made by reformulating the potential rotation term to include the contribution of the time-variable components of the gravitational potential. A simulation shows that our more accurate formulation of the potential rotation term is necessary to achieve the accuracy required for recovering the temporal variation of the Earth's gravity field, such as when this approach is applied to Gravity Recovery And Climate Experiment (GRACE) observation data.
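
    For context, a schematic Jekeli-type energy integral in the geocentric inertial reference system (our notation; signs and constants depend on conventions, and non-gravitational forces are omitted) reads

        V(\mathbf{x},t) \;\approx\; \tfrac{1}{2}\,|\dot{\mathbf{x}}|^{2} \;-\; \omega_{e}\,\bigl(x_{1}\dot{x}_{2}-x_{2}\dot{x}_{1}\bigr) \;-\; \int_{t_{0}}^{t}\frac{\partial V}{\partial t'}\,\mathrm{d}t' \;-\; E_{0},

    where the second term is the potential rotation term (with Earth rotation rate \omega_{e}) and the integral collects the time-variable part of the potential; differencing two such expressions along the two satellites gives the low-low potential difference. The point of the reformulation above is that the time-variable components must enter the rotation term as well, not only the potential term.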

  4. Variational approach to the volume viscosity of fluids

    NASA Astrophysics Data System (ADS)

    Zuckerwar, Allan J.; Ash, Robert L.

    2006-04-01

    The variational principle of Hamilton is applied to develop an analytical formulation to describe the volume viscosity in fluids. The procedure described here differs from those used in the past in that a dissipative process is represented by the chemical affinity and progress variable (sometimes called "order parameter") of a reacting species. These state variables appear in the variational integral in two places: first, in the expression for the internal energy, and second, in a subsidiary condition accounting for the conservation of the reacting species. As a result of the variational procedure, two dissipative terms appear in the Navier-Stokes equation. The first is the traditional volume viscosity term, proportional to the dilatational component of velocity; the second term is proportional to the material time derivative of the pressure gradient. Values of the respective volume viscosity coefficients are determined by applying the resulting volume-viscous Navier-Stokes equation to the case of acoustical propagation and then comparing expressions for the dispersion and absorption of sound. The formulation includes the special case of equilibration of the translational degrees of freedom. As examples, values are tabulated for dry and humid air, argon, and sea water.
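
    In schematic form (our notation, not the authors'), the momentum equation carrying the two dissipative terms described would read

        \rho\,\frac{D\mathbf{u}}{Dt} \;=\; -\nabla p \;+\; \mu\,\nabla^{2}\mathbf{u} \;+\; \mu_{v}\,\nabla(\nabla\cdot\mathbf{u}) \;+\; \eta\,\frac{D}{Dt}(\nabla p),

    with \mu_{v} the traditional volume viscosity coefficient and \eta a placeholder for the coefficient of the pressure-gradient term; as the abstract describes, both coefficients are fixed by matching expressions for the dispersion and absorption of sound.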

  5. Interval linear programming model for long-term planning of vehicle recycling in the Republic of Serbia under uncertainty.

    PubMed

    Simic, Vladimir; Dimitrijevic, Branka

    2015-02-01

    An interval linear programming approach is used to formulate and comprehensively test a model for optimal long-term planning of vehicle recycling in the Republic of Serbia. The proposed model is applied to a numerical case study: a 4-year planning horizon (2013-2016) is considered; three legislative cases and three scrap metal price trends are analysed; and the availability of final destinations for sorted waste flows is explored. The potential and applicability of the developed model are fully illustrated. Detailed insights into the profitability and eco-efficiency of the projected vehicle recycling factory, equipped to contemporary standards, are presented. The influences of the ordinance on the management of end-of-life vehicles in the Republic of Serbia on decisions about procuring vehicle hulks, sorting generated material fractions, allocating sorted waste and allocating sorted metals are thoroughly examined. The validity of the waste management strategy for the period 2010-2019 is tested. The formulated model can create optimal plans for procuring vehicle hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. The obtained results are valuable for supporting the construction and/or modernisation of a vehicle recycling system in the Republic of Serbia. © The Author(s) 2015.

  6. Hierarchical and non-hierarchical λ elements for one-dimensional problems with unknown strength of singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, K.K.; Surana, K.S.

    1996-10-01

    This paper presents a new and general procedure for designing hierarchical and non-hierarchical special elements called λ elements for one-dimensional singular problems where the strength of the singularity is unknown. The λ element formulations presented here permit correct numerical simulation of linear as well as non-linear singular problems without a priori knowledge of the strength of the singularity. A procedure is also presented for determining the exact strength of the singularity using the converged solution. It is shown that in special instances, the general formulation of λ elements can also be made hierarchical. The λ elements presented here are of type C⁰ and provide C⁰ inter-element continuity with p-version elements. One-dimensional steady-state radial flow of an upper-convected Maxwell fluid is considered as a sample problem. Since in this case the λᵢ are known, this problem provides a good example for investigating the performance of the formulation proposed here. A least squares approach (or Least Squares Finite Element Formulation: LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical studies are presented for radially inward flow of an upper-convected Maxwell fluid with inner radius rᵢ = 0.1, 0.01, etc., and Deborah number De = 2.
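
    Generically, the least-squares error functional referred to is the sum of squared residuals of the governing equations,

        I(u) \;=\; \sum_{k}\int_{\Omega} R_{k}(u)^{2}\,\mathrm{d}\Omega, \qquad \delta I(u) \;=\; 0,

    where the R_k(u) are the residual functions of the differential equations; the λ elements enter through the approximation space used for u near the singularity.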

  7. On the Assessment of Acoustic Scattering and Shielding by Time Domain Boundary Integral Equation Solutions

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.

    2016-01-01

    Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has the distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by a Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, the spatial resolution requirements of the time domain boundary element method are assessed using points-per-wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points per wavelength (PPW) or 25 points per wavelength squared (PPW²).

  8. Coupled variational formulations of linear elasticity and the DPG methodology

    NASA Astrophysics Data System (ADS)

    Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick

    2017-11-01

    This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.

  9. A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models.

    PubMed

    Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio

    2013-08-01

    The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check whether the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility of estimating the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensures the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
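
    As a minimal illustration of the information-based design idea (not the authors' formulation), the sketch below picks sampling times for a toy one-compartment PK model by exhaustively maximizing the D-optimality criterion det(JᵀJ); the model, parameter values, and unit-variance noise assumption are all invented for the example.

        import itertools
        import numpy as np

        def sensitivities(t, V, k, dose=100.0, eps=1e-6):
            """Finite-difference sensitivities dC/dtheta for C(t) = (dose/V) * exp(-k t)."""
            def C(V_, k_):
                return dose / V_ * np.exp(-k_ * t)
            dV = (C(V + eps, k) - C(V - eps, k)) / (2 * eps)
            dk = (C(V, k + eps) - C(V, k - eps)) / (2 * eps)
            return np.column_stack([dV, dk])

        def d_optimal_times(candidates, n_samples, V=10.0, k=0.3):
            """Choose the n sampling times maximizing det(J^T J), the (assumed)
            Fisher information determinant under unit-variance measurement noise."""
            best, best_det = None, -np.inf
            for times in itertools.combinations(candidates, n_samples):
                J = sensitivities(np.array(times), V, k)
                det = np.linalg.det(J.T @ J)
                if det > best_det:
                    best, best_det = times, det
            return best

        print(d_optimal_times(np.linspace(0.5, 24.0, 20), n_samples=4))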

  10. Statistical fusion of continuous labels: identification of cardiac landmarks

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.

    2011-03-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches to labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that under the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation-maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle and the left ventricle in CINE cardiac data.
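
    A rough numerical sketch of the idea: each rater's continuous label is modeled as truth plus a rater bias plus Gaussian noise, and a zero-mean Gaussian prior on the biases plays the role of the a priori performance-level probabilities that resolve the indeterminacy. The update equations below are a simplified stand-in, not the paper's exact EM derivation.

        import numpy as np

        def continuous_staple(Y, tau2=1.0, n_iter=100):
            """Y[i, j]: rater i's value for item j, modeled as T[j] + bias[i] + N(0, var[i]).
            tau2 is the prior variance on rater biases (assumed, illustrative)."""
            R, N = Y.shape
            bias, var = np.zeros(R), np.ones(R)
            for _ in range(n_iter):
                w = 1.0 / var
                # precision-weighted estimate of the underlying true values
                T = (w[:, None] * (Y - bias[:, None])).sum(0) / w.sum()
                resid = Y - T[None, :]
                # prior-shrunk bias update, then per-rater variance update
                bias = resid.sum(1) / (N + var / tau2)
                var = ((resid - bias[:, None]) ** 2).mean(1) + 1e-12
            return T, bias, var

        rng = np.random.default_rng(0)
        truth = rng.normal(size=50)
        bias_true = np.array([0.5, -0.3, 0.0])
        noise_sd = np.array([0.1, 0.2, 0.4])
        Y = truth + bias_true[:, None] + rng.normal(0.0, noise_sd[:, None], size=(3, 50))
        T, b, v = continuous_staple(Y)
        print(np.round(b, 2), np.round(np.sqrt(v), 2))  # recovered biases and noise levels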

  11. Finite element formulation of fluctuating hydrodynamics for fluids filled with rigid particles using boundary fitted meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Corato, M., E-mail: marco.decorato@unina.it; Slot, J.J.M., E-mail: j.j.m.slot@tue.nl; Hütter, M., E-mail: m.huetter@tue.nl

    In this paper, we present a finite element implementation of fluctuating hydrodynamics with a moving boundary fitted mesh for treating the suspended particles. The thermal fluctuations are incorporated into the continuum equations using the Landau and Lifshitz approach [1]. The proposed implementation fulfills the fluctuation–dissipation theorem exactly at the discrete level. Since we restrict the equations to the creeping flow case, this takes the form of a relation between the diffusion coefficient matrix and friction matrix both at the particle and nodal level of the finite elements. Brownian motion of arbitrarily shaped particles in complex confinements can be considered within the present formulation. A multi-step time integration scheme is developed to correctly capture the drift term required in the stochastic differential equation (SDE) describing the evolution of the positions of the particles. The proposed approach is validated by simulating the Brownian motion of a sphere between two parallel plates and the motion of a spherical particle in a cylindrical cavity. The time integration algorithm and the fluctuating hydrodynamics implementation are then applied to study the diffusion and the equilibrium probability distribution of a confined circle under an external harmonic potential.

  12. Modeling Anisotropic Elastic Wave Propagation in Jointed Rock Masses

    NASA Astrophysics Data System (ADS)

    Hurley, R.; Vorobiev, O.; Ezzedine, S. M.; Antoun, T.

    2016-12-01

    We present a numerical approach for determining the anisotropic stiffness of materials with nonlinearly-compliant joints capable of sliding. The proposed method extends existing ones for upscaling the behavior of a medium with open cracks and inclusions to cases relevant to natural fractured and jointed rocks, where nonlinearly-compliant joints can undergo plastic slip. The method deviates from existing techniques by incorporating the friction and closure states of the joints, and recovers an anisotropic elastic form in the small-strain limit when joints are not sliding. We present the mathematical formulation of our method and use Representative Volume Element (RVE) simulations to evaluate its accuracy for joint sets with varying complexity. We then apply the formulation to determine anisotropic elastic constants of jointed granite found at the Nevada Nuclear Security Site (NNSS) where the Source Physics Experiments (SPE), a campaign of underground chemical explosions, are performed. Finally, we discuss the implementation of our numerical approach in a massively parallel Lagrangian code Geodyn-L and its use for studying wave propagation from underground explosions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  13. Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.

    PubMed

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A

    2011-01-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches to labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that under the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation-maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle and the left ventricle in CINE cardiac data.

  14. A Decentralized Approach to the Formulation of Hypotheses: A Hierarchical Structural Model for a Prion Self-Assembled System

    NASA Astrophysics Data System (ADS)

    Wang, Mingyang; Zhang, Feifei; Song, Chao; Shi, Pengfei; Zhu, Jin

    2016-07-01

    Innovation in hypotheses is a key transformative driver for scientific development. The conventional centralized hypothesis formulation approach, where a dominant hypothesis is typically derived from a primary phenomenon, can, inevitably, impose restriction on the range of conceivable experiments and legitimate hypotheses, and ultimately impede understanding of the system of interest. We report herein the proposal of a decentralized approach for the formulation of hypotheses, through initial preconception-free phenomenon accumulation and subsequent reticular logical reasoning processes. The two-step approach can provide an unbiased, panoramic view of the system and as such should enable the generation of a set of more coherent and therefore plausible hypotheses. As a proof-of-concept demonstration of the utility of this open-ended approach, a hierarchical model has been developed for a prion self-assembled system, allowing insight into hitherto elusive static and dynamic features associated with this intriguing structure.

  15. Levothyroxine Tablet Malabsorption Associated with Gastroparesis Corrected with Gelatin Capsule Formulation.

    PubMed

    Reardon, David P; Yoo, Peter S

    2016-01-01

    Treatment of hypothyroidism with levothyroxine sodium often requires multiple dose adjustments and can be complicated in patients with gastric and intestinal dysfunction that limits absorption. In these cases, doses are often titrated higher than commonly used in clinical practice. Multiple formulations of levothyroxine are currently available, and some may be preferred in cases of malabsorption. We report the case of a 42-year-old female who presented for living unrelated kidney transplant evaluation with myxedema while being treated with levothyroxine sodium tablets. She was noted to have gastroparesis secondary to Type I diabetes mellitus, which may have contributed to levothyroxine malabsorption. Changing to a gelatin capsule formulation quickly corrected her thyroid function assays. This case suggests that gastroparesis may affect absorption of levothyroxine tablets and that gelatin capsules may be an effective alternative therapy.

  16. Application of the KeratinoSens™ assay for assessing the skin sensitization potential of agrochemical active ingredients and formulations.

    PubMed

    Settivari, Raja S; Gehen, Sean C; Amado, Ricardo Acosta; Visconti, Nicolo R; Boverhof, Darrell R; Carney, Edward W

    2015-07-01

    Assessment of skin sensitization potential is an important component of the safety evaluation process for agrochemical products. Recently, non-animal approaches including the KeratinoSens™ assay have been developed for predicting skin sensitization potential. The utility of the KeratinoSens™ assay for multi-component mixtures such as agrochemical formulations has not previously been evaluated, and assessing it is a significant need. This study was undertaken to evaluate the predictive potential of the KeratinoSens™ assay for agrochemical formulations. The assay was conducted for 8 agrochemical active ingredients (AIs), comprising 3 sensitizers (acetochlor, meptyldinocap, triclopyr) and 5 non-sensitizers (aminopyralid, clopyralid, florasulam, methoxyfenozide, oxyfluorfen), and for 10 formulations for which in vivo sensitization data were available. The KeratinoSens™ assay correctly predicted the sensitization potential of all the AIs. For agrochemical formulations it was necessary to modify the standard assay procedure, whereby the formulation was assumed to have a common molecular weight. The resulting approach correctly predicted the sensitization potential for 3 of 4 sensitizing formulations and all 6 non-sensitizing formulations when compared to in vivo data. Only the meptyldinocap-containing formulation was misclassified, as a result of high cytotoxicity. These results demonstrate the promising utility of the KeratinoSens™ assay for evaluating the skin sensitization potential of agrochemical AIs and formulations. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a D×D matrix variable with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can practically solve only problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius-norm-regularized SDP problems.
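
    For intuition about the complexity gap, the sketch below shows the O(D^3) eigendecomposition-based projection onto the positive semidefinite cone, the kind of primitive that eigendecomposition-based dual approaches can rely on in place of a full interior-point SDP solve. It is an illustrative fragment, not the paper's algorithm.

        import numpy as np

        def project_psd(M):
            """Project a symmetric matrix onto the PSD cone by clipping negative
            eigenvalues; a single O(D^3) eigendecomposition, versus the roughly
            O(D^6.5) cost of a general interior-point SDP solve."""
            w, V = np.linalg.eigh((M + M.T) / 2.0)
            return (V * np.clip(w, 0.0, None)) @ V.T

        def mahalanobis(x, y, M):
            d = x - y
            return float(d @ M @ d)

        # toy use: repair an indefinite candidate metric, then measure a distance
        rng = np.random.default_rng(1)
        A = rng.standard_normal((5, 5))
        M = project_psd(A + A.T)
        print(mahalanobis(rng.standard_normal(5), rng.standard_normal(5), M))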

  18. Depression after traumatic brain injury: a biopsychosocial cultural perspective.

    PubMed

    Roy, Durga; Jayaram, Geetha; Vassila, Alex; Keach, Shari; Rao, Vani

    2015-02-01

    There are several challenges in diagnosing and treating mental illness among South Asians. Often, a patient's case presentation cannot be adequately formulated strictly within a biopsychosocial model. Cultural components play an imperative role in explaining certain psychiatric symptoms and can guide treatment. With the growing population of immigrants coming to the United States, many of whom require treatment for mental illness, it is essential that clinicians be cognizant of incorporating cultural perspectives when treating such patients. The authors describe the case of a 24-year-old South Asian male who suffered an exacerbation of a depressive syndrome after a traumatic brain injury. Using a biopsychosocial-cultural approach, this case highlights how South Asian cultural values can contribute to and incite psychiatric symptoms while simultaneously providing protective drivers for treatment outcomes. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Nonclassical acoustics

    NASA Technical Reports Server (NTRS)

    Kentzer, C. P.

    1976-01-01

    A statistical approach to sound propagation is considered in situations where, due to the presence of large gradients of properties of the medium, the classical (deterministic) treatment of wave motion is inadequate. Mathematical methods for wave motions not restricted to small wavelengths (analogous to known methods of quantum mechanics) are used to formulate a wave theory of sound in nonuniform flows. Nonlinear transport equations for field probabilities are derived for the limiting case of noninteracting sound waves and it is postulated that such transport equations, appropriately generalized, may be used to predict the statistical behavior of sound in arbitrary flows.

  20. Phase equilibria in polymer blend thin films: A Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Souche, M.; Clarke, N.

    2009-12-01

    We propose a Hamiltonian formulation of the Flory-Huggins-de Gennes theory describing a polymer blend thin film. We then focus on the case of 50:50 polymer blends confined between antisymmetric walls. The different phases of the system and the transitions between them, including finite-size effects, are systematically studied through their relation with the geometry of the Hamiltonian flow in phase space. This method provides an easy and efficient way, with strong graphical insight, to infer the qualitative physical behavior of polymer blend thin films.

  1. On Hybrid and mixed finite element methods

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1981-01-01

    Three versions of the assumed stress hybrid model in finite element methods and the corresponding variational principles for the formulation are presented. Examples of rank deficiency for stiffness matrices by the hybrid stress model are given and their corresponding kinematic deformation modes are identified. A discussion of the derivation of general semi-Loof elements for plates and shells by the hybrid stress method is given. It is shown that the equilibrium model by Fraeijs de Veubeke can be derived by the approach of the hybrid stress model as a special case of semi-Loof elements.

  2. [Clinical ethics consultation - an integrative model for practice and reflection].

    PubMed

    Reiter-Theil, Stella

    2008-07-01

    Broad evidence exists that health care professionals face ethical difficulties in patient care that demand a spectrum of useful ethics support services. Clinical ethics consultation is one of these forms of ethics support and is effective in the acute setting. An authentic case is presented as an illustration. We introduce an integrative model covering the activities characteristic of ethics consultation and going beyond "school"-specific approaches. Finally, we formulate some do's and don'ts of ethics consultation that are considered key issues for successful practice.

  3. An improved computational approach for multilevel optimum design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    A penalty-function algorithm employing Newton's method with approximate second derivatives (Haftka and Starnes, 1980) is developed for two-level hierarchical design optimization problems. The difficulties posed by discontinuous behavior in typical multilevel problems are explained and illustrated for the case of a three-bar truss; the algorithm is formulated; and its advantages are demonstrated in the problem of a portal framework having three beams (described by six cross-section parameters), subjected to two loading conditions, and to be constructed in six different materials for comparison. The final design parameters are listed in a table.

  4. Positive spaces, generalized semi-densities, and quantum interactions

    NASA Astrophysics Data System (ADS)

    Canarutto, Daniel

    2012-03-01

    The basics of quantum particle physics on a curved Lorentzian background are expressed in a formulation which has original aspects and exploits some non-standard mathematical notions. In particular, positive spaces and generalized semi-densities (in a distributional sense) are shown to link, in a natural way, discrete multi-particle spaces to distributional bundles of quantum states. The treatment of spinor and boson fields is partly original also from an algebraic point of view and suggests a non-standard approach to quantum interactions. The case of electroweak interactions provides examples.

  5. On the BRST Quantization of the Massless Bosonic Particle in Twistor-Like Formulation

    NASA Astrophysics Data System (ADS)

    Bandos, Igor; Maznytsia, Alexey; Rudychev, Igor; Sorokin, Dmitri

    We study some features of bosonic-particle path-integral quantization in a twistor-like approach by the use of the BRST-BFV-quantization prescription. In the course of the Hamiltonian analysis we observe links between various formulations of the twistor-like particle by performing a conversion of the Hamiltonian constraints of one formulation to another. A particular feature of the conversion procedure applied to turn the second-class constraints into first-class constraints is that the simplest Lorentz-covariant way to do this is to convert a full mixed set of the initial first- and second-class constraints rather than explicitly extracting and converting only the second-class constraints. Another novel feature of the conversion procedure applied below is that in the case of the D = 4 and D = 6 twistor-like particle the number of new auxiliary Lorentz-covariant coordinates, which one introduces to get a system of first-class constraints in an extended phase space, exceeds the number of independent second-class constraints of the original dynamical system. We calculate the twistor-like particle propagator in D = 3,4,6 space-time dimensions and show that it coincides with that of a conventional massless bosonic particle.

  6. Generalized reference fields and source interpolation for the difference formulation of radiation transport

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham

    2010-03-01

    In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field: the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but it introduces time- and space-derivative source terms that cannot be determined until the end of the time step. The space-derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step or, in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space-derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.

  7. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second-order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  8. High drug loading self-microemulsifying/micelle formulation: design by high-throughput formulation screening system and in vivo evaluation.

    PubMed

    Sakai, Kenichi; Obata, Kouki; Yoshikawa, Mayumi; Takano, Ryusuke; Shibata, Masaki; Maeda, Hiroyuki; Mizutani, Akihiko; Terada, Katsuhide

    2012-10-01

    To design a high drug loading formulation of a self-microemulsifying/micelle system. A poorly-soluble model drug (CH5137291), 8 hydrophilic surfactants (HS), 10 lipophilic surfactants (LS), 5 oils, and PEG400 were used. A high loading formulation was designed by the following stepwise approach using a high-throughput formulation screening (HTFS) system: (1) an oil/solvent was selected based on the solubility of the drug; (2) a suitable HS for high loading was selected by screening emulsion/micelle size and phase stability in binary systems (HS, oil/solvent) with increasing loading levels; (3) a LS that formed a broad SMEDDS/micelle area on a phase diagram containing the HS and oil/solvent was selected by the same screenings; (4) an optimized formulation was selected by evaluating the loading capacity of the crystalline drug. The aqueous solubility behavior and oral absorption (Beagle dog) of the optimized formulation were compared with conventional formulations (jet-milled, PEG400). As the optimized formulation, d-α-tocopheryl polyoxyethylene 1000 succinic ester:PEG400 = 8:2 was selected, which achieved the target loading level (200 mg/mL). The formulation formed a fine emulsion/micelle (49.1 nm), and generated and maintained a supersaturated state at a higher level than the conventional formulations. In the oral absorption test, the area under the plasma concentration-time curve of the optimized formulation was 16.5-fold higher than that of the jet-milled formulation. The high loading formulation designed by the stepwise approach using the HTFS system improved the oral absorption of the poorly-soluble model drug.

  9. Equivalence between three scattering formulations for ultrasonic wave propagation in particulate mixtures

    NASA Astrophysics Data System (ADS)

    Challis, R. E.; Tebbutt, J. S.; Holmes, A. K.

    1998-12-01

    The aim of this paper is to present a unified approach to the calculation of the complex wavenumber for a randomly distributed ensemble of homogeneous isotropic spheres suspended in a homogeneous isotropic continuum. Three classical formulations of the diffraction problem for a compression wave incident on a single particle are reviewed; the first is for liquid particles in a liquid continuum (Epstein and Carhart), the second for solid or liquid particles in a liquid continuum (Allegra and Hawley), and the third for solid particles in a solid continuum (Ying and Truell). Equivalences between these formulations are demonstrated and it is shown that the Allegra and Hawley formulation can be adapted to provide a basis for calculation in all three regimes. The complex wavenumber that results from an ensemble of such scatterers is treated using the formulations of Foldy (simple forward scattering), Waterman and Truell, and Lloyd and Berry (multiple scattering). The analysis is extended to provide an approximation for the case of a distribution of particle sizes in the mixture. A number of experimental measurements using a broadband spectrometric technique (reported elsewhere) to obtain the attenuation coefficient and phase velocity as functions of frequency are presented for various mixtures of differing contrasts in physical properties between phases in order to provide a comparison with theory. The materials used were aqueous suspensions of polystyrene spheres, silica spheres, iron spheres, a pigment (AHR), droplets of 1-bromohexadecane, and a suspension of talc particles in a cured epoxy resin.
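
    For reference, the single- and multiple-scattering dispersion relations being compared take the standard textbook forms (the paper's notation may differ). With n_0 the number density of scatterers, k the host wavenumber, K the effective complex wavenumber, and f(0), f(\pi) the forward and backward far-field scattering amplitudes,

        \frac{K^{2}}{k^{2}} \;=\; 1 + \frac{4\pi n_{0}}{k^{2}}\,f(0) \qquad \text{(Foldy)},

        \frac{K^{2}}{k^{2}} \;=\; \left[1 + \frac{2\pi n_{0}}{k^{2}}\,f(0)\right]^{2} - \left[\frac{2\pi n_{0}}{k^{2}}\,f(\pi)\right]^{2} \qquad \text{(Waterman and Truell)},

    with the Lloyd and Berry result replacing the backscattering term by an integral over the full angular scattering amplitude.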

  10. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    PubMed Central

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  11. An examination of three methods of psychodynamic formulation based on the same videotaped interview.

    PubMed

    Perry, J C; Luborsky, L; Silberschatz, G; Popp, C

    1989-08-01

    While psychodynamic theory and therapy are approaching their centennial, the science of psychodynamics is still in an earlier developmental stage. Any scientific field generates the most controversy and excitement when it is still developing. For psychodynamic psychology this means that its basic units of observation as well as its rules for justifying clinical inference in formulating and testing dynamic hypotheses require more development. In short, we are still evaluating different methods for both discovering and validating psychodynamic propositions. This is especially true for central features of dynamic psychology, including intrapsychic conflict, relationships, and transference patterns. This report compares three different methods for making a dynamic case formulation: 1) the Core Conflictual Relationship Theme (CCRT) of Luborsky (Crits-Christoph and Luborsky 1985a,b; Luborsky 1976, 1977, 1984, and companion paper in this issue; Levine and Luborsky 1981), 2) the Plan Diagnosis (PD) method of Silberschatz, Curtis and colleagues of the Mount Zion group (Caston 1986; Curtis and Silberschatz 1986; Rosenberg et al. 1986; Curtis et al. 1988) and, 3) the Idiographic Conflict Formulation (ICF) of Perry and Cooper (1985, 1986, and companion paper in this issue). Each has a slightly different focus. The CCRT focuses on relationship patterns as the central feature of individual dynamics and transference in or out of the treatment situation. The Plan Diagnosis focuses on dynamic features related to transference, resistance and insight in therapy. The Idiographic Conflict Formulation focuses on stress and internal conflict, and the individual's adaptation to them in or out of treatment.

  12. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    PubMed

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  13. What is the Nondominated Formulation? A Demonstration of de Novo Water Supply Portfolio Planning Under Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Kasprzyk, J. R.; Reed, P. M.; Characklis, G. W.; Kirsch, B. R.

    2010-12-01

    This paper proposes and demonstrates a new interactive framework for sensitivity-informed de Novo programming, in which a learning approach to formulating decision problems can confront the deep uncertainty within water management problems. The framework couples global sensitivity analysis using Sobol’ variance decomposition with multiobjective evolutionary algorithms (MOEAs) to generate planning alternatives and test their robustness to new modeling assumptions and scenarios. We explore these issues within the context of a risk-based water supply management problem, where a city seeks the most efficient use of a water market. The case study examines a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas, using both a 10-year planning horizon and an extreme single-year drought scenario. The city’s water supply portfolio comprises a volume of permanent rights to reservoir inflows and use of a water market through anticipatory thresholds for acquiring transfers of water through optioning and spot leases. Diagnostic information from the Sobol’ variance decomposition is used to create a sensitivity-informed problem formulation testing different decision variable configurations, with tradeoffs for the formulation solved using a MOEA. Subsequent analysis uses the drought scenario to expose tradeoffs between long-term and short-term planning and illustrate the impact of deeply uncertain assumptions on water availability in droughts. The results demonstrate water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market and show how to adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
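
    As a schematic of the sensitivity step, the sketch below computes first- and total-order Sobol' indices with the open-source SALib package for a toy stand-in of the water-portfolio model; the input names, bounds, and response function are invented and are not the LRGV model.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Hypothetical inputs: permanent rights volume and two market thresholds,
        # all rescaled to [0, 1] for the example.
        problem = {
            "num_vars": 3,
            "names": ["rights", "opt_threshold", "lease_threshold"],
            "bounds": [[0.0, 1.0]] * 3,
        }

        def portfolio_cost(X):
            # toy response surface standing in for the simulated portfolio cost
            r, o, l = X[:, 0], X[:, 1], X[:, 2]
            return r ** 2 + 0.5 * o * l + 0.1 * np.sin(6.0 * l)

        X = saltelli.sample(problem, 1024)          # Saltelli cross-sampling design
        Si = sobol.analyze(problem, portfolio_cost(X))
        print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
        print(dict(zip(problem["names"], Si["ST"])))   # total-order indices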

  14. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    PubMed

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.
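
    For reference, the (single) Kubo-transformed correlation function that the centroid correlation functions connect to has the standard form below; the double Kubo transform used in the paper iterates the imaginary-time average a second time, following Jang and Voth, and is not reproduced here.

    ```latex
    K_{AB}(t) \;=\; \frac{1}{\beta Z}\int_{0}^{\beta} d\lambda\,
    \operatorname{Tr}\!\left[ e^{-(\beta-\lambda)\hat{H}}\,\hat{A}\,
    e^{-\lambda\hat{H}}\,\hat{B}(t) \right],
    \qquad Z = \operatorname{Tr}\, e^{-\beta\hat{H}} .
    ```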

  15. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    NASA Astrophysics Data System (ADS)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  16. Protective Effect of Contemporary Pertussis Vaccines: A Systematic Review and Meta-analysis.

    PubMed

    Fulton, T Roice; Phadke, Varun K; Orenstein, Walter A; Hinman, Alan R; Johnson, Wayne D; Omer, Saad B

    2016-05-01

    Acellular pertussis (aP) and whole-cell (wP) pertussis vaccines are presumed to have similar short-term (<3 years after completion of the primary series) efficacy. However, vaccine effect varies between individual pertussis vaccine formulations, and many originally studied formulations are now unavailable. An updated analysis of the short-term protective effect of pertussis vaccines limited to formulations currently on the market in developed countries is needed. We conducted a systematic review and meta-analysis of published studies that evaluated pertussis vaccine efficacy or effectiveness within 3 years after completion (≥3 doses) of a primary series of a currently available aP or wP vaccine formulation. The primary outcome was based on the World Health Organization (WHO) clinical case definitions for pertussis. Study quality was assessed using the approach developed by the Child Health Epidemiology Research Group. We determined overall effect sizes using random-effects meta-analyses, stratified by vaccine (aP or wP) and study (efficacy or effectiveness) type. Meta-analysis of 2 aP vaccine efficacy studies (assessing the 3-component GlaxoSmithKline and 5-component Sanofi-Pasteur formulations) yielded an overall aP vaccine efficacy of 84% (95% confidence interval [CI], 81%-87%). Meta-analysis of 3 wP vaccine effectiveness studies (assessing the Behringwerke, Pasteur/Mérieux, and SmithKline Beecham formulations) yielded an overall wP vaccine effectiveness of 94% (95% CI, 88%-97%) (both I² = 0%). Although all contemporary aP and wP formulations protect against pertussis disease, in this meta-analysis the point estimate for short-term protective effect against WHO-defined pertussis in young children was lower for currently available aP vaccines than wP vaccines. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
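
    The pooling step reported here can be reproduced in outline with a DerSimonian-Laird random-effects meta-analysis; the sketch below uses invented effect sizes purely to show the mechanics, not the study data.

    ```python
    import numpy as np

    def dersimonian_laird(y, v):
        """Random-effects pooling of effect sizes y with within-study variances v."""
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(y) - 1)) / c)       # between-study variance
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical log relative risks and within-study variances, illustration only.
    log_rr = np.array([-1.85, -1.70, -1.95])
    var = np.array([0.020, 0.030, 0.025])
    mu, se, tau2 = dersimonian_laird(log_rr, var)
    print(f"pooled VE = {1 - np.exp(mu):.1%}, tau^2 = {tau2:.3f}")  # VE = 1 - RR
    ```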

  17. Computational cost of two alternative formulations of Cahn-Hilliard equations

    NASA Astrophysics Data System (ADS)

    Paszyński, Maciej; Gurgul, Grzegorz; Łoś, Marcin; Szeliga, Danuta

    2018-05-01

    In this paper we propose two formulations of the Cahn-Hilliard equation, which has applications in cancer growth modeling and phase-field simulations in materials science. The first formulation uses a single fourth-order partial differential equation (PDE); the second uses a system of two second-order PDEs. Finally, we compare the computational costs of direct solvers for both formulations, using the refined isogeometric analysis (rIGA) approach.
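
    For concreteness, the two formulations correspond to the standard primal and mixed statements of the Cahn-Hilliard equation (the paper's exact notation may differ). The primal form is a single fourth-order PDE,

    ```latex
    \frac{\partial u}{\partial t}
      \;=\; \nabla\cdot\Bigl( M \,\nabla\bigl( f'(u) - \varepsilon^{2}\Delta u \bigr) \Bigr),
    ```

    while the mixed form introduces the chemical potential \mu as an auxiliary unknown, giving two second-order PDEs,

    ```latex
    \frac{\partial u}{\partial t} = \nabla\cdot\bigl( M\,\nabla\mu \bigr),
    \qquad
    \mu = f'(u) - \varepsilon^{2}\Delta u .
    ```

    The fourth-order form requires higher-continuity basis functions (natural in isogeometric analysis), whereas the mixed form admits C0 elements at the cost of extra unknowns; this trade-off is what drives the solver-cost comparison.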

  18. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case.
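
    A baseline version of this comparison can be sketched by evaluating the log-likelihood ratio of a known template under Gaussian versus Student's t noise models. Note this uses the exact heavy-tailed likelihood rather than the paper's Edgeworth or Gaussian-mapping constructions, and all signal parameters below are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 4096
    template = np.sin(2 * np.pi * 0.05 * np.arange(n))   # toy waveform template
    noise = stats.t.rvs(df=4, size=n, random_state=rng)  # heavy-tailed noise
    data = 0.3 * template + noise

    def log_lr(x, h, amp, logpdf):
        """Log-likelihood ratio: 'signal with amplitude amp present' vs 'noise only'."""
        return np.sum(logpdf(x - amp * h)) - np.sum(logpdf(x))

    gauss = lambda r: stats.norm.logpdf(r, scale=np.std(data))
    student = lambda r: stats.t.logpdf(r, df=4)
    print("Gaussian model LLR :", log_lr(data, template, 0.3, gauss))
    print("Student-t model LLR:", log_lr(data, template, 0.3, student))
    ```

    With heavy-tailed noise the t-based statistic tends to separate signal from noise more cleanly than the Gaussian matched filter, which is the qualitative point of the paper.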

  19. Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Bielak, J.

    2008-12-01

    I present two numerical techniques to solve the dynamic rupture problem. I revisit and modify the Split Node approach and introduce a Stress Glut type method. Both algorithms are implemented in an iso-/sub-parametric FEM solver. In the first case, I discuss the formulation and perform a convergence analysis for different orders of approximation in the acoustic case. I describe the algorithm of the second methodology as well as the assumptions made. The key to the new technique is an accurate representation of the traction. Thus, I devote part of the discussion to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparing against Split Node solutions.

  20. Self Realization and Meaning Making in the Face of Adversity: A Eudaimonic Approach to Human Resilience

    PubMed Central

    Ryff, Carol D.

    2014-01-01

    This article considers a eudaimonic approach to psychological well-being built on the integration of developmental, existential and humanistic formulations as well as distant writings of Aristotle. Eudaimonia emphasizes meaning-making, self-realization and growth, quality connections to others, self-knowledge, managing life, and marching to one's own drummer. These qualities may be of particular importance in the confrontation with significant life challenges. Prior formulations of resilience are reviewed to underscore the unique features of a eudaimonic approach. Empirical findings on meaning-making and self-realization are then reviewed to document the capacity of some to maintain high well-being in the face of socioeconomic inequality, the challenges of aging, and in dealing with specific challenges (child abuse, cancer, loss of spouse). Moreover, those who sustain or deepen their well-being as they deal with adversity show better health profiles, thereby underscoring broader benefits of eudaimonia. How meaning is made and personal capacities realized in the confrontation with challenge is revealed by narrative accounts. Thus, the latter half of the article illustrates human resilience in action via the personal stories of three individuals (Mark Mathabane, Ben Mattlin, Viktor Frankl) who endured unimaginable hardship, but prevailed and grew in the face of it. The essential roles of strong social ties and the capacity to derive meaning and realize personal growth in grappling with adversity are unmistakable in all three cases. PMID:25435804

  1. SU-F-E-15: Initial Experience Implementing a Case Method Teaching Approach to Radiation Oncology Physics Residents, Graduate Students and Doctorate of Medical Physics Students

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, A

    Purpose: The Case Method Teaching approach is a teaching tool commonly used in business schools to challenge students with real-world situations—i.e. cases. The students are placed in the role of the decision maker and have to provide a solution based on the multitude of information provided. Specifically, students must develop an ability to quickly make sense of a complex problem, provide a solution incorporating all of the objectives (at times conflicting) and constraints, and communicate that solution in a succinct, professional and effective manner. The validity of the solution is highly dependent on the auxiliary information provided in the case and the basic didactic knowledge of the student. A Case Method Teaching approach was developed and implemented into an ongoing course focused on AAPM Task Group reports at UTHSCSA. Methods: A current course at UTHSCSA reviews and discusses 15 AAPM Task Group reports per semester. The course is structured into three topic modules: Imaging QA, Stereotactic Radiotherapy, and Special Patient Measurements—i.e. pacemakers, fetal dose. After a topic module is complete, the students are divided into groups (2–3 people) and are asked to review a case study related to the module topic. Students then provide a solution presented in an executive summary and class presentation. Results: Case studies were created to address each module topic. Through team work and whole-class discussion, a collaborative learning environment was established. Students additionally learned concepts such as vendor relations, financial negotiations, capital project management, and competitive strategy. Conclusion: The Case Method Teaching approach is an effective teaching tool to further enhance the learning experience of radiation oncology physics students by presenting them with thought-provoking dilemmas that require students to distinguish pertinent from peripheral information, formulate strategies and recommendations for action, and confront obstacles to implementation.

  2. Factors Affecting the Design of Slow Release Formulations of Herbicides Based on Clay-Surfactant Systems. A Methodological Approach

    PubMed Central

    Galán-Jiménez, María del Carmen; Mishael, Yael-Golda; Nir, Shlomo; Morillo, Esmeralda; Undabeytia, Tomás

    2013-01-01

    A search for clay-surfactant based formulations with a high percentage of active ingredient that can yield slow release of the active molecules is described. The active ingredients were the herbicides metribuzin (MZ), mesotrione (MS) and flurtamone (FL), whose solubilities were examined in the presence of four commercial surfactants: (i) neutral: two berols (B048, B266) and an alkylpolyglucoside (AG6202); (ii) cationic: an ethoxylated amine (ET/15). A significant percentage of active ingredient (a.i.) in the clay/surfactant/herbicide formulations could be achieved only when most of the surfactant was added as micelles. MZ and FL were well solubilized by berols, whereas MS by ET/15. Sorption of surfactants on the clay mineral sepiolite occurred mostly by sorption of micelles, and the loadings exceeded the CEC. Higher loadings were determined for B266 and ET/15. The sorption of surfactants was modeled by using the Langmuir-Scatchard equation, which permitted the determination of binding coefficients that could be used for further predictions of the sorbed amounts of surfactants under a wide range of clay/surfactant ratios. The possibility of designing clay-surfactant based formulations of certain herbicides was tested by assuming the same ratio between herbicides and surfactants in the formulations as for herbicides incorporated in micelles in solution. Calculations indicated that satisfactory FL formulations could not be synthesized. The experimental fractions of herbicides in the formulations were in agreement with the predicted ones for MS and MZ. The validity of this approach was confirmed in in vitro release tests that showed a slowing down of the release of a.i. from the designed formulations relative to the technical products. Soil dissipation studies with MS formulations also showed improved bioactivity of the clay-surfactant formulation relative to the commercial one. This methodological approach can be extended to other clay-surfactant systems for encapsulation and slow release of target molecules of interest. PMID:23527087
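
    The binding-coefficient step can be illustrated with a least-squares fit of a simple Langmuir isotherm; the paper uses the Langmuir-Scatchard equation, so the plain Langmuir form and the data below are simplifications for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, q_max, K):
        """Sorbed amount vs. equilibrium concentration, q = q_max*K*c/(1 + K*c)."""
        return q_max * K * c / (1.0 + K * c)

    # Hypothetical sorption data (sorbed amount vs. concentration), illustrative only.
    c = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
    q = np.array([0.09, 0.35, 0.55, 0.75, 0.92, 0.99])

    (q_max, K), _ = curve_fit(langmuir, c, q, p0=(1.0, 1.0))
    print(f"q_max = {q_max:.2f}, K = {K:.2f}")
    ```

    Fitted binding parameters of this kind are what allow the sorbed surfactant amounts to be predicted over a wide range of clay/surfactant ratios.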

  3. Development of an ANN optimized mucoadhesive buccal tablet containing flurbiprofen and lidocaine for dental pain.

    PubMed

    Hussain, Amjad; Syed, Muhammad Ali; Abbas, Nasir; Hanif, Sana; Arshad, Muhammad Sohail; Bukhari, Nadeem Irfan; Hussain, Khalid; Akhlaq, Muhammad; Ahmad, Zeeshan

    2016-06-01

    A novel mucoadhesive buccal tablet containing flurbiprofen (FLB) and lidocaine HCl (LID) was prepared to relieve dental pain. Tablet formulations (F1-F9) were prepared using variable quantities of mucoadhesive agents, hydroxypropyl methyl cellulose (HPMC) and sodium alginate (SA). The formulations were evaluated for their physicochemical properties, mucoadhesive strength and mucoadhesion time, swellability index and in vitro release of active agents. Release of both drugs depended on the relative ratio of HPMC:SA. However, mucoadhesive strength and mucoadhesion time were better in formulations, containing higher proportions of HPMC compared to SA. An artificial neural network (ANN) approach was applied to optimise formulations based on known effective parameters (i.e., mucoadhesive strength, mucoadhesion time and drug release), which proved valuable. This study indicates that an effective buccal tablet formulation of flurbiprofen and lidocaine can be prepared via an optimized ANN approach.
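
    The ANN optimization step can be sketched with a small feed-forward regressor mapping formulation composition to a response such as mucoadhesion time; the compositions and responses below are invented placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical design matrix: [HPMC mg, sodium alginate mg] per tablet (F1-F9 style).
    X = np.array([[20, 80], [40, 60], [60, 40], [80, 20], [50, 50],
                  [30, 70], [70, 30], [90, 10], [10, 90]], dtype=float)
    # Hypothetical response: mucoadhesion time in hours, for illustration only.
    y = np.array([2.1, 3.0, 4.2, 5.5, 3.8, 2.6, 4.9, 6.1, 1.8])

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print(model.predict([[65.0, 35.0]]))  # screen a candidate HPMC:SA ratio
    ```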

  4. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  5. Criteria for applicability of the impulse approach to collisions

    NASA Astrophysics Data System (ADS)

    Sharma, Ramesh D.; Bakshi, Pradip M.; Sindoni, Joseph M.

    1990-06-01

    Using an exact formulation of impulse approach (IA) to atom-diatom collisions, we assess its internal consistency. By comparing the cross sections in the forward and reverse directions for the vibrational-rotational inelastic processes, using the half-on-the-shell (post and prior) models of the two-body t matrix, we show that in both cases the IA leads to a violation of the semidetailed balance (SDB) condition for small scattering angles. An off-shell model for the two-body t matrix, which preserves SDB, is shown to have other serious shortcomings. The cross sections are studied quantitatively as a function of the relative translational energy and the mass of the incident particle, and criteria discussed for the applicability of IA.

  6. Unique Approach to Dental Management of Children with Hearing Impairment.

    PubMed

    Renahan, Navanith; Varma, R Balagopal; Kumaran, Parvathy; Xavier, Arun M

    2017-01-01

    The number of deaf children has dramatically increased in the past few decades. These children present to the pediatric dentist a unique set of challenges mostly pertaining to the establishment of communication with them. There have been very few attempts in the past to break down these challenges and formulate a strategy on how to manage them effectively. This is a case report of a child who was successfully managed using two different modes of communication. Finally, the advantages and disadvantages are mentioned, and a common strategy incorporating the positives of both the methods has been devised.

  7. Implementing a Trauma-Informed Model of Care in a Community Acute Mental Health Team.

    PubMed

    Moloney, Bill; Cameron, Ian; Baker, Ashley; Feeney, Johanna; Korner, Anthony; Kornhaber, Rachel; Cleary, Michelle; McLean, Loyola

    2018-04-12

    In this paper, we demonstrate the value of implementing a Trauma-Informed Model of Care in a Community Acute Mental Health Team by providing brief intensive treatment (comprising risk interventions, brief counselling, collaborative formulation and pharmacological treatment). The team utilised the Conversational Model (CM), a psychotherapeutic approach for complex trauma. Key features of the CM are described in this paper using a clinical case study. The addition of the Conversational Model approach to practice has enabled better understandings of consumers' capacities and ways to then engage, converse, and intervene. The implementation of this intervention has led to a greater sense of self-efficacy amongst clinicians, who can now articulate a clear counselling model of care.

  8. Drug carrier systems for solubility enhancement of BCS class II drugs: a critical review.

    PubMed

    Kumar, Sumit; Bhargava, Deepak; Thakkar, Arti; Arora, Saahil

    2013-01-01

    Poor aqueous solubility impedes a drug's bioavailability and challenges its pharmaceutical development. Pharmaceutical development of drugs with poor water solubility requires selecting a suitable formulation strategy from among various techniques. Various approaches have been investigated extensively to improve the aqueous solubility and poor dissolution rate of BCS class II and IV drugs. In this literature review, novel formulation options for class II drugs, such as micronization, self-emulsification, cyclodextrin complexation, co-crystallisation, supercritical fluid technology, solubilisation by change in pH, salt formation, co-solvents, melt granulation, solid dispersion, and liposomal/niosomal formulations, are discussed in detail to introduce the biopharmaceutical challenges and recent approaches that facilitate more efficient drug formulation and development.

  9. Problem formulation in the environmental risk assessment for genetically modified plants

    PubMed Central

    Wolt, Jeffrey D.; Keese, Paul; Raybould, Alan; Burachik, Moisés; Gray, Alan; Olin, Stephen S.; Schiemann, Joachim; Sears, Mark; Wu, Felicia

    2009-01-01

    Problem formulation is the first step in environmental risk assessment (ERA) where policy goals, scope, assessment endpoints, and methodology are distilled to an explicitly stated problem and approach for analysis. The consistency and utility of ERAs for genetically modified (GM) plants can be improved through rigorous problem formulation (PF), producing an analysis plan that describes relevant exposure scenarios and the potential consequences of these scenarios. A properly executed PF assures the relevance of ERA outcomes for decision-making. Adopting a harmonized approach to problem formulation should bring about greater uniformity in the ERA process for GM plants among regulatory regimes globally. This paper is the product of an international expert group convened by the International Life Sciences Institute (ILSI) Research Foundation. PMID:19757133

  10. Functional renormalization group approach to SU(N) Heisenberg models: Real-space renormalization group at arbitrary N

    NASA Astrophysics Data System (ADS)

    Buessen, Finn Lasse; Roscher, Dietrich; Diehl, Sebastian; Trebst, Simon

    2018-02-01

    The pseudofermion functional renormalization group (pf-FRG) is one of the few numerical approaches that has been demonstrated to quantitatively determine the ordering tendencies of frustrated quantum magnets in two and three spatial dimensions. The approach, however, relies on a number of presumptions and approximations, in particular the choice of pseudofermion decomposition and the truncation of an infinite number of flow equations to a finite set. Here we generalize the pf-FRG approach to SU(N)-spin systems with arbitrary N and demonstrate that the scheme becomes exact in the large-N limit. Numerically solving the generalized real-space renormalization group equations for arbitrary N, we can make a stringent connection between the physically most significant case of SU(2) spins and more accessible SU(N) models. In a case study of the square-lattice SU(N) Heisenberg antiferromagnet, we explicitly demonstrate that the generalized pf-FRG approach is capable of identifying the instability indicating the transition into a staggered flux spin liquid ground state in these models for large, but finite, values of N. In a companion paper [Roscher et al., Phys. Rev. B 97, 064416 (2018), 10.1103/PhysRevB.97.064416] we formulate a momentum-space pf-FRG approach for SU(N) spin models that allows us to explicitly study the large-N limit and access the low-temperature spin liquid phase.

  11. Formulation design for poorly water-soluble drugs based on biopharmaceutics classification system: basic approaches and practical applications.

    PubMed

    Kawabata, Yohei; Wada, Koichi; Nakatani, Manabu; Yamada, Shizuo; Onoue, Satomi

    2011-11-25

    The poor oral bioavailability arising from poor aqueous solubility should make drug research and development more difficult. Various approaches have been developed with a focus on enhancement of the solubility, dissolution rate, and oral bioavailability of poorly water-soluble drugs. To complete development work within a limited amount of time, the establishment of a suitable formulation strategy should be a key consideration for the pharmaceutical development of poorly water-soluble drugs. In this article, viable formulation options are reviewed on the basis of the biopharmaceutics classification system of drug substances. The article describes the basic approaches for poorly water-soluble drugs, such as crystal modification, micronization, amorphization, self-emulsification, cyclodextrin complexation, and pH modification. Literature-based examples of the formulation options for poorly water-soluble compounds and their practical application to marketed products are also provided. Classification of drug candidates based on their biopharmaceutical properties can provide an indication of the difficulty of drug development work. A better understanding of the physicochemical and biopharmaceutical properties of drug substances and the limitations of each delivery option should lead to efficient formulation development for poorly water-soluble drugs. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using forced degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  13. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being made to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  14. Hamiltonian formulation of Palatini f(R) theories a la Brans-Dicke theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olmo, Gonzalo J.; Sanchis-Alepuz, Helios; Institut fuer Physik, Karl-Franzens-Universitaet Graz

    2011-05-15

    We study the Hamiltonian formulation of f(R) theories of gravity both in metric and in Palatini formalism using their classical equivalence with Brans-Dicke theories with a nontrivial potential. The Palatini case, which corresponds to the ω = -3/2 Brans-Dicke theory, requires special attention because of new constraints associated with the scalar field, which is nondynamical. We derive, compare, and discuss the constraints and evolution equations for the ω = -3/2 and ω ≠ -3/2 cases. Based on the properties of the constraint and evolution equations, we find that, contrary to certain claims in the literature, the Cauchy problem for the ω = -3/2 case is well formulated and there is no reason to believe that it is not well posed in general.

  15. Enhancing the ecological risk assessment process.

    PubMed

    Dale, Virginia H; Biddinger, Gregory R; Newman, Michael C; Oris, James T; Suter, Glenn W; Thompson, Timothy; Armitage, Thomas M; Meyer, Judith L; Allen-King, Richelle M; Burton, G Allen; Chapman, Peter M; Conquest, Loveday L; Fernandez, Ivan J; Landis, Wayne G; Master, Lawrence L; Mitsch, William J; Mueller, Thomas C; Rabeni, Charles F; Rodewald, Amanda D; Sanders, James G; van Heerden, Ivor L

    2008-07-01

    The Ecological Processes and Effects Committee of the US Environmental Protection Agency Science Advisory Board conducted a self-initiated study and convened a public workshop to characterize the state of ecological risk assessment (ERA), with a view toward advancing the science and application of the process. That survey and analysis of ERA in decision making shows that such assessments have been most effective when clear management goals were included in the problem formulation; translated into information needs; and developed in collaboration with decision makers, assessors, scientists, and stakeholders. This process is best facilitated when risk managers, risk assessors, and stakeholders are engaged in an ongoing dialogue about problem formulation. Identification and acknowledgment of uncertainties that have the potential to profoundly affect the results and outcome of risk assessments also improves assessment effectiveness. Thus we suggest 1) thorough peer review of ERAs be conducted at the problem formulation stage and 2) the predictive power of risk-based decision making be expanded to reduce uncertainties through analytical and methodological approaches like life cycle analysis. Risk assessment and monitoring programs need better integration to reduce uncertainty and to evaluate risk management decision outcomes. Postdecision audit programs should be initiated to evaluate the environmental outcomes of risk-based decisions. In addition, a process should be developed to demonstrate how monitoring data can be used to reduce uncertainties. Ecological risk assessments should include the effects of chemical and nonchemical stressors at multiple levels of biological organization and spatial scale, and the extent and resolution of the pertinent scales and levels of organization should be explicitly considered during problem formulation. An approach to interpreting lines of evidence and weight of evidence is critically needed for complex assessments, and it would be useful to develop case studies and/or standards of practice for interpreting lines of evidence. In addition, tools for cumulative risk assessment should be developed because contaminants are often released into stressed environments.

  16. Time-dependent theoretical treatments of the dynamics of electrons and nuclei in molecular systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deumens, E.; Diz, A.; Longo, R.

    1994-07-01

    An overview is presented of methods for time-dependent treatments of molecules as systems of electrons and nuclei. The theoretical details of these methods are reviewed and contrasted in the light of a recently developed time-dependent method called electron-nuclear dynamics. Electron-nuclear dynamics (END) is a formulation of the complete dynamics of electrons and nuclei of a molecular system that eliminates the necessity of constructing potential-energy surfaces. Because of its general formulation, it encompasses many aspects found in other formulations and can serve as a didactic device for clarifying many of the principles and approximations relevant in time-dependent treatments of molecular systems. The END equations are derived from the time-dependent variational principle applied to a chosen family of efficiently parametrized approximate state vectors. A detailed analysis of the END equations is given for the case of a single-determinantal state for the electrons and a classical treatment of the nuclei. The approach leads to a simple formulation of the fully nonlinear time-dependent Hartree-Fock theory including nuclear dynamics. The nonlinear END equations with the ab initio Coulomb Hamiltonian have been implemented at this level of theory in a computer program, ENDyne, and have been shown feasible for the study of small molecular systems. Implementation of the Austin Model 1 semiempirical Hamiltonian is discussed as a route to large molecular systems. The linearized END equations at this level of theory are shown to lead to the random-phase approximation for the coupled system of electrons and nuclei. The qualitative features of the general nonlinear solution are analyzed using the results of the linearized equations as a first approximation. Some specific applications of END are presented, and the comparison with experiment and other theoretical approaches is discussed.

  17. Novel Approaches in Formulation of Entomopathogenic Fungi for Control of Insects in Soil, Foliar, and Structural Habitats: Thinking Outside the Box and Expecting the Unexpected

    USDA-ARS?s Scientific Manuscript database

    By and large, mycoinsecticide formulations have involved sprayable products, typically oil flowables, emulsifiable suspensions, wettable powders, and water-dispersible granules. Various nutritive or inert carriers have been used to create granular formulations for use against soil pests. Sometime...

  18. Advanced Amine Solvent Formulations and Process Integration for Near-Term CO2 Capture Success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Kevin S.; Searcy, Katherine; Rochelle, Gary T.

    2007-06-28

    This Phase I SBIR project investigated the economic and technical feasibility of advanced amine scrubbing systems for post-combustion CO2 capture at coal-fired power plants. Numerous combinations of advanced solvent formulations and process configurations were screened for energy requirements, and three cases were selected for detailed analysis: a monoethanolamine (MEA) base case and two “advanced” cases: an MEA/Piperazine (PZ) case, and a methyldiethanolamine (MDEA) / PZ case. The MEA/PZ and MDEA/PZ cases employed an advanced “double matrix” stripper configuration. The basis for calculations was a model plant with a gross capacity of 500 MWe. Results indicated that CO2 capture increased the base cost of electricity from 5 c/kWh to 10.7 c/kWh for the MEA base case, 10.1 c/kWh for the MEA / PZ double matrix, and 9.7 c/kWh for the MDEA / PZ double matrix. The corresponding cost per metric tonne CO2 avoided was 67.20 $/tonne CO2, 60.19 $/tonne CO2, and 55.05 $/tonne CO2, respectively. Derated capacities, including base plant auxiliary load of 29 MWe, were 339 MWe for the base case, 356 MWe for the MEA/PZ double matrix, and 378 MWe for the MDEA / PZ double matrix. When compared to the base case, systems employing advanced solvent formulations and process configurations were estimated to reduce reboiler steam requirements by 20 to 44%, to reduce derating due to CO2 capture by 13 to 30%, and to reduce the cost of CO2 avoided by 10 to 18%. These results demonstrate the potential for significant improvements in the overall economics of CO2 capture via advanced solvent formulations and process configurations.

  19. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dannberg, Juliane; Dohmen, Janik; Kalousova, Klara; Maurice, Maxim; Noack, Lena; Plesa, Ana; Soucek, Ondrej; Spiegelman, Marc; Thieulot, Cedric; Tosi, Nicola; Wallner, Herbert

    2016-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection including the feedback on melting have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we carry out a benchmark comparison. The reference model is taken from the mantle convection benchmark, cases 1a to 1c (Blankenbach et al., 1989), assuming a square box with free slip boundary conditions, the Boussinesq approximation, constant viscosity and Rayleigh numbers of 10^4 to 10^6. Melting is modelled using a simplified binary solid solution with linearly depth dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) five cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 is identical to case 1 except that latent heat is switched on. Case 3 includes batch melting, melt buoyancy (melt Rayleigh number Rm) and depletion buoyancy, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms), the maximum and the total melt volume and qm approaching a statistical steady state. Case 4 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt). These cases are carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. For cases 1 - 3 very good agreement is achieved among the various participating codes. For case 4 melting/freezing formulations require some attention to avoid sub-solidus melt fractions. A case 5 is planned where all melt will be extracted and reinserted in a shallow region above the melted plume. The motivation of this presentation is to summarize first experiences and to finalize the case definitions. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L. Gunkel, D., Hansen, U., Harder, H. Jarvis, G., Koch, M., Marquart, G., Moore D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, J. Geophys., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141 - 178.

  20. Psychological therapy for psychogenic amnesia: Successful treatment in a single case study.

    PubMed

    Cassel, Anneli; Humphreys, Kate

    2016-01-01

    Psychogenic amnesia is widely understood to be a memory impairment of psychological origin that occurs as a response to severe stress. However, there is a paucity of evidence regarding the effectiveness of psychological therapy approaches in the treatment of this disorder. The current article describes a single case, "Ben", who was treated with formulation-driven psychological therapy using techniques drawn from cognitive behavioural therapy (CBT) and acceptance and commitment therapy (ACT) for psychogenic amnesia. Before treatment, Ben exhibited isolated retrograde and anterograde memory impairments. He received 12 therapy sessions that targeted experiential avoidance followed by two review sessions, six weeks and five months later. Ben's retrograde and anterograde memory impairments improved following therapy to return to within the "average" to "superior" ranges, which were maintained at follow-up. Further experimental single case study designs and larger group studies are required to advance the understanding of the effectiveness and efficacy of psychological therapy for psychogenic amnesia.

  1. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

    This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, which can lead to computationally inefficient, low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
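
    A toy version of the setup-linked lot-sizing model (single level, separate setups for remanufactured and new production, without the paper's multicommodity strengthening) can be written down with PuLP; all data below are invented for illustration.

    ```python
    import pulp

    T = range(4)
    demand = [20, 30, 25, 35]        # hypothetical demand per period
    returns = [15, 15, 15, 15]       # cores available for remanufacturing
    setup_r, setup_n, hold = 100.0, 120.0, 1.0
    M = sum(demand)                  # big-M for the setup-linking constraints

    prob = pulp.LpProblem("ELSRs_toy", pulp.LpMinimize)
    xr = pulp.LpVariable.dicts("x_reman", T, lowBound=0)
    xn = pulp.LpVariable.dicts("x_new", T, lowBound=0)
    yr = pulp.LpVariable.dicts("y_reman", T, cat="Binary")
    yn = pulp.LpVariable.dicts("y_new", T, cat="Binary")
    s = pulp.LpVariable.dicts("stock", T, lowBound=0)

    prob += pulp.lpSum(setup_r * yr[t] + setup_n * yn[t] + hold * s[t] for t in T)
    for t in T:
        prev = s[t - 1] if t > 0 else 0
        prob += prev + xr[t] + xn[t] - demand[t] == s[t]          # inventory balance
        prob += xr[t] <= M * yr[t]                                # separate setups
        prob += xn[t] <= M * yn[t]
        prob += (pulp.lpSum(xr[k] for k in range(t + 1))
                 <= sum(returns[:t + 1]))                         # core availability

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
    ```

    Valid inequalities of the kind studied in the paper would be added as extra constraints to tighten the linear relaxation of exactly this type of model.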

  2. Robust optimization of front members in a full frontal car impact

    NASA Astrophysics Data System (ADS)

    Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar

    2013-03-01

    In the search for lightweight automobile designs, it is necessary to assure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization in which all design evaluations include an evaluation of the robustness. Metamodel approximations are applied both to the design space and the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example, and further demonstrated in a large-scale design example of front side members of a car. Different optimization formulations are applied and it is shown that the proposed approach works well. It is also concluded that a robust optimization puts higher demands on the finite element model performance than normally.

  3. Probabilistic track coverage in cooperative sensor networks.

    PubMed

    Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A

    2010-12-01

    The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.
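
    The probabilistic side of this unified picture can be checked with a small Monte Carlo experiment: sensors are dropped as a Poisson process and a straight track is declared detected when at least k sensors lie within detection range. This is a toy geometry, not the paper's geometric-transversal machinery, and all parameter values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def p_track_detection(lam, r, k, trials=20_000):
        """P(straight track through the unit square gets >= k elementary detections)."""
        hits = 0
        for _ in range(trials):
            n = rng.poisson(lam)             # random number of sensors in the region
            xy = rng.random((n, 2))          # uniform sensor positions
            d = np.abs(xy[:, 1] - 0.5)       # distance to the track y = 0.5
            hits += int((d <= r).sum() >= k)
        return hits / trials

    print(p_track_detection(lam=30, r=0.08, k=3))
    ```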

  4. Moving mode shape function approach for spinning disk and asymmetric disc brake squeal

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyoung

    2018-06-01

    The solution approach for an asymmetric spinning disk under stationary friction loads requires mode shape functions fixed in the disk in the assumed mode method when the equations of motion are described in the space-fixed frame. This model description is termed the 'moving mode shape function approach', and it allows us to formulate the stationary contact load problem for both axisymmetric and asymmetric disks. Numerical results show that the eigenvalues of the time-periodic axisymmetric disk system are time-invariant. When the axisymmetry of the disk is broken, the positive real parts of the eigenvalues vary strongly with the rotation of the disk at slow speeds in applications such as disc brake squeal. Using Floquet stability analysis, it is also shown that breaking the axisymmetry of the disc alters the stability boundaries of the system.
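
    The Floquet step mentioned above amounts to integrating the time-periodic equations over one excitation period and examining the eigenvalues of the monodromy matrix. A minimal single-mode sketch, with a damped Mathieu-type oscillator standing in for the disc equations and invented parameter values, is given below.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Damped Mathieu-type oscillator: x'' + c x' + (w0^2 + eps*cos(Omega t)) x = 0.
    c, w0, eps, Omega = 0.05, 1.0, 0.6, 2.0
    T = 2 * np.pi / Omega                      # period of the parametric excitation

    def rhs(t, y):
        x, v = y
        return [v, -c * v - (w0 ** 2 + eps * np.cos(Omega * t)) * x]

    # Monodromy matrix: propagate the two unit initial conditions over one period.
    cols = [solve_ivp(rhs, (0.0, T), ic, rtol=1e-10, atol=1e-12).y[:, -1]
            for ic in ([1.0, 0.0], [0.0, 1.0])]
    M = np.column_stack(cols)
    mu = np.linalg.eigvals(M)                  # Floquet multipliers
    print(np.abs(mu))                          # any |mu| > 1 signals instability
    ```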

  5. Can dosage form-dependent food effects be predicted using biorelevant dissolution tests? Case example extended release nifedipine.

    PubMed

    Andreas, Cord J; Tomaszewska, Irena; Muenster, Uwe; van der Mey, Dorina; Mueck, Wolfgang; Dressman, Jennifer B

    2016-08-01

    Food intake is known to have various effects on gastrointestinal luminal conditions in terms of transit times, hydrodynamic forces and/or luminal fluid composition and can therefore affect the dissolution behavior of solid oral dosage forms. The aim of this study was to investigate and detect the dosage form-dependent food effect that has been observed for two extended-release formulations of nifedipine using in vitro dissolution tests. Two monolithic extended release formulations, the osmotic pump Adalat® XL 60mg and matrix-type Adalat® Eins 30mg formulation, were investigated with biorelevant dissolution methods using the USP apparatus III and IV under both simulated prandial states, and their corresponding quality control dissolution method. In vitro data were compared to published and unpublished in vivo data using deconvolution-based in vitro - in vivo correlation (IVIVC) approaches. Quality control dissolution methods tended to overestimate the dissolution rate due to the excessive solubilizing capabilities of the sodium dodecyl sulfate (SDS)-containing dissolution media. Using Level II biorelevant media the dosage form dependent food effect for nifedipine was described well when studied with the USP apparatus III, whereas the USP apparatus IV failed to detect the positive food effect for the matrix-type dosage form. It was demonstrated that biorelevant methods can serve as a useful tool during formulation development as they were able to qualitatively reflect the in vivo data. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Urban Principle of Water Sensitive Design in Kampung Kamboja at Pontianak City

    NASA Astrophysics Data System (ADS)

    Hasriyanti, N.; Ryanti, E.

    2017-07-01

    This study defines design principles for the settlement area on the banks of the Kapuas River in Pontianak using the water sensitive urban design (WSUD) concept in densely populated residential areas. A case study of a densely built riverbank area is combined with a literature review to formulate the aspects and components to be considered in the design, and a descriptive analysis within a rationalistic paradigm is used to identify the characteristics of riverbank residential areas with respect to WSUD elements and to formulate design principles for water-sensitive residential areas. This research is important because the water management system in the riverbank settlements of Pontianak does not function well. The study therefore aims to identify the characteristics of riverbank settlement areas in light of water-sensitive design considerations, to formulate the problems related to the community's need for neighborhood infrastructure, and to create appropriate-technology guidelines for integrated water management systems and engineering designs for water-sensitive settlements (WSUD). Ultimately, the study is expected to support water management in residential areas by utilizing the abundant rainwater through LID (Low Impact Development) within a water-sensitive urban design concept.

  7. Dynamic Target Definition: a novel approach for PTV definition in ion beam therapy.

    PubMed

    Cabal, Gonzalo A; Jäkel, Oliver

    2013-05-01

    To present a beam-arrangement-specific approach for PTV definition in ion beam therapy. By means of a Monte Carlo error propagation analysis, a criterion is formulated to assess whether a voxel is safely treated. Based on this, a non-isotropic expansion rule is proposed that aims to minimize the impact of uncertainties on the delivered dose. The method is exemplified in two cases, a head-and-neck case and a prostate case; in both, the modality is proton beam irradiation and the sources of uncertainty taken into account are positioning (set-up) errors and range uncertainties. It is shown how different beam arrangements affect plan robustness, which leads to different target expansions necessary to assure a predefined level of plan robustness. The relevance of appropriate beam angle arrangements as a way to minimize uncertainties is demonstrated. A novel method for PTV definition in ion beam therapy is presented. The method shows promising results, improving the probability of correct CTV dose coverage while reducing the size of the PTV volume. In a clinical scenario this translates into an enhanced tumor control probability while reducing the volume of healthy tissue being irradiated. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
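
    The "safely treated" criterion can be mimicked with a one-dimensional Monte Carlo error-propagation sketch: sample setup and range errors, and ask whether a voxel stays covered in a required fraction of scenarios. All geometry and tolerances below are invented placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def voxel_safe(depth_mm, beam_edge_mm, sigma_setup=2.0, sigma_range=3.0,
                   trials=10_000, required=0.95):
        """Toy 1-D coverage test: the voxel at depth_mm counts as covered in a
        scenario when the (error-shifted) beam edge still reaches past it."""
        setup = rng.normal(0.0, sigma_setup, trials)      # positioning error, mm
        rng_err = rng.normal(0.0, sigma_range, trials)    # range uncertainty, mm
        covered = depth_mm + setup <= beam_edge_mm + rng_err
        return covered.mean() >= required

    print(voxel_safe(depth_mm=58.0, beam_edge_mm=60.0))
    ```

    Repeating such a test per voxel and per beam arrangement indicates where the target must be expanded, which is the idea behind the non-isotropic expansion rule.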

  8. iCBLS: An interactive case-based learning system for medical education.

    PubMed

    Ali, Maqbool; Han, Soyeon Caren; Bilal, Hafiz Syed Muhammad; Lee, Sungyoung; Kang, Matthew Jee Yun; Kang, Byeong Ho; Razzaq, Muhammad Asif; Amin, Muhammad Bilal

    2018-01-01

    Medical students should be able to actively apply clinical reasoning skills to further their interpretative, diagnostic, and treatment skills in a non-obtrusive and scalable way. Case-Based Learning (CBL) approach has been receiving attention in medical education as it is a student-centered teaching methodology that exposes students to real-world scenarios that need to be solved using their reasoning skills and existing theoretical knowledge. In this paper, we propose an interactive CBL System, called iCBLS, which supports the development of collaborative clinical reasoning skills for medical students in an online environment. The iCBLS consists of three modules: (i) system administration (SA), (ii) clinical case creation (CCC) with an innovative semi-automatic approach, and (iii) case formulation (CF) through intervention of medical students' and teachers' knowledge. Two evaluations under the umbrella of the context/input/process/product (CIPP) model have been performed with a Glycemia study. The first focused on the system satisfaction, evaluated by 54 students. The latter aimed to evaluate the system effectiveness, simulated by 155 students. The results show a high success rate of 70% for students' interaction, 76.4% for group learning, 72.8% for solo learning, and 74.6% for improved clinical skills. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Integrated structure/control law design by multilevel optimization

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.; Schmidt, David K.

    1989-01-01

    A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.
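
    The control-law side of such a framework is often an LQG/LQR design. A minimal sketch of the deterministic LQR core on a toy two-state structural mode is shown below; the plant matrices and weights are invented for illustration.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy structural mode: lightly damped oscillator with a force input.
    A = np.array([[0.0, 1.0], [-4.0, -0.1]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])     # state weighting (performance objective)
    R = np.array([[1.0]])        # control-effort weighting

    P = solve_continuous_are(A, B, Q, R)     # Riccati solution
    K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain
    print("gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```

    Design-sensitivity information of the kind the method requires describes how such gains and closed-loop properties change as structural parameters (here, entries of A) are varied.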

  10. On process optimization considering LCA methodology.

    PubMed

    Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío

    2012-04-15

    The goal of this work is to review the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept for defining the system boundaries is the most used approach in practice, rather than the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly by the characterization factors; synergistic effects of the contaminants are then neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most analyzed cases. SimaPro is the most used software for LCA applications in the literature analyzed. Multi-objective optimization is the most used approach for dealing with this kind of problem, where the ε-constraint method for generating the Pareto set is the most applied technique. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing costs of environmental externalities. Finally, a trend toward dealing with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results as data become available. Copyright © 2011 Elsevier Ltd. All rights reserved.
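
    The ε-constraint technique singled out above generates the Pareto set by minimizing one objective while bounding the other; a self-contained sketch on an invented bi-objective problem follows.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy bi-objective problem: economic cost f1 vs. environmental impact f2.
    f1 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    f2 = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2

    pareto = []
    for eps in np.linspace(0.5, 8.0, 8):          # sweep the bound on f2
        res = minimize(f1, x0=[0.0, 0.0],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, e=eps: e - f2(x)}])
        pareto.append((round(f1(res.x), 3), round(f2(res.x), 3)))
    print(pareto)                                  # sampled Pareto front
    ```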

  11. A Variational Formulation of Macro-Particle Algorithms for Kinetic Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Shadwick, B. A.

    2013-10-01

    Macro-particle based simulation methods are in widespread use in plasma physics; their computational efficiency and intuitive nature are largely responsible for their longevity. In the main, these algorithms are formulated by approximating the continuous equations of motion. For systems governed by a variational principle (such as collisionless plasmas), approximating the equations of motion is known to introduce anomalous behavior, especially in system invariants. We present a variational formulation of particle algorithms for plasma simulation based on a reduction of the distribution function onto a finite collection of macro-particles. As in the usual Particle-In-Cell (PIC) formulation, these macro-particles have a definite momentum and are spatially extended. The primary advantage of this approach is the preservation of the link between symmetries and conservation laws. For example, nothing in the reduction introduces explicit time dependence to the system and, therefore, the continuous-time equations of motion exactly conserve energy; thus, these models are free of grid-heating. In addition, the variational formulation allows for constructing models of arbitrary spatial and temporal order. In contrast, the overall accuracy of the usual PIC algorithm is at most second order due to the nature of the force interpolation between the gridded field quantities and the (continuous) particle position. Again in contrast to the usual PIC algorithm, here the macro-particle shape is arbitrary; the spatial extent is completely decoupled from both the grid-size and the ``smoothness'' of the shape; smoother particle shapes are not necessarily larger. For simplicity, we restrict our discussion to one-dimensional, non-relativistic, un-magnetized, electrostatic plasmas. We comment on the extension to the electromagnetic case. Supported by the US DoE under contract numbers DE-FG02-08ER55000 and DE-SC0008382.

  12. Resonance spectra of a paramagnetic probe dissolved in a viscous medium

    NASA Technical Reports Server (NTRS)

    Kaplan, J. I.; Gelerinter, E.; Fryburg, G. C.

    1972-01-01

    A model is presented for calculating the paramagnetic resonance (EPR) spectrum of vanadyl acetylacetonate (VAAC) dissolved in either a liquid crystal or an isotropic solvent. It employs a density-matrix formulation in the rotating reference frame. The molecules occupy several discrete angles with respect to the magnetic field and can relax to neighboring positions in a characteristic time tau(theta). The form of tau(theta) is found from a diffusion approach, and the magnitude of tau(theta) is a measure of how freely the VAAC probe tumbles in the solvent. Spectra are predicted for values of tau between 10^-11 s and 10^-7 s. The EPR spectrum, in the isotropic case, is obtained by summing the contributions from the allowed angles weighted by the polar volume element, sin theta. When applying the model to the nematic liquid crystal case it is also necessary to multiply by the Saupe distribution function. For this case tau(theta) is obtained from the diffusion approach in which two diffusion constants are employed to reflect the difference in the parallel and perpendicular components of the viscosity.

  13. The logic of counterfactual analysis in case-study explanation.

    PubMed

    Mahoney, James; Barrenechea, Rodrigo

    2017-12-19

    In this paper, we develop a set-theoretic and possible worlds approach to counterfactual analysis in case-study explanation. Using this approach, we first consider four kinds of counterfactuals: necessary condition counterfactuals, SUIN condition counterfactuals, sufficient condition counterfactuals, and INUS condition counterfactuals. We explore the distinctive causal claims entailed in each, and conclude that necessary condition and SUIN condition counterfactuals are the most useful types for hypothesis assessment in case-study research. We then turn attention to the development of a rigorous understanding of the 'minimal-rewrite' rule, linking this rule to insights from set theory about the relative importance of necessary conditions. We show why, logically speaking, a comparative analysis of two necessary condition counterfactuals will tend to favour small events and contingent happenings. A third section then presents new tools for specifying the level of generality of the events in a counterfactual. We show why and how the goals of formulating empirically important versus empirically plausible counterfactuals stand in tension with one another. Finally, we use our framework to link counterfactual analysis to causal sequences, which in turn provides advantages for conducting counterfactual projections. © London School of Economics and Political Science 2017.
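
    The set-theoretic core of these condition types reduces to subset checks over cases, which the short sketch below makes concrete (Python; the case data and condition names are invented, and SUIN/INUS tests would be composed from these two primitives):

        # Illustrative checker for necessary/sufficient conditions over a small
        # set of cases, in the set-theoretic spirit of the paper. Each case is a
        # dict of binary facts; the outcome and condition names are invented.
        cases = [
            {"fiscal_crisis": True,  "elite_split": True,  "revolution": True},
            {"fiscal_crisis": True,  "elite_split": False, "revolution": False},
            {"fiscal_crisis": False, "elite_split": True,  "revolution": False},
            {"fiscal_crisis": False, "elite_split": False, "revolution": False},
        ]

        def is_necessary(condition, outcome, cases):
            # Necessary: the outcome never occurs without the condition,
            # i.e. the outcome cases form a subset of the condition cases.
            return all(c[condition] for c in cases if c[outcome])

        def is_sufficient(condition, outcome, cases):
            # Sufficient: the condition never occurs without the outcome,
            # i.e. the condition cases form a subset of the outcome cases.
            return all(c[outcome] for c in cases if c[condition])

        for cond in ("fiscal_crisis", "elite_split"):
            print(cond,
                  "necessary:", is_necessary(cond, "revolution", cases),
                  "sufficient:", is_sufficient(cond, "revolution", cases))

    In this toy data both conditions come out necessary but not sufficient, the configuration for which the paper argues counterfactual analysis is most informative.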

  14. Ab Initio Theory of Nuclear Magnetic Resonance Shifts in Metals

    NASA Astrophysics Data System (ADS)

    D'Avezac, Mayeul; Marzari, Nicola; Mauri, Francesco

    2005-03-01

    A comprehensive approach for the first-principles determination of all-electron NMR shifts in metallic systems is presented. Our formulation is based on a combination of density-functional perturbation theory and all-electron wavefunction reconstruction, starting from periodic-boundary calculations in the pseudopotential approximation. The orbital contribution to the NMR shift (the chemical shift) is obtained by combining the gauge-including projector augmented-wave approach (GIPAW), originally developed for the case of insulators [C. J. Pickard and F. Mauri, Phys. Rev. B 63, 245101 (2001)], with the extension of linear-response theory to the case of metallic systems [S. de Gironcoli, Phys. Rev. B 51, 6773 (1995)]. The spin contribution (the Knight shift) is obtained as a response to a finite uniform magnetic field, and through reconstructing the hyperfine interaction between the electron-spin density and the nuclear spins with the projector augmented-wave method (PAW) [C. G. Van de Walle and P. E. Blöchl, Phys. Rev. B 47, 4244 (1993)]. Our method is validated with applications to the case of the homogeneous electron gas and of simple metals. (Work supported by MURI grant DAAD 19-03-1-0169 and MIT-France)

  15. Non-equilibrium reactive flux: A unified framework for slow and fast reaction kinetics.

    PubMed

    Bose, Amartya; Makri, Nancy

    2017-10-21

    The flux formulation of reaction rate theory is recast in terms of the expectation value of the reactive flux with an initial condition that corresponds to a non-equilibrium, factorized reactant density. In the common case of slow reactive processes, the non-equilibrium expression reaches the plateau regime only slightly slower than the equilibrium flux form. When the reactants are described by a single quantum state, as in the case of electron transfer reactions, the factorized reactant density describes the true initial condition of the reactive process. In such cases, the time integral of the non-equilibrium flux expression yields the reactant population as a function of time, allowing characterization of the dynamics in cases where there is no clear separation of time scales and thus a plateau regime cannot be identified. The non-equilibrium flux offers a unified approach to the kinetics of slow and fast chemical reactions and is ideally suited to mixed quantum-classical methods.
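
    The claim that the time integral of the non-equilibrium flux yields the reactant population can be mimicked numerically in a few lines (NumPy assumed; the flux profile below is synthetic, standing in for a computed flux time series):

        # Sketch: integrate a (synthetic) non-equilibrium flux time series to
        # recover the reactant population, P(t) = 1 - integral_0^t k(t') dt'.
        # The flux profile is invented purely for illustration.
        import numpy as np

        t = np.linspace(0.0, 10.0, 1001)
        flux = 0.3 * np.exp(-t) + 0.05 * np.exp(-0.2 * t)   # synthetic k(t)

        # Cumulative trapezoidal integral gives the transferred population
        transferred = np.concatenate(
            ([0.0], np.cumsum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t))))
        reactant_population = 1.0 - transferred

        print("P(0) =", reactant_population[0],
              " P(10) =", reactant_population[-1])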

  16. Saving two birds with one stone: using active substance avian acute toxicity data to predict formulated plant protection product toxicity.

    PubMed

    Maynard, Samuel K; Edwards, Peter; Wheeler, James R

    2014-07-01

    Environmental safety assessments for exposure of birds require the provision of acute avian toxicity data for both the pesticidal active substance and formulated products. In Europe, for example, testing on the formulated product is waived on the basis of an assessment of data for the constituent active substance(s). This is often not the case globally, because some countries require acute toxicity tests with every formulated product, thereby raising animal welfare concerns through unnecessary testing. A database of 383 formulated products was compiled from acute toxicity studies conducted with northern bobwhite (Colinus virginianus) or Japanese quail (Coturnix japonica) (unpublished regulatory literature). Of the 383 formulated products studied, 159 contained only active substances considered functionally nontoxic (median lethal dose [LD50] > highest dose tested). Of these, 97% had formulated product LD50 values of >2000 mg formulated product/kg (limit dose), indicating that no new information was obtained in the formulated product study. Furthermore, defined (point estimated) LD50 values for formulated products were compared with LD50 values predicted from the toxicity of the active substance(s). This demonstrated that predicted LD50 values were within 2-fold and 5-fold of the measured formulated product LD50 values in 90% and 98% of cases, respectively. This analysis demonstrates that avian acute toxicity testing of formulated products is largely unnecessary and should not be routinely required to assess avian acute toxicity. In particular, when active substances are known to be functionally nontoxic, further formulated product testing adds no further information and unnecessarily increases bird usage in testing. A further analysis highlights that significant reductions (61% in this dataset) could be achieved by using a sequential testing design (Organisation for Economic Co-operation and Development test guideline 223), as opposed to established single-stage designs. © 2014 The Authors.
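
    As a sketch of how a product LD50 can be predicted from its constituents, the function below applies the standard concentration-addition (Finney) formula, 1/LD50_mix = sum_i (w_i / LD50_i) with w_i the weight fraction of active substance i; whether the study used exactly this formula is an assumption here, and the example product is invented:

        # Hedged sketch of predicting a formulated product's acute LD50 from
        # its active substances via concentration addition. Inert co-formulants
        # are treated as non-toxic and contribute no term.
        def predicted_ld50(components):
            """components: list of (weight_fraction, ld50_mg_per_kg) tuples."""
            return 1.0 / sum(w / ld50 for w, ld50 in components)

        # A hypothetical product: 20% of substance A (LD50 = 500 mg/kg) and
        # 5% of substance B (LD50 = 1200 mg/kg); the remaining 75% is inert.
        product = [(0.20, 500.0), (0.05, 1200.0)]
        print(f"Predicted product LD50: {predicted_ld50(product):.0f} mg product/kg")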

  17. Glass Property Models, Constraints, and Formulation Approaches for Vitrification of High-Level Nuclear Wastes at the US Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong-Sang

    2015-03-02

    The legacy nuclear wastes stored in underground tanks at the US Department of Energy's Hanford site are planned to be separated into high-level waste and low-activity waste fractions and vitrified separately. Formulating optimized glass compositions that maximize the waste loading in glass is critical for successful and economical treatment and immobilization of nuclear wastes. Glass property-composition models have been developed and applied to formulate glass compositions for various objectives over the past several decades. The property models, with associated uncertainties and combined with composition and property constraints, have been used to develop preliminary glass formulation algorithms designed for vitrification process control and waste form qualification at the planned waste vitrification plant. This paper provides an overview of the current status of glass property-composition models, constraints applicable to Hanford waste vitrification, and glass formulation approaches that have been developed for vitrification of hazardous and highly radioactive wastes stored at the Hanford site.

  18. Toward Improved CFD Predictions of Slender Airframe Aerodynamics Using the F-16XL Aircraft (CAWAPI-2)

    NASA Technical Reports Server (NTRS)

    Luckring, James M.; Rizzi, Arthur; Davis, M. Bruce

    2014-01-01

    A coordinated project has been underway to improve CFD predictions of slender airframe aerodynamics. The work is focused on two flow conditions and leverages a unique flight data set obtained with an F-16XL aircraft. These conditions, a low-speed high angle-of-attack case and a transonic low angle-of-attack case, were selected from a prior prediction campaign wherein the CFD failed to provide acceptable results. In this paper the background, objectives and approach to the current project are presented. The work embodies predictions from multiple numerical formulations that are contributed from multiple organizations, and the context of this campaign relative to other multi-code, multi-organizational efforts is included. The relevance of this body of work toward future supersonic commercial transport concepts is also briefly addressed.

  19. The effect of bottom boundary condition type on the behavior of adhesive contact of spherical probe on an elastic film

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Xu, W.

    2017-11-01

    This study presents an investigation of the behavior of adhesive contact between a rigid sphere and an elastic film which is either perfectly bonded (case I) or in frictionless contact (case II) with a rigid substrate. Using linear fracture mechanics, we formulate a convenient semi-analytical approach to develop relations between the applied force, penetration depth and contact radius. Finite element analysis (FEA) is used to verify the relationships. Our results reveal that the interfacial boundary conditions between the film and substrate have distinct effects on the adhesive contact behavior between the sphere and the film. The aim of the present study is to provide instructive insight for controlling the adhesion strength of thin films subject to adhesive contact.

  20. Modeling of the interfacial separation work in relation to impurity concentration in adjoining materials

    NASA Astrophysics Data System (ADS)

    Alekseev, Ilia M.; Makhviladze, Tariel M.; Minushev, Airat Kh.; Sarychev, Mikhail E.

    2009-10-01

    On the basis of a previously developed general thermodynamic approach, a model describing the influence of point defects on the separation work at an interface of solid materials is developed. The kinetic equations describing the defect exchange between the interface and the material bulks are formulated. The model has been applied to the case where the joined materials contain impurity atoms (interstitial and substitutional) as point defects; the main characteristic parameters required for numerical modeling are specified, and their domains of variability are clarified. The results of the numerical modeling, concerning the dependences on impurity concentrations and on temperature, are obtained and analyzed. In particular, the effects of interfacial strengthening and adhesion incompatibility predicted analytically for the case of impurity atoms are verified and analyzed.

  1. Modeling of the interfacial separation work in relation to impurity concentration in adjoining materials

    NASA Astrophysics Data System (ADS)

    Alekseev, Ilia M.; Makhviladze, Tariel M.; Minushev, Airat Kh.; Sarychev, Mikhail E.

    2010-02-01

    On the basis of a previously developed general thermodynamic approach, a model describing the influence of point defects on the separation work at an interface of solid materials is developed. The kinetic equations describing the defect exchange between the interface and the material bulks are formulated. The model has been applied to the case where the joined materials contain impurity atoms (interstitial and substitutional) as point defects; the main characteristic parameters required for numerical modeling are specified, and their domains of variability are clarified. The results of the numerical modeling, concerning the dependences on impurity concentrations and on temperature, are obtained and analyzed. In particular, the effects of interfacial strengthening and adhesion incompatibility predicted analytically for the case of impurity atoms are verified and analyzed.

  2. Multicomponent phase-field model for extremely large partition coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welland, Michael J.; Wolf, Dieter; Guyer, Jonathan E.

    2014-01-01

    We develop a multicomponent phase-field model specially formulated to robustly simulate concentration variations from molar to atomic magnitudes across an interface, i.e., partition coefficients in excess of 10^±23, such as may be the case with species which are predominant in one phase and insoluble in the other. Substitutional interdiffusion on a normal lattice and concurrent interstitial diffusion are included. The composition in the interface follows the approach of Kim, Kim, and Suzuki [Phys. Rev. E 60, 7186 (1999)] and is compared to that of Wheeler, Boettinger, and McFadden [Phys. Rev. A 45, 7424 (1992)] in the context of large partitioning. The model successfully reproduces analytical solutions for binary diffusion couples and solute trapping for the demonstrated cases of extremely large partitioning.

  3. Fast model updating coupling Bayesian inference and PGD model reduction

    NASA Astrophysics Data System (ADS)

    Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic

    2018-04-01

    The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainties (measurement and model errors, stochastic parameters). In order to do so at a reasonable CPU cost, the idea is to replace the direct model called during Monte Carlo sampling with a PGD reduced model, and in some cases to compute the probability density functions directly from the obtained analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
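
    The computational pattern, an expensive forward model replaced by a cheap surrogate inside Bayesian updating, can be sketched as follows (NumPy assumed; the one-parameter model, the polynomial standing in for a PGD reduced model, and all data are invented):

        # Offline stage: fit a cheap surrogate to the costly direct model once.
        # Online stage: reuse the surrogate for grid-based Bayesian updating.
        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_model(theta):
            # Stand-in for a costly direct solve (e.g., a thermal FE model)
            return np.tanh(theta) + 0.5 * theta

        grid = np.linspace(0.0, 3.0, 50)
        surrogate = np.polynomial.Polynomial.fit(grid, expensive_model(grid), deg=5)

        # Synthetic observation of the true parameter with measurement noise
        theta_true, sigma = 1.2, 0.05
        y_obs = expensive_model(theta_true) + rng.normal(0.0, sigma)

        # Grid posterior with a uniform prior on [0, 3]
        thetas = np.linspace(0.0, 3.0, 2001)
        log_like = -0.5 * ((y_obs - surrogate(thetas)) / sigma) ** 2
        posterior = np.exp(log_like - log_like.max())
        posterior /= np.trapz(posterior, thetas)

        print("posterior mean of theta:", np.trapz(thetas * posterior, thetas))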

  4. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case. PMID:25504231
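
    For reference, the Gaussian-noise matched filter that the paper takes as its starting point can be written in a few lines (NumPy assumed; white noise and an invented template are used for brevity, whereas a colored-noise version would weight by the inverse noise spectrum):

        # Standard matched filter under white Gaussian noise: correlate the
        # data against a template, normalizing by the template norm and the
        # noise standard deviation.
        import numpy as np

        rng = np.random.default_rng(1)
        n, sigma = 4096, 1.0
        t = np.arange(n)

        template = (np.exp(-0.5 * ((t - n // 2) / 50.0) ** 2)
                    * np.sin(2 * np.pi * t / 64.0))
        data = 0.5 * template + rng.normal(0.0, sigma, n)   # weak signal + noise

        norm = sigma * np.sqrt(template @ template)
        snr = (data @ template) / norm
        noise_only = (rng.normal(0.0, sigma, n) @ template) / norm

        print(f"matched-filter SNR with signal: {snr:5.2f}, "
              f"noise only: {noise_only:5.2f}")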

  5. A geometrically exact formulation for three-dimensional numerical simulation of the umbilical cable in a deep-sea ROV system

    NASA Astrophysics Data System (ADS)

    Quan, Wei-cai; Zhang, Zhu-ying; Zhang, Ai-qun; Zhang, Qi-feng; Tian, Yu

    2015-04-01

    This paper proposes a geometrically exact formulation for three-dimensional static and dynamic analyses of the umbilical cable in a deep-sea remotely operated vehicle (ROV) system. The presented formulation takes account of the geometric nonlinearities of large displacement and the effects of axial load and bending stiffness for the modeling of slack cables. The resulting nonlinear second-order governing equations are discretized spatially by the finite element method and solved temporally by the generalized-α implicit time integration algorithm, which is adapted to the case of varying coefficient matrices. The ability to consider the combined three-dimensional action of ocean current and ship heave motion upon the umbilical cable is the key feature of this analysis. The presented formulation is first validated, and then three numerical examples for the umbilical cable in a deep-sea ROV system are demonstrated and discussed, including the steady configurations under the action of depth-dependent ocean current alone, the dynamic responses in the case of ship heave motion alone, and the responses in the case of the combined action of ship heave motion and ocean current.

  6. Recovery after treatment and sensitivity to base rate.

    PubMed

    Doctor, J N

    1999-04-01

    Accurate classification of patients as having recovered after psychotherapy depends largely on the base rate of such recovery. This article presents methods for classifying participants as recovered after therapy. The approach described here considers base rate in the statistical model. These methods can be applied to psychotherapy outcome data for 2 purposes: (a) to determine the robustness of a data set to differing base-rate assumptions and (b) to formulate an appropriate cutoff that is beyond the range of cases that are not robust to plausible base-rate assumptions. Discussion addresses a fundamental premise underlying the study of recovery after psychotherapy.
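
    The base-rate dependence at the heart of the argument is just Bayes' theorem, as the sketch below shows (Python; the sensitivity and specificity of the cutoff-based classifier are illustrative numbers, not values from the article):

        # How the base rate of recovery drives the probability that a patient
        # flagged as "recovered" truly recovered, via Bayes' theorem.
        def posterior_recovered(base_rate, sens, spec):
            p_flag = sens * base_rate + (1.0 - spec) * (1.0 - base_rate)
            return sens * base_rate / p_flag

        for base_rate in (0.1, 0.3, 0.5, 0.7):
            p = posterior_recovered(base_rate, sens=0.80, spec=0.90)
            print(f"base rate {base_rate:.1f} -> P(recovered | flagged) = {p:.2f}")

    Scanning a range of plausible base rates in this way is one direct reading of the robustness check the abstract describes.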

  7. Universal formulation of excitonic linear absorption spectra in all semiconductor microstructures

    NASA Astrophysics Data System (ADS)

    Lefebvre, Pierre; Christol, Philippe; Mathieu, Henry

    1995-01-01

    We present a generalization of the well-known exciton absorption calculations of Elliott [Phys. Rev. 108, 1384 (1957)], in the 3-dimensional case, and of Shinada and Sugano [J. Phys. Soc. Japan 21, 1936 (1966)], for 2-dimensional media: We calculate the optical absorption spectra of bound and unbound exciton states, by using a metric space with a noninteger dimension α (1 < α), obtaining almost exactly the same theoretical lineshapes as those resulting from accurate but costly numerical approaches [Chuang et al. Phys. Rev. B, 43, 1500 (1991); Benner and Haug, Phys. Rev. B 47, 15750 (1993)].

  8. Determining the Intensity of a Point-Like Source Observed on the Background of AN Extended Source

    NASA Astrophysics Data System (ADS)

    Kornienko, Y. V.; Skuratovskiy, S. I.

    2014-12-01

    The problem of determining the time dependence of the intensity of a point-like source in the case of atmospheric blur is formulated and solved using the Bayesian statistical approach. A point-like source is supposed to be observed on the background of an extended source whose brightness is constant in time though unknown. The equation system for optimal statistical estimation of the sequence of intensity values at the observation moments is obtained. The problem is particularly relevant for studying gravitational mirages, which appear when a quasar is observed through the gravitational field of a distant galaxy.

  9. A case study on Measurement of Degree of Performance of an Industry by using Lean Score Technique

    NASA Astrophysics Data System (ADS)

    Srinivasa Rao, P.; Niraj, Malay

    2016-09-01

    The lean manufacturing concept is becoming a very important strategy for both academicians and practitioners in recent times, and Japanese industry has been using this practice for more than a decade. In this context, this paper describes an innovative approach to lean performance evaluation that uses fuzzy membership functions before and after implementing lean manufacturing techniques, and formulates a model to establish the lean score from the lean attributes by eliminating major losses. It provides a systematic lean performance measurement by producing a final integrated unitless score.

  10. Effect of georesource-consumer process flows on coal loss in energy supply of the Polar regions in Yakutia

    NASA Astrophysics Data System (ADS)

    Tkach, SM; Gavrilov, VL

    2017-02-01

    It is shown that the process flows of mining, haulage and utilization of coal in the Polar regions of Yakutia feature high quantitative and qualitative losses. When the process flows are considered as integrated systems aimed at overall performance efficiency, it is possible to reduce the loss in each individual link of the chain. The authors formulate approaches intended to lower the total loss of coal in process flows. Geotechnical and organizational solutions are put forward to improve and stabilize the quality of fuel used by the local fuel and energy industry.

  11. Relative motions of fragments of the split comets. I - A new approach

    NASA Technical Reports Server (NTRS)

    Sekanina, Z.

    1977-01-01

    A hypothesis is proposed which interprets the relative motion of two fragments of a split comet in terms of a slight difference between their effective solar attraction rather than in terms of the impulse imparted to them at separation. A quantitative version of this hypothesis is formulated by assuming that the difference in effective solar attraction varies with heliocentric distance in direct proportion to the actual solar attraction so that the ratio of the two forces is constant and equal to a measure of the relative effect between the two fragments under consideration. Results obtained using this formulation are compared with observational evidence on the split comets P/Biela, Liais 1860 I, 1882 II, P/Brooks 2 1889 V, Swift 1899 I, Kopff 1905 IV, Mellish 1915 II, Taylor 1916 I, 1947 XII, Wirtanen 1957 VI, Ikeya-Seki 1965 VIII, Kohoutek 1970 III, and West 1975n. The hypothesis is found to fail only in the case of comet Wirtanen 1957 VI. Some unusual phenomena associated with split comets are examined.

  12. Mucoadhesive ophthalmic vehicles: evaluation of polymeric low-viscosity formulations.

    PubMed

    Saettone, M F; Monti, D; Torracca, M T; Chetoni, P

    1994-01-01

    A series of polyanionic natural or semi-synthetic polymers (polygalacturonic acid, hyaluronic acid, carboxymethylamylose, carboxymethylchitin, chondroitin sulfate, heparan sulfate and mesoglycan) were evaluated as potential mucoadhesive carriers for ophthalmic drugs. Solutions containing cyclopentolate (CY) or pilocarpine (PI) as salts (or polyanionic complexes) with the acidic polymers, all showing a low viscosity, were tested for miotic (resp. mydriatic) activity in albino rabbits. For some polymeric complexes, small but significant increases in the areas under the activity vs. time curves (AUC) over the reference cyclopentolate hydrochloride (CYHCl) or pilocarpine nitrate (PINO3) vehicles were observed, as well as significant AUC decreases after removal of precorneal mucin by treatment with N-acetylcysteine. A correlation was found between these data, considered indicative of the occurrence of a mucoadhesive interaction "in vivo", and "in vitro" viscometric data expressing the polymer-mucin force of interaction. The advantages and limitations of the mucoadhesive non-viscous approach in the formulation of ophthalmic vehicles are presented and discussed.

  13. Bending response of cross-ply laminated composite plates with diagonally perturbed localized interfacial degeneration.

    PubMed

    Kam, Chee Zhou; Kueh, Ahmad Beng Hong

    2013-01-01

    A laminated composite plate element with an interface description is developed using the finite element approach to investigate the bending performance of two-layer cross-ply laminated composite plates in the presence of a diagonally perturbed localized interfacial degeneration between laminae. The stiffness of the laminate is expressed through the assembly of the stiffnesses of the lamina sub-elements and the interface element, the latter of which is formulated adopting the well-defined virtually zero-thickness concept. To account for the extent of both shear and axial weak bonding, a degeneration ratio is introduced in the interface formulation. The model has the advantage of simulating localized weak bonding at arbitrary locations, with various degeneration areas and intensities, under the influence of numerous boundary conditions, since the interfacial description is expressed discretely. Numerical results show that the bending behavior of the laminate is significantly affected by the aforementioned parameters, the greatest effect being experienced in cases with a localized total interface degeneration, representing the case of local delamination.

  14. Radiative heat transfer and nonequilibrium Casimir-Lifshitz force in many-body systems with planar geometry

    NASA Astrophysics Data System (ADS)

    Latella, Ivan; Ben-Abdallah, Philippe; Biehs, Svend-Age; Antezza, Mauro; Messina, Riccardo

    2017-05-01

    A general theory of photon-mediated energy and momentum transfer in N-body planar systems out of thermal equilibrium is introduced. It is based on the combination of the scattering theory and the fluctuational-electrodynamics approach in many-body systems. By making a Landauer-like formulation of the heat transfer problem, explicit formulas for the energy transmission coefficients between two distinct slabs as well as the self-coupling coefficients are derived and expressed in terms of the reflection and transmission coefficients of the single bodies. We also show how to calculate local equilibrium temperatures in such systems. An analogous formulation is introduced to quantify momentum transfer coefficients describing Casimir-Lifshitz forces out of thermal equilibrium. Forces at thermal equilibrium are readily obtained as a particular case. As an illustration of this general theoretical framework, we show on three-body systems how the presence of a fourth slab can impact equilibrium temperatures in heat-transfer problems and equilibrium positions resulting from the forces acting on the system.

  15. Electromagnetic scattering from two-dimensional thick material junctions

    NASA Technical Reports Server (NTRS)

    Ricoy, M. A.; Volakis, John L.

    1990-01-01

    The problem of plane wave diffraction by an arbitrary symmetric two-dimensional junction is examined, where Generalized Impedance Boundary Conditions (GIBCs) and Generalized Sheet Transition Conditions (GSTCs) are employed to simulate the slabs. GIBCs and GSTCs are constructed for multilayer planar slabs of arbitrary thickness, and the resulting GIBC/GSTC reflection coefficients are compared with exact counterparts to evaluate the GIBCs/GSTCs. The plane wave diffraction by a multilayer material slab recessed in a perfectly conducting ground plane is formulated and solved via the Generalized Scattering Matrix Formulation (GSMF) in conjunction with the dual integral equation approach. Various scattering patterns are computed and validated with exact results where possible. The diffraction by a material discontinuity in a thick dielectric/ferrite slab is considered by modelling the constituent slabs with GSTCs. A non-unique solution in terms of unknown constants is obtained, and these constants are evaluated for the recessed slab geometry by comparison with the solution obtained therein. Several other simplified cases are also presented and discussed. An eigenfunction expansion method is introduced to determine the unknown solution constants in the general case. This procedure is applied to the non-unique solution in terms of unknown constants, and scattering patterns are presented for various slab junctions and compared with alternative results where possible.

  16. Geometrically Nonlinear Finite Element Analysis of a Composite Space Reflector

    NASA Technical Reports Server (NTRS)

    Lee, Kee-Joo; Leet, Sung W.; Clark, Greg; Broduer, Steve (Technical Monitor)

    2001-01-01

    Lightweight aerospace structures, such as low areal density composite space reflectors, are highly flexible and may undergo large deflection under applied loading, especially during the launch phase. Accordingly, geometrically nonlinear analysis that takes into account the effect of finite rotation may be needed to determine the deformed shape for a clearance check and the stress and strain state to ensure structural integrity. In this study, deformation of the space reflector is determined under static conditions using a geometrically nonlinear solid shell finite element model. For the solid shell element formulation, the kinematics of deformation is described by six variables that are purely vector components. Because rotational angles are not used, this approach is free of the limitations of small angle increments. This also allows easy connections between substructures and large load increments with respect to the conventional shell formulation using rotational parameters. Geometrically nonlinear analyses were carried out for three cases of static point loads applied at selected points. A chart shows results for a case when the load is applied at the center point of the reflector dish. The computed results capture the nonlinear behavior of the composite reflector as the applied load increases. Also, they are in good agreement with the data obtained by experiments.

  17. Effects of surfaces and leachables on the stability of biopharmaceuticals.

    PubMed

    Bee, Jared S; Randolph, Theodore W; Carpenter, John F; Bishop, Steven M; Dimitrova, Mariana N

    2011-10-01

    Therapeutic proteins are exposed to various potential contact surfaces, particles, and leachables during manufacturing, shipping, storage, and delivery. In this review, we present published examples of interfacial- or leachable-induced aggregation or particle formation, and discuss the mitigation strategies that were successfully utilized. Adsorption to interfaces or interactions with leachables and/or particles in some cases has been reported to cause protein aggregation or particle formation. Identification of the cause(s) of particle formation involving minute amounts of protein over extended periods of time can be challenging. Various formulation strategies such as addition of a nonionic surfactant (e.g., polysorbate) have been demonstrated to effectively mitigate adsorption-induced protein aggregation. However, not all stability problems associated with interfaces or leachables are best resolved by formulation optimization. Detectable leachables do not necessarily have any adverse impact on the protein but control of the leachable source is preferred when there is a concern. In other cases, preventing protein aggregation and particle formation may require manufacturing process and/or equipment changes, use of compatible materials at contact interfaces, and so on. This review summarizes approaches that have been used to minimize protein aggregation and particle formation during manufacturing and fill-finish operations, product storage and transportation, and delivery of protein therapeutics. Copyright © 2011 Wiley-Liss, Inc.

  18. Locally adaptive decision in detection of clustered microcalcifications in mammograms.

    PubMed

    Sainz de Cea, María V; Nishikawa, Robert M; Yang, Yongyi

    2018-02-15

    In computer-aided detection or diagnosis of clustered microcalcifications (MCs) in mammograms, the performance often suffers from not only the presence of false positives (FPs) among the detected individual MCs but also large variability in detection accuracy among different cases. To address this issue, we investigate a locally adaptive decision scheme in MC detection by exploiting the noise characteristics in a lesion area. Instead of developing a new MC detector, we propose a decision scheme on how to best decide whether a detected object is an MC or not in the detector output. We formulate the individual MCs as statistical outliers compared to the many noisy detections in a lesion area so as to account for the local image characteristics. To identify the MCs, we first consider a parametric method for outlier detection, the Mahalanobis distance detector, which is based on a multi-dimensional Gaussian distribution on the noisy detections. We also consider a non-parametric method which is based on a stochastic neighbor graph model of the detected objects. We demonstrated the proposed decision approach with two existing MC detectors on a set of 188 full-field digital mammograms (95 cases). The results, evaluated using free response operating characteristic (FROC) analysis, showed a significant improvement in detection accuracy by the proposed outlier decision approach over traditional thresholding (the partial area under the FROC curve increased from 3.95 to 4.25, p-value < 10^-4). There was also a reduction in case-to-case variability in detected FPs at a given sensitivity level. The proposed adaptive decision approach could not only reduce the number of FPs in detected MCs but also improve case-to-case consistency in detection.
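
    The parametric branch of the proposed rule can be sketched as follows (NumPy/SciPy assumed; the three-dimensional feature values are synthetic, whereas a real run would use features computed by the MC detector):

        # Mahalanobis-distance outlier rule: fit a Gaussian to the detections
        # in a lesion area and flag objects whose squared distance exceeds a
        # chi-square quantile as candidate MCs.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(2)

        noise = rng.normal(0.0, 1.0, size=(200, 3))    # noisy detections
        mcs = rng.normal(4.0, 1.0, size=(5, 3))        # outlying true MCs
        objects = np.vstack([noise, mcs])

        mu = objects.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(objects, rowvar=False))
        d2 = np.einsum('ij,jk,ik->i', objects - mu, cov_inv, objects - mu)

        threshold = chi2.ppf(0.99, df=objects.shape[1])   # 99th percentile
        print("flagged as MCs:", np.flatnonzero(d2 > threshold))

    Because the Gaussian is refit per lesion area, the threshold adapts to the local noise, which is the sense in which the decision is locally adaptive.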

  19. Locally adaptive decision in detection of clustered microcalcifications in mammograms

    NASA Astrophysics Data System (ADS)

    Sainz de Cea, María V.; Nishikawa, Robert M.; Yang, Yongyi

    2018-02-01

    In computer-aided detection or diagnosis of clustered microcalcifications (MCs) in mammograms, the performance often suffers from not only the presence of false positives (FPs) among the detected individual MCs but also large variability in detection accuracy among different cases. To address this issue, we investigate a locally adaptive decision scheme in MC detection by exploiting the noise characteristics in a lesion area. Instead of developing a new MC detector, we propose a decision scheme on how to best decide whether a detected object is an MC or not in the detector output. We formulate the individual MCs as statistical outliers compared to the many noisy detections in a lesion area so as to account for the local image characteristics. To identify the MCs, we first consider a parametric method for outlier detection, the Mahalanobis distance detector, which is based on a multi-dimensional Gaussian distribution on the noisy detections. We also consider a non-parametric method which is based on a stochastic neighbor graph model of the detected objects. We demonstrated the proposed decision approach with two existing MC detectors on a set of 188 full-field digital mammograms (95 cases). The results, evaluated using free response operating characteristic (FROC) analysis, showed a significant improvement in detection accuracy by the proposed outlier decision approach over traditional thresholding (the partial area under the FROC curve increased from 3.95 to 4.25, p-value < 10^-4). There was also a reduction in case-to-case variability in detected FPs at a given sensitivity level. The proposed adaptive decision approach could not only reduce the number of FPs in detected MCs but also improve case-to-case consistency in detection.

  20. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
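
    A minimal cvxpy sketch of the convex part of this construction, appending mean- and max-dose constraints to a least-squares planning objective, is shown below (the dose-influence matrix and all numbers are random stand-ins, not clinical data; the CVaR-style restriction for dose-volume constraints is omitted):

        # Least-squares inverse planning with convex dose constraints in cvxpy.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(3)
        n_vox, n_beamlets = 200, 40
        A = rng.uniform(0.0, 1.0, size=(n_vox, n_beamlets))  # dose-influence matrix

        oar = slice(0, 50)            # critical-organ voxels
        target = np.zeros(n_vox)
        target[50:] = 60.0            # prescription for tumour voxels, Gy

        x = cp.Variable(n_beamlets, nonneg=True)   # beamlet weights
        dose = A @ x

        problem = cp.Problem(
            cp.Minimize(cp.sum_squares(dose - target)),
            [cp.sum(dose[oar]) / 50.0 <= 20.0,     # mean-dose constraint
             cp.max(dose[oar]) <= 35.0])           # max-dose constraint
        problem.solve()
        print("status:", problem.status, " objective:", round(problem.value, 1))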

  1. Practical optimization of Steiner trees via the cavity method

    NASA Astrophysics Data System (ADS)

    Braunstein, Alfredo; Muntoni, Anna

    2016-07-01

    The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself substantially limit the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, which affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration turns Max-Sum into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance of the proposed approach in the challenge was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
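
    Max-Sum itself does not fit in a few lines, but the greedy constructions it is hybridized with do; below is a sketch of the classic shortest-path heuristic, which grows a tree by repeatedly attaching the nearest remaining terminal (networkx assumed; the graph and terminals are invented, and this is offered as an example of the kind of greedy heuristic meant, not the paper's exact ones):

        # Shortest-path greedy heuristic for the Steiner tree problem.
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([
            ("a", "b", 1), ("b", "c", 2), ("c", "d", 1),
            ("a", "e", 4), ("e", "d", 1), ("b", "e", 2),
        ])
        terminals = ["a", "c", "e"]

        tree_nodes = {terminals[0]}
        tree_edges = set()
        remaining = set(terminals[1:])
        while remaining:
            # Find the remaining terminal closest to the current tree
            best_dist, best_path = None, None
            for t in remaining:
                for s in tree_nodes:
                    d, p = nx.single_source_dijkstra(G, s, t, weight="weight")
                    if best_dist is None or d < best_dist:
                        best_dist, best_path = d, p
            remaining.discard(best_path[-1])
            tree_nodes.update(best_path)
            tree_edges.update(zip(best_path, best_path[1:]))

        print("Steiner tree edges:",
              sorted(tuple(sorted(e)) for e in tree_edges))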

  2. An inclusive SUSY approach to position dependent mass systems

    NASA Astrophysics Data System (ADS)

    Karthiga, S.; Chithiika Ruby, V.; Senthilvelan, M.

    2018-06-01

    The supersymmetry (SUSY) formalism for position-dependent mass (PDM) problems with a more general ordering is yet to be formulated. In this paper, we present a unified SUSY approach for PDM problems of any ordering. Highlighting that all non-Hermitian Hamiltonians of PDM problems are of quasi-Hermitian nature, the SUSY operators of these problems are constructed using a similarity transformation. The methodology that we propose here is applicable to even more general cases where the kinetic energy term is represented by a linear combination of an infinite number of possible orderings. We illustrate the method with an example, namely the Mathews-Lakshmanan (ML) oscillator. Our results show that the latter system is shape invariant for all possible orderings. We derive the eigenvalues and eigenvectors of this nonlinear oscillator for all possible orderings, including Hermitian and non-Hermitian ones.

  3. Fundamental aspects in quantitative ultrasonic determination of fracture toughness: The scattering of a single ellipsoidal inhomogeneity

    NASA Technical Reports Server (NTRS)

    Fu, L. S. W.

    1982-01-01

    The scattering of a single ellipsoidal inhomogeneity is studied via an eigenstrain approach. The displacement field is given in terms of volume integrals that involve eigenstrains related to the mismatch in mass density and in elastic moduli. The governing equations for these unknown eigenstrains are derived. Agreement with other approaches to the scattering problem is shown. The formulation is general, and both the inhomogeneity and the host medium can be anisotropic. The axisymmetric scattering of an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the scattered displacement field and the differential and total cross sections are formally given in series expansions for the case of uniformly distributed eigenstrains.

  4. The Landau-de Gennes approach revisited: A minimal self-consistent microscopic theory for spatially inhomogeneous nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Gârlea, Ioana C.; Mulder, Bela M.

    2017-12-01

    We design a novel microscopic mean-field theory of inhomogeneous nematic liquid crystals formulated entirely in terms of the tensor order parameter field. It combines the virtues of the Landau-de Gennes approach in allowing both the direction and magnitude of the local order to vary, with a self-consistent treatment of the local free-energy valid beyond the small order parameter limit. As a proof of principle, we apply this theory to the well-studied problem of a colloid dispersed in a nematic liquid crystal by including a tunable wall coupling term. For the two-dimensional case, we investigate the organization of the liquid crystal and the position of the point defects as a function of the strength of the coupling constant.

  5. Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Del-Castillo-Negrete, Diego

    2012-03-01

    Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between the parallel (to the magnetic field) and perpendicular directions (the transport-coefficient ratio χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a novel Lagrangian Green's function method has been proposed [D. del-Castillo-Negrete and L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete and L. Chacón, Phys. Plasmas, submitted (2011)] to solve the local and non-local purely parallel transport equation in general 3D magnetic fields. The approach avoids numerical pollution, is inherently positivity-preserving, and is scalable algorithmically (i.e., work per degree-of-freedom is grid-independent). In this poster, we discuss the extension of the Lagrangian Green's function approach to include perpendicular transport terms and sources. We present an asymptotic-preserving numerical formulation, which ensures a consistent numerical discretization temporally and spatially for arbitrary χ∥/χ⊥ ratios. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.

  6. A Novel Approach of Battery Energy Storage for Improving Value of Wind Power in Deregulated Markets

    NASA Astrophysics Data System (ADS)

    Nguyen, Y. Minh; Yoon, Yong Tae

    2013-06-01

    Wind power producers face many regulation costs in a deregulated environment, which markedly lowers the value of wind power in comparison with conventional sources. One of these costs is associated with the real-time variation of power output and is paid in the frequency control market according to the variation band. In this regard, this paper presents a new approach to the scheduling and operation of battery energy storage installed in a wind generation system. The approach relies on statistical data of wind generation and predictions of frequency control market prices to determine the optimal charging and discharging of the batteries in real time, which ultimately yields the minimum cost of frequency regulation for wind power producers. The optimization problem is formulated as the trade-off between the decrease in regulation payment and the increase in the cost of using battery energy storage. The approach is illustrated in a case study, and the simulation results show its effectiveness.

  7. Application of a New Hybrid RANS/LES Modeling Paradigm to Compressible Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Pederson, Clark; Haering, Sigfried; Moser, Robert

    2017-11-01

    It is well known that traditional hybrid RANS/LES modeling approaches suffer from a number of deficiencies. These deficiencies often stem from overly simplistic blending strategies based on scalar measures of turbulence length scale and grid resolution, and from the use of isotropic subgrid models in LES regions. A recently developed hybrid modeling approach has shown promise in overcoming these deficiencies in incompressible flows [Haering, 2015]. In the approach, RANS/LES blending is accomplished using a hybridization parameter that is governed by an additional model transport equation and is driven to achieve equilibrium between the resolved and unresolved turbulence for the given grid. Further, the model uses a tensor eddy viscosity that is formulated to represent the effects of anisotropic grid resolution on subgrid quantities. In this work, this modeling approach is extended to compressible flows and implemented in the compressible flow solver SU2 (http://su2.stanford.edu/). We discuss both modeling and implementation challenges and show preliminary results for compressible flow test cases with smooth wall separation.

  8. Implementing the Biopharmaceutics Classification System in Drug Development: Reconciling Similarities, Differences, and Shared Challenges in the EMA and US-FDA-Recommended Approaches.

    PubMed

    Cardot, J-M; Garcia Arieta, A; Paixao, P; Tasevska, I; Davit, B

    2016-07-01

    The US-FDA recently posted a draft guideline for industry recommending procedures necessary to obtain a biowaiver for immediate-release oral dosage forms based on the Biopharmaceutics Classification System (BCS). This review compares the present FDA BCS biowaiver approach with the existing European Medicines Agency (EMA) approach, with an emphasis on similarities, difficulties, and shared challenges. Some specifics of the current EMA BCS guideline are compared with those in the recently published draft US-FDA BCS guideline. In particular, similarities and differences in the EMA versus US-FDA approaches to establishing drug solubility, permeability, dissolution, and formulation suitability for a BCS biowaiver are critically reviewed. Several case studies are presented to illustrate the (i) challenges of applying for BCS biowaivers for global registration in the face of differences in the EMA and US-FDA BCS biowaiver criteria, as well as (ii) challenges inherent in applying for BCS class I or III designation that are common to both jurisdictions.

  9. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Lyu, Q; Ruan, D

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem, and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes, and adjacent aperture shapes were penalized to control the MLC motion range. An alternating optimization strategy was implemented to solve for the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM) case, a lung (LNG) case, and 2 head and neck cases, one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs' capability in dose rate and gantry rotation speed modulation. Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.

  10. Discrete-time moment closure models for epidemic spreading in populations of interacting individuals.

    PubMed

    Frasca, Mattia; Sharkey, Kieran J

    2016-06-21

    Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
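
    The first of the two closures, statistical independence at the level of individuals, has a particularly compact discrete-time form, sketched below (NumPy assumed; the ring contact network and parameters are illustrative):

        # Individual-level (first-order) closure for discrete-time SIR on a
        # contact network: each node carries probabilities (S_i, I_i, R_i)
        # updated per step assuming independence between neighbours. tau is
        # the per-contact transmission probability, gamma the removal
        # probability.
        import numpy as np

        n, tau, gamma, steps = 100, 0.3, 0.2, 50
        adj = np.zeros((n, n))
        for i in range(n):                      # ring of contacts
            adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = 1.0

        S, I, R = np.ones(n), np.zeros(n), np.zeros(n)
        S[0], I[0] = 0.0, 1.0                   # seed one infection

        for _ in range(steps):
            # Probability node i escapes infection from all neighbours
            escape = np.prod(1.0 - tau * adj * I[None, :], axis=1)
            new_I = S * (1.0 - escape)
            S, I, R = S * escape, I * (1.0 - gamma) + new_I, R + gamma * I

        print(f"final expected totals: S={S.sum():.1f} "
              f"I={I.sum():.1f} R={R.sum():.1f}")

    The pair-based closure replaces the product over neighbours with tracked pair probabilities, at the cost of many more state variables.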

  11. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
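
    The max-min construction underlying this kind of fuzzy linear program can be sketched on a toy problem (SciPy assumed; the two linear objectives and the demand constraint are invented, whereas the paper's OPF objectives and network constraints would take their place):

        # Zimmermann-style max-min fuzzy LP: maximize the overall satisfaction
        # level lam subject to each objective's linear membership mu_k >= lam.
        from scipy.optimize import linprog

        # Decision variables x1, x2 >= 0 with demand x1 + x2 >= 8.
        # Objective 1: f1 = 2*x1 + x2, best 8, worst 16 -> mu_1 = (16 - f1) / 8
        # Objective 2: f2 = x1 + 3*x2, best 8, worst 24 -> mu_2 = (24 - f2) / 16
        # linprog variables: [x1, x2, lam]; linprog minimizes, so use -lam.
        c = [0.0, 0.0, -1.0]
        A_ub = [
            [-1.0, -1.0, 0.0],         # -(x1 + x2) <= -8    (demand)
            [2.0 / 8, 1.0 / 8, 1.0],   # f1/8 + lam <= 2     (mu_1 >= lam)
            [1.0 / 16, 3.0 / 16, 1.0], # f2/16 + lam <= 1.5  (mu_2 >= lam)
        ]
        b_ub = [-8.0, 2.0, 1.5]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None), (0.0, 1.0)])
        x1, x2, lam = res.x
        print(f"x = ({x1:.2f}, {x2:.2f}), overall satisfaction lambda = {lam:.2f}")

    The single satisfaction level lambda is what makes the fuzzified multi-objective problem crisp and solvable by an ordinary LP or SLP solver.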

  12. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  13. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.

    1989-01-01

    The progress made toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-Orbit engine hot section components is reported. The convective viscous integral formulation was derived and implemented in the general purpose computer program GP-BEST. The new convective kernel functions, in turn, necessitated the development of refined integration techniques. As a result, however, since the physics of the problem is embedded in these kernels, boundary element solutions can now be obtained at very high Reynolds number. Flow around obstacles can be solved approximately with an efficient linearized boundary-only analysis or, more exactly, by including all of the nonlinearities present in the neighborhood of the obstacle. The other major accomplishment was the development of a comprehensive fluid-structure interaction capability within GP-BEST. This new facility is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code (GP-BEST) can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach.

  14. Reexamination of relaxation of spins due to a magnetic field gradient: Identity of the Redfield and Torrey theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, R.; Rohm, Ryan M.; Swank, C. M.

    2011-02-15

    There is an extensive literature on magnetic-gradient-induced spin relaxation. Cates, Schaefer, and Happer, in a seminal publication, have solved the problem in the regime where diffusion theory (the Torrey equation) is applicable using an expansion of the density matrix in diffusion equation eigenfunctions and angular momentum tensors. McGregor has solved the problem in the same regime using a slightly more general formulation based on the Redfield theory, expressed in terms of the autocorrelation function of the fluctuating field seen by the spins, and calculating the correlation functions using the diffusion-theory Green's function. The results of both calculations were shown to agree for a special case. In the present work, we show that the eigenfunction expansion of the Torrey equation yields the expansion of the Green's function for the diffusion equation, thus showing the identity of this approach with that of the Redfield theory. The general solution can also be obtained directly from the Torrey equation for the density matrix. Thus, the physical content of the Redfield and Torrey approaches is identical. We then introduce a more general expression for the position autocorrelation function of particles moving in a closed cell, extending the range of applicability of the theory.
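
    For orientation, the Redfield-theory rates referred to here follow from the power spectrum of the fluctuating field seen by the spins. A schematic form of the standard expressions (conventions and factors of two vary between references, so treat this as orientation only, not this paper's derivation) is:

        % Schematic Redfield rates; conventions vary between references.
        \begin{align}
          S_i(\omega) &= \int_{-\infty}^{\infty}
            \langle b_i(t)\, b_i(t+\tau)\rangle\, e^{-i\omega\tau}\, d\tau, \\
          \frac{1}{T_1} &= \frac{\gamma^2}{2}\left[ S_x(\omega_0) + S_y(\omega_0) \right],
          \qquad
          \frac{1}{T_2} \;=\; \frac{1}{2T_1} + \frac{\gamma^2}{2}\, S_z(0),
        \end{align}

    where the b_i are the fluctuating field components and omega_0 is the Larmor frequency. Gradient-induced relaxation enters through the position autocorrelation function evaluated with the diffusion-theory Green's function, which is precisely the object the abstract generalizes.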

  15. Some experiences with the viscous-inviscid interaction approach

    NASA Technical Reports Server (NTRS)

    Vandalsem, W. R.; Steger, J. L.; Rao, K. V.

    1987-01-01

    Methods for simulating compressible viscous flow using the viscid-inviscid interaction approach are described. The formulations presented range from the more familiar full-potential/boundary-layer interaction schemes to a method for coupling Euler/Navier-Stokes and boundary-layer algorithms. An effort is made to describe the advantages and disadvantages of each formulation. Sample results are presented which illustrate the applicability of the methods.

  16. Clean, agile alternative binders, additives and plasticizers for propellant and explosive formulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D.M.; Hawkins, T.W.; Lindsay, G.A.

    1994-12-01

    As part of the Strategic Environmental Research and Development Program (SERDP), a clean, agile manufacturing of explosives, propellants and pyrotechnics (CANPEP) effort set about to identify new approaches to materials and processes for producing propellants, explosives and pyrotechnics (PEP). The RDX-based explosive PBXN-109 and gun propellant M-43 were identified as candidates for which waste minimization and recycling modifications might be implemented in a short time frame. The binders, additives and plasticizers subgroup identified cast non-curable thermoplastic elastomer (TPE) formulations as possible replacement candidates for these formulations. Paste extrudable explosives were also suggested as viable alternatives to PBXN-109. Commercial inert and energetic TPEs are reviewed. Biodegradable and hydrolyzable binders are discussed. The applicability of various types of explosive formulations is reviewed and some issues associated with implementation of recyclable formulations are identified. It is clear that some processing and weaponization modifications will need to be made if any of these approaches are to be implemented. The major advantage of the formulations suggested here over PBXN-109 and M-43 is their reuse/recyclability. Formulations using TPEs or pastes could be recovered from a generic bomb or propellant and reused if they met specifications, or easily reprocessed and sold to the mining industry.

  17. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases shown to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
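
    At its core, the FTR award step described above is a welfare-maximizing linear program over PTDF-based flow limits. A toy sketch follows (network sensitivities, bids, and ratings are invented for illustration; real FTR markets couple many more constraint categories, as the paper explains):

        # Toy FTR auction: maximize bid-weighted awards subject to DC power-flow
        # (PTDF) line limits in both directions. All data are invented; real FTR
        # markets couple many more constraint categories.
        import numpy as np
        from scipy.optimize import linprog

        bids = np.array([30.0, 20.0, 10.0])   # $/MW bid for each FTR path
        ptdf = np.array([[0.6, -0.2, 0.1],    # line 1 sensitivities to awards
                         [0.3,  0.5, -0.4]])  # line 2 sensitivities to awards
        limits = np.array([100.0, 80.0])      # MW line ratings

        # maximize bids @ x  <=>  minimize -bids @ x,
        # subject to -limits <= ptdf @ x <= limits.
        A_ub = np.vstack([ptdf, -ptdf])
        b_ub = np.concatenate([limits, limits])
        res = linprog(-bids, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0.0, 200.0)] * 3, method="highs")
        print("awarded MW:", np.round(res.x, 1), " welfare: $", round(-res.fun, 2))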

  18. Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    2013-01-01

    The main objective of this paper is to construct a turbulence model with a more reliable second equation simulating the length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta shows that a reliable second equation can be formed in an exact transport equation from the turbulent length scale L and the kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared to other approaches. The most important difference is that the formulation leads to a natural inclusion of higher order velocity derivatives into the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) methods to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence; near-wall and shear-flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than those of the well-known SST turbulence model and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.

  19. Crystallization of glass-forming liquids: Specific surface energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmelzer, Jürn W. P., E-mail: juern-w.schmelzer@uni-rostock.de; Abyzov, Alexander S.

    2016-08-14

    A generalization of the Stefan-Skapski-Turnbull relation for the melt-crystal specific interfacial energy is developed in terms of the generalized Gibbs approach, extending its standard formulation to thermodynamic non-equilibrium states. With respect to crystal nucleation, this relation is required in order to determine the parameters of the critical crystal clusters, a prerequisite for the computation of the work of critical cluster formation. As one of its consequences, a relation for the dependence of the specific surface energy of critical clusters on temperature and pressure is derived, applicable for small and moderate deviations from liquid-crystal macroscopic equilibrium states. Employing the Stefan-Skapski-Turnbull relation, general expressions for the size and the work of formation of critical crystal clusters are formulated. The resulting expressions are much more complex than the respective relations obtained via the classical Gibbs theory. The latter relations are retained as limiting cases of these more general expressions for moderate undercoolings. For this reason, the general relations formulated here for the critical cluster size and the work of critical cluster formation provide a key to the appropriate interpretation of a variety of crystallization phenomena occurring at large undercoolings that cannot be understood in terms of Gibbs' classical treatment.
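
    For reference, the classical Gibbs-theory limits recovered at moderate undercooling are the familiar expressions for the critical radius and the work of critical cluster formation, with the interfacial energy supplied by a Stefan-Skapski-Turnbull-type estimate (alpha is an empirical Turnbull coefficient; these are standard textbook forms, not this paper's generalized relations):

        % Classical limiting expressions at moderate undercooling.
        \begin{equation}
          r_c = \frac{2\sigma}{\Delta g}, \qquad
          W_c = \frac{16\pi}{3}\,\frac{\sigma^{3}}{\Delta g^{2}}, \qquad
          \sigma \simeq \alpha\,\frac{\Delta H_m}{N_A^{1/3} V_m^{2/3}},
        \end{equation}

    with Delta g the bulk free-energy difference per unit volume of the crystal, Delta H_m the molar heat of melting, V_m the molar volume, and N_A Avogadro's number.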

  20. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied to an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.

  1. Abuse-deterrent formulations, an evolving technology against the abuse and misuse of opioid analgesics.

    PubMed

    Schaeffer, Tammi

    2012-12-01

    The increased use of opioid pain medication has been mirrored by the increased misuse and abuse of these drugs. As part of a multidisciplinary approach to this epidemic, pharmaceutical companies, with the encouragement of the Food and Drug Administration, have increased the development of abuse-deterrent formulations. While all have the goal of treating pain while mitigating misuse and abuse, there are different technologies utilized to impart the abuse-deterrent properties. The goal of this paper is to review the basis of abuse-deterrent formulations, the different types and approaches of some of the abuse-deterrent products, and their current regulatory status in the USA.

  2. Phospholipid-based solid drug formulations for oral bioavailability enhancement: A meta-analysis.

    PubMed

    Fong, Sophia Yui Kau; Brandl, Martin; Bauer-Brandl, Annette

    2015-12-01

    Low bioavailability often represents a challenge in oral dosage form development. Solid formulations composed of drug and phospholipid (PL), which, upon contact with water, eventually form multilamellar liposomes (i.e. 'proliposomes'), are an emerging approach to solving this issue. Regarded as an 'improved' version of liposomes with respect to storage stability, the potential and versatility of a range of such formulations for oral drug delivery have been extensively discussed. However, a systematic and quantitative analysis of the studies that applied solid PL for oral bioavailability enhancement is currently lacking. Such analysis is necessary for providing an overview of the research progress and addressing the question of how promising this approach can be for bioavailability enhancement. The current review performed a systematic search of references in three evidence-based English databases, Medline, Embase, and SciFinder, from 1985 up to March 2015. A total of 112 research articles and 82 patents that involved solid PL-based formulations were identified. The majority of such formulations were intended for oral drug delivery (55%) and were developed to address low bioavailability issues (49%). A final set of 54 studies that applied such formulations for bioavailability enhancement of 43 different drugs with poor water solubility and/or permeability was identified. These proof-of-concept studies with in vitro (n=31) and/or animal (n=23) evidence have been systematically summarized. Meta-analyses were conducted to measure the overall enhancement power (percent increase compared to the control group) of solid PL formulations on drugs' solubility, permeability and oral bioavailability, which were found to be 127.4% (95% CI [86.1, 168.7]), 59.6% (95% CI [30.1, 89.0]), and 18.5% (95% CI [10.1, 26.9]), respectively. Correlations between the enhancement factors and in silico physicochemical properties of drugs were also performed to check whether such an approach can be used to identify the best candidates for oral solid PL formulation. In addition to the scientific literature, 13 solid PL formulation-related patents that addressed the issue of low oral bioavailability have been identified and summarized, whereas no clinical study was identified in the current search. By providing systematic information and meta-analysis on studies that applied the principle of 'proliposomes' for oral bioavailability enhancement, the current review should be insightful for formulation scientists who wish to adopt the PL-based approach to overcome the solubility, permeability and bioavailability issues of orally delivered drugs. Copyright © 2015 Elsevier B.V. All rights reserved.
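
    The pooled percent-enhancement figures with confidence intervals quoted above are the kind of output an inverse-variance meta-analysis produces. A minimal fixed-effect sketch (per-study values are invented, not the review's data):

        # Minimal fixed-effect inverse-variance pooling; per-study percent
        # enhancements and standard errors are invented for illustration.
        import numpy as np

        effects = np.array([110.0, 95.0, 160.0, 140.0])  # % increase vs control
        se = np.array([30.0, 25.0, 45.0, 35.0])          # standard errors

        w = 1.0 / se**2                                  # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)
        se_pooled = np.sqrt(1.0 / np.sum(w))
        lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
        print(f"pooled: {pooled:.1f}% (95% CI [{lo:.1f}, {hi:.1f}])")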

  3. Towards a unified solution of localization failure with mixed finite elements

    NASA Astrophysics Data System (ADS)

    Benedetti, Lorenzo; Cervera, Miguel; Chiumenti, Michele; Zeidler, Antonia; Fischer, Jan-Thomas

    2015-04-01

    Although computational scientists have made significant progress in the numerical simulation of failure over the last three decades, the strain localization problem is still an open question. Especially in a geotechnical setting, when dealing with stability analysis of slopes, it is necessary to provide the correct distribution of displacements, to evaluate the stresses in the ground and, therefore, to be able to identify the slip lines that lead to progressive collapse of the slope. Finite elements are an attractive method of solution thanks to their profound mathematical foundations and the possibility of describing generic geometries. In order to account for the onset of a localization band, the smeared crack approach [1] is introduced; that is, strain localization is assumed to occur in a band of finite width where the displacements are continuous and the strains are discontinuous but bounded. It is well known that this kind of approach poses some challenges. The standard irreducible formulation of FEM is known to be heavily affected by spurious mesh dependence when softening behavior occurs and, consequently, slip-line evolution is biased by the orientation of the mesh. Moreover, in the case of isochoric behavior, unbounded pressure oscillations arise and the consequent locking of the stresses pollutes the numerical solution. Both problems can be shown to be unrelated to the mathematical statement of the continuous problem and instead tied to its discrete (FEM) counterpart. Mixed finite element formulations represent a suitable alternative to mitigate these drawbacks. As shown in previous work by Cervera [2], a mixed formulation in terms of displacements and pressure not only provides a propitious solution to the problem of incompressibility but also possesses the robustness needed in cases of strain concentration. This presentation introduces a (stabilized) mixed finite element formulation with continuous linear strain and displacement interpolations. As a fundamental enhancement of the displacement-pressure formulation mentioned above, this formulation offers the following advantages: it provides an enhanced rate of convergence for the strains (and stresses), and it is able to deal with incompressible situations. The method is completed with constitutive laws from von Mises and Drucker-Prager local plasticity models with nonlinear strain softening. Moreover, global and local error norms are discussed to support the advantages of the proposed method. Then, numerical examples of stability analysis of slopes are presented to demonstrate the capability of the method. It will be shown that not only soil slopes can be modeled, but also snow avalanche release and weak-layer fracture can be treated similarly. Consequently, this formulation appears to be a general and accurate tool for the solution of mechanical problems involving failure with localization bands [3,4]. References: [1] Y.R. Rashid, 'Ultimate strength analysis of prestressed concrete pressure vessels', Nuclear Engineering and Design, Vol. 7, Issue 4, pp. 334-344, 1968. [2] M. Cervera, M. Chiumenti, D. Di Capua, 'Benchmarking on bifurcation and localization in J2 plasticity for plane stress and plane strain conditions', Computer Methods in Applied Mechanics and Engineering, Vol. 241-244, pp. 206-224, 2012. [3] L. Benedetti, M. Cervera, M. Chiumenti, 'Stress-accurate mixed FEM for soil failure under shallow foundations involving strain localization in plasticity', Computers and Geotechnics, Vol. 64, pp. 32-47, 2015. [4] M. Cervera, M. Chiumenti, L. Benedetti, R. Codina, 'Mixed stabilized finite element methods in nonlinear solid mechanics. Part III: Compressible and incompressible plasticity', Computer Methods in Applied Mechanics and Engineering, to appear, 2015.

  4. The role of geomatics in supporting sustainable development policy-making

    NASA Astrophysics Data System (ADS)

    Zhang, Aining

    Sustainable development has been on national policy agendas since 1992 when Agenda 21, an international agreement on sustainable development, was signed by over 150 countries. A key to sustainable development policy-making is information. Spatial information is an integral part of this information pool given the spatial nature of sustainable development. Geomatics, a technology dealing specifically with spatial information, can play a major role in support of the policy-making process. This thesis is aimed at advancing this role. The thesis starts with a discussion of theories and methodologies for sustainable development. The policy process for sustainable development is characterized, followed by an analysis of the requirements of sustainable development policy-making for geomatics support. The current status of geomatics in meeting these requirements is then examined, and the challenges and potential for geomatics to further address the needs are identified. To deal with these challenges, an integrated solution, namely the development of an on-line national policy atlas for sustainable development, is proposed, with a focus to support policy action formulation. The thesis further addresses one of the major research topics required for the implementation of the proposed solution, namely the exploration of the feasibility of a spatial statistics approach to predictive modelling in support of policy scenario assessments. The study is based on the case of national climate change policy formulation, with a focus on the development of new light duty vehicle sales mix models in support of transportation fuel efficiency policy-making aimed at greenhouse gas reductions. The conceptual framework and methodology for the case study are followed by the presentation of outcomes including models and policy scenario forecasts. The case study has illustrated that a spatial statistics approach is not only feasible for the development of predictive models in support of policy-making, but also provides several unique advantages that could potentially improve sustainable development policymaking.

  5. Neural network based adaptive output feedback control: Applications and improvements

    NASA Astrophysics Data System (ADS)

    Kutay, Ali Turker

    Application of recently developed neural network based adaptive output feedback controllers to a diverse range of problems both in simulations and experiments is investigated in this thesis. The purpose is to evaluate the theory behind the development of these controllers numerically and experimentally, identify the needs for further development in practical applications, and to conduct further research in directions that are identified to ultimately enhance applicability of adaptive controllers to real world problems. We mainly focus our attention on adaptive controllers that augment existing fixed gain controllers. A recently developed approach holds great potential for successful implementations on real world applications due to its applicability to systems with minimal information concerning the plant model and the existing controller. In this thesis the formulation is extended to the multi-input multi-output case for distributed control of interconnected systems and successfully tested on a formation flight wind tunnel experiment. The command hedging method is formulated for the approach to further broaden the class of systems it can address by including systems with input nonlinearities. Also a formulation is adopted that allows the approach to be applied to non-minimum phase systems for which non-minimum phase characteristics are modeled with sufficient accuracy and treated properly in the design of the existing controller. It is shown that the approach can also be applied to augment nonlinear controllers under certain conditions and an example is presented where the nonlinear guidance law of a spinning projectile is augmented. Simulation results on a high fidelity 6 degrees-of-freedom nonlinear simulation code are presented. The thesis also presents a preliminary adaptive controller design for closed loop flight control with active flow actuators. Behavior of such actuators in dynamic flight conditions is not known. To test the adaptive controller design in simulation, a fictitious actuator model is developed that fits experimentally observed characteristics of flow control actuators in static flight conditions as well as possible coupling effects between actuation, the dynamics of flow field, and the rigid body dynamics of the vehicle.

  6. High-Order Entropy Stable Formulations for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.

    2013-01-01

    A systematic approach is presented for developing entropy stable (SS) formulations of any order for the Navier-Stokes equations. These SS formulations discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality. They are valid for smooth as well as discontinuous flows provided sufficient dissipation is added at shocks and discontinuities. Entropy stable formulations exist for all diagonal norm, summation-by-parts (SBP) operators, including all centered finite-difference operators, Legendre collocation finite-element operators, and certain finite-volume operators. Examples are presented using various entropy stable formulations that demonstrate the current state-of-the-art of these schemes.
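
    A concrete illustration of the summation-by-parts (SBP) property underpinning such entropy-stability proofs: the sketch below builds the standard second-order diagonal-norm first-derivative operator and checks Q + Q^T = B numerically (a generic textbook construction, not code from the paper):

        # Verify the SBP property Q + Q^T = B for the standard second-order
        # diagonal-norm first-derivative operator (textbook construction,
        # not code from the paper).
        import numpy as np

        n, h = 11, 0.1
        D = np.zeros((n, n))
        D[0, :2] = [-1.0, 1.0]                # one-sided boundary stencils
        D[-1, -2:] = [-1.0, 1.0]
        for i in range(1, n - 1):             # central interior stencil
            D[i, i - 1], D[i, i + 1] = -0.5, 0.5
        D /= h

        P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])   # diagonal norm
        Q = P @ D
        B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
        assert np.allclose(Q + Q.T, B)        # the SBP property

        # SBP mimics integration by parts: u^T P (Dv) + (Du)^T P v = u^T B v.
        x = np.linspace(0.0, 1.0, n)
        u, v = np.sin(x), np.cos(x)
        lhs = u @ P @ (D @ v) + (D @ u) @ P @ v
        print(lhs, u[-1] * v[-1] - u[0] * v[0])   # equal to machine precision

    It is this discrete integration-by-parts identity that lets the continuous entropy estimates carry over to the discretization.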

  7. Amazon collapse in the next century: exploring the sensitivity to climate and model formulation uncertainties

    NASA Astrophysics Data System (ADS)

    Booth, B.; Collins, M.; Harris, G.; Chris, H.; Jones, C.

    2007-12-01

    A number of recent studies have highlighted the risk of abrupt dieback of the Amazon rain forest as the result of climate changes over the next century. The recent 2005 Amazon drought brought wider acceptance of the idea that climate drivers will play a significant role in future rain forest stability, yet that stability is still subject to a considerable degree of uncertainty. We present a study which seeks to explore some of the underlying uncertainties, both in the climate drivers of dieback and in the terrestrial land surface formulation used in GCMs. We adopt a perturbed physics approach which forms part of a wider project covered in an accompanying abstract submitted to the multi-model ensembles session. We first couple the same interactive land surface model to a number of different versions of the Hadley Centre atmosphere-ocean model that exhibit a wide range of different physical climate responses in the future. The rain forest extent is shown to collapse in all model cases, but the timing of the collapse depends on the magnitude of the climate drivers. In the second part, we explore uncertainties in the terrestrial land surface model using the perturbed physics ensemble approach, perturbing uncertain parameters which have an important role in the vegetation and soil response. Contrasting the two approaches enables a greater understanding of the relative importance of climatic and land surface model uncertainties in Amazon dieback.

  8. Polymer Fluid Dynamics: Continuum and Molecular Approaches.

    PubMed

    Bird, R B; Giacomin, A J

    2016-06-07

    To solve problems in polymer fluid dynamics, one needs the equations of continuity, motion, and energy. The last two equations contain the stress tensor and the heat-flux vector for the material. There are two ways to formulate the stress tensor: (a) One can write a continuum expression for the stress tensor in terms of kinematic tensors, or (b) one can select a molecular model that represents the polymer molecule and then develop an expression for the stress tensor from kinetic theory. The advantage of the kinetic theory approach is that one gets information about the relation between the molecular structure of the polymers and the rheological properties. We restrict the discussion primarily to the simplest stress tensor expressions or constitutive equations containing from two to four adjustable parameters, although we do indicate how these formulations may be extended to give more complicated expressions. We also explore how these simplest expressions are recovered as special cases of a more general framework, the Oldroyd 8-constant model. Studying the simplest models allows us to discover which types of empiricisms or molecular models seem to be worth investigating further. We also explore equivalences between continuum and molecular approaches. We restrict the discussion to several types of simple flows, such as shearing flows and extensional flows, which are of greatest importance in industrial operations. Furthermore, if these simple flows cannot be well described by continuum or molecular models, then it is not necessary to lavish time and energy to apply them to more complex flow problems.

  9. On mass and momentum conservation in the variable-parameter Muskingum method

    NASA Astrophysics Data System (ADS)

    Reggiani, Paolo; Todini, Ezio; Meißner, Dennis

    2016-12-01

    In this paper we investigate mass and momentum conservation in one-dimensional routing models. To this end we formulate the conservation equations for a finite-dimensional reach and compute individual terms using three standard Saint-Venant (SV) solvers: SOBEK, HEC-RAS and MIKE11. We also employ two different variable-parameter Muskingum (VPM) formulations: the classical Muskingum-Cunge (MC) and the revised, mass-conservative Muskingum-Cunge-Todini (MCT) approach, whereby geometrical cross sections are treated analytically in both cases. We initially compare the three SV solvers for a straight mild-sloping prismatic channel with geometric cross sections and a synthetic hydrograph as boundary conditions against the analytical MC and MCT solutions. The comparison is substantiated by the fact that in this flow regime the conditions for the parabolic equation model solved by MC and MCT are met. Through this intercomparison we show that all approaches have comparable mass and momentum conservation properties, except the MC. Then we extend the MCT to use natural cross sections for a real irregular river channel forced by an observed triple-peak event and compare the results with SOBEK. The model intercomparison demonstrates that the VPM in the form of MCT can be a computationally efficient, fully mass and momentum conservative approach and therefore constitutes a valid alternative to Saint-Venant based flood wave routing for a wide variety of rivers and channels in the world when downstream boundary conditions or hydraulic structures are non-influential.
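
    For readers unfamiliar with the baseline scheme being generalized here, a minimal fixed-parameter Muskingum routing step looks as follows (K, X, and the inflow series are invented for illustration; the variable-parameter MC/MCT schemes discussed above recompute the parameters at each step from the channel hydraulics):

        # Classical fixed-parameter Muskingum routing step (illustrative
        # values); the variable-parameter MC/MCT schemes discussed above
        # recompute K and X at every step from the channel hydraulics.
        import numpy as np

        K, X, dt = 2.0, 0.2, 1.0   # storage constant [h], weight [-], step [h]
        denom = 2.0 * K * (1.0 - X) + dt
        c0 = (dt - 2.0 * K * X) / denom
        c1 = (dt + 2.0 * K * X) / denom
        c2 = (2.0 * K * (1.0 - X) - dt) / denom
        assert abs(c0 + c1 + c2 - 1.0) < 1e-12   # weights sum to one

        inflow = np.array([10, 20, 50, 80, 60, 40, 25, 15, 12, 10], float)
        outflow = np.empty_like(inflow)
        outflow[0] = inflow[0]                   # steady initial condition
        for t in range(1, len(inflow)):
            outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
        print(np.round(outflow, 1))              # attenuated, delayed peak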

  10. A micromorphic model for steel fiber reinforced concrete.

    PubMed

    Oliver, J; Mora, D F; Huespe, A E; Weyler, R

    2012-10-15

    A new formulation to model the mechanical behavior of high performance fiber reinforced cement composites with arbitrarily oriented short fibers is presented. The formulation can be considered a two-scale approach, in which the macroscopic model, at the structural level, takes into account the mesostructural phenomenon associated with the fiber-matrix interface bond/slip process. This phenomenon is captured by including, in the macroscopic description, a micromorphic field representing the relative fiber-cement displacement. The theoretical framework from which the governing equations of the problem are derived can then be regarded as a specific case of the material multifield theory. The balance equation derived for this model, connecting the micro stresses with the micromorphic forces, has a physical meaning related to the fiber-matrix bond-slip mechanism. Unlike previous procedures in the literature for modeling fiber reinforced composites, where this equation has been added as an additional independent ingredient of the methodology, in the present approach it arises naturally from the multifield theory. Every component of the composite is defined with a specific free energy and constitutive relation. The mixture theory is adopted to define the overall free energy of the composite, which is assumed to be homogeneously constituted, in the sense that every infinitesimal volume is occupied by all the components in a proportion given by the corresponding volume fraction. The numerical model is assessed by means of a selected set of experiments that prove the viability of the present approach.

  11. Formulation and Optimization of Multiparticulate Drug Delivery System Approach for High Drug Loading.

    PubMed

    Shah, Neha; Mehta, Tejal; Gohel, Mukesh

    2017-08-01

    The aim of the present work was to develop and optimize a multiparticulate formulation, viz. pellets of naproxen, by employing a QbD and risk-assessment approach. A mixture design with extreme vertices was applied to the formulation, with high drug loading (about 90%) and extrusion-spheronization as the process for manufacturing pellets. The independent variables chosen were the levels of microcrystalline cellulose (MCC, X1), polyvinylpyrrolidone K-90 (PVP K-90, X2), croscarmellose sodium (CCS, X3), and polacrilin potassium (PP, X4). The dependent variables considered were disintegration time (DT, Y1), sphericity (Y2), and percent drug release (Y3). The formulation was optimized based on the batches generated by MiniTab 17 software. The batch with the maximum composite desirability (0.98) proved to be optimum. From the evaluation of the design batches, it was observed that the excipients, even at low levels of variation, affect the pelletization properties of the blend and also the final drug release. In conclusion, pellets with high drug loading can be effectively manufactured and optimized systematically using a QbD approach.
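
    The composite desirability used to pick the optimum batch is the geometric mean of per-response desirabilities. A small sketch (goals and bounds are invented examples, not the study's settings):

        # Derringer-Suich-style desirabilities combined by geometric mean.
        # Goals and bounds below are invented examples, not the study's.
        import numpy as np

        def d_minimize(y, low, high):
            """1 at/below `low`, 0 at/above `high`, linear in between."""
            return float(np.clip((high - y) / (high - low), 0.0, 1.0))

        def d_maximize(y, low, high):
            """0 at/below `low`, 1 at/above `high`, linear in between."""
            return float(np.clip((y - low) / (high - low), 0.0, 1.0))

        # Responses of one candidate batch: DT [s], sphericity [-], release [%].
        dt_, sph, rel = 45.0, 0.96, 92.0
        d = np.array([d_minimize(dt_, 30.0, 120.0),   # shorter DT is better
                      d_maximize(sph, 0.80, 1.00),    # rounder is better
                      d_maximize(rel, 70.0, 100.0)])  # higher release is better
        composite = d.prod() ** (1.0 / len(d))        # geometric mean
        print(f"individual d: {np.round(d, 3)}, composite D: {composite:.2f}")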

  12. Integrated control-structure design

    NASA Technical Reports Server (NTRS)

    Hunziker, K. Scott; Kraft, Raymond H.; Bossi, Joseph A.

    1991-01-01

    A new approach for the design and control of flexible space structures is described. The approach integrates the structure and controller design processes thereby providing extra opportunities for avoiding some of the disastrous effects of control-structures interaction and for discovering new, unexpected avenues of future structural design. A control formulation based on Boyd's implementation of Youla parameterization is employed. Control design parameters are coupled with structural design variables to produce a set of integrated-design variables which are selected through optimization-based methodology. A performance index reflecting spacecraft mission goals and constraints is formulated and optimized with respect to the integrated design variables. Initial studies have been concerned with achieving mission requirements with a lighter, more flexible space structure. Details of the formulation of the integrated-design approach are presented and results are given from a study involving the integrated redesign of a flexible geostationary platform.

  13. Skin rash during treatment with generic itraconazole.

    PubMed

    De Vuono, Antonio; Palleria, Caterina; Scicchitano, Francesca; Squillace, Aida; De Sarro, Giovambattista; Gallelli, Luca

    2014-04-01

    Generic drugs have the same active substance, the same pharmaceutical form, and the same therapeutic indications as, and a similar bioequivalence with, the reference medicinal product (branded). Although similar efficacy is postulated, some cases of clinical inefficacy during treatment with generic formulations have been reported. Here we describe a woman with onychomycosis who developed a skin rash during treatment with a generic formulation of itraconazole. Drug administration and its re-challenge confirmed the association between itraconazole and the skin rash. Both the Naranjo probability scale and the World Health Organization causality assessment scale documented a probable association between generic itraconazole and the skin rash. The switch from the generic formulation to the branded one induced an improvement of symptoms. Since we are unable to evaluate the role of each excipient in the development of the skin rash, we cannot rule out their involvement. However, more data are necessary to better define the similarities or differences between branded and generic formulations.

  14. Skin rash during treatment with generic itraconazole

    PubMed Central

    De Vuono, Antonio; Palleria, Caterina; Scicchitano, Francesca; Squillace, Aida; De Sarro, Giovambattista; Gallelli, Luca

    2014-01-01

    Generic drugs have the same active substance, the same pharmaceutical form, and the same therapeutic indications as, and a similar bioequivalence with, the reference medicinal product (branded). Although similar efficacy is postulated, some cases of clinical inefficacy during treatment with generic formulations have been reported. Here we describe a woman with onychomycosis who developed a skin rash during treatment with a generic formulation of itraconazole. Drug administration and its re-challenge confirmed the association between itraconazole and the skin rash. Both the Naranjo probability scale and the World Health Organization causality assessment scale documented a probable association between generic itraconazole and the skin rash. The switch from the generic formulation to the branded one induced an improvement of symptoms. Since we are unable to evaluate the role of each excipient in the development of the skin rash, we cannot rule out their involvement. However, more data are necessary to better define the similarities or differences between branded and generic formulations. PMID:24799820

  15. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-21

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.

  16. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
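
    The "ratio of two Hermitian forms" step reviewed above maps directly onto a generalized Hermitian eigenvalue problem. A compact sketch, with random positive-definite matrices standing in for the SAR and constraint forms (illustrative only):

        # Maximize x^H A x / x^H B x over excitation vectors x by solving the
        # generalized Hermitian eigenproblem A x = lambda B x. Random matrices
        # stand in for the local-SAR and constraint forms (illustrative only).
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 8                                  # number of RF transmit channels
        M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        A = M @ M.conj().T                     # Hermitian PSD "at-risk SAR" form
        N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        B = N @ N.conj().T + n * np.eye(n)     # Hermitian PD constraint form

        w, V = eigh(A, B)                      # ascending generalized eigenvalues
        x = V[:, -1]                           # maximizer of the Rayleigh ratio
        ratio = (x.conj() @ A @ x) / (x.conj() @ B @ x)
        print(np.isclose(ratio.real, w[-1]))   # True: ratio equals lambda_max

    As the abstract points out, with many channels and several simultaneous constraints this single-ratio bound can underestimate the true worst case, which is what motivates the semidefinite programming relaxation.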

  17. A framework for evaluation of flood management strategies.

    PubMed

    Hansson, K; Danielson, M; Ekenberg, L

    2008-02-01

    The resulting impact of disasters on society depends on the affected country's economic strength prior to the disaster. The larger the disaster and the smaller the economy, the more significant the impact. This is most clearly seen in developing countries, where weak economies become even weaker afterwards. Deliberate strategies for sharing the losses from hazardous events may aid a country or a community in efficiently using scarce prevention and mitigation resources, and thus being better prepared for the effects of a disaster. Nevertheless, many governments lack an adequate institutional system for applying cost-effective and reliable technologies for disaster prevention, early warnings, and mitigation. Modelling by event analyses and strategy models is one way of planning ahead, but these models have so far not been linked together. An approach to this problem was taken in a large study in Hungary, the Tisza case study, where a number of policy strategies for spreading flood losses were formulated. For these strategies, a set of parameters of particular interest was extracted from interviews with stakeholders in the region. However, the study was focused on emerging economies and, in particular, on insurance strategies. The scope is now extended to become a functional framework also for developing countries, which in general have a higher degree of vulnerability. The paper takes northern Vietnam as an example of a developing region. We identify important parameters and discuss their importance for flood strategy formulations. Based on the policy strategies in the Tisza case, we extract data from the strategies and propose a framework for loss spreading in developing and emerging economies. The parameter set can be straightforwardly included in a simulation and decision model for policy formulation and evaluation, taking multiple stakeholders into account.

  18. Modular Approach to Structural Simulation for Vehicle Crashworthiness Prediction

    DOT National Transportation Integrated Search

    1975-03-01

    A modular formulation for simulation of the structural deformation and deceleration of a vehicle for crashworthiness and collision compatibility is presented. This formulation includes three dimensional beam elements, various spring elements, rigid b...

  19. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  20. Case-control analysis of ambulance, emergency room, or inpatient hospital events for epilepsy and antiepileptic drug formulation changes.

    PubMed

    Zachry, Woodie M; Doan, Quynhchau D; Clewell, Jerry D; Smith, Brien J

    2009-03-01

    Although antiepileptic drugs (AEDs) with multisource generic alternatives are becoming more prevalent, no case-control studies have been published examining multisource medication use and epilepsy-related outcomes. This study evaluated the association between inpatient/emergency epilepsy care and the occurrence of a recent switch in AED formulation. A case-control analysis was conducted utilizing the Ingenix LabRx Database. Eligible patients were 12-64 years of age, received ≥145 days of AEDs in the preindex period, had continuous eligibility for 6 months preindex, and no prior inpatient/emergency care. Cases received care between 7/1/2006 and 12/31/2006 in an ambulance, emergency room, or inpatient hospital with a primary epilepsy diagnosis. Controls had a primary epilepsy diagnosis in a physician's office during the same period. The index date was the earliest occurrence of care in each respective setting. Cases and controls were matched 1:3 by epilepsy diagnosis and age. Odds of a switch between "A-rated" AEDs within 6 months prior to index were calculated. Cases (n = 416) had 81% greater odds of having had an A-rated AED formulation switch [odds ratio (OR) = 1.81; 95% confidence interval (CI) = 1.25 to 2.63] relative to controls (n = 1248). There were no significant differences between groups regarding demographics or diagnosis. Significant differences were found with regard to medical coverage type (case Medicaid = 4.6%, control Medicaid = 1.8%, p = 0.002). Post hoc analysis results excluding Medicaid recipients remained significant and concordant with the original analysis. This analysis found an association between patients receiving epilepsy care in an emergency or inpatient setting and the recent occurrence of AED formulation switching involving A-rated generics.
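
    The odds ratio and confidence interval reported here follow the standard 2x2 computation. A minimal sketch with hypothetical cell counts (not the study's data):

        # Odds ratio with Woolf (log) 95% CI from a 2x2 exposure table.
        # Cell counts are hypothetical, not the study's data.
        import math

        a, b = 70, 346     # cases: switched / not switched (hypothetical)
        c, d = 130, 1118   # controls: switched / not switched (hypothetical)

        odds_ratio = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
        hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
        print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")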

  1. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  2. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits ill with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.
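
    The "Jacobian cancels the singularity" idea can be seen in miniature with the classic Duffy transformation (a generic textbook device, not the paper's specific scheme): integrating 1/r over a triangle with the singularity at a vertex becomes a smooth integral after the change of variables.

        # Duffy transformation demo: integrate 1/r over the triangle with
        # vertices (0,0), (1,0), (1,1), singular at the origin. The map
        # x = u, y = u*v has Jacobian u, which cancels the 1/r singularity.
        # Generic textbook device, not the paper's specific scheme.
        import numpy as np

        nodes, weights = np.polynomial.legendre.leggauss(8)
        u = 0.5 * (nodes + 1.0)   # Gauss points mapped from [-1, 1] to [0, 1]
        w = 0.5 * weights

        total = 0.0
        for ui, wi in zip(u, w):
            for vj, wj in zip(u, w):
                x, y = ui, ui * vj                      # Duffy map; dA = u du dv
                total += wi * wj * ui / np.hypot(x, y)  # u/r = 1/sqrt(1+v^2), smooth

        exact = np.arcsinh(1.0)   # = ln(1 + sqrt(2)) ~ 0.8814
        print(total, exact)       # agree to ~1e-10 with only 8x8 points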

  3. A generalized formulation for noise-based seismic velocity change measurements

    NASA Astrophysics Data System (ADS)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    The observation of continuous seismic velocity changes is a powerful tool for detecting seasonal variations in crustal structure, volcanic unrest, co- and post-seismic evolution of stress in fault areas, or the effects of fluid injection. The standard approach for measuring such velocity changes relies on comparing travel times in the coda of a set of seismic signals, usually noise-based cross-correlations retrieved at different dates, against a reference trace, usually a correlation function averaged over all dates. Good stability of the noise sources in both space and time is then the main assumption for reliable measurements. Unfortunately, these conditions are often not fulfilled, as happens when ambient-noise sources are non-stationary, such as the emissions of low-frequency volcanic tremors. We propose a generalized formulation for retrieving continuous time series of noise-based seismic velocity changes without any arbitrary reference cross-correlation function. We set up a general framework for future applications of this technique by performing synthetic tests. In particular, we study the reliability of the retrieved velocity changes in the case of seasonal-type trends, transient effects (similar to those produced by an earthquake or a volcanic eruption), and sudden velocity drops and recoveries caused by transient local source emissions. Finally, we apply this approach to a real dataset of noise cross-correlations. We choose the Klyuchevskoy volcanic group (Kamchatka) as a case study where the recorded wavefield is hampered by loss of data and dominated by strongly localized volcanic tremor sources. Despite the mentioned wavefield contaminations, we retrieve clear seismic velocity drops associated with the eruptions of the Klyuchevskoy and Tolbachik volcanoes in 2010 and 2012, respectively.
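
    A common concrete implementation of such measurements is the stretching method: grid-search the dilation of the reference coda that best matches the current correlation. A self-contained toy version (synthetic waveform; the abstract's generalized formulation goes further by avoiding any fixed reference):

        # Stretching-method toy: recover a velocity change dv/v by grid-searching
        # the dilation of a reference trace that best matches the current trace.
        # Synthetic data; the generalized formulation proposed in the abstract
        # avoids any fixed reference.
        import numpy as np

        t = np.linspace(0.0, 20.0, 2001)                  # coda time axis [s]
        ref = np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.1 * t)

        true_dvv = 0.004                                  # +0.4% velocity change
        cur = np.interp(t * (1.0 + true_dvv), t, ref)     # arrivals compressed

        def cc(eps):
            """Correlation of the current trace with the stretched reference."""
            stretched = np.interp(t * (1.0 + eps), t, ref)
            return np.corrcoef(cur, stretched)[0, 1]

        grid = np.linspace(-0.01, 0.01, 401)
        best = grid[np.argmax([cc(e) for e in grid])]
        print(f"estimated dv/v = {best:.4%} (true {true_dvv:.4%})")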

  4. R -matrix-incorporating-time method for H2+ in short and intense laser fields

    NASA Astrophysics Data System (ADS)

    Ó Broin, Cathal; Nikolopoulos, L. A. A.

    2015-12-01

    In this work we develop an approach for the molecular hydrogen ion (H2+) in the Born-Oppenheimer approximation while exposed to intense short-pulse radiation. Our starting point is the R-matrix-incorporating-time formulation for atomic hydrogen [L. A. A. Nikolopoulos et al., Phys. Rev. A 78, 063420 (2008), 10.1103/PhysRevA.78.063420], which has proven successful at treating multielectron atomic systems efficiently and with high accuracy [L. R. Moore et al., J. Mod. Opt. 58, 1132 (2011), 10.1080/09500340.2011.559315]. The present study on H2+ is performed with a similar objective: developing an ab initio method for solving the time-dependent Schrödinger equation for multielectron diatomic molecules exposed to an external time-dependent potential field. The theoretical formulation is developed in detail for the molecular hydrogen ion, where all the multielectron and internuclear complications are absent. As in the atomic case, the configuration space of the electron's coordinates is separated artificially into two regions: the inner (I) and outer (II) regions. In region I the time-dependent wave function is expanded on the eigenstate basis corresponding to the molecule's Hamiltonian augmented by Bloch operators, while in region II a grid representation is used. We demonstrate the independence of our results from the introduced artificial boundary surface by calculating observables that are directly accessible experimentally and also by showing that gauge-dependent quantities are invariant with respect to the region I box size. We also compare our results with other theoretical works and highlight cases where basis-set approaches are currently very computationally expensive or intractable in terms of computational resources.

  5. A Sensitive Microplate Assay for Lipase Activity Measurement Using Olive Oil Emulsion Substrate: Modification of the Copper Soap Colorimetric Method.

    PubMed

    Mustafa, Ahmad; Karmali, Amin; Abdelmoez, Wael

    2016-01-01

    The present work involves a sensitive, high-throughput, microtiter-plate-based colorimetric assay for estimating lipase activity using cupric acetate pyridine reagent (CAPR). In the first approach, a two-level, three-factor factorial design methodology was used to evaluate the interactive effect of different parameters on the sensitivity of the assay method. The optimization study revealed that the optimum CAPR concentration was 7.5% w/v, the optimum solvent was heptane, and the optimum CAPR pH was 6. In the second approach, the optimized colorimetric microplate assay was used to measure lipase activity based on enzymatic hydrolysis of an olive oil emulsion substrate at 37°C and 150 rpm. In the case of Candida sp. lipase, the emulsion substrates were formulated using olive oil, Triton X-100 (10% v/v in pH 8) and sodium phosphate buffer of pH 8 in a ratio of 1:1:1, while in the case of immobilized Lipozyme RMIM they were formulated using olive oil, Triton X-100 (1% v/v in pH 8) and sodium phosphate buffer of pH 8 in a ratio of 2:1:1. Absorbance was measured at 655 nm. The assay remained stable, with the colored heptane-phase absorbance retaining more than 92.5% of its initial value after 24 h at 4°C compared to readings taken at time zero. In comparison with other lipase assay methods, besides its improved sensitivity, reproducibility, and lower limit of detection (LOD), the proposed method permits the analysis of 96 samples at a time in a 96-well microplate. Furthermore, it consumes small quantities of chemicals and requires few unit operations.

  6. Quasi-linear versus potential-based formulations of force-flux relations and the GENERIC for irreversible processes: comparisons and examples

    NASA Astrophysics Data System (ADS)

    Hütter, Markus; Svendsen, Bob

    2013-11-01

    An essential part of modeling out-of-equilibrium dynamics is the formulation of irreversible dynamics. Here, the major task consists in specifying the relations between thermodynamic forces and fluxes. In the literature, mainly two distinct approaches are used for the specification of force-flux relations. On the one hand, quasi-linear relations are employed, which are based on the physics of transport processes and fluctuation-dissipation theorems (de Groot and Mazur in Non-equilibrium thermodynamics, North Holland, Amsterdam, 1962; Lifshitz and Pitaevskii in Physical kinetics, Volume 10, Landau and Lifshitz series on theoretical physics, Pergamon Press, Oxford, 1981). On the other hand, force-flux relations are also often represented in potential form with the help of a dissipation potential (Šilhavý in The mechanics and thermodynamics of continuous media, Springer, Berlin, 1997). We address the question of how these two approaches are related. The main result of this presentation states that the class of models formulated by quasi-linear relations is larger than what can be described in a potential-based formulation. While the relation between the two methods is shown in general terms, it is also demonstrated with the help of three examples. The finding that quasi-linear force-flux relations are more general than dissipation-potential-based ones also has ramifications for the general equation for non-equilibrium reversible-irreversible coupling (GENERIC; e.g., Grmela and Öttinger in Phys Rev E 56:6620-6632, 6633-6655, 1997; Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005). This framework has been formulated and used in two different forms, namely a quasi-linear (Öttinger and Grmela in Phys Rev E 56:6633-6655, 1997; Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005) and a dissipation-potential-based (Grmela in Adv Chem Eng 39:75-129, 2010; Grmela in J Non-Newton Fluid Mech 165:980-986, 2010; Mielke in Continuum Mech Therm 23:233-256, 2011) form, respectively, relating the irreversible evolution to the entropy gradient. It is found that, in the case of the GENERIC as well, the quasi-linear representation encompasses a wider class of phenomena than the dissipation-potential-based formulation. Furthermore, it is found that a potential exists for the irreversible part of the GENERIC if and only if one exists for the underlying force-flux relations.
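
    For readers new to the framework, the GENERIC evolution equation referenced throughout has the standard two-generator form (notation follows common usage; details vary across the cited references):

        % Standard two-generator GENERIC form; notation varies by reference.
        \begin{equation}
          \frac{dx}{dt} \;=\; L(x)\,\frac{\delta E}{\delta x}
                        \;+\; M(x)\,\frac{\delta S}{\delta x},
          \qquad
          L\,\frac{\delta S}{\delta x} = 0,
          \qquad
          M\,\frac{\delta E}{\delta x} = 0,
        \end{equation}

    where E and S are the total energy and entropy, L is antisymmetric (reversible dynamics), and M is symmetric and positive semidefinite (irreversible dynamics). The dissipation-potential form discussed in the abstract replaces the term M times delta S / delta x by the derivative of a convex dissipation potential evaluated at delta S / delta x.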

  7. Construction of large signaling pathways using an adaptive perturbation approach with phosphoproteomic data.

    PubMed

    Melas, Ioannis N; Mitsos, Alexander; Messinis, Dimitris E; Weiss, Thomas S; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G

    2012-04-01

    Construction of large and cell-specific signaling pathways is essential to understand information processing under normal and pathological conditions. On this front, gene-based approaches offer the advantage of large pathway exploration whereas phosphoproteomic approaches offer a more reliable view of pathway activities but are applicable to small pathway sizes. In this paper, we demonstrate an experimentally adaptive approach to construct large signaling pathways from phosphoproteomic data within a 3-day time frame. Our approach--taking advantage of the fast turnaround time of the xMAP technology--is carried out in four steps: (i) screen optimal pathway inducers, (ii) select the responsive ones, (iii) combine them in a combinatorial fashion to construct a phosphoproteomic dataset, and (iv) optimize a reduced generic pathway via an Integer Linear Programming formulation. As a case study, we uncover novel players and their corresponding pathways in primary human hepatocytes by interrogating the signal transduction downstream of 81 receptors of interest and constructing a detailed model for the responsive part of the network comprising 177 species (of which 14 are measured) and 365 interactions.
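
    The abstract does not reproduce the ILP itself; purely as an illustration of step (iv), the sketch below uses the PuLP library to pick a smallest set of network edges that explains which measured nodes responded to a stimulus. The toy network, node names and constraints are invented here and are far simpler than the published formulation.

      # Hedged sketch of step (iv): choose a minimal subnetwork that explains
      # observed responses. Network and observations are invented for illustration.
      import pulp

      edges = [("R1", "A"), ("A", "B"), ("R2", "B"), ("B", "C")]
      stimulated = {"R1"}              # hypothetically perturbed receptor
      observed_active = {"B", "C"}     # hypothetically responding readouts
      nodes = {u for e in edges for u in e}

      prob = pulp.LpProblem("pathway_selection", pulp.LpMinimize)
      use = {e: pulp.LpVariable(f"use_{e[0]}_{e[1]}", cat="Binary") for e in edges}
      act = {n: pulp.LpVariable(f"act_{n}", cat="Binary") for n in nodes}

      prob += pulp.lpSum(use.values())          # objective: smallest network

      for n in nodes:
          incoming = [e for e in edges if e[1] == n]
          if n in stimulated:
              prob += act[n] == 1               # stimulated receptors are active
          elif incoming:
              # a node is active only if some selected incoming edge feeds it
              prob += act[n] <= pulp.lpSum(use[e] for e in incoming)
          else:
              prob += act[n] == 0               # unstimulated sources stay inactive
      for n in observed_active:
          prob += act[n] == 1                   # measured responses must be explained
      for (u, v) in edges:
          prob += use[(u, v)] <= act[u]         # an edge carries signal only from an active source

      prob.solve(pulp.PULP_CBC_CMD(msg=False))  # uses the CBC solver bundled with PuLP
      print([e for e in edges if use[e].value() == 1])   # expect R1 -> A -> B -> C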

  8. Changing Formulations of the Man-Environment Relationship in Anglo-American Geography

    ERIC Educational Resources Information Center

    Jeans, D. N.

    1974-01-01

    The following six formulations of the Man-Environment relationship have held successive favor in Geography since the 1900's: Economic Determinism, Possibilism, Cultural Relativism, the Landscape School, Perception of Environment, and Ecological Approach. (JH)

  9. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  10. A finite element-boundary integral formulation for scattering by three-dimensional cavity-backed apertures

    NASA Technical Reports Server (NTRS)

    Jin, Jian-Ming; Volakis, John L.

    1990-01-01

    A numerical technique is proposed for the electromagnetic characterization of scattering by a three-dimensional cavity-backed aperture in an infinite ground plane. The technique combines the finite element and boundary integral methods to formulate a system of equations for the solution of the aperture fields and those inside the cavity. Specifically, the finite element method is employed to formulate the fields in the cavity region, and the boundary integral approach is used in conjunction with the equivalence principle to represent the fields above the ground plane. Unlike traditional approaches, the proposed technique does not require knowledge of the cavity's Green's function and is therefore applicable to arbitrarily shaped depressions and material fillings. Furthermore, the proposed formulation leads to a system matrix that is partly full and partly sparse, as well as symmetric and banded, and can be solved efficiently using special algorithms.

  11. Quantum electron-vibrational dynamics at finite temperature: Thermo field dynamics approach

    NASA Astrophysics Data System (ADS)

    Borrelli, Raffaele; Gelin, Maxim F.

    2016-12-01

    Quantum electron-vibrational dynamics in molecular systems at finite temperature is described using an approach based on the thermo field dynamics theory. This formulation treats temperature effects in the Hilbert space without introducing the Liouville space. A comparison with the theoretically equivalent density matrix formulation shows the key numerical advantages of the present approach. The solution of thermo field dynamics equations with a novel technique for the propagation of tensor trains (matrix product states) is discussed. Numerical applications to model spin-boson systems show that the present approach is a promising tool for the description of quantum dynamics of complex molecular systems at finite temperature.

  12. On the Miller-Tucker-Zemlin Based Formulations for the Distance Constrained Vehicle Routing Problems

    NASA Astrophysics Data System (ADS)

    Kara, Imdat

    2010-11-01

    The Vehicle Routing Problem (VRP) is an extension of the well-known Traveling Salesman Problem (TSP) and has many practical applications in the fields of distribution and logistics. When the VRP includes distance-based constraints, it is called the Distance Constrained Vehicle Routing Problem (DVRP); however, the literature addressing the DVRP is scarce. In this paper, existing two-indexed integer programming formulations having Miller-Tucker-Zemlin based subtour elimination constraints are reviewed. The existing formulations are simplified and the resulting formulation is presented as formulation F1. It is shown that the distance-bounding constraints of formulation F1 may not generate the distance traveled up to the related node. To remedy this, we redefine the auxiliary variables of the formulation and propose a second formulation, F2, with new and easy-to-use distance-bounding constraints. The adaptation of the second formulation to cases with additional restrictions, such as a minimal distance traveled by each vehicle, or other objectives, such as minimizing the longest distance traveled, is discussed.
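
    Schematically, an MTZ-style distance-bounding constraint of the kind discussed here (a generic textbook form, not the paper's exact F1 or F2) attaches to each node an auxiliary variable u_i, the distance accumulated on arrival at node i, and couples these variables across each arc (i, j):

      u_j \ge u_i + d_{ij} - M\,(1 - x_{ij}), \qquad u_i \le D_{\max},

    where x_{ij} is the binary variable selecting arc (i, j), d_{ij} its length, D_max the route-length limit, and M a sufficiently large constant. When x_{ij} = 1 the big-M term vanishes and u_j must carry the accumulated distance through node j; this accumulation property is what the paper shows may fail in F1 and is restored in F2.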

  13. Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.

    PubMed

    Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders

    2013-03-12

    In this paper, a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two-step process in which, initially, polymer parameters are obtained from a previously published pure-polymer dissolution model; the results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of an HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly owing to the sparsity of usable pure-polymer dissolution data. In addition to the case study, a sensitivity analysis of the model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Case study for a vaccine against leishmaniasis.

    PubMed

    Alvar, Jorge; Croft, Simon L; Kaye, Paul; Khamesipour, Ali; Sundar, Shyam; Reed, Steven G

    2013-04-18

    Leishmaniasis in many ways offers a unique vaccine case study. One reason is that leishmaniasis is a disease complex caused by several different, highly related species of parasite, raising the possibility of developing a single vaccine to protect against multiple diseases. Another is the demonstration that a leishmaniasis vaccine may be used therapeutically as well as prophylactically. Although there is no registered human leishmaniasis vaccine today, immunization approaches using live or killed organisms, as well as defined vaccine candidates, have demonstrated at least some degree of efficacy in humans to prevent and to treat some forms of leishmaniasis, and there is a vigorous pipeline of candidates in development. Current approaches use individual or combined antigens of the parasite, or of the salivary gland extract of the parasites' insect vector, administered with or without formulation in an adjuvant. Animal data obtained with several vaccine candidates are promising, and some have been or will be entered into clinical testing in the near future. There is sufficient scientific and epidemiological justification to continue to invest in the development of vaccines against leishmaniasis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Second-order (2+1)-dimensional anisotropic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bazow, Dennis; Heinz, Ulrich; Strickland, Michael

    2014-11-01

    We present a complete formulation of second-order (2+1)-dimensional anisotropic hydrodynamics. The resulting framework generalizes leading-order anisotropic hydrodynamics by allowing for deviations of the one-particle distribution function from the spheroidal form assumed at leading order. We derive complete second-order equations of motion for the additional terms in the macroscopic currents generated by these deviations from their kinetic definition using a Grad-Israel-Stewart 14-moment ansatz. The result is a set of coupled partial differential equations for the momentum-space anisotropy parameter, effective temperature, the transverse components of the fluid four-velocity, and the viscous tensor components generated by deviations of the distribution from spheroidal form. We then perform a quantitative test of our approach by applying it to the case of one-dimensional boost-invariant expansion in the relaxation time approximation (RTA), in which case it is possible to numerically solve the Boltzmann equation exactly. We demonstrate that the second-order anisotropic hydrodynamics approach provides an excellent approximation to the exact (0+1)-dimensional RTA solution for both small and large values of the shear viscosity.

  16. Idiographic formulations, symbols, narratives, context and meaning.

    PubMed

    Phillips, James

    2005-01-01

    To locate the place of idiographic, narrative formulations in a psychiatric nosology and to address the problems stemming from the absence of such formulations in ICD-10 and DSM-IV, the author begins with a review of the stated goals of DSM-IV: that it should serve clinical, research, educational and information-management purposes. He argues that there is a conflict between the clinical and research goals of both manuals and that, with their emphasis on categorical diagnoses, criteria sets and statistical reliability, they serve the purposes of the biomedically oriented researcher better than those of the clinician. The latter is focused on the individual patient and tends in his diagnostic assessment toward a narrative fleshing out of the particulars of the patient's life and personality. Clinicians do not work with tight criteria sets but rather with a prototypal or ideal-type approach, and they emphasize individual histories, psychodynamic formulations and other kinds of idiographic accounts. If a psychiatric nosology is to serve as a clinically useful instrument, it will have to allow for such formulations. The author then offers a description and definition of idiographic, narrative formulations, along with remarks on the conceptual background to this approach. He concludes by highlighting the work of the workgroup of the World Psychiatric Association in developing a section of their International Guidelines for Diagnostic Assessment entitled 'Idiographic (personalised) Diagnostic Formulation'. Copyright 2005 S. Karger AG, Basel.

  17. Solid effervescent formulations as new approach for topical minoxidil delivery.

    PubMed

    Pereira, Maíra N; Schulte, Heidi L; Duarte, Natane; Lima, Eliana M; Sá-Barreto, Livia L; Gratieri, Tais; Gelfuso, Guilherme M; Cunha-Filho, Marcilio S S

    2017-01-01

    Currently marketed minoxidil formulations present inconveniences that range from the greasy, hard aspect they leave on the hair to more serious adverse reactions such as scalp dryness and irritation. In this paper, we propose a novel approach for minoxidil sulphate (MXS) delivery based on a solid effervescent formulation. The aim was to investigate whether the particle mechanical movement triggered by effervescence would lead to higher follicular accumulation. Preformulation studies using thermal, spectroscopic and morphological analyses demonstrated the compatibility between the effervescent salts and the drug. The effervescent formulation demonstrated a 2.7-fold increase in MXS accumulation in hair follicle casts compared to the MXS solution (22.0 ± 9.7 μg/cm² versus 8.3 ± 4.0 μg/cm²) and a significant drug increase (around 4-fold) in the remaining skin (97.1 ± 29.2 μg/cm²) compared to the drug solution (23.5 ± 6.1 μg/cm²). The effervescent formulations demonstrated a prominent increase in drug permeation that was highly dependent on the concentration of the effervescent mixture in the formulation, confirming the hypothesis that the effervescent reaction favors drug penetration. Clinically, therapy effectiveness could be improved by increasing the administration interval and, hence, patient compliance. More studies are needed to investigate the follicular targeting potential and safety of the new formulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Cultural considerations in the diagnosis and treatment of schizophrenia: A case example from India.

    PubMed

    Dhanasekaran, Saranya; Loganathan, Santosh; Dahale, Ajit; Varghese, Mathew

    2017-06-01

    Culture plays an important role in the presentation, help-seeking, treatment and outcomes of psychiatric illnesses such as schizophrenia. We report a case of paranoid schizophrenia in a 35-year-old woman from South India whose clinical presentation was influenced by various sociocultural factors. These cultural constructs were taken into consideration to formulate an acceptable and effective management plan. A detailed case description using a cultural formulation is presented to highlight the etic and emic perspectives, and the challenges in treatment and management are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Hydrophobic polymers for orodispersible films: a quality by design approach.

    PubMed

    Borges, Ana Filipa; Silva, Branca M A; Silva, Cláudia; Coelho, Jorge F J; Simões, Sérgio

    2016-10-01

    To develop orodispersible films (ODF) based on hydrophobic polymers with higher stability under ordinary environmental humidity conditions without compromising their fast disintegration time. A quality-by-design approach was applied to screen three different formulations, each based on a different hydrophobic polymer: polyvinyl acetate, a methacrylate-based copolymer and shellac. The screening formulations were characterized regarding their mechanical properties, residual water content, disintegration time and appearance, in order to find a suitable ODF formulation according to the established critical quality attributes. The selected critical process parameters for the selection of appropriate ODF formulations were the percentages of the different excipients and the plasticizer type. Three hydrophobic-based matrices with fast disintegration were developed. These were generically composed of a hydrophobic polymer, a stabilizer, a disintegrant and a plasticizer. It was verified that the common components of the three formulations behaved differently depending on the system/chemical environment in which they were included. It was shown that it is possible to develop oral films based on hydrophobic polymers with fast disintegration time, good texture and appearance, breaking a paradigm of the ODF research field.

  20. Autonomous Underwater Vehicle Navigation

    DTIC Science & Technology

    2008-02-01

    We consider the navigation of an autonomous underwater vehicle with six degrees of freedom and approach this problem using an error-state formulation of the Kalman filter. The integrated solution is typically reset at each position fix, but is this ad hoc method optimal? Here, we present an approach using an error-state formulation of the Kalman filter to provide an integrated alternative.
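
    A minimal one-dimensional sketch of the error-state idea summarized above: dead reckoning propagates a nominal position, the filter tracks the *error* of that position, and each position fix corrects the error, which is then folded back into the nominal state. The model and all numbers are illustrative, not taken from the report.

      # Hedged 1D error-state Kalman filter sketch.
      import random

      q, r = 0.01, 0.25       # assumed process and fix-measurement noise variances
      x_nom = 0.0             # dead-reckoned (nominal) position
      dx, P = 0.0, 1.0        # error-state estimate and its variance
      true_x, v = 0.0, 1.0    # simulated truth and commanded velocity

      for k in range(100):
          true_x += v + random.gauss(0.0, q ** 0.5)   # true motion with disturbance
          x_nom += v                                  # dead reckoning ignores disturbance
          P += q                                      # error covariance grows between fixes

          if k % 10 == 9:                             # a position fix every 10 steps
              z = true_x + random.gauss(0.0, r ** 0.5)
              innov = z - (x_nom + dx)                # innovation on the corrected state
              K = P / (P + r)                         # scalar Kalman gain
              dx += K * innov
              P *= (1.0 - K)
              x_nom += dx                             # fold the error back into the nominal state
              dx = 0.0                                # and reset the error state

      print(f"corrected: {x_nom:.2f}  truth: {true_x:.2f}")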

  1. Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition

    DTIC Science & Technology

    2006-04-05

    In this study we evaluate the effectiveness of a multiclass formulation developed by Crammer and Singer [9]. A significantly more complex model can be learned by directly applying the Crammer-Singer multiclass formulation to the outputs of the binary classifiers; we refer to this as the Crammer-Singer (CS) model and compare it against the scaling approach for combining binary classifiers.
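
    scikit-learn happens to expose the Crammer-Singer formulation directly, which makes a quick comparison against a one-vs-rest combination of binary classifiers easy to reproduce; the toy data below is illustrative and unrelated to the report's protein-fold experiments.

      # Compare one-vs-rest binary SVMs with the joint Crammer-Singer
      # multiclass formulation on toy data (illustrative only).
      from sklearn.datasets import make_classification
      from sklearn.svm import LinearSVC

      X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                                 n_classes=3, random_state=0)

      ovr = LinearSVC(multi_class="ovr").fit(X, y)            # one-vs-rest baseline
      cs = LinearSVC(multi_class="crammer_singer").fit(X, y)  # joint multiclass model

      print("one-vs-rest training accuracy:   ", ovr.score(X, y))
      print("Crammer-Singer training accuracy:", cs.score(X, y))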

  2. Cyclodextrin Inclusion Complex to Improve Physicochemical Properties of Herbicide Bentazon: Exploring Better Formulations

    PubMed Central

    Yáñez, Claudia; Cañete-Rosales, Paulina; Castillo, Juan Pablo; Catalán, Nicole; Undabeytia, Tomás; Morillo, Esmeralda

    2012-01-01

    The knowledge of host-guest complexes using cyclodextrins (CDs) has prompted an increase in the development of new formulations. The capacity of these organic host structures to include guests within their hydrophobic cavities improves the physicochemical properties of the guest. In the case of pesticides, several inclusion complexes with cyclodextrins have been reported. However, in order to rationally explore new pesticide formulations, it is essential to know the effect of cyclodextrins on the properties of the guest molecules. In this study, the inclusion complexes of bentazon (Btz) with native βCD and two derivatives, 2-hydroxypropyl-β-cyclodextrin (HPCD) and sulfobutylether-β-cyclodextrin (SBECD), were prepared by two methods, kneading and freeze-drying, and characterized with different analytical techniques, including Fourier transform infrared spectroscopy (FT-IR), differential thermal analysis (DTA), X-ray diffractometry (XRD) and differential pulse voltammetry (DPV). All these approaches indicate that Btz forms 1:1 inclusion complexes with CDs in solution and in the solid state, although some of them are obtained in mixtures with free Btz. The association constant of the Btz/HPCD complex calculated by DPV was 244 ± 19 M⁻¹, an intermediate value compared with those obtained with βCD and SBECD. The use of CDs significantly increases Btz photostability and, depending on the CD, decreases the surface tension. The results indicate that bentazon forms inclusion complexes with CDs showing improved physicochemical properties compared to free bentazon, suggesting that CDs may serve as excipients in herbicide formulations. PMID:22952577

  3. Horsetail matching: a flexible approach to optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
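
    The heart of the objective described above is a distance between a design's empirical CDF and a target CDF; a bare-bones version, without the kernel smoothing the article adds to make the objective differentiable, might look like the sketch below, with the samples and target invented for illustration.

      # Bare-bones horsetail-matching style metric: squared difference between
      # an empirical CDF of sampled outputs and a target CDF, integrated over
      # the sampled quantiles.
      import numpy as np

      def horsetail_metric(samples, target_cdf):
          q = np.sort(samples)
          f_emp = (np.arange(len(q)) + 0.5) / len(q)    # empirical CDF heights
          gap2 = (f_emp - target_cdf(q)) ** 2
          # trapezoidal quadrature of gap^2 over the quantile axis
          return float(np.sum(0.5 * (gap2[1:] + gap2[:-1]) * np.diff(q)))

      rng = np.random.default_rng(0)
      outputs = rng.normal(1.0, 0.3, size=2000)         # a hypothetical design's outputs
      target = lambda q: np.clip(q / 2.0, 0.0, 1.0)     # an illustrative target CDF
      print(f"horsetail metric: {horsetail_metric(outputs, target):.4f}")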

  4. Oral matrix tablet formulations for concomitant controlled release of anti-tubercular drugs: design and in vitro evaluations.

    PubMed

    Hiremath, Praveen S; Saha, Ranendra N

    2008-10-01

    The aim of the present investigation was to develop controlled release (C.R.) matrix tablet formulations of a rifampicin and isoniazid combination, to study the design parameters and to evaluate the in vitro release characteristics. In the present study, a series of formulations was developed with different release rates and durations using the hydrophilic polymers hydroxypropyl methylcellulose (HPMC) and hydroxypropyl cellulose (HPC). The duration of rifampicin and isoniazid release could be tailored by varying the polymer type, polymer ratio and processing techniques. Further, Eudragit L100-55 was incorporated in the matrix tablets to compensate for the pH-dependent release of rifampicin. Rifampicin was found to follow a linear release profile with time from the HPMC formulations. In the case of formulations with HPC, there was an initial higher release in simulated gastric fluid (SGF) followed by zero-order release profiles in simulated intestinal fluid (SIFsp) for rifampicin. The release of isoniazid was found to be predominantly by a diffusion mechanism in the case of the HPMC formulations, while with the HPC formulations release was due to a combination of diffusion and erosion. The initial release of rifampicin from HPC was sufficiently high to rule out the need for a separate loading dose, and the initial release of isoniazid was sufficiently high in all formulations. Thus, with the use of suitable polymers or polymer combinations and with proper optimization of the processing techniques, it was possible to design C.R. formulations of the rifampicin and isoniazid combination that provide sufficient initial release and release extension up to 24 h for both drugs despite the wide variations in their physicochemical properties.

  5. On the numerical modeling of sliding beams: A comparison of different approaches

    NASA Astrophysics Data System (ADS)

    Steinbrecher, Ivo; Humer, Alexander; Vu-Quoc, Loc

    2017-11-01

    The transient analysis of sliding beams represents a challenging problem of structural mechanics. Typically, the sliding motion superimposed by large flexible deformation requires numerical methods as, e.g., finite elements, to obtain approximate solutions. By means of the classical sliding spaghetti problem, the present paper provides a guideline to the numerical modeling with conventional finite element codes. For this purpose, two approaches, one using solid elements and one using beam elements, respectively, are employed in the analysis, and the characteristics of each approach are addressed. The contact formulation realizing the interaction of the beam with its support demands particular attention in the context of sliding structures. Additionally, the paper employs the sliding-beam formulation as a third approach, which avoids the numerical difficulties caused by the large sliding motion through a suitable coordinate transformation. The present paper briefly outlines the theoretical fundamentals of the respective approaches for the modeling of sliding structures and gives a detailed comparison by means of the sliding spaghetti serving as a representative example. The specific advantages and limitations of the different approaches with regard to accuracy and computational efficiency are discussed in detail. Through the comparison, the sliding-beam formulation, which proves as an effective approach for the modeling, can be validated for the general problem of a sliding structure subjected to large deformation.

  6. Validity and reliability of an instrument for assessing case analyses in bioengineering ethics education.

    PubMed

    Goldin, Ilya M; Pinkus, Rosa Lynn; Ashley, Kevin

    2015-06-01

    Assessment in ethics education faces a challenge. From the perspectives of teachers, students, and third-party evaluators like the Accreditation Board for Engineering and Technology and the National Institutes of Health, assessment of student performance is essential. Because of the complexity of ethical case analysis, however, it is difficult to formulate assessment criteria, and to recognize when students fulfill them. Improvement in students' moral reasoning skills can serve as the focus of assessment. In previous work, Rosa Lynn Pinkus and Claire Gloeckner developed a novel instrument for assessing moral reasoning skills in bioengineering ethics. In this paper, we compare that approach to existing assessment techniques, and evaluate its validity and reliability. We find that it is sensitive to knowledge gain and that independent coders agree on how to apply it.

  7. Off-shell single-top production at NLO matched to parton showers

    DOE PAGES

    Frederix, R.; Frixione, S.; Papanastasiou, A. S.; ...

    2016-06-06

    We study the hadroproduction of a Wb pair in association with a light jet, focusing on the dominant t-channel contribution and including exactly at the matrix-element level all non-resonant and off-shell effects induced by the finite top-quark width. Our simulations are accurate to the next-to-leading order in QCD, and are matched to the Herwig6 and Pythia8 parton showers through the MC@NLO method. We present phenomenological results relevant to the 8 TeV LHC, and carry out a thorough comparison to the case of on-shell t-channel single-top production. Furthermore, we formulate our approach so that it can be applied to the general case of matrix elements that feature coloured intermediate resonances and are matched to parton showers.

  8. A first-principles model for orificed hollow cathode operation

    NASA Technical Reports Server (NTRS)

    Salhi, A.; Turchi, P. J.

    1992-01-01

    A theoretical model describing an orificed hollow cathode discharge is presented. The approach adopted is based on a purely analytical formulation founded on first principles. The present model predicts the emission surface temperature and plasma properties such as electron temperature, number densities and plasma potential. In general, good agreement between theory and experiment is obtained. Comparison of the results with the available related experimental data shows a maximum difference of 10 percent in emission surface temperature, 20 percent in electron temperature and 35 percent in plasma potential. In the case of the variation of the electron number density with the discharge current, a maximum discrepancy of 36 percent is obtained; however, in the case of the variation with the cathode internal pressure, the predicted electron number density is higher than the experimental data by a maximum factor of 2.

  9. Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y., E-mail: yu.feng@unsw.edu.au; Lin, S.; Huang, S.

    Although Tauc plot extrapolation has been widely adopted for extracting the bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals are formulated based on a purely theoretical approach. The result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from the one half used in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic bandgap, while the unjustified practice of extrapolating any elevated bandgap caused by quantum confinement is shown to be incorrect.
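
    For context, the conventional bulk direct-gap Tauc analysis referenced above plots (αhν)² against hν and extrapolates the linear region to the energy axis; the record's point is that for nanocrystals the appropriate power factor shifts between 1/2 and 1 with size. A generic sketch of the bulk procedure on synthetic data:

      # Generic Tauc extrapolation sketch on synthetic absorption data:
      # fit the linear region of (alpha*h*nu)^2 vs h*nu and extrapolate to zero.
      import numpy as np

      Eg_true = 2.0                                    # synthetic band gap (eV)
      hv = np.linspace(1.5, 3.0, 200)                  # photon energies (eV)
      alpha = np.where(hv > Eg_true, np.sqrt(np.maximum(hv - Eg_true, 0)) / hv, 0.0)

      y = (alpha * hv) ** 2                            # Tauc ordinate for a direct gap
      mask = (hv > 2.2) & (hv < 2.8)                   # pick a clearly linear region
      slope, intercept = np.polyfit(hv[mask], y[mask], 1)

      Eg_est = -intercept / slope                      # x-intercept of the linear fit
      print(f"extrapolated gap: {Eg_est:.3f} eV (true {Eg_true} eV)")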

  10. Toward Improved Predictions of Slender Airframe Aerodynamics Using the F-16XL Aircraft

    NASA Technical Reports Server (NTRS)

    Luckring, James M.; Rizzi, Arthur; Davis, M. Bruce

    2016-01-01

    A coordinated project has been underway to improve computational fluid dynamics predictions of slender airframe aerodynamics. The work is focused on two flow conditions and leverages a unique flight data set obtained with an F-16XL aircraft. These conditions, a low-speed high angle-of-attack case and a transonic low angle-of-attack case, were selected from a prior prediction campaign wherein the computational fluid dynamics failed to provide acceptable results. In this paper, the background, objectives, and approach to the current project are presented. The work embodies predictions from multiple numerical formulations that are contributed from multiple organizations, and the context of this campaign to other multicode, multi-organizational efforts is included. The relevance of this body of work toward future supersonic commercial transport concepts is also briefly addressed.

  11. Lasing eigenvalue problems: the electromagnetic modelling of microlasers

    NASA Astrophysics Data System (ADS)

    Benson, Trevor; Nosich, Alexander; Smotrova, Elena; Balaban, Mikhail; Sewell, Phillip

    2007-02-01

    Comprehensive microcavity laser models should account for several physical mechanisms, e.g. carrier transport, heating and optical confinement, coupled by non-linear effects. Nevertheless, considerable useful information can still be obtained if all non-electromagnetic effects are neglected, often within an additional effective-index reduction to an equivalent 2D problem, and the optical modes are viewed as solutions of Maxwell's equations. Integral equation (IE) formulations have many advantages over numerical techniques such as FDTD for the study of such microcavity laser problems. The most notable advantages of an IE approach are computational efficiency, the correct description of cavity boundaries without stair-step errors, and the direct solution of an eigenvalue problem rather than the spectral analysis of a transient signal. Boundary IE (BIE) formulations are more economical than volume IE (VIE) ones, because of their lower dimensionality, but they are only applicable to the constant cavity refractive index case. The Muller BIE, being free of 'defect' frequencies and having smooth or integrable kernels, provides a reliable tool for the modal analysis of microcavities. Whilst such an approach can readily identify complex-valued natural frequencies and Q-factors, the lasing condition is not addressed directly. We have thus suggested using a Muller BIE approach to solve a lasing eigenvalue problem (LEP), i.e. a linear eigenvalue solution in the form of two real-valued numbers (lasing wavelength and threshold information) when macroscopic gain is introduced into the cavity material within an active region. Such an approach yields clear insight into the lasing thresholds of individual cavities with uniform and non-uniform gain, cavities coupled as photonic molecules and cavities equipped with one or more quantum dots.

  12. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    PubMed

    Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.

  13. Numerical Study of Cattaneo-Christov Heat Flux Model for Viscoelastic Flow Due to an Exponentially Stretching Surface.

    PubMed

    Ahmad Khan, Junaid; Mustafa, M; Hayat, T; Alsaedi, A

    2015-01-01

    This work deals with the flow and heat transfer in an upper-convected Maxwell fluid above an exponentially stretching surface. The Cattaneo-Christov heat flux model is employed in the formulation of the energy equation; this model can predict the effects of thermal relaxation time on the boundary layer. A similarity approach is utilized to normalize the governing boundary layer equations. Local similarity solutions are achieved by a shooting approach together with a fourth-fifth-order Runge-Kutta integration technique and Newton's method. Our computations reveal that the fluid temperature has an inverse relationship with the thermal relaxation time, and that the fluid velocity is a decreasing function of the fluid relaxation time. A comparison of Fourier's law and the Cattaneo-Christov law is also presented. Results such as these are not yet available in the literature, even for the Newtonian-fluid case.
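
    The "shooting approach together with fourth-fifth-order Runge-Kutta integration and Newton's method" mentioned above is a standard recipe; the sketch below applies it to the classical Blasius boundary-layer equation f''' + (1/2) f f'' = 0 (a simpler stand-in for the Maxwell-fluid similarity equations, chosen because its solution is well known), using SciPy's RK45 integrator and a bracketing root finder in place of hand-coded Newton iteration.

      # Shooting-method sketch for a boundary-layer ODE (Blasius equation),
      # standing in for the more complex Maxwell-fluid similarity equations.
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def rhs(eta, y):                    # y = [f, f', f'']
          f, fp, fpp = y
          return [fp, fpp, -0.5 * f * fpp]

      def residual(s, eta_max=10.0):
          """Integrate from the wall with guessed f''(0)=s; return f'(eta_max)-1."""
          sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s],
                          method="RK45", rtol=1e-8, atol=1e-10)
          return sol.y[1, -1] - 1.0       # far-field condition: f' -> 1

      s_star = brentq(residual, 0.1, 1.0)  # root of the shooting residual
      print(f"f''(0) = {s_star:.5f}  (classical value is about 0.33206)")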

  14. A proposed solution to integrating cognitive-affective neuroscience and neuropsychiatry in psychiatry residency training: The time is now.

    PubMed

    Torous, John; Stern, Adam P; Padmanabhan, Jaya L; Keshavan, Matcheri S; Perez, David L

    2015-10-01

    Despite increasing recognition of the importance of a strong neuroscience and neuropsychiatry education in the training of psychiatry residents, achieving this competency has proven challenging. In this perspective article, we selectively discuss the current state of these educational efforts and outline how using brain-symptom relationships from a systems-level neural circuit approach in clinical formulations may help residents value, understand, and apply cognitive-affective neuroscience based principles towards the care of psychiatric patients. To demonstrate the utility of this model, we present a case of major depressive disorder and discuss suspected abnormal neural circuits and therapeutic implications. A clinical neural systems-level, symptom-based approach to conceptualize mental illness can complement and expand residents' existing psychiatric knowledge. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Chiral magnetic effect in the presence of electroweak interactions as a quasiclassical phenomenon

    NASA Astrophysics Data System (ADS)

    Dvornikov, Maxim; Semikoz, Victor B.

    2018-03-01

    We elaborate a quasiclassical approach to obtain the modified chiral magnetic effect (CME) in the case where massless charged fermions interact with electromagnetic fields and with background matter through the electroweak forces. The derivation of the anomalous current along the external magnetic field involves the study of the energy density evolution of chiral particles in parallel electric and magnetic fields. We consider both the acceleration of particles by the external electric field and the contribution of the Adler anomaly. The condition for the validity of this method for deriving the CME is formulated. We obtain the expression for the electric current along the external magnetic field, which coincides with our previous results based on a purely quantum approach. Our results are compared with the findings of other authors.
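
    For orientation, the textbook CME current that such derivations recover (natural units, one fermion species of charge e, chiral chemical potential \mu_5) reads

      \mathbf{j} = \frac{e^{2}\mu_{5}}{2\pi^{2}}\,\mathbf{B},

    and the record's claim is that the quasiclassical energy-balance argument reproduces this current, with \mu_5 modified by the electroweak interaction with the background matter; the precise modified expression is given in the paper itself.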

  16. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  17. What's Cooler Than Being Cool? Ice-Sheet Models Using a Fluidity-Based FOSLS Approach to Nonlinear-Stokes Flow

    NASA Astrophysics Data System (ADS)

    Allen, Jeffery M.

    This research involves several First-Order System Least Squares (FOSLS) formulations of a nonlinear-Stokes flow model for ice sheets. In Glen's flow law, a commonly used constitutive equation for ice rheology, the viscosity becomes infinite as the velocity gradients approach zero; this typically occurs near the ice surface or where there is basal sliding. The computational difficulties associated with the infinite viscosity are often overcome by an arbitrary modification of Glen's law that bounds the maximum viscosity. The FOSLS formulations developed in this thesis are designed to overcome this difficulty. The first FOSLS formulation is simply the first-order representation of the standard nonlinear full-Stokes system; it is known as the viscosity formulation and suffers from the problem above. To overcome the problem of infinite viscosity, two new formulations exploit the fact that the deviatoric stress, the product of viscosity and strain rate, approaches zero as the viscosity goes to infinity. Using the deviatoric stress as the basis for a first-order system results in the basic fluidity system. Augmenting the basic fluidity system with a curl-type equation results in the augmented fluidity system, which is more amenable to the iterative solver, Algebraic MultiGrid (AMG). A Nested Iteration (NI) Newton-FOSLS-AMG approach is used to solve the nonlinear-Stokes problems. Several test problems from the ISMIP set of benchmarks are examined to test the effectiveness of the various formulations. These tests show that the viscosity-based method is more expensive and less accurate. The basic fluidity system shows optimal finite-element convergence; however, there is not yet an efficient iterative solver for this type of system, which is a topic of future research. Alternatively, AMG performs better on the augmented fluidity system when specific scaling is used. Unfortunately, this scaling results in reduced finite-element convergence.
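
    The singularity the thesis works around comes from the effective viscosity implied by Glen's law; in a common form (shown schematically here, with rate factor A, exponent n ≈ 3 and effective strain rate \dot{\varepsilon}_e),

      \eta = \tfrac{1}{2}\, A^{-1/n}\, \dot{\varepsilon}_e^{\,(1-n)/n},

    so that \eta → ∞ as \dot{\varepsilon}_e → 0, while the deviatoric stress \tau = 2\eta\dot{\varepsilon} → 0 in the same limit, which is the fact the fluidity formulations exploit.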

  18. Linear complementarity formulation for 3D frictional sliding problems

    USGS Publications Warehouse

    Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc

    2012-01-01

    Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and extend it to three-dimensional problems, including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical models tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
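
    Abstractly, problems of this kind reduce to the standard linear complementarity template: find vectors z and w such that

      w = Mz + q, \qquad w \ge 0, \qquad z \ge 0, \qquad w^{\mathsf{T}} z = 0,

    where the complementary pairs encode the either-or physics (a face is either open with zero normal traction or closed with zero opening; an element either sticks below the Coulomb limit or slips on it); the specific identification of M, q, z and w is problem-dependent and not spelled out in the abstract.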

  19. A new approach to the Schrödinger equation with rational potentials

    NASA Astrophysics Data System (ADS)

    Dong, Ming-de; Chu, Jue-Hui

    1984-04-01

    A new analytic theory is established for the Schrödinger equation with a rational potential, including a complete classification of the regular eigenfunctions into three different types, an exact method of obtaining wavefunctions, and an explicit formulation of the spectral equation (a 3 × 3 determinant). All representations are exhibited in a unifying way via function-theoretic methods and are therefore given in explicit form, in contrast to the prevailing discussions appealing to perturbation or variational methods or continued-fraction techniques. The irregular eigenfunctions at infinity can be obtained analogously and will be discussed separately as another solvable case for singular potentials.

  20. Dynamical instability of a charged gaseous cylinder

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Mumtaz, Saadia

    2017-10-01

    In this paper, we discuss the dynamical instability of a charged dissipative cylinder under radial oscillations. For this purpose, we follow the Eulerian and Lagrangian approaches to evaluate the linearized perturbed equation of motion. We formulate the perturbed pressure in terms of the adiabatic index by applying the conservation of baryon number. A variational principle is established to determine the characteristic frequencies of oscillation, which define stability criteria for a gaseous cylinder. We compute the ranges of radii as well as of the adiabatic index for both the charged and uncharged cases in the Newtonian and post-Newtonian limits. We conclude that dynamical instability occurs in the presence of charge if the gaseous cylinder contracts to the radius R*.

  1. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallelization give LISA superb computational efficiency compared with the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and the results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher-harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
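
    The penalty treatment of crack-face contact mentioned above has a simple core: interpenetration is permitted but resisted by a stiff spring, and tangential motion is limited by Coulomb stick-slip friction. A scalar sketch of that logic, with the stiffness and friction values as placeholders rather than the paper's calibrated choices:

      # Scalar sketch of penalty contact with Coulomb stick-slip friction,
      # the mechanism used to model crack-face interaction. Values are placeholders.
      def contact_forces(gap, slip_rate, k_pen=1.0e12, mu=0.3, c_tan=1.0e9):
          """Return (normal, tangential) contact forces at one crack-face node pair."""
          if gap >= 0.0:
              return 0.0, 0.0                  # faces open: no contact forces
          f_n = -k_pen * gap                   # penalty force resists interpenetration
          f_t_trial = -c_tan * slip_rate       # trial tangential (stick) force
          f_t_max = mu * f_n                   # Coulomb limit
          if abs(f_t_trial) <= f_t_max:
              return f_n, f_t_trial            # stick: below the friction cone
          return f_n, (-f_t_max if slip_rate > 0 else f_t_max)   # slip: on the cone

      print(contact_forces(gap=-1.0e-9, slip_rate=0.02))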

  2. Optimization of beam orientation in radiotherapy using planar geometry

    NASA Astrophysics Data System (ADS)

    Haas, O. C. L.; Burnham, K. J.; Mills, J. A.

    1998-08-01

    This paper proposes a new geometrical formulation of the coplanar beam orientation problem combined with a hybrid multiobjective genetic algorithm. The approach is demonstrated by optimizing the beam orientation in two dimensions, with the objectives being formulated using planar geometry. The traditional formulation of the objectives associated with the organs at risk has been modified to account for the use of complex dose delivery techniques such as beam intensity modulation. The new algorithm attempts to replicate the approach of a treatment planner whilst reducing the amount of computation required. Hybrid genetic search operators have been developed to improve the performance of the genetic algorithm by exploiting problem-specific features. The multiobjective genetic algorithm is formulated around the concept of Pareto optimality which enables the algorithm to search in parallel for different objectives. When the approach is applied without constraining the number of beams, the solution produces an indication of the minimum number of beams required. It is also possible to obtain non-dominated solutions for various numbers of beams, thereby giving the clinicians a choice in terms of the number of beams as well as in the orientation of these beams.
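
    The Pareto machinery referred to above reduces to a simple dominance test: a design dominates another if it is no worse in every objective and strictly better in at least one. A compact sketch (minimization assumed; the objective values are invented):

      # Pareto dominance and non-dominated filtering for a multiobjective GA.
      def dominates(a, b):
          """True if objective vector a Pareto-dominates b (all minimized)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def non_dominated(population):
          """Keep only objective vectors not dominated by any other member."""
          return [p for p in population
                  if not any(dominates(q, p) for q in population if q is not p)]

      # e.g. (target-dose deviation, organ-at-risk dose) pairs for candidate plans
      plans = [(0.10, 5.0), (0.08, 6.0), (0.12, 4.0), (0.15, 6.5)]
      print(non_dominated(plans))   # (0.15, 6.5) is dominated by (0.10, 5.0)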

  3. The development of a stable, coated pellet formulation of a water-sensitive drug, a case study: development of a stable core formulation.

    PubMed

    Fitzpatrick, Shaun; Taylor, Scott; Booth, Steven W; Newton, Michael J

    2006-01-01

    A development program has been carried out to provide a stable extrusion/spheronization pellet formulation for a highly water-soluble drug, sitagliptin, which undergoes a change in physical form on processing and is subject to hydrolytic decomposition. A conventional extrusion/spheronization formulation resulted in significant degradation of the drug. The inclusion of glyceryl monostearate in the formulation was found to reduce the water levels required to such an extent that there was no significant degradation of the drug during processing to form pellets. The use of a ram extruder to screen formulations with small quantities minimizes the amount of drug needed in the formulation-screening process, and the results from this method of extrusion were found to be translatable to the use of a screen extruder, which allowed scale-up of the process.

  4. Real-Time Wing-Vortex and Pressure Distribution Estimation on Wings Via Displacements and Strains in Unsteady and Transitional Flight Conditions

    DTIC Science & Technology

    2016-09-07

    A structural approach in co-simulation with fluid-dynamics solvers is used. An original variational formulation is developed for the inverse problem of estimating the wing's deformed shape and loads from measured displacements and strains, supported by the inverse-solution meshing. The same approach is used to map the structural and fluid interface kinematics and loads during the fluid-structure co-simulation. The inverse analysis is verified by reconstructing the deformed solution obtained with a corresponding direct formulation.

  5. Theoretical orientations in environmental planning: An inquiry into alternative approaches

    NASA Astrophysics Data System (ADS)

    Briassoulis, Helen

    1989-07-01

    In the process of devising courses of action to resolve problems arising at the society-environment interface, a variety of planning approaches are followed, whose adoption is influenced by—among other things—the characteristics of environmental problems, the nature of the decision-making context, and the intellectual traditions of the disciplines contributing to the study of these problems. This article provides a systematic analysis of six alternative environmental planning approaches—comprehensive/rational, incremental, adaptive, contingency, advocacy, and participatory/consensual. The relative influence of the abovementioned factors is examined, the occurrence of these approaches in real-world situations is noted, and their environmental soundness and political realism is evaluated. Because of the disparity between plan formulation and implementation and between theoretical form and empirical reality, a synthetic view of environmental planning approaches is taken and approaches in action are identified, which characterize the totality of the planning process from problem definition to plan implementation, as well as approaches in the becoming, which may be on the horizon of environmental planning of tomorrow. The suggested future research directions include case studies to verify and detail the presence of the approaches discussed, developing measures of success of a given approach in a given decision setting, and an intertemporal analysis of environmental planning approaches.

  6. A retrospective analysis of in vivo eye irritation, skin irritation and skin sensitisation studies with agrochemical formulations: Setting the scene for development of alternative strategies.

    PubMed

    Corvaro, M; Gehen, S; Andrews, K; Chatfield, R; Macleod, F; Mehta, J

    2017-10-01

    Analysis of the prevalence of health effects in large-scale databases is key to defining testing strategies within the context of Integrated Approaches to Testing and Assessment (IATA), and is relevant to driving policy changes in existing regulatory toxicology frameworks towards non-animal approaches. A retrospective analysis of existing results from in vivo skin irritation, eye irritation, and skin sensitisation studies on a database of 223 agrochemical formulations is herein published. For skin or eye effects, the high prevalence of mild to non-irritant formulations (i.e. per GHS, CLP or EPA classification) would generally suggest a bottom-up approach. The severity of erythema or corneal opacity, for skin or eye effects respectively, was the key driver for classification, consistent with the existing literature. The reciprocal predictivity of skin versus eye irritation and the good negative predictivity of the GHS additivity calculation approach (>85%) provided valuable non-testing evidence for the irritation endpoints. For dermal sensitisation, concordance of data from three different methods confirmed the high false-negative rate of the Buehler method in this product class. These results have been reviewed together with the existing literature on the use of in vitro alternatives for agrochemical formulations, to propose improvements to current regulatory strategies and to identify further research needs. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Release behavior and bioefficacy of imazethapyr formulations based on biopolymeric hydrogels.

    PubMed

    Kumar, Vikas; Singh, Anupama; Das, T K; Sarkar, Dhruba Jyoti; Singh, Shashi Bala; Dhaka, Rashmi; Kumar, Anil

    2017-06-03

    Controlled-release formulations of the herbicide imazethapyr have been developed employing a guar gum-g-cl-polyacrylate/bentonite clay hydrogel composite (GG-HG) and a guar gum-g-cl-PNIPAm nano hydrogel (GG-NHG) as carriers, to assess the suitability of biopolymeric hydrogels as controlled herbicide release devices. The kinetics of imazethapyr release from the developed formulations was studied in water and revealed that they behaved as slow-release formulations compared with the commercial formulation. The calculated diffusion exponent (n) values showed that Fickian diffusion was the predominant mechanism of imazethapyr release from the developed formulations. The time for release of half of the loaded imazethapyr (t1/2) ranged between 0.06 and 4.8 days in the case of GG-NHG and between 4.4 and 12.6 days for the GG-HG formulations. The weed control index (WCI) of the GG-HG and GG-NHG formulations was similar to that of the commercial formulation, and the herbicidal effect was observed for a relatively longer period. Guar gum-based biopolymeric hydrogels in both the macro and nano particle size ranges can serve as potential carriers in developing slow-release herbicide formulations.
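
    The diffusion exponent n quoted above is commonly obtained by fitting the release data to the Korsmeyer-Peppas power law, Mt/M∞ = k t^n, usually over the early portion of the release curve; a minimal fit with invented data points:

      # Fit the Korsmeyer-Peppas power law Mt/Minf = k * t**n to (invented)
      # early-time release data via a log-log linear regression.
      import numpy as np

      t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # days (hypothetical)
      frac = np.array([0.08, 0.12, 0.17, 0.24, 0.35])  # fraction released (hypothetical)

      n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
      k = np.exp(log_k)
      print(f"n = {n:.2f}, k = {k:.3f}")

      t_half = (0.5 / k) ** (1.0 / n)                  # extrapolated time to 50% release
      print(f"t1/2 = {t_half:.1f} days")
      # Small n (threshold depends on geometry, ~0.45 for cylinders) is commonly
      # read as Fickian diffusion.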

  8. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.

    2014-11-01

    In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future developments can be safely built, which is also relevant for stochastic subgrid models for particle-laden flows in the context of Large Eddy Simulations.
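
    The simplest member of the model class discussed here is a Langevin (Ornstein-Uhlenbeck) equation for one component of the fluid velocity seen by a particle, dU = -(U/T_L) dt + sqrt(C0 ε) dW; the Euler-Maruyama sketch below simulates it with illustrative constants (this is the generic textbook form, not a model endorsed or analyzed by the paper in this exact shape).

      # Euler-Maruyama simulation of a generic Langevin velocity model,
      # dU = -(U/T_L) dt + sqrt(C0*eps) dW. Constants are illustrative.
      import random

      T_L = 0.5            # Lagrangian integral time scale (s), assumed
      C0, eps = 2.1, 0.1   # Kolmogorov constant and dissipation rate, assumed
      dt, n_steps = 1e-3, 20000

      u, acc = 0.0, 0.0
      for _ in range(n_steps):
          dW = random.gauss(0.0, dt ** 0.5)
          u += -(u / T_L) * dt + (C0 * eps) ** 0.5 * dW
          acc += u * u

      print(f"time-averaged velocity variance: {acc / n_steps:.4f}")
      print(f"theoretical stationary variance C0*eps*T_L/2: {C0 * eps * T_L / 2:.4f}")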

  9. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture: it feeds ground-motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while data are being recorded. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, in which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate as new data are added, assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes of the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
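
    The linearity invoked above means a synthetic seismogram is just a sum of per-patch convolutions of precomputed Green's functions with the patch slip-rate histories, which is what makes progressive, window-by-window inversion tractable; a toy forward model (all arrays invented):

      # Toy linear forward model: seismogram = sum over fault patches of
      # (Green's function for patch -> station) convolved with patch slip rate.
      import numpy as np

      rng = np.random.default_rng(1)
      n_t, n_patch = 200, 4
      greens = rng.standard_normal((n_patch, n_t)) * np.exp(-np.arange(n_t) / 30.0)

      slip_rate = np.zeros((n_patch, n_t))
      for p in range(n_patch):
          slip_rate[p, 20 + 5 * p : 40 + 5 * p] = 1.0   # delayed pulse per patch (causality)

      seis = sum(np.convolve(greens[p], slip_rate[p])[:n_t] for p in range(n_patch))
      print(seis.shape)   # (200,): one synthetic trace at one station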

  10. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minier, Jean-Pierre, E-mail: Jean-Pierre.Minier@edf.fr; Chibbaro, Sergio; Pope, Stephen B.

    In this paper, we establish a set of criteria that are applied to discuss the various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence, as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, that help to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements that must be met by Lagrangian stochastic models; a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we first consider the single-phase flow case and check whether models are fully consistent with the structure of Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult, and the present choice is to require that the single-phase situation be recovered in the fluid-limit case, that elementary predictive abilities be respected, and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation, since the advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are possible pitfalls that can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first benefit of the present approach is illustrated by considering some models proposed in the literature and showing that these criteria help to assess whether such Lagrangian stochastic models can be regarded as acceptable descriptions. A second benefit is to indicate how future developments can be built safely, which is also relevant for stochastic subgrid models for particle-laden flows in the context of Large Eddy Simulations.
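
    A canonical example of the Lagrangian stochastic models discussed here is a Langevin-type equation for the fluid particle velocity. The sketch below integrates a simplified Langevin model for one velocity component in stationary homogeneous turbulence; the parameter values, and the specific model form, are illustrative textbook choices rather than the particular formulations assessed in the paper.

```python
# Minimal sketch: simplified Langevin model for one velocity component u of a
# fluid particle in stationary homogeneous turbulence,
#     du = -(u / T_L) dt + sqrt(C0 * eps) dW,
# integrated with the Euler-Maruyama scheme. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)
C0, eps, k = 2.1, 1.0e-2, 0.5          # Kolmogorov constant, dissipation, TKE
T_L = (4.0 / (3.0 * C0)) * k / eps     # Lagrangian time scale implied by the model
dt, n_steps = 0.01 * T_L, 20000

u = 0.0
samples = np.empty(n_steps)
for n in range(n_steps):
    u += -(u / T_L) * dt + np.sqrt(C0 * eps * dt) * rng.standard_normal()
    samples[n] = u

# In the statistically stationary state the velocity variance should approach
# C0 * eps * T_L / 2, which equals 2k/3 with the time scale chosen above.
print(samples[n_steps // 2:].var(), C0 * eps * T_L / 2.0)
```

    A check of this kind, asking whether the model reproduces the correct stationary statistics of homogeneous turbulence, is precisely one of the elementary criteria the paper proposes.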

  11. Monte Carlo grain growth modeling with local temperature gradients

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.

    2017-09-01

    This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to handle time-varying temperature fields naturally as well. We verify the model and compare its predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in manufacturing processes that involve time-dependent local temperature gradients.
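
    To make the simulation approach concrete, here is a minimal sketch of a 2D Potts-model grain-growth Monte Carlo step in which the site-selection probability is weighted by a local temperature field. The weighting used is a simplified stand-in for the paper's SSP function, and all parameters (lattice size, kT, the temperature field) are illustrative.

```python
# Minimal sketch: one MC step of 2D Potts-model grain growth with a
# temperature-weighted site-selection probability (SSP). Illustrative only;
# the paper derives its SSP from a scaling relationship fit to experiments.
import numpy as np

rng = np.random.default_rng(1)
N, Q, kT = 64, 16, 0.3                  # lattice size, orientations, energy scale
spins = rng.integers(0, Q, size=(N, N)) # grain orientation at each site
temp = np.linspace(0.5, 1.5, N)[None, :] * np.ones((N, N))  # assumed field

def unlike_neighbors(spins, i, j, q):
    """Boundary energy: number of nearest neighbors differing from orientation q."""
    nbrs = [spins[(i - 1) % N, j], spins[(i + 1) % N, j],
            spins[i, (j - 1) % N], spins[i, (j + 1) % N]]
    return sum(q != n for n in nbrs)

def mc_step(spins, temp):
    # SSP: select a site with probability proportional to its local temperature,
    # so hotter regions attempt reorientation (and hence coarsen) more often.
    site = rng.choice(N * N, p=(temp / temp.sum()).ravel())
    i, j = divmod(site, N)
    q_new = int(rng.integers(0, Q))
    dE = unlike_neighbors(spins, i, j, q_new) - unlike_neighbors(spins, i, j, spins[i, j])
    if dE <= 0 or rng.random() < np.exp(-dE / kT):  # Metropolis acceptance
        spins[i, j] = q_new

for _ in range(100_000):
    mc_step(spins, temp)
```

    Because the selection probability is re-evaluated at each step, a time-varying temperature field is handled naturally by updating `temp` between steps.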

  12. Reflective action assessment with a prospective clinical problem solving tool in the context of rehabilitation medicine: an illustrative case study.

    PubMed

    Kellett, David; Mpofu, Elias; Madden, Richard

    2013-06-01

    This study describes a case formulation approach applying a prospective ICF-derived clinical tool to assess rehabilitation needs for a community-dwelling stroke survivor receiving care from an outpatient rehabilitation medicine clinic. Case history data on the person were assessed for rehabilitation management planning using a prospective tool to interlink current functional status with projected future functional status in everyday settings. Implicit assessment with reflective action informed decision points at each stage of the rehabilitation process. As a result of reflective action using the prospective tool, rehabilitation management led to significant changes in client participation once limitations to mobility and self-care were mapped to the living conditions of the stroke survivor. The context-sensitive rehabilitation plan resulted in higher subjective health-related quality of life for the stroke survivor and a significant other, and enhanced their capacity for participation. Reflective-action-informed assessment applying ICF concepts to clinical problem solving resulted in positive gains in health-related quality of life for the stroke survivor.

  13. Theory of signs and statistical approach to big data in assessing the relevance of clinical biomarkers of inflammation and oxidative stress.

    PubMed

    Ghezzi, Pietro; Davies, Kevin; Delaney, Aidan; Floridi, Luciano

    2018-03-06

    Biomarkers are widely used not only as prognostic or diagnostic indicators, or as surrogate markers of disease in clinical trials, but also to formulate theories of pathogenesis. We identify two problems in the use of biomarkers in mechanistic studies. The first arises in the case of multifactorial diseases, where different combinations of multiple causes result in patient heterogeneity. The second arises when a pathogenic mediator is difficult to measure. This is the case for the oxidative stress (OS) theory of disease, where the causal components are reactive oxygen species (ROS) that have very short half-lives. In this case, it is usual to measure the traces left by the reaction of ROS with biological molecules, rather than the ROS themselves. Borrowing from philosophical theories of signs, we look at the different facets of biomarkers and discuss their different value and meaning in multifactorial diseases and systems medicine, to inform their use in patient stratification in personalized medicine.

  14. Forensic case formulation: theoretical, ethical and practical issues.

    PubMed

    Davies, Jason; Black, Susie; Bentley, Natalie; Nagi, Claire

    2013-10-01

    Forensic case formulation, of increasing interest to practitioners and researchers, raises many ethical, theoretical and practical issues. Systemic, contextual and individual factors need to be considered, including the multitude of staff often involved with any one individual, the pressure to 'get it right' given the range of risk implications associated with individuals in forensic mental health settings, and individual parameters, for example a reluctance to engage with services. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Adjustment of Jacobs' formulation to the case of Mercury

    NASA Astrophysics Data System (ADS)

    Chiappini, M.; de Santis, A.

    1991-04-01

    Magnetic investigations play an important role in studies of the constitution of planetary interiors. One such technique (the so-called Jacobs' formulation), appropriately modified, has been applied to the case of Mercury. According to the results, the planet, assumed to be internally differentiated like the Earth (crust-mantle-core), would have a core/planet volume ratio of 28 percent, much greater than the Earth's core fraction (16 percent). This result is in agreement with previous work based on other, independent methods.
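
    As a quick consistency check on these numbers (a worked example added here, not part of the original abstract), the quoted volume ratios translate into core radius fractions under the assumption of spherically symmetric shells:

```latex
% Core radius fraction implied by a core/planet volume ratio f,
% assuming spherical symmetry: f = (r_c / r_p)^3.
\[
  \frac{r_c}{r_p} = f^{1/3}, \qquad
  \text{Mercury: } 0.28^{1/3} \approx 0.65, \qquad
  \text{Earth: } 0.16^{1/3} \approx 0.54 .
\]
```

    The Earth value of about 0.54 is consistent with the known core radius fraction of roughly 0.55, which supports reading the quoted percentages as volume ratios.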

  16. Thermoplasticity of coupled bodies in the case of stress-dependent heat transfer

    NASA Technical Reports Server (NTRS)

    Kilikovskaya, O. A.

    1987-01-01

    The problem of the thermal stresses in coupled deformable bodies is formulated for the case where the heat-transfer coefficient at the common boundary depends on the stress-strain state of the bodies (e.g., is a function of the normal pressure at the common boundary). Several one-dimensional problems are solved in this formulation. Among these problems is the determination of the thermal stresses in an n-layer plate and in a two-layer cylinder.
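
    The coupling described here can be written compactly. The interface condition below is a generic illustration (the notation is not taken from the original paper): heat-flux continuity at the common boundary with a heat-transfer coefficient that depends on the normal contact pressure.

```latex
% Heat-flux continuity at the common boundary Gamma of bodies 1 and 2, with a
% heat-transfer coefficient h depending on the normal contact pressure p_n:
\[
  -k_1 \left.\frac{\partial T_1}{\partial n}\right|_{\Gamma}
  = -k_2 \left.\frac{\partial T_2}{\partial n}\right|_{\Gamma}
  = h(p_n)\,\bigl(T_1 - T_2\bigr)\bigr|_{\Gamma}.
\]
```

    Since p_n itself depends on the thermal stresses, the heat-conduction and elasticity problems must be solved as a coupled system, which is what distinguishes this formulation from the standard uncoupled case.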

  17. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose such constraints within the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate that depends on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA problem can be formulated as a deterministic constrained optimization problem, and it can therefore take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case in which reference signals are available.
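
    To make the proximal-splitting idea concrete, here is a minimal sketch of sparse semiblind unmixing solved with ISTA (iterative soft-thresholding), a basic proximal gradient method. This illustrates the sparsity-enforcing separation model mentioned above in a generic setting; it is not the c-NCA algorithm itself, and all names and parameters are illustrative.

```python
# Minimal sketch: semiblind separation with a known (or previously estimated)
# mixing matrix A and a sparsity prior on the sources, solved with ISTA.
# Illustrates the proximal-splitting viewpoint, not c-NCA itself.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_unmix(X, A, lam=0.1, n_iter=500):
    """Minimize 0.5 * ||X - A S||_F^2 + lam * ||S||_1 over the sources S.
    X: (n_channels, n_samples) mixtures; A: (n_channels, n_sources)."""
    S = np.zeros((A.shape[1], X.shape[1]))
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ S - X)             # gradient of the data-fit term
        S = soft_threshold(S - step * grad, lam * step)
    return S

# Tiny usage example with synthetic sparse sources.
rng = np.random.default_rng(2)
S_true = rng.standard_normal((2, 1000)) * (rng.random((2, 1000)) < 0.1)
A = rng.standard_normal((4, 2))
X = A @ S_true + 0.01 * rng.standard_normal((4, 1000))
S_hat = ista_unmix(X, A)
```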

  18. Magnetic Resonance Imaging to Visualize Disintegration of Oral Formulations.

    PubMed

    Curley, Louise; Hinton, Jordan; Marjoribanks, Cameron; Mirjalili, Ali; Kennedy, Julia; Svirskis, Darren

    2017-03-01

    This article demonstrates that magnetic resonance imaging can visualize the disintegration of a variety of paracetamol-containing oral formulations, both in an in vitro setting and in vivo in the human stomach. The different formulations had unique disintegration profiles, which could be imaged both in vitro and in vivo; no special formulation approaches or contrast agents were required. These data demonstrate the potential of magnetic resonance imaging to investigate and understand the disintegration behavior of different formulation types in vivo, and suggest that it could also be used as a teaching tool in pharmaceutical and medical curricula. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  19. Controversy or consensus? Recommendations for psychiatrists on psychiatry, religion and spirituality.

    PubMed

    Verhagen, Peter J

    2012-12-01

    Although there is still much controversy in the debates on religion and psychiatry, working toward consensus based on clinical experience and research seems far more fruitful. DISCOURSE: The main idea of this contribution runs as follows. It is no longer appropriate to treat psychiatry and religion as enemies; it is argued here that they are in fact allies. This position is elucidated in the light of two statements. (1) The World Psychiatric Association, as the representative of world psychiatry, needs to change its position toward religion and psychiatry. It should do so by moving beyond narrow reductionist and materialistic boundaries. (2) Science and religion should not be regarded as adversaries opposed to each other, but as allies against nonsense and superstition. Two recommendations are formulated. First, science-and-religion, in our case psychiatry-and-religion, is not purely about description based on gathering evidence, systematic empirical testing and mathematical modeling. We need an approach that covers both descriptive and prescriptive aspects of our daily reality: not only how our world is, but also how it should be. Second, science-and-religion, in our case psychiatry-and-religion, as allies should formulate sensible criteria and develop an appropriate attitude of discernment based on intellectual, moral and spiritual sincerity. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipin, V. V.; Sokoloff, D. D.; Zhang, H.

    It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for a magnetic Reynolds number of 10^6. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers, up to 10^6.
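
    The contrast between the two formulations can be summarized schematically. The equation below is a commonly used textbook form of the small-scale helicity budget, added here for orientation; it is not taken verbatim from the paper.

```latex
% Schematic evolution of the small-scale magnetic helicity h_ss in the local
% (first) formulation: a source coupling to the large-scale field, resistive
% decay, and turbulent diffusive loss.
\[
  \frac{\partial h_{\mathrm{ss}}}{\partial t}
  = -2\,\boldsymbol{\mathcal{E}} \cdot \overline{\boldsymbol{B}}
    - \frac{h_{\mathrm{ss}}}{R_m \tau}
    + \nabla \cdot \left( \eta_h \nabla h_{\mathrm{ss}} \right),
\]
% where \mathcal{E} is the mean electromotive force, \overline{B} the mean
% field, R_m the magnetic Reynolds number, and \tau a turbulent time scale.
% The second (integral) formulation instead constrains the total helicity
% h_ss + h_ls globally, which removes the catastrophic-quenching bottleneck.
```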
