Sample records for constructive analytical approach

  1. Value of Flexibility - Phase 1

    DTIC Science & Technology

    2010-09-25

    weaknesses of each approach. During this period, we also explored the development of an analytical framework based on sound mathematical constructs... A review of the current state-of-the-art showed that there is little unifying theory or guidance on best approaches to... research activities is in developing a coherent value-based definition of flexibility that is based on an analytical framework that is mathematically...

  2. Development of an Analytical Method for Dibutyl Phthalate Determination Using Surrogate Analyte Approach

    PubMed Central

    Farzanehfar, Vahid; Faizi, Mehrdad; Naderi, Nima; Kobarfard, Farzad

    2017-01-01

    Dibutyl phthalate (DBP) is a phthalic acid ester that is widely used in polymeric products to make them more flexible. DBP is found in almost every plastic material and is believed to be persistent in the environment. Various analytical methods have been used to measure DBP in different matrices. Given the ubiquitous nature of DBP, the most important challenges in DBP analysis are the contamination of even analytical-grade organic solvents with this compound and the lack of a true blank matrix with which to construct the calibration line. Using the standard addition method or artificial matrices reduces the precision and accuracy of the results. In this study, a surrogate analyte approach based on a deuterium-labeled analyte (DBP-d4) to construct the calibration line was applied to determine DBP in hexane samples. PMID:28496469
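
    A minimal sketch of the surrogate-analyte calibration idea described above, assuming a linear detector response; all concentrations and responses below are hypothetical illustration values, not data from the study:

    ```python
    import numpy as np

    # Hypothetical calibration points: spiked concentrations of the deuterated
    # surrogate DBP-d4 (ng/mL) and the corresponding instrument responses.
    # The surrogate is absent from solvents and matrices, so no true blank
    # is needed to construct this line.
    conc_d4 = np.array([10.0, 25.0, 50.0, 100.0, 250.0])
    response = np.array([0.41, 1.02, 2.05, 4.11, 10.2])

    slope, intercept = np.polyfit(conc_d4, response, 1)

    # Quantify native DBP from its own response, assuming the deuterated and
    # native analytes have (or are corrected to) the same response factor.
    sample_response = 3.3
    print(f"Estimated DBP: {(sample_response - intercept) / slope:.1f} ng/mL")
    ```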

  3. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging.

  4. Joseph V. Brady: Synthesis Reunites What Analysis Has Divided

    ERIC Educational Resources Information Center

    Thompson, Travis

    2012-01-01

    Joseph V. Brady (1922-2011) created behavior-analytic neuroscience and the analytic framework for understanding how the external and internal neurobiological environments and mechanisms interact. Brady's approach offered synthesis as well as analysis. He embraced Findley's approach to constructing multioperant behavioral repertoires that found…

  5. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  6. Authors’ response: what are emotions and how are they created in the brain?

    PubMed

    Lindquist, Kristen A; Wager, Tor D; Bliss-Moreau, Eliza; Kober, Hedy; Barrett, Lisa Feldman

    2012-06-01

    In our response, we clarify important theoretical differences between basic emotion and psychological construction approaches. We evaluate the empirical status of the basic emotion approach, addressing whether it requires brain localization, whether localization can be observed with better analytic tools, and whether evidence for basic emotions exists in other types of measures. We then revisit the issue of whether the key hypotheses of psychological construction are supported by our meta-analytic findings. We close by elaborating on commentator suggestions for future research.

  7. Proceedings of the Workshop on Change of Representation and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Lowry, Michael R.

    1992-01-01

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee made an effort to include more application domains, expanding greatly from the workshop's origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

  8. An Analysis of Machine- and Human-Analytics in Classification.

    PubMed

    Tam, Gary K L; Kothari, Vivek; Chen, Min

    2017-01-01

    In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.

  9. Factor-Analytic and Individualized Approaches to Constructing Brief Measures of ADHD Behaviors

    ERIC Educational Resources Information Center

    Volpe, Robert J.; Gadow, Kenneth D.; Blom-Hoffman, Jessica; Feinberg, Adam B.

    2009-01-01

    Two studies were performed to examine a factor-analytic and an individualized approach to creating short progress-monitoring measures from the longer "ADHD-Symptom Checklist-4" (ADHD-SC4). In Study 1, teacher ratings on items of the ADHD:Inattentive (IA) and ADHD:Hyperactive-Impulsive (HI) scales of the ADHD-SC4 were factor analyzed in a normative…

  10. Patterns of Work and Family Involvement among Single and Dual Earner Couples: Two Competing Analytical Approaches.

    ERIC Educational Resources Information Center

    Yogev, Sara; Brett, Jeanne

    This paper offers a conceptual framework for the intersection of work and family roles based on the constructs of work involvement and family involvement. The theoretical and empirical literature on the intersection of work and family roles is reviewed from two analytical approaches. From the individual level of analysis, the literature reviewed…

  11. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
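
    As a rough illustration of the general idea (deliberately simplified to a single loading treated as a correlation, rather than a full confirmatory factor model), a Monte Carlo power estimate at several candidate sample sizes might look like the following; the effect size and all settings are hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def estimated_power(n, loading=0.4, reps=2000, alpha=0.05):
        # Simulate a latent factor and one indicator with the given loading,
        # and count how often the association is detected at level alpha.
        hits = 0
        for _ in range(reps):
            factor = rng.normal(size=n)
            item = loading * factor + np.sqrt(1 - loading**2) * rng.normal(size=n)
            hits += stats.pearsonr(factor, item)[1] < alpha
        return hits / reps

    for n in (50, 100, 200):
        print(f"n={n}: power ~ {estimated_power(n):.2f}")
    ```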

  12. Rhetorical Construction of Cells in Science and in a Science Classroom.

    ERIC Educational Resources Information Center

    Tsatsarelis, Charalampos; Ogborn, Jon; Jewitt, Carey; Kress, Gunther

    2001-01-01

    Discusses the process of the construction of entities following a social semiotic approach that enables the use of new analytical tools and describes the rhetoric used in construction. Based on an analysis of the historical formation of the notion of cells by scientists, and analysis of a lesson on the microscopic observation of onion cells.…

  13. Constructing Programming Tests from an Item Pool: Pushing the Limits of Student Knowledge Using Assessment and Learning Analytics

    ERIC Educational Resources Information Center

    Ivancevic, Vladimir

    2014-01-01

    Tests targeting the upper limits of student ability could aid students in their learning. This article gives an overview of an approach to the construction of such tests in programming, together with ideas on how to implement and refine them within a learning management system.

  14. Integrating Developmental Theory and Methodology: Using Derivatives to Articulate Change Theories, Models, and Inferences

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Nicholson, Jody; Kouros, Chrystyna; Little, Todd D.; Garber, Judy

    2015-01-01

    Matching theories about growth, development, and change to appropriate statistical models can present a challenge, which can result in misuse, misinterpretation, and underutilization of different analytical approaches. We discuss the use of "derivatives": the change of a construct with respect to the change in another construct…

  15. Access to and Accessibility of Education: An Analytic and Conceptual Approach to a Multidimensional Issue

    ERIC Educational Resources Information Center

    Stauber, Barbara; Parreira do Amaral, Marcelo

    2015-01-01

    This article presents analytical considerations for the discussion of issues of access to education and inequality. It first sharpens the concept of access and inequality by pointing to the interplay of structure and agency as well as to processes of social differentiation in which differences are constructed. This implies a critical view on…

  16. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41(2), 1005, 2003, by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons: first, it requires the existence of the elementary solutions, and second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach is applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
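
    For orientation, the modified Helmholtz equation mentioned here, and the superposition at the heart of the analytic element method, can be written schematically as follows (notation ours, not the abstract's):

    ```latex
    % Modified Helmholtz equation for a potential \Phi:
    \nabla^{2}\Phi - \lambda^{2}\Phi = 0
    % Analytic element method: superposition of N analytic functions f_j
    % with coefficients c_j chosen to meet the boundary conditions:
    \Phi(z) = \sum_{j=1}^{N} c_j\, f_j(z), \qquad z = x + \mathrm{i}\,y
    ```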

  17. Analytical approximation for the Einstein-dilaton-Gauss-Bonnet black hole metric

    NASA Astrophysics Data System (ADS)

    Kokkotas, K. D.; Konoplya, R. A.; Zhidenko, A.

    2017-09-01

    We construct an analytical approximation for the numerical black hole metric of P. Kanti et al. [Phys. Rev. D 54, 5049 (1996), 10.1103/PhysRevD.54.5049] in the four-dimensional Einstein-dilaton-Gauss-Bonnet (EdGB) theory. The continued fraction expansion in terms of a compactified radial coordinate, used here, converges slowly when the dilaton coupling approaches its extremal values, but for a black hole far from the extremal state, the analytical formula has a maximal relative error of a fraction of one percent already within the third order of the continued fraction expansion. The suggested analytical representation of the numerical black hole metric is relatively compact and a good approximation in the whole space outside the black hole event horizon. Therefore, it can serve in the same way as an exact solution when analyzing particles' motion, perturbations, quasinormal modes, Hawking radiation, accreting disks, and many other problems in the vicinity of a black hole. In addition, we construct the approximate analytical expression for the dilaton field.
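
    The continued-fraction expansion referred to here (in the style of Rezzolla-Zhidenko parameterizations; schematic form, with details in the cited paper) works in a compactified radial coordinate:

    ```latex
    % Compactified radial coordinate: x = 0 at the event horizon r_0,
    % x = 1 at spatial infinity:
    x = 1 - \frac{r_0}{r}
    % The near-horizon part of a metric function is expanded as a
    % continued fraction in x, truncated at a finite order:
    \tilde{A}(x) = \cfrac{a_1}{1 + \cfrac{a_2\,x}{1 + \cfrac{a_3\,x}{1 + \cdots}}}
    ```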

  18. EFL Teachers' Identity (Re)Construction as Teachers of Intercultural Competence: A Language Socialization Approach

    ERIC Educational Resources Information Center

    Ortaçtepe, Deniz

    2015-01-01

    Adapting Norton's (2000) notion of investment as an analytical lens along with thematic analysis, this longitudinal/narrative inquiry explores how 2 EFL teachers' language socialization in the United States resulted in an identity (re)construction as teachers of intercultural competence. Baris and Serkan's language socialization in the United…

  19. Learning and cognitive styles in web-based learning: theory, evidence, and application.

    PubMed

    Cook, David A

    2005-03-01

    Cognitive and learning styles (CLS) have long been investigated as a basis to adapt instruction and enhance learning. Web-based learning (WBL) can reach large, heterogeneous audiences, and adaptation to CLS may increase its effectiveness. Adaptation is only useful if some learners (with a defined trait) do better with one method and other learners (with a complementary trait) do better with another method (aptitude-treatment interaction). A comprehensive search of health professions education literature found 12 articles on CLS in computer-assisted learning and WBL. Because so few reports were found, research from non-medical education was also included. Among all the reports, four CLS predominated. Each CLS construct was used to predict relationships between CLS and WBL. Evidence was then reviewed to support or refute these predictions. The wholist-analytic construct shows consistent aptitude-treatment interactions consonant with predictions (wholists need structure, a broad-before-deep approach, and social interaction, while analytics need less structure and a deep-before-broad approach). Limited evidence for the active-reflective construct suggests aptitude-treatment interaction, with active learners doing better with interactive learning and reflective learners doing better with methods to promote reflection. As predicted, no consistent interaction between the concrete-abstract construct and computer format was found, but one study suggests that there is interaction with instructional method. Contrary to predictions, no interaction was found for the verbal-imager construct. Teachers developing WBL activities should consider assessing and adapting to accommodate learners defined by the wholist-analytic and active-reflective constructs. Other adaptations should be considered experimental. Further WBL research could clarify the feasibility and effectiveness of assessing and adapting to CLS.

  20. Analyte-driven switching of DNA charge transport: de novo creation of electronic sensors for an early lung cancer biomarker.

    PubMed

    Thomas, Jason M; Chakraborty, Banani; Sen, Dipankar; Yu, Hua-Zhong

    2012-08-22

    A general approach is described for the de novo design and construction of aptamer-based electrochemical biosensors, for potentially any analyte of interest (ranging from small ligands to biological macromolecules). As a demonstration of the approach, we report the rapid development of a made-to-order electronic sensor for a newly reported early biomarker for lung cancer (CTAP III/NAP2). The steps include the in vitro selection and characterization of DNA aptamer sequences, design and biochemical testing of wholly DNA sensor constructs, and translation to a functional electrode-bound sensor format. The working principle of this distinct class of electronic biosensors is the enhancement of DNA-mediated charge transport in response to analyte binding. We first verify such analyte-responsive charge transport switching in solution, using biochemical methods; successful sensor variants were then immobilized on gold electrodes. We show that using these sensor-modified electrodes, CTAP III/NAP2 can be detected with both high specificity and sensitivity (Kd ≈ 1 nM) through a direct electrochemical reading. To investigate the underlying basis of analyte binding-induced conductivity switching, we carried out Förster Resonance Energy Transfer (FRET) experiments. The FRET data establish that analyte binding-induced conductivity switching in these sensors results from very subtle structural/conformational changes, rather than large-scale global folding events. The implications of this finding are discussed with respect to possible charge transport switching mechanisms in electrode-bound sensors. Overall, the approach we describe here represents a unique design principle for aptamer-based electrochemical sensors; its application should enable rapid, on-demand access to a class of portable biosensors that offer robust, inexpensive, and operationally simplified alternatives to conventional antibody-based immunoassays.

  21. The Schema Axiom as Foundation of a Theory for Measurement and Representation of Consciousness. No. 38.

    ERIC Educational Resources Information Center

    Bierschenk, Bernhard

    In this study, the Kantian schema has been applied to natural language expression. The novelty of the approach concerns the way in which the Kantian schema interrelates the analytic with the synthetic mode in the construction of the presented formalism. The main thesis is based on the premise that the synthetic, in contrast to the analytic,…

  22. ZnO-Based Amperometric Enzyme Biosensors

    PubMed Central

    Zhao, Zhiwei; Lei, Wei; Zhang, Xiaobing; Wang, Baoping; Jiang, Helong

    2010-01-01

    Nanostructured ZnO with its unique properties could provide a suitable microenvironment for immobilization of enzymes while retaining their biological activity, and thus lead to an expanded use of this nanomaterial for the construction of electrochemical biosensors with enhanced analytical performance. ZnO-based enzyme electrochemical biosensors are summarized in several tables for an easy overview according to the target biosensing analyte (glucose, hydrogen peroxide, phenol and cholesterol). Moreover, recent developments in enzyme electrochemical biosensors based on ZnO nanomaterials are reviewed with an emphasis on the fabrications and features of ZnO, approaches for biosensor construction (e.g., modified electrodes and enzyme immobilization) and biosensor performances. PMID:22205864

  23. Measuring Prices in Health Care Markets Using Commercial Claims Data.

    PubMed

    Neprash, Hannah T; Wallace, Jacob; Chernew, Michael E; McWilliams, J Michael

    2015-12-01

    To compare methods of price measurement in health care markets. Truven Health Analytics MarketScan commercial claims. We constructed medical price indices using three approaches: (1) a "sentinel" service approach based on a single common service in a specific clinical domain, (2) a market basket approach, and (3) a spending decomposition approach. We constructed indices at the Metropolitan Statistical Area level and estimated correlations between and within them. Price indices using a spending decomposition approach were strongly and positively correlated with indices constructed from broad market baskets of common services (r > 0.95). Prices of single common services exhibited weak to moderate correlations with each other and other measures. Market-level price measures that reflect broad sets of services are likely to rank markets similarly. Price indices relying on individual sentinel services may be more appropriate for examining specialty- or service-specific drivers of prices.
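
    One simple way to operationalize the spending-decomposition idea is sketched below on made-up claims data; all column names, prices, and counts are hypothetical, not the MarketScan schema:

    ```python
    import pandas as pd

    claims = pd.DataFrame({
        "market":  ["A", "A", "B", "B"],
        "service": ["office_visit", "mri", "office_visit", "mri"],
        "price":   [100.0, 800.0, 130.0, 1100.0],
        "count":   [500, 20, 480, 25],
    })
    claims["spend"] = claims["price"] * claims["count"]

    # National average price per service, utilization-weighted.
    national = claims.groupby("service").apply(
        lambda g: g["spend"].sum() / g["count"].sum()
    )
    claims["national_price"] = claims["service"].map(national)

    # Market price index: actual spending relative to the same utilization
    # priced at national average prices, so quantity and mix effects cancel.
    index = claims.groupby("market").apply(
        lambda g: g["spend"].sum() / (g["national_price"] * g["count"]).sum()
    )
    print(index)
    ```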

  24. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371

  25. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  26. Current Trends in Nanomaterial-Based Amperometric Biosensors

    PubMed Central

    Hayat, Akhtar; Catanante, Gaëlle; Marty, Jean Louis

    2014-01-01

    The last decade has witnessed an intensive research effort in the field of electrochemical sensors, with a particular focus on the design of amperometric biosensors for diverse analytical applications. In this context, nanomaterial integration in the construction of amperometric biosensors may constitute one of the most exciting approaches. The attractive properties of nanomaterials have paved the way for the design of a wide variety of biosensors based on various electrochemical detection methods to enhance the analytical characteristics. However, most of these nanostructured materials are not explored in the design of amperometric biosensors. This review aims to provide insight into the diverse properties of nanomaterials that can be possibly explored in the construction of amperometric biosensors. PMID:25494347

  27. Force 2025 and Beyond Strategic Force Design Analytic Model

    DTIC Science & Technology

    2017-01-12

    depiction of the core ideas of our force design model (Figure 1: Description of Force Design Model; Figure 2: overview of our methodology)... the F2025B Force Design Analytic Model research conducted by TRAC-MTRY and the Naval Postgraduate School. Our research develops a methodology for... designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach. We...

  28. Feature construction can improve diagnostic criteria for high-dimensional metabolic data in newborn screening for medium-chain acyl-CoA dehydrogenase deficiency.

    PubMed

    Ho, Sirikit; Lukacs, Zoltan; Hoffmann, Georg F; Lindner, Martin; Wetter, Thomas

    2007-07-01

    In newborn screening with tandem mass spectrometry, multiple intermediary metabolites are quantified in a single analytical run for the diagnosis of fatty-acid oxidation disorders, organic acidurias, and aminoacidurias. Published diagnostic criteria for these disorders normally incorporate a primary metabolic marker combined with secondary markers, often analyte ratios, for which the markers have been chosen to reflect metabolic pathway deviations. We applied a procedure to extract new markers and diagnostic criteria for newborn screening to the data of newborns with confirmed medium-chain acyl-CoA dehydrogenase deficiency (MCADD) and a control group from the newborn screening program, Heidelberg, Germany. We validated the results with external data of the screening center in Hamburg, Germany. We extracted new markers by performing a systematic search for analyte combinations (features) with high discriminatory performance for MCADD. To select feature thresholds, we applied automated procedures to separate controls and cases on the basis of the feature values. Finally, we built classifiers from these new markers to serve as diagnostic criteria in screening for MCADD. On the basis of chi-square scores, we identified approximately 800 of >628,000 new analyte combinations with superior discriminatory performance compared with the best published combinations. Classifiers built with the new features achieved diagnostic sensitivities and specificities approaching 100%. Feature construction methods provide ways to disclose information hidden in the set of measured analytes. Other diagnostic tasks based on high-dimensional metabolic data might also profit from this approach.
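
    The feature-construction step can be sketched as an exhaustive search over analyte ratios scored by class separation. In the sketch below, AUC stands in for the paper's chi-square criterion, and the data are randomly generated placeholders, not screening data:

    ```python
    import numpy as np
    from itertools import permutations
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Placeholder data: rows = newborns, columns = 6 metabolite
    # concentrations; the last 20 rows are "cases" with analyte 0 elevated.
    X = rng.lognormal(size=(200, 6))
    y = np.zeros(200, dtype=int)
    y[-20:] = 1
    X[-20:, 0] *= 5.0

    # Construct every ratio feature x_i / x_j and rank by separation.
    scores = {
        (i, j): roc_auc_score(y, X[:, i] / X[:, j])
        for i, j in permutations(range(X.shape[1]), 2)
    }
    best = sorted(scores, key=scores.get, reverse=True)[:3]
    print([(f"x{i}/x{j}", round(scores[i, j], 3)) for i, j in best])
    ```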

  29. Narrating practice: reflective accounts and the textual construction of reality.

    PubMed

    Taylor, Carolyn

    2003-05-01

    Two approaches dominate current thinking in health and welfare: evidence-based practice and reflective practice. Whilst there is debate about the merits of evidence-based practice, reflective practice is generally accepted without critical debate as an important educational tool. Where critique does exist it tends to adopt a Foucauldian approach, focusing on the surveillance and self-regulatory aspects of reflective practice. This article acknowledges the critical purchase on the concept of reflective practice offered by Foucauldian approaches but argues that microsociological and discourse analytic approaches can further illuminate the subject and thus serve as a complement to them. The claims of proponents of reflective practice are explored, in opposition to the technical-rational approach of evidence-based practice. Reflective practice tends to adopt a naive or romantic realist position and fails to acknowledge the ways in which reflective accounts construct the world of practice. Microsociological approaches can help us to understand reflective accounts as examples of case-talk, constructed in a narrative form in the same way as case records and presentations.

  30. Multimodal system planning technique: an analytical approach to peak period operation

    DOT National Transportation Integrated Search

    1995-11-01

    The multimodal system planning technique described in this report is an improvement of the methodology used in the Dallas System Planning Study. The technique includes a spreadsheet-based process to identify the costs of congestion, construction, and...

  31. Advanced Nonlinear Latent Variable Modeling: Distribution Analytic LMS and QML Estimators of Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Kelava, Augustin; Werner, Christina S.; Schermelleh-Engel, Karin; Moosbrugger, Helfried; Zapf, Dieter; Ma, Yue; Cham, Heining; Aiken, Leona S.; West, Stephen G.

    2011-01-01

    Interaction and quadratic effects in latent variable models have to date only rarely been tested in practice. Traditional product indicator approaches need to create product indicators (e.g., x₁², x₁x₄) to serve as indicators of each nonlinear latent construct. These approaches require the use of…

  32. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
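
    The generalized Taylor series underlying this kind of construction has a fractional-power form; schematically (our notation, for a Caputo time derivative of order α):

    ```latex
    % Fractional power series ansatz about t = t_0, with 0 < \alpha \le 1:
    u(x,t) \;=\; \sum_{n=0}^{\infty} f_n(x)\,
      \frac{(t - t_0)^{\,n\alpha}}{\Gamma(n\alpha + 1)}
    % The f_n are determined order by order from the residual error
    % function of the truncated series substituted into the PDE.
    ```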

  33. Interactively Open Autonomy Unifies Two Approaches to Function

    NASA Astrophysics Data System (ADS)

    Collier, John

    2004-08-01

    Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based in an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification of a new form of causation, it has not been accessible by analytical techniques. I proceed by construction of the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.

  34. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in literature.

  35. Analytical Sociology: A Bungean Appreciation

    NASA Astrophysics Data System (ADS)

    Wan, Poe Yu-ze

    2012-10-01

    Analytical sociology, an intellectual project that has garnered considerable attention across a variety of disciplines in recent years, aims to explain complex social processes by dissecting them, accentuating their most important constituent parts, and constructing appropriate models to understand the emergence of what is observed. To achieve this goal, analytical sociologists demonstrate an unequivocal focus on the mechanism-based explanation grounded in action theory. In this article I attempt a critical appreciation of analytical sociology from the perspective of Mario Bunge's philosophical system, which I characterize as emergentist systemism. I submit that while the principles of analytical sociology and those of Bunge's approach share a lot in common, the latter brings to the fore the ontological status and explanatory importance of supra-individual actors (as concrete systems endowed with emergent causal powers) and macro-social mechanisms (as processes unfolding in and among social systems), and therefore it does not stipulate that every causal explanation of social facts has to include explicit references to individual-level actors and mechanisms. In this sense, Bunge's approach provides a reasonable middle course between the Scylla of sociological reification and the Charybdis of ontological individualism, and thus serves as an antidote to the untenable "strong program of microfoundations" to which some analytical sociologists are committed.

  36. Tensions in Distributed Leadership

    ERIC Educational Resources Information Center

    Ho, Jeanne; Ng, David

    2017-01-01

    Purpose: This article proposes the utility of using activity theory as an analytical lens to examine the theoretical construct of distributed leadership, specifically to illuminate tensions encountered by leaders and how they resolved these tensions. Research Method: The study adopted the naturalistic inquiry approach of a case study of an…

  37. 1. On note taking.

    PubMed

    Plaut, Alfred B J

    2005-02-01

    In this paper the author explores the theoretical and technical issues relating to taking notes of analytic sessions, using an introspective approach. The paper discusses the lack of a consistent approach to note taking amongst analysts and sets out to demonstrate that systematic note taking can be helpful to the analyst. The author describes his discovery that an initial phase where as much data was recorded as possible did not prove to be reliably helpful in clinical work and initially actively interfered with recall in subsequent sessions. The impact of the nature of the analytic session itself and the focus of the analyst's interest on recall is discussed. The author then describes how he modified his note taking technique to classify information from sessions into four categories which enabled the analyst to select which information to record in notes. The characteristics of memory and its constructive nature are discussed in relation to the problems that arise in making accurate notes of analytic sessions.

  38. Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach

    ERIC Educational Resources Information Center

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…

  39. An Application of Latent Variable Structural Equation Modeling for Experimental Research in Educational Technology

    ERIC Educational Resources Information Center

    Lee, Hyeon Woo

    2011-01-01

    As the technology-enriched learning environments and theoretical constructs involved in instructional design become more sophisticated and complex, a need arises for equally sophisticated analytic methods to research these environments, theories, and models. Thus, this paper illustrates a comprehensive approach for analyzing data arising from…

  40. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  41. A General Critical Discourse Analysis Framework for Educational Research

    ERIC Educational Resources Information Center

    Mullet, Dianna R.

    2018-01-01

    Critical discourse analysis (CDA) is a qualitative analytical approach for critically describing, interpreting, and explaining the ways in which discourses construct, maintain, and legitimize social inequalities. CDA rests on the notion that the way we use language is purposeful, regardless of whether discursive choices are conscious or…

  42. Acoustic emission source location in composite structure by Voronoi construction using geodesic curve evolution.

    PubMed

    Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S

    2009-11-01

    Conventional analytical/numerical methods employing the triangulation technique are suitable for locating an acoustic emission (AE) source in a planar structure without structural discontinuities. But these methods cannot be extended to structures with complicated geometry, and the problem gets compounded if the material of the structure is anisotropic, warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that a wave takes the minimum-energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic path and locating the first intersection point of these waves, one can obtain the AE source location. In this work, the geodesic approach is shown to be more suitable for a practicable source location solution in a composite structure with an arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate this method.
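
    A schematic of the reverse-propagation step on a meshed surface, assuming the mesh has already been turned into a weighted graph whose edge weights are travel times along the surface; sensor placement, mesh construction, and anisotropy handling are omitted, and all names are ours:

    ```python
    import heapq

    def dijkstra(adj, src):
        """Shortest travel times from src over a weighted mesh graph.
        adj: {node: [(neighbor, edge_time), ...]}"""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def locate_source(adj, nodes, sensors, arrivals):
        """Pick the node whose geodesic travel times best explain the
        measured arrival times (least squares over a common, unknown
        emission time), i.e., where the reverse-propagated wavefronts
        from all sensors first meet."""
        times = {s: dijkstra(adj, s) for s in sensors}
        best, best_err = None, float("inf")
        for n in nodes:
            t = [arrivals[s] - times[s][n] for s in sensors]
            t0 = sum(t) / len(t)              # implied emission time
            err = sum((ti - t0) ** 2 for ti in t)
            if err < best_err:
                best, best_err = n, err
        return best
    ```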

  43. Evaluation of the Current Status of the Combinatorial Approach for the Study of Phase Diagrams

    PubMed Central

    Wong-Ng, W.

    2012-01-01

    This paper provides an evaluation of the effectiveness of using the high throughput combinatorial approach for preparing phase diagrams of thin film and bulk materials. Our evaluation is based primarily on examples of combinatorial phase diagrams that have been reported in the literature as well as based on our own laboratory experiments. Various factors that affect the construction of these phase diagrams are examined. Instrumentation and analytical approaches needed to improve data acquisition and data analysis are summarized. PMID:26900530

  44. Approximate analytical solutions in the analysis of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and the one built using the Green's function in the form of orthogonal series. Comparison of two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.

  45. Promotional Discourse in the Websites of Two Australian Universities: A Discourse Analytic Approach

    ERIC Educational Resources Information Center

    Hoang, Thi Van Yen; Rojas-Lizana, Isolda

    2015-01-01

    This article shows how universities represent themselves through the use of language on their institutional websites. Specifically, it compares and contrasts how a long established university, the University of Melbourne and a young university, Macquarie University construct their institutional identities and build up a relationship with potential…

  46. Construction of RFIF using VVSFs with application

    NASA Astrophysics Data System (ADS)

    Katiyar, Kuldip; Prasad, Bhagwati

    2017-10-01

    A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods using an analytic approach is presented for finding variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.

  47. A Multi-Method Multi-Analytic Approach to Establishing Internal Construct Validity Evidence: The Sport Multidimensional Perfectionism Scale 2

    ERIC Educational Resources Information Center

    Gotwals, John K.; Dunn, John G. H.

    2009-01-01

    This article presents a chronology of three empirical studies that outline the measurement process by which two new subscales ("Doubts about Actions" and "Organization") were developed and integrated into a revised version of Dunn, Causgrove Dunn, and Syrotuik's (2002) "Sport Multidimensional Perfectionism Scale"…

  48. An Alternative Approach to Conceptualizing Interviews in HRD Research

    ERIC Educational Resources Information Center

    Wang, Jia; Roulston, Kathryn J.

    2007-01-01

    Qualitative researchers in human resource development (HRD) frequently use in-depth interviews as a research method. Yet reports from qualitative studies in HRD commonly pay little or no analytical attention to the co-construction of interview data. That is, reports of qualitative research projects often treat interviews as a transparent method of…

  49. A Constructive Approach to Regularity of Lagrangian Trajectories for Incompressible Euler Flow in a Bounded Domain

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Frisch, Uriel

    2017-04-01

    The 3D incompressible Euler equations are an important research topic in the mathematical study of fluid dynamics. Not only is the global regularity for smooth initial data an open issue, but the behaviour may also depend on the presence or absence of boundaries. For a good understanding, it is crucial to carry out, besides mathematical studies, high-accuracy and well-resolved numerical exploration. Such studies can be very demanding in computational resources, but recently it has been shown that very substantial gains can be achieved first, by using Cauchy's Lagrangian formulation of the Euler equations and second, by taking advantage of analyticity results of the Lagrangian trajectories for flows whose initial vorticity is Hölder-continuous. The latter has been known for about 20 years (Serfati in J Math Pures Appl 74:95-104, 1995), but the combination of the two, which makes use of recursion relations among time-Taylor coefficients to obtain constructively the time-Taylor series of the Lagrangian map, has been achieved only recently (Frisch and Zheligovsky in Commun Math Phys 326:499-505, 2014; Podvigina et al. in J Comput Phys 306:320-342, 2016 and references therein). Here we extend this methodology to incompressible Euler flow in an impermeable bounded domain whose boundary may be either analytic or have a regularity between indefinite differentiability and analyticity. Non-constructive regularity results for these cases have already been obtained by Glass et al. (Ann Sci Éc Norm Sup 45:1-51, 2012). Using the invariance of the boundary under the Lagrangian flow, we establish novel recursion relations that include contributions from the boundary. This leads to a constructive proof of time-analyticity of the Lagrangian trajectories with analytic boundaries, which can then be used subsequently for the design of a very high-order Cauchy-Lagrangian method.
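
    The time-Taylor machinery referred to here can be stated schematically (our notation, not the paper's): the Lagrangian map is expanded about t = 0, and its coefficients are generated recursively rather than by time stepping:

    ```latex
    % Time-Taylor expansion of the Lagrangian map a \mapsto X(a,t):
    X(a,t) \;=\; a + \sum_{s \ge 1} X^{(s)}(a)\,t^{s}
    % Each coefficient X^{(s)} follows algebraically from lower orders via
    % recursion relations (here augmented with boundary contributions),
    % yielding a constructive proof of time-analyticity of the trajectories.
    ```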

  50. Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis

    NASA Astrophysics Data System (ADS)

    Chou, Hui-Yu; Yang, Jyh-Bin

    2017-10-01

    The problem of schedule delay commonly occurs in construction projects. The quality of delay analysis depends on the availability of schedule-related information and delay evidence. More information used in delay analysis usually produces more accurate and fair analytical results. How to use innovative techniques to improve the quality of schedule delay analysis results has received much attention recently. As the Building Information Modeling (BIM) technique has developed quickly, using BIM and 4D simulation techniques has been proposed and implemented. Obvious benefits have been achieved, especially in identifying and solving construction sequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study believes that most of the identified problems can be dealt with by the BIM technique. The research results could serve as a foundation for developing new approaches for resolving schedule delay disputes.

  51. Horizontal lifelines - review of regulations and simple design method considering anchorage rigidity.

    PubMed

    Galy, Bertrand; Lan, André

    2018-03-01

    Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that considers anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, with a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
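
    For rough orientation only (elementary statics, not the article's flexible-anchorage method): with a midspan load on a lifeline of span L and sag f, vertical equilibrium gives

    ```latex
    % Tension T in each line segment for a midspan load P, span L, sag f
    % (self-weight neglected):
    T \;=\; \frac{P}{2}\,\frac{\sqrt{(L/2)^{2} + f^{2}}}{f}
    \;\approx\; \frac{P\,L}{4\,f} \quad (f \ll L)
    % Small sag (e.g., very rigid anchors) drives line tension up sharply,
    % which is why anchorage flexibility matters in HLL design.
    ```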

  52. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression.
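
    A minimal sketch of one of the named ingredients (regularization with cross-validated penalty selection) applied to criterion-keyed scale construction; the data below are random placeholders, not the personality/mortality cohort:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV

    rng = np.random.default_rng(1)

    # Hypothetical item pool: n participants answering p questionnaire
    # items, plus a binary criterion to be predicted.
    n, p = 1000, 300
    items = rng.integers(1, 6, size=(n, p)).astype(float)
    criterion = rng.integers(0, 2, size=n)

    # L1-regularized logistic regression with the penalty strength chosen
    # by cross-validation, i.e., to minimize expected prediction error
    # rather than within-sample fit. Items with nonzero coefficients
    # form the criterion-keyed scale.
    model = LogisticRegressionCV(
        Cs=10, cv=5, penalty="l1", solver="liblinear", scoring="roc_auc"
    ).fit(items, criterion)

    selected = np.flatnonzero(model.coef_[0])
    print(f"{selected.size} items selected for the scale")
    ```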

  53. Analytic Causative Constructions in Medieval Spanish: The Origins of a Construction

    ERIC Educational Resources Information Center

    Sanaphre Villanueva, Monica

    2011-01-01

    The goal of this study is to provide an inventory of the Analytic Causative constructions that were in use in Peninsular Spanish from the 12th to the 16th centuries from the constructional perspective of Cognitive Grammar. A detailed profile of each construction was made including its constructional schema along with relevant semantic, syntactic,…

  54. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
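
    A toy numerical sketch of the extended radiosity system described here, B = E + (diag(ρ)F + S)B, with random matrices standing in for real form factors and a measured subsurface operator (all values are placeholders, scaled so the iteration converges):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50                                   # number of surface patches

    # Placeholder form-factor matrix F and subsurface scattering matrix S,
    # row-scaled so the combined operator is a contraction.
    F = rng.random((n, n)); F /= 2.0 * F.sum(axis=1, keepdims=True)   # rows sum to 0.5
    S = rng.random((n, n)); S /= 10.0 * S.sum(axis=1, keepdims=True)  # rows sum to 0.1
    rho = 0.6 * np.ones(n)                   # diffuse albedo per patch
    E = np.zeros(n); E[0] = 10.0             # a single emissive patch

    # Classical radiosity B = E + diag(rho) F B, extended with the
    # subsurface operator S; solved by fixed-point iteration.
    T = rho[:, None] * F + S
    B = E.copy()
    for _ in range(200):
        B = E + T @ B
    print(B[:5])
    ```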

  55. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
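
    The core phenomenon can be reproduced in a few lines of simulation (a sketch with hypothetical settings, not the paper's code): two unreliable measures of the same latent trait, an outcome driven only by the trait, and a naive regression test of "incremental validity" for the second measure:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def false_positive_rate(n=500, reliability=0.7, reps=1000, alpha=0.05):
        hits = 0
        e = np.sqrt(1.0 - reliability)
        for _ in range(reps):
            t = rng.normal(size=n)                      # latent trait
            x1 = np.sqrt(reliability) * t + e * rng.normal(size=n)
            x2 = np.sqrt(reliability) * t + e * rng.normal(size=n)
            y = 0.5 * t + rng.normal(size=n)            # y depends on t only
            # OLS of y on [1, x1, x2]; test the x2 coefficient.
            X = np.column_stack([np.ones(n), x1, x2])
            beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            mse = res[0] / (n - 3)
            se = np.sqrt(mse * np.linalg.inv(X.T @ X)[2, 2])
            p = 2.0 * stats.t.sf(abs(beta[2] / se), df=n - 3)
            hits += p < alpha
        return hits / reps

    # Far above the nominal 5%: x2 looks "incrementally valid" even though
    # it adds nothing at the construct level.
    print(false_positive_rate())
    ```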

  56. Health level seven interoperability strategy: big data, incrementally structured.

    PubMed

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral workflows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  57. A Simple Laser Induced Breakdown Spectroscopy (LIBS) System for Use at Multiple Levels in the Undergraduate Chemistry Curriculum

    ERIC Educational Resources Information Center

    Randall, David W.; Hayes, Ryan T.; Wong, Peter A.

    2013-01-01

    A LIBS (laser induced breakdown spectroscopy) spectrometer constructed by the instructor is reported for use in undergraduate analytical chemistry experiments. The modular spectrometer described here is based on commonly available components including a commercial Nd:YAG laser and a compact UV-vis spectrometer. The modular approach provides a…

  18. Global Warming Wars: Rhetorical and Discourse Analytic Approaches to ExxonMobil's Corporate Public Discourse.

    ERIC Educational Resources Information Center

    Livesey, Sharon M.

    2002-01-01

    Analyzes texts published by ExxonMobil on the issue of climate change, employing both rhetorical analysis and discourse analysis to show their uses and potential value in business communication research. Shows how both reveal the socially constructed nature of "reality" and the social effects of language, but are nevertheless distinct in…

  19. Synchronous and Asynchronous E-Language Learning: A Case Study of Virtual University of Pakistan

    ERIC Educational Resources Information Center

    Perveen, Ayesha

    2016-01-01

    This case study evaluated the impact of synchronous and asynchronous E-Language Learning activities (ELL-ivities) in an E-Language Learning Environment (ELLE) at Virtual University of Pakistan. The purpose of the study was to assess e-language learning analytics based on the constructivist approach of collaborative construction of knowledge. The…

  20. A Discourse Analytic Approach to Video Analysis of Teaching: Aligning Desired Identities with Practice

    ERIC Educational Resources Information Center

    Schieble, Melissa; Vetter, Amy; Meacham, Mark

    2015-01-01

    The authors present findings from a qualitative study of an experience that supports teacher candidates to use discourse analysis and positioning theory to analyze videos of their practice during student teaching. The research relies on the theoretical concept that learning to teach is an identity process. In particular, teachers construct and…

  1. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
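
    As a loose illustration of the Fourier-optics viewpoint described above (not the authors' cloak design), the following numpy sketch treats an optical system as a transfer function acting on the spectrum of an input field, then synthesizes the regularized inverse transfer function to recover the object; the transfer function H and all values are hypothetical.

    ```python
    import numpy as np

    # 1-D field sampled on a grid
    N, dx = 1024, 0.1
    x = (np.arange(N) - N // 2) * dx
    field_in = np.exp(-x**2)                  # input field: a Gaussian "object"

    # hypothetical transfer function of the optical system: a Gaussian low-pass
    k = np.fft.fftfreq(N, d=dx) * 2 * np.pi
    H = np.exp(-(k / 5.0)**2)

    # output field = inverse FFT of (input spectrum * transfer function)
    field_out = np.fft.ifft(np.fft.fft(field_in) * H)

    # "perfect imaging": synthesize the inverse transfer function to undo the system
    H_inv = np.where(np.abs(H) > 1e-6, 1.0 / H, 0.0)   # regularized inverse
    field_rec = np.fft.ifft(np.fft.fft(field_out) * H_inv)
    print(np.max(np.abs(field_rec - field_in)))        # ~0: object recovered
    ```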

  2. Riccati parameterized self-similar waves in two-dimensional graded-index waveguide

    NASA Astrophysics Data System (ADS)

    Kumar De, Kanchan; Goyal, Amit; Raju, Thokala Soloman; Kumar, C. N.; Panigrahi, Prasanta K.

    2015-04-01

    An analytical method based on a gauge-similarity transformation technique has been employed for mapping the (2+1)-dimensional variable-coefficient coupled nonlinear Schrödinger equations (vc-CNLSE) with dispersion, nonlinearity, and gain to the standard NLSE. Under certain functional relations we construct a large family of self-similar waves in the form of bright similaritons, Akhmediev breathers, and rogue waves. We report the effect of dispersion on the intensity of the solitary waves. Further, we illustrate the procedure to amplify the intensity of self-similar waves using the isospectral Hamiltonian approach. This approach provides an efficient mechanism to generate analytically a wide class of tapering profiles and widths by exploiting the Riccati parameter. Equivalently, it enables one to control efficiently the self-similar wave structures and hence their evolution.

  3. Models of dyadic social interaction.

    PubMed Central

    Griffin, Dale; Gonzalez, Richard

    2003-01-01

    We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analysis' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382

  4. Accurate quantification of PGE2 in the polyposis in rat colon (Pirc) model by surrogate analyte-based UPLC-MS/MS.

    PubMed

    Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming

    2018-01-30

    An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotope-labeled analogs: PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there is no endogenous interference with determination of the surrogate analyte, specificity was particularly good. Because the authentic bio-matrix is used for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples, which notably increased the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure PGE2, a strong biomarker of colorectal cancer, in the mucosa and polyps of the Pirc rat (an Apc-mutant kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
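
    A minimal sketch of the back-calculation step, with hypothetical concentrations and peak-area ratios: the calibration line is fitted with the surrogate (PGE2-d9) against the internal standard (PGE2-d4), and endogenous PGE2 in a sample is read off the same line under the assumption that the labeled surrogate and the authentic analyte respond identically.

    ```python
    import numpy as np

    # spiked surrogate (PGE2-d9) concentrations in mucosa homogenate (ng/mL; hypothetical)
    conc_d9 = np.array([0.5, 1, 2, 5, 10, 20])
    # measured peak-area ratios, surrogate / internal standard (PGE2-d9 / PGE2-d4)
    ratio_d9 = np.array([0.052, 0.101, 0.198, 0.505, 0.990, 2.010])

    slope, intercept = np.polyfit(conc_d9, ratio_d9, 1)   # calibration line

    # endogenous PGE2 in a study sample: its (PGE2 / PGE2-d4) area ratio is
    # back-calculated with the surrogate's regression equation, assuming the
    # labeled surrogate and the authentic analyte have identical response factors
    ratio_sample = 0.37
    conc_pge2 = (ratio_sample - intercept) / slope
    print(f"endogenous PGE2 ~ {conc_pge2:.2f} ng/mL")
    ```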

  5. Analyzing the Heterogeneous Hierarchy of Cultural Heritage Materials: Analytical Imaging.

    PubMed

    Trentelman, Karen

    2017-06-12

    Objects of cultural heritage significance are created using a wide variety of materials, or mixtures of materials, and often exhibit heterogeneity on multiple length scales. The effective study of these complex constructions thus requires the use of a suite of complementary analytical technologies. Moreover, because of the importance and irreplaceability of most cultural heritage objects, researchers favor analytical techniques that can be employed noninvasively, i.e., without having to remove any material for analysis. As such, analytical imaging has emerged as an important approach for the study of cultural heritage. Imaging technologies commonly employed, from the macroscale through the micro- to nanoscale, are discussed with respect to how the information obtained helps us understand artists' materials and methods, the cultures in which the objects were created, how the objects may have changed over time, and importantly, how we may develop strategies for their preservation.

  6. Reopening the dialogue between the theory of social representations and discursive psychology for examining the construction and transformation of meaning in discourse and communication.

    PubMed

    Batel, Susana; Castro, Paula

    2018-06-28

    The theory of social representations (TSR) and discursive psychology (DP) originated as different social psychological approaches and have at times been presented as incompatible. However, along the years convergence has also been acknowledged, and, lately, most of all, practised. With this paper, we discuss how versions of TSR focusing on self-other relations for examining cultural meaning systems in/through communication, and versions of DP focusing on discourse at cultural, ideological, and interactional levels, can come together. The goal is to help forge a stronger social-psychological exploration of how meaning is constructed and transformed in and through language, discourse, and communication, thus extending current understanding of social change. After presenting a theoretical proposal for integrating those versions of TSR and DP, we offer also an integrated analytical strategy. We suggest that together these proposals can, on one hand, help TSR systematize analyses of social change that are both more critical and better grounded in theorizations of language use, and, on the other, provide DP with analytical tools able to better examine both the relational contexts where the construction and transformation of meaning are performed and their effects on discourse. Finally, we give some illustrations of the use of this analytical strategy. © 2018 The British Psychological Society.

  7. The brain basis of emotion: A meta-analytic review

    PubMed Central

    Lindquist, Kristen A.; Wager, Tor D.; Kober, Hedy; Bliss-Moreau, Eliza; Barrett, Lisa Feldman

    2015-01-01

    Researchers have wondered how the brain creates emotions since the early days of psychological science. With a surge of studies in affective neuroscience in recent decades, scientists are poised to answer this question. In this article, we present a meta-analytic summary of the human neuroimaging literature on emotion. We compare the locationist approach (i.e., the hypothesis that discrete emotion categories consistently and specifically correspond to distinct brain regions) with the psychological constructionist approach (i.e., the hypothesis that discrete emotion categories are constructed of more general brain networks not specific to those categories) to better understand the brain basis of emotion. We review both locationist and psychological constructionist hypotheses of brain–emotion correspondence and report meta-analytic findings bearing on these hypotheses. Overall, we found little evidence that discrete emotion categories can be consistently and specifically localized to distinct brain regions. Instead, we found evidence that is consistent with a psychological constructionist approach to the mind: a set of interacting brain regions commonly involved in basic psychological operations of both an emotional and non-emotional nature are active during emotion experience and perception across a range of discrete emotion categories. PMID:22617651

  8. Construct Validation of Analytic Rating Scales in a Speaking Assessment: Reporting a Score Profile and a Composite

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo

    2007-01-01

    This is a construct validation study of a second language speaking assessment that reported a language profile based on analytic rating scales and a composite score. The study addressed three key issues: score dependability, convergent/discriminant validity of analytic rating scales and the weighting of analytic ratings in the composite score.…

  9. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
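
    The cross-validation idea can be made concrete with a short sketch on a synthetic item pool, using L1 regularization (one of the algorithm families named above) with penalty strength chosen by cross-validation to approximate EPE minimization; this is an illustration under stated assumptions, not the authors' analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    n, p = 500, 300                       # more items than is safe for OLS
    items = rng.normal(size=(n, p))       # responses to a large item pool
    beta_true = np.zeros(p)
    beta_true[:10] = 0.5                  # only a few items truly predict
    risk = items @ beta_true + rng.normal(size=n)   # continuous criterion

    # regularization: penalized coefficients, with the penalty strength set by
    # 5-fold cross-validation so the model is complex enough but not overfit
    model = LassoCV(cv=5).fit(items, risk)
    keyed_items = np.flatnonzero(model.coef_)       # items retained for the scale
    print(len(keyed_items), model.alpha_)
    ```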

  10. What is the right formalism to search for resonances?

    NASA Astrophysics Data System (ADS)

    Mikhasenko, M.; Pilloni, A.; Nys, J.; Albaladejo, M.; Fernández-Ramírez, C.; Jackura, A.; Mathieu, V.; Sherrill, N.; Skwarnicki, T.; Szczepaniak, A. P.

    2018-03-01

    Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. Hereby, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism, and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B→ ψ π K and B→ \\bar{D}π π decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  11. Semiclassical description of resonance-assisted tunneling in one-dimensional integrable models

    NASA Astrophysics Data System (ADS)

    Le Deunff, Jérémy; Mouchet, Amaury; Schlagheck, Peter

    2013-10-01

    Resonance-assisted tunneling is investigated within the framework of one-dimensional integrable systems. We present a systematic recipe, based on Hamiltonian normal forms, to construct one-dimensional integrable models that exhibit resonance island chain structures with accurately controlled sizes and positions of the islands. Using complex classical trajectories that evolve along suitably defined paths in the complex time domain, we construct a semiclassical theory of the resonance-assisted tunneling process. This semiclassical approach yields a compact analytical expression for tunnelling-induced level splittings which is found to be in very good agreement with the exact splittings obtained through numerical diagonalization.

  12. Problem Solving in a Middle School Robotics Design Classroom

    NASA Astrophysics Data System (ADS)

    Norton, Stephen J.; McRobbie, Campbell J.; Ginns, Ian S.

    2007-07-01

    Little research has been conducted on how students work when they are required to plan, build and evaluate artefacts in technology rich learning environments such as those supported by tools including flow charts, Labview programming and Lego construction. In this study, activity theory was used as an analytic tool to examine the social construction of meaning. There was a focus on the effect of teachers’ goals and the rules they enacted upon student use of the flow chart planning tool, and the tools of the programming language Labview and Lego construction. It was found that the articulation of a teacher’s goals via rules and divisions of labour helped to form distinct communities of learning and influenced the development of different problem solving strategies. The use of the planning tool flow charting was associated with continuity of approach, integration of problem solutions including appreciation of the nexus between construction and programming, and greater educational transformation. Students who flow charted defined problems in a more holistic way and demonstrated more methodical, insightful and integrated approaches to their use of tools. The findings have implications for teaching in design dominated learning environments.

  13. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
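
    A minimal sketch of the deterministic-versus-stochastic comparison on a toy production-degradation model (a stand-in, not the authors' auxin-transport model): the ODE gives the mean behaviour, while a Gillespie simulation of the same two reactions exposes the variability of the system.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_in, k_out = 10.0, 0.1   # production and first-order removal rates

    # deterministic ODE: dn/dt = k_in - k_out * n
    det = solve_ivp(lambda t, n: k_in - k_out * n, (0, 100), [0.0])

    # Gillespie stochastic simulation of the same two reactions
    rng = np.random.default_rng(2)
    t, n, traj = 0.0, 0, [(0.0, 0)]
    while t < 100:
        rates = np.array([k_in, k_out * n])
        total = rates.sum()
        t += rng.exponential(1 / total)                      # waiting time
        n += 1 if rng.random() < rates[0] / total else -1    # which reaction fired
        traj.append((t, n))

    print(det.y[0][-1], n)   # both fluctuate around the steady state k_in/k_out = 100
    ```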

  14. A Variational Approach to the Analysis of Dissipative Electromechanical Systems

    PubMed Central

    Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek

    2014-01-01

    We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our novel technique, we can take advantage of the analytic approach from mechanics, and we can apply these powerful analytical methods to electrical and to electromechanical systems. We can analyse systems that include non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221

  15. Holistic versus Analytic Processing: Evidence for a Different Approach to Processing of Chinese at the Word and Character Levels in Chinese Children

    ERIC Educational Resources Information Center

    Liu, Phil D.; Chung, Kevin K. H.; McBride-Chang, Catherine; Tong, Xiuhong

    2010-01-01

    Among 30 Hong Kong Chinese fourth graders, sensitivities to character and word constructions were examined in judgment tasks at each level. There were three conditions across both tasks: the real condition, consisting of either actual two-character compound Chinese words or real Chinese compound characters; the reversed condition, with either the…

  16. A new approach to exact optical soliton solutions for the nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Morales-Delgado, V. F.; Gómez-Aguilar, J. F.; Baleanu, Dumitru

    2018-05-01

    By using the modified homotopy analysis transform method, we construct the analytical solutions of the space-time generalized nonlinear Schrödinger equation involving a new fractional conformable derivative in the Liouville-Caputo sense and the fractional-order derivative with the Mittag-Leffler law. Employing theoretical parameters, we present some numerical simulations and compare the solutions obtained.

  17. Collaboration and Synergy among Government, Industry and Academia in M&S Domain: Turkey’s Approach

    DTIC Science & Technology

    2009-10-01

    Analysis, Decision Support System Design and Implementation, Simulation Output Analysis, Statistical Data Analysis, Virtual Reality, Artificial... virtual and constructive visual simulation systems as well as integrated advanced analytical models. Collaboration and Synergy among Government...simulation systems that are ready to use, credible, integrated with C4ISR systems. Creating synthetic environments and/or virtual prototypes of concepts

  18. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
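
    A sketch of the third step only, under the simplifying assumption that the step-2 (REML) estimates of the mean effects and random-effect covariance are taken as given; the full procedure re-estimates these from the estimating-equation output on each replicate. All values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # step-2 (REML) output, taken as given here: mean treatment effects on the
    # surrogate and clinical endpoints, and the covariance of the random effects
    mu = np.array([0.4, 0.3])
    D = np.array([[0.10, 0.06],
                  [0.06, 0.08]])
    n_trials = 20

    # step 3: parametric bootstrap of the trial-level correlation of effects
    boot_r = []
    for _ in range(2000):
        effects = rng.multivariate_normal(mu, D, size=n_trials)
        boot_r.append(np.corrcoef(effects[:, 0], effects[:, 1])[0, 1])
    ci = np.percentile(boot_r, [2.5, 97.5])
    print(ci)   # confidence interval for the surrogacy correlation
    ```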

  19. Analysis of THG modes for femtosecond laser pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Sidorov, Pavel S.

    2017-05-01

    THG is used nowadays in many practical applications, such as substance diagnostics and the imaging of biological objects. With the development of new materials and technologies (for example, photonic crystals), attention to the analysis of the THG process has grown, and understanding the features of THG is therefore a current problem. Earlier, we developed a new analytical approach that uses an invariant of the problem to construct analytical solutions for the THG process. It should be stressed that we did not use the basic-wave non-depletion approximation; nevertheless, a long-pulse-duration approximation and a plane-wave approximation were applied. The analytical solution demonstrates, in particular, an optical bistability property (and many other regimes of frequency tripling) for the third harmonic generation process. Obviously, however, this approach does not reflect the influence of medium dispersion on frequency tripling. Therefore, in this paper we analyze the THG efficiency of a femtosecond laser pulse taking into account the effect of second-order dispersion, as well as the effects of self- and cross-modulation of the interacting waves, on the frequency conversion process. The analysis is carried out by computer simulation based on Schrödinger equations describing the process under consideration.

  20. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    PubMed Central

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
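
    A toy sketch of label propagation over a heterogeneous patient-drug graph of the kind described, with hypothetical similarity and association values: labels seeded on the drugs spread across the blended graph until convergence, yielding drug scores per patient.

    ```python
    import numpy as np

    # toy heterogeneous graph: 4 patients + 3 drugs, assembled blockwise as
    # [patient-patient sim | patient-drug assoc]
    # [assoc transposed    | drug-drug sim     ]
    P = np.array([[0, .8, .1, 0], [.8, 0, .2, 0], [.1, .2, 0, .9], [0, 0, .9, 0]])
    D = np.array([[0, .7, 0], [.7, 0, .3], [0, .3, 0]])
    A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1.]])  # prior associations
    W = np.block([[P, A], [A.T, D]])

    # symmetric normalization S = Deg^{-1/2} W Deg^{-1/2}
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))

    # labels: known drug-effectiveness seeds (one column per drug)
    Y = np.zeros((7, 3))
    Y[4:, :] = np.eye(3)

    # propagate: F <- alpha * S @ F + (1 - alpha) * Y until convergence
    alpha, F = 0.8, Y.copy()
    for _ in range(200):
        F = alpha * S @ F + (1 - alpha) * Y
    print(F[:4].round(3))   # propagated drug scores for each patient
    ```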

  1. Twist for Snyder space

    NASA Astrophysics Data System (ADS)

    Meljanac, Daniel; Meljanac, Stjepan; Mignemi, Salvatore; Pikutić, Danijel; Štrajn, Rina

    2018-03-01

    We construct the twist operator for the Snyder space. Our starting point is a non-associative star product related to a Hermitian realisation of the noncommutative coordinates originally introduced by Snyder. The corresponding coproduct of momenta is non-coassociative. The twist is constructed using a general definition of the star product in terms of a bi-differential operator in the Hopf algebroid approach. The result is given by a closed analytical expression. We prove that this twist reproduces the correct coproducts of the momenta and the Lorentz generators. The twisted Poincaré symmetry is described by a non-associative Hopf algebra, while the twisted Lorentz symmetry is described by the undeformed Hopf algebra. This new twist might be important in the construction of different types of field theories on Snyder space.

  2. An approach to estimate spatial distribution of analyte within cells using spectrally-resolved fluorescence microscopy.

    PubMed

    Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam

    2017-01-18

    While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as their relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract spatial distribution of analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels.

  3. An approach to estimate spatial distribution of analyte within cells using spectrally-resolved fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam

    2017-03-01

    While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as their relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract spatial distribution of analyte in heterogeneous systems. The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels. Dedicated to Professor Kankan Bhattacharyya.

  4. Design of the stabilizing control of the orbital motion in the vicinity of the collinear libration point L1 using the analytical representation of the invariant manifold

    NASA Astrophysics Data System (ADS)

    Maliavkin, G. P.; Shmyrov, A. S.; Shmyrov, V. A.

    2018-05-01

    Vicinities of collinear libration points of the Sun-Earth system are currently quite attractive for space navigation. Various projects placing Sun-observing spacecraft at the L1 libration point and telescopes at L2 have been implemented (e.g., the spacecraft "WIND", "SOHO", "Herschel", and "Planck"). Because collinear libration points are unstable, the motion of a spacecraft in their vicinity must be stabilized. Laws of stabilizing motion control in the vicinity of the L1 point can be constructed using the analytical representation of a stable invariant manifold. The efficiency of these control laws depends on the precision of the representation. Within the model of Hill's approximation of the circular restricted three-body problem, in the rotating geocentric coordinate system, one can obtain the analytical representation of an invariant manifold filled with bounded trajectories in the form of a series in powers of the phase variables. Approximate representations of orders one through four can be used to construct four laws of stabilizing feedback motion control under which trajectories approach the manifold. Numerical simulation allows a comparison of how the precision of the representation of the invariant manifold influences the efficiency of the control, expressed in terms of energy consumption (characteristic velocity). It shows that using higher-order approximations in constructing the control laws can significantly reduce the energy consumption of implementing the control compared to the linear approximation.

  5. What is the right formalism to search for resonances?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhasenko, M.; Pilloni, A.; Nys, J.

    Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. Hereby, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism, and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B→ ψ π K and B→ \bar{D}π π decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  6. What is the right formalism to search for resonances?

    DOE PAGES

    Mikhasenko, M.; Pilloni, A.; Nys, J.; ...

    2018-03-17

    Hadron decay chains constitute one of the main sources of information on the QCD spectrum. We discuss the differences between several partial wave analysis formalisms used in the literature to build the amplitudes. We match the helicity amplitudes to the covariant tensor basis. Hereby, we pay attention to the analytical properties of the amplitudes and separate singularities of kinematical and dynamical nature. We study the analytical properties of the spin-orbit (LS) formalism, and some of the covariant tensor approaches. In particular, we explicitly build the amplitudes for the B→ ψ π K and B→ \bar{D}π π decays, and show that the energy dependence of the covariant approach is model dependent. We also show that the usual recursive construction of covariant tensors explicitly violates crossing symmetry, which would lead to different resonance parameters extracted from scattering and decay processes.

  7. Rotation forms and local Hamiltonian monodromy

    NASA Astrophysics Data System (ADS)

    Efstathiou, K.; Giacobbe, A.; Mardešić, P.; Sugny, D.

    2017-02-01

    The monodromy of torus bundles associated with completely integrable systems can be computed using geometric techniques (constructing homology cycles) or analytic arguments (computing discontinuities of abelian integrals). In this article, we give a general approach to the computation of monodromy that resembles the analytical one, reducing the problem to the computation of residues of polar 1-forms. We apply our technique to three celebrated examples of systems with monodromy (the champagne bottle, the spherical pendulum, the hydrogen atom) and to the case of non-degenerate focus-focus singularities, re-obtaining the classical results. An advantage of this approach is that the residue-like formula can be shown to be local in a neighborhood of a singularity, hence allowing the definition of monodromy also in the case of non-compact fibers. This idea has been introduced in the literature under the name of scattering monodromy. We prove the coincidence of the two definitions with the monodromy of an appropriately chosen compactification.

  8. "You" and "I," "us " and them: a systemic-discursive approach to the study of ethnic stereotypes in the context of British-Greek heterosexual couple relationships.

    PubMed

    Tseliou, Eleftheria; Eisler, Ivan

    2007-12-01

    Systemic family therapy accounts of ethnic stereotypes in the context of ethnically mixed couple relationships have tended to focus on the interpersonal-psychological realm of the couple relationship. Discourse analytic research, on the other hand, has highlighted the role of such stereotypes in the construction of national identity and has stressed the importance of a historical and ideological approach. In this article, we will present our attempt to develop a systemic-discursive approach to the study of stereotypes in the particular context of British-Greek heterosexual couple relationships by building on both fields.

  9. Assessment of wastewater treatment alternatives for small communities: An analytic network process approach.

    PubMed

    Molinos-Senante, María; Gómez, Trinidad; Caballero, Rafael; Hernández-Sancho, Francesc; Sala-Garrido, Ramón

    2015-11-01

    The selection of the most appropriate wastewater treatment (WWT) technology is a complex problem since many alternatives are available and many criteria are involved in the decision-making process. To deal with this challenge, the analytic network process (ANP) is applied for the first time to rank a set of seven WWT technology set-ups for secondary treatment in small communities. A major advantage of ANP is that it incorporates interdependent relationships between elements. Results illustrated that extensive technologies (constructed wetlands and pond systems) are the alternatives most preferred by WWT experts. The sensitivity analysis performed verified that the ranking of WWT alternatives is very stable, since constructed wetlands are almost always placed in the first position. This paper showed that ANP analysis is suitable for dealing with complex decision-making problems, such as the selection of the most appropriate WWT system, contributing to a better understanding of the multiple interdependences among the elements involved in the assessment. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Simulating ground water-lake interactions: Approaches and insights

    USGS Publications Warehouse

    Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.

    2003-01-01

    Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of MODFLOW LAK package used. When the grid size was reduced, a finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is being selected and applied.

  11. On the first crossing distributions in fractional Brownian motion and the mass function of dark matter haloes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr

    We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier and we present an exact analytical solution. Additionally, we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and we compare with those resulting from N-body simulations. We show that the results of analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits to those which result from N-body simulations. We also present comparisons with other models which exist in the literature and we discuss different ways of improving the agreement between analytical results and N-body simulations.
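
    A sketch of the path-simulation side: the exact fBm covariance, Cholesky sampling of paths, and an empirical first-crossing distribution against a constant barrier. Parameters are illustrative; this is not the paper's integral-equation solution.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    H, n, T, barrier = 0.7, 500, 10.0, 1.0    # Hurst exponent, grid, horizon, barrier
    t = np.linspace(T / n, T, n)

    # exact fBm covariance: C_ij = 0.5 * (t_i^2H + t_j^2H - |t_i - t_j|^2H)
    C = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
               - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))

    # simulate paths and record the first time each crosses the constant barrier
    crossings = []
    for _ in range(2000):
        path = L @ rng.normal(size=n)
        hit = np.argmax(path >= barrier)      # first index above the barrier
        if path[hit] >= barrier:              # guard: argmax is 0 if never crossed
            crossings.append(t[hit])
    print(np.histogram(crossings, bins=20, range=(0, T))[0])  # first-crossing counts
    ```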

  12. Multidisciplinary design and analytic approaches to advance prospective research on the multilevel determinants of child health.

    PubMed

    Johnson, Sara B; Little, Todd D; Masyn, Katherine; Mehta, Paras D; Ghazarian, Sharon R

    2017-06-01

    Characterizing the determinants of child health and development over time, and identifying the mechanisms by which these determinants operate, is a research priority. The growth of precision medicine has increased awareness and refinement of conceptual frameworks, data management systems, and analytic methods for multilevel data. This article reviews key methodological challenges in cohort studies designed to investigate multilevel influences on child health and strategies to address them. We review and summarize methodological challenges that could undermine prospective studies of the multilevel determinants of child health and ways to address them, borrowing approaches from the social and behavioral sciences. Nested data, variation in intervals of data collection and assessment, missing data, construct measurement across development and reporters, and unobserved population heterogeneity pose challenges in prospective multilevel cohort studies with children. We discuss innovations in missing data, innovations in person-oriented analyses, and innovations in multilevel modeling to address these challenges. Study design and analytic approaches that facilitate the integration across multiple levels, and that account for changes in people and the multiple, dynamic, nested systems in which they participate over time, are crucial to fully realize the promise of precision medicine for children and adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. [Academic review of global health approaches: an analytical framework].

    PubMed

    Franco-Giraldo, Alvaro

    2015-09-01

    In order to identify perspectives on global health, this essay analyzes different trends from academia that have enriched global health and international health. A database was constructed with information from the world's leading global health centers. The search covered authors on global diplomacy and global health and was performed in PubMed, LILACS, and Google Scholar with the key words "global health" and "international health". Research and training centers in different countries have taken various academic approaches to global health; various interests and ideological orientations have emerged in relation to the global health concept. Based on the mosaic of global health centers and their positions, the review concludes that the new concept reflects the construction of a paradigm of renewal in international health and global health, the pre-paradigmatic stage of which has still not reached a final version.

  14. Selection of area-level variables from administrative data: an intersectional approach to the study of place and child development.

    PubMed

    Kershaw, Paul; Forer, Barry

    2010-05-01

    Given data limitations, neighborhood effects scholarship relies heavily on administrative data to measure area-level constructs. We provide new evidence to guide the selection of indicators from routinely collected sources, focusing on effects on early child development. Informed by an analytic paradigm attuned to the intersection of race, class, and sex, along with population-level data in British Columbia, Canada, our findings signal the need for greater precision when choosing variables in place of the now dominant approaches for measuring constructs like income/wealth, employment, family structure and race/ethnicity. We also provide new evidence about which area-level variables associate with the different domains of child development, as well as how area-level associations vary across urban and rural contexts. Copyright 2009 Elsevier Ltd. All rights reserved.

  15. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in physics and are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  16. Tree tensor network approach to simulating Shor's algorithm

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Eugene

    2017-12-01

    Constructively simulating quantum systems furthers our understanding of qualitative and quantitative features which may be analytically intractable. In this paper, we directly simulate and explore the entanglement structure present in the paradigmatic example for exponential quantum speedups: Shor's algorithm. To perform our simulation, we construct a dynamic tree tensor network which manifestly captures two salient circuit features for modular exponentiation. These are the natural two-register bipartition and the invariance of entanglement with respect to permutations of the top-register qubits. Our construction helps identify the entanglement entropy properties, which we summarize by a scaling relation. Further, the tree network is efficiently projected onto a matrix product state from which we efficiently execute the quantum Fourier transform. Future simulation of quantum information states with tensor networks exploiting circuit symmetries is discussed.

  17. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    PubMed

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces is constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.

  18. Matrix Effect Compensation in Small-Molecule Profiling for an LC-TOF Platform Using Multicomponent Postcolumn Infusion.

    PubMed

    González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas

    2015-06-16

    The possible presence of matrix effects is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis due to their impact on the reliability of the obtained quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultraperformance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study the matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed and quality control samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix effect profiles of the PCI-ISs were different: this confirms that the matrix effect is compound-dependent, and therefore the most suitable PCI-IS has to be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses, which were used to develop an optimized method that compensates for variation in ionization efficiency. The approach presented here improved the results in terms of matrix effect dramatically. Furthermore, calibration curves of higher quality are obtained, dynamic range is enhanced, and the accuracy and precision of QC samples are increased. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effects, which can make LC-MS analysis even more successful, adding a higher reliability in quantification to its intrinsic high sensitivity and selectivity.
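
    A deliberately simplified sketch of the correction idea: where the co-eluting PCI-IS response dips (ionization suppression), the analyte trace is scaled up proportionally. The values are hypothetical, and the real platform selects the best-matching of the eight PCI-ISs for each analyte.

    ```python
    import numpy as np

    # hypothetical chromatogram traces sampled at the same time points
    analyte = np.array([0., 2., 150., 420., 130., 3., 0.])     # analyte signal
    pci_is  = np.array([100., 95., 60., 55., 65., 96., 100.])  # PCI-IS response
    # the PCI-IS dips where the matrix suppresses ionization

    # reconstructed (corrected) trace: analyte response divided by the PCI-IS
    # response normalized to its unsuppressed (baseline) level
    corrected = analyte * (pci_is[0] / pci_is)
    print(corrected.round(1))
    ```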

  19. Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.

    PubMed

    Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark

    2016-03-16

    The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
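
    A compact sketch of principal component regression itself, on synthetic "voltammograms", to make the current-concentration machinery concrete; the paper's point is that the training scans must come from the same experimental conditions as the data being predicted. Names and values here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # training set: 30 background-subtracted voltammograms with known analyte levels
    conc = rng.uniform(0, 1, size=(30, 2))      # e.g., [dopamine, pH shift]
    basis = rng.normal(size=(2, 100))           # characteristic current responses
    scans = conc @ basis + 0.01 * rng.normal(size=(30, 100))

    # principal component regression: keep leading PCs, regress concentration on scores
    k = 3
    scan_mean, conc_mean = scans.mean(axis=0), conc.mean(axis=0)
    U, s, Vt = np.linalg.svd(scans - scan_mean, full_matrices=False)
    scores = (scans - scan_mean) @ Vt[:k].T
    coef, *_ = np.linalg.lstsq(scores, conc - conc_mean, rcond=None)

    # predict a new scan recorded under the SAME conditions as the training set
    new_scan = 0.6 * basis[0] + 0.2 * basis[1] + 0.01 * rng.normal(size=100)
    pred = ((new_scan - scan_mean) @ Vt[:k].T) @ coef + conc_mean
    print(pred.round(2))                        # ~ [0.6, 0.2]
    ```

    Training data from different electrodes or equipment would misassign the current-concentration relationship carried by Vt and coef, which is the failure mode the abstract describes.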

  20. CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP627). INL PHOTO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP-627). INL PHOTO NUMBER NRTS-54-12124. Unknown Photographer, 9/21/1954 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID

  1. Evaluation Criteria for Micro-CAI: A Psychometric Approach

    PubMed Central

    Wallace, Douglas; Slichter, Mark; Bolwell, Christine

    1985-01-01

    The increased use of microcomputer-based instructional programs has resulted in a greater need for third-party evaluation of the software. This in turn has prompted the development of micro-CAI evaluation tools. The present project sought to develop a prototype instrument to assess the impact of CAI program presentation characteristics on students. Data analysis and scale construction were conducted using standard item reliability analyses and factor analytic techniques. Adequate subscale reliabilities and factor structures were found, suggesting that a psychometric approach to CAI evaluation may possess some merit. Efforts to assess the utility of the resultant instrument are currently underway.

  2. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    DTIC Science & Technology

    2014-09-01

    training phase and a fusion of the detector outputs... Bagging: the basic idea of Bagging is to train multiple classifiers... can reduce noisy interest points. Person detection and background subtraction methods were used to create hot regions. The hot regions were... detection algorithms are incorporated with MHT to construct one integrated detector/tracker... IRDS-CASIA proposed a method to solve a...

  3. Hardware demonstration of flexible beam control

    NASA Technical Reports Server (NTRS)

    Schaechter, D. B.

    1980-01-01

    An experiment employing a pinned-free flexible beam has been constructed to demonstrate and verify several facets of the control of flexible structures. The desired features of the experiment are to demonstrate active shape control, active dynamic control, adaptive control, various control law design approaches, and associated hardware requirements and mechanization difficulties. This paper contains the analytical work performed in support of the facility development, the final design specifications, control law synthesis, and some preliminary results.

  4. An Environmental Management Maturity Model of Construction Programs Using the AHP-Entropy Approach.

    PubMed

    Bai, Libiao; Wang, Hailing; Huang, Ning; Du, Qiang; Huang, Youdan

    2018-06-23

    The accelerating process of urbanization in China has led to considerable opportunities for the development of construction projects; however, environmental issues have become an important constraint on the implementation of these projects. To quantitatively describe the environmental management capabilities of such projects, this paper proposes a 2-dimensional Environmental Management Maturity Model of Construction Program (EMMMCP) based on an analysis of existing projects, group management theory, and a management maturity model. In this model, a synergetic process was included to compensate for the lack of consideration of synergies in previous studies, and it was involved in the construction of the first dimension, i.e., the environmental management index system. The second dimension, i.e., the maturity level of environmental management, was then constructed by redefining the hierarchical characteristics of construction program (CP) environmental management maturity. Additionally, a mathematical solution to this proposed model was derived via the Analytic Hierarchy Process (AHP)-entropy approach. To verify the effectiveness and feasibility of the proposed model, a computational experiment was conducted. The results show that this approach can not only measure the individual levels of different processes but also provide a reference for stakeholders when making decisions on the environmental management of a construction program, which indicates that the model is reasonable for evaluating the level of environmental management maturity in CPs. To our knowledge, this paper is the first study to evaluate the environmental management maturity levels of CPs; it fills a gap between program management and environmental management and provides a reference for relevant management personnel to enhance their environmental management capabilities.
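
    A minimal numerical sketch of an AHP-entropy weighting scheme of the kind named above (hypothetical judgments and indicator values, not the paper's actual matrices): AHP supplies subjective criterion weights from a pairwise comparison matrix, the entropy method supplies objective weights from data, and the two are combined multiplicatively.

    ```python
    import numpy as np

    # AHP: pairwise comparison matrix for three environmental-management criteria
    # (hypothetical judgments on Saaty's 1-9 scale)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    eigval, eigvec = np.linalg.eig(A)
    w_ahp = np.real(eigvec[:, np.argmax(np.real(eigval))])
    w_ahp /= w_ahp.sum()                       # subjective AHP weights

    # entropy method: objective weights from a (hypothetical) indicator matrix,
    # rows = construction projects, columns = the same three criteria
    X = np.array([[0.6, 0.8, 0.3],
                  [0.9, 0.4, 0.7],
                  [0.5, 0.6, 0.9],
                  [0.7, 0.5, 0.4]])
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    w_ent = (1 - E) / (1 - E).sum()            # objective entropy weights

    # combined AHP-entropy weights
    w = w_ahp * w_ent / (w_ahp * w_ent).sum()
    print(w.round(3))
    ```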

  5. Land-use evaluation for sustainable construction in a protected area: A case of Sara mountain national park.

    PubMed

    Ristić, Vladica; Maksin, Marija; Nenković-Riznić, Marina; Basarić, Jelena

    2018-01-15

    The process of making decisions on sustainable development and construction begins in spatial and urban planning, when the suitability of land for sustainable construction in a protected area (PA) and its immediate and regional surroundings is defined. The aim of this research is to propose and assess a model for evaluating land-use suitability for sustainable construction in a PA and its surroundings. The model was built on a Multi-Criteria Decision Analysis framework adapted for this research, combined with an adapted Analytic Hierarchy Process and the Delphi process, and supported by a geographical information system (GIS) within ESRI ArcGIS (Spatial Analyst). The model is applied to the case study of Sara mountain National Park in Kosovo. The result of the model is a "map of integrated assessment of land-use suitability for sustainable construction in a PA for the natural factor". Copyright © 2017 Elsevier Ltd. All rights reserved.
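
    The GIS step of such a model reduces to a weighted overlay of criterion rasters. The numpy sketch below is a toy stand-in for the Spatial Analyst workflow: the three rasters and the AHP-style weights are invented, and real criteria would first be standardized to a common suitability scale.

        import numpy as np

        # Hypothetical criterion rasters (e.g., slope, distance to roads,
        # habitat sensitivity), already rescaled to a common 0-1 scale
        rng = np.random.default_rng(0)
        slope, roads, habitat = rng.random((3, 4, 4))

        # AHP-derived criterion weights (assumed values; must sum to 1)
        weights = {"slope": 0.5, "roads": 0.2, "habitat": 0.3}

        # Weighted overlay: cell-by-cell weighted sum of the criterion layers
        suitability = (weights["slope"] * slope
                       + weights["roads"] * roads
                       + weights["habitat"] * habitat)
        print(np.round(suitability, 2))  # land-use suitability scores per cell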

  6. CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP627). INL PHOTO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP-627). INL PHOTO NUMBER NRTS-54-12573. R.G. Larsen, Photographer, 10/20/1954 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID

  7. CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP627) SHOWING INITIAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING INITIAL EXCAVATION. INL PHOTO NUMBER NRTS-54-10703. Unknown Photographer, 5/21/1954 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID

  8. A novel approach to the simultaneous extraction and non-targeted analysis of the small molecules metabolome and lipidome using 96-well solid phase extraction plates with column-switching technology.

    PubMed

    Li, Yubo; Zhang, Zhenzhu; Liu, Xinyu; Li, Aizhu; Hou, Zhiguo; Wang, Yuming; Zhang, Yanjun

    2015-08-28

    This study combines solid phase extraction (SPE) using 96-well plates with column-switching technology to construct a rapid, high-throughput method for the simultaneous extraction and non-targeted analysis of the small-molecule metabolome and lipidome, based on ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry. The study first investigated the columns and analytical conditions for the small-molecule metabolome and lipidome, separated on HSS T3 and BEH C18 columns, respectively. Next, the loading capacity and actuation duration of the SPE were optimized. Subsequently, SPE and column switching were used together to rapidly and comprehensively analyze the biological samples. The experimental results showed that the new analytical procedure had good precision and maintained sample stability (RSD<15%). The method was then satisfactorily applied to a broader analysis of the small-molecule metabolome and lipidome to test its throughput. The resulting method represents a new analytical approach for biological samples and a highly useful tool for research in metabolomics and lipidomics. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Analytical solutions to non-Fickian subsurface dispersion in uniform groundwater flow

    USGS Publications Warehouse

    Zou, S.; Xia, J.; Koussis, Antonis D.

    1996-01-01

    Analytical solutions are obtained by the Fourier transform technique for the one-, two-, and three-dimensional transport of a conservative solute injected instantaneously in a uniform groundwater flow. These solutions account for dispersive non-linearity caused by the heterogeneity of the hydraulic properties of aquifer systems and can be used as building blocks to construct solutions by convolution (principle of superposition) for source conditions other than slug injection. The dispersivity is assumed to vary parabolically with time and is thus constant for the entire system at any given time. Two approaches for estimating time-dependent dispersion parameters are developed for two-dimensional plumes. They both require minimal field tracer test data and, therefore, represent useful tools for assessing real-world aquifer contamination sites. The first approach requires mapped plume-area measurements at two specific times after the tracer injection. The second approach requires concentration-versus-time data from two sampling wells through which the plume passes. Detailed examples and comparisons with other procedures show that the methods presented herein are sufficiently accurate and easier to use than other available methods.
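
    The slug-injection building block the authors describe is, in each dimension, a Gaussian whose spread is set by the (here time-dependent) dispersion coefficients. The sketch below evaluates such a 2D solution with an illustrative parabolic-in-time dispersivity law; all parameter values, and the exact form of alpha(t), are invented for demonstration and are not the paper's expressions.

        import numpy as np

        def dispersivity(t, a_max=1.0, T=100.0):
            """Illustrative parabolic-in-time longitudinal dispersivity:
            grows parabolically, reaches a_max at t = T, constant after."""
            s = np.minimum(t / T, 1.0)
            return a_max * (2 * s - s**2)

        def plume_2d(x, y, t, M=1.0, v=0.5, n=0.3, b=1.0, ratio=0.1):
            """2D Gaussian plume for an instantaneous slug of mass M injected
            at the origin into uniform flow (velocity v along x), aquifer
            thickness b, porosity n. Transverse dispersivity is taken as
            ratio * longitudinal. All parameter values are illustrative."""
            aL = dispersivity(t)
            DL, DT = aL * v, ratio * aL * v
            norm = M / (n * b * 4 * np.pi * t * np.sqrt(DL * DT))
            return norm * np.exp(-((x - v * t)**2 / (4 * DL * t)
                                   + y**2 / (4 * DT * t)))

        # Concentration 50 m downstream on the centerline at t = 100 days
        print(plume_2d(x=50.0, y=0.0, t=100.0))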

  10. Concept mapping and network analysis: an analytic approach to measure ties among constructs.

    PubMed

    Goldman, Alyssa W; Kane, Mary

    2014-12-01

    Group concept mapping is a mixed-methods approach that helps a group visually represent its ideas on a topic of interest through a series of related maps. The maps and additional graphics are useful for planning, evaluation and theory development. Group concept maps are typically described, interpreted and utilized through points, clusters and distances, and the implications of these features in understanding how constructs relate to one another. This paper focuses on the application of network analysis to group concept mapping to quantify the strength and directionality of relationships among clusters. The authors outline the steps of this analysis, and illustrate its practical use through an organizational strategic planning example. Additional benefits of this analysis to evaluation projects are also discussed, supporting the overall utility of this supplemental technique to the standard concept mapping methodology. Copyright © 2014 Elsevier Ltd. All rights reserved.
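
    A minimal sketch can clarify the quantitative step: given an item-by-item co-sort similarity matrix and a cluster assignment, a cluster-to-cluster tie strength can be taken as the mean item-level similarity across cluster pairs. The data and the averaging rule below are invented for illustration; the authors' procedure also addresses directionality, which this sketch omits.

        import numpy as np

        # Hypothetical co-occurrence matrix from a card sort:
        # S[i, j] = number of participants who sorted items i and j together
        S = np.array([[0, 8, 7, 2, 1],
                      [8, 0, 6, 3, 2],
                      [7, 6, 0, 1, 2],
                      [2, 3, 1, 0, 9],
                      [1, 2, 2, 9, 0]])

        # Cluster assignment for each item (two clusters, invented)
        labels = np.array([0, 0, 0, 1, 1])

        # Tie strength between clusters = mean item similarity across them
        n_clusters = labels.max() + 1
        ties = np.zeros((n_clusters, n_clusters))
        for a in range(n_clusters):
            for b in range(n_clusters):
                block = S[np.ix_(labels == a, labels == b)]
                if a == b:  # within-cluster: average off-diagonal similarity
                    m = block.shape[0]
                    ties[a, b] = block.sum() / (m * (m - 1)) if m > 1 else 0.0
                else:
                    ties[a, b] = block.mean()
        print(np.round(ties, 2))  # cluster-to-cluster tie-strength matrix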

  11. CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP627) SHOWING PLACEMENT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONSTRUCTION PROGRESS PHOTO OF REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING PLACEMENT OF PIERS. INL PHOTO NUMBER NRTS-54-11716. Unknown Photographer, 8/20/1954 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID

  12. Communication: On the diffusion tensor in macroscopic theory of cavitation

    NASA Astrophysics Data System (ADS)

    Shneidman, Vitaly A.

    2017-08-01

    The classical description of the nucleation of cavities in a stretched fluid relies on a one-dimensional Fokker-Planck equation (FPE) in the space of their sizes r, with the diffusion coefficient D(r) constructed for all r from macroscopic hydrodynamics and thermodynamics, as shown by Zeldovich. When additional variables (e.g., vapor pressure) are required to describe the state of a bubble, a similar approach to constructing a diffusion tensor D̂ generally works only in the direct vicinity of the thermodynamic saddle point corresponding to the critical nucleus. It is shown, nevertheless, that "proper" kinetic variables to describe a cavity can be selected, allowing one to introduce D̂ in the entire domain of parameters. In this way, for the first time, complete FPEs are constructed for viscous volatile and inertial fluids. In the former case, the FPE with symmetric D̂ is solved numerically. Alternatively, in the case of an inertial fluid, an equivalent Langevin equation is considered; results are compared with analytical predictions. The suggested approach is quite general and can be applied beyond the cavitation problem.
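
    The Langevin route mentioned for the inertial case can be sketched generically. The Euler-Maruyama loop below integrates dr = A(r) dt + sqrt(2 D(r)) dW for a toy drift and a toy size-dependent diffusion coefficient; neither function is the paper's hydrodynamic expression, and all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def drift(r, r_c=1.0):
            """Toy deterministic growth rate of a cavity of radius r:
            cavities below the critical radius r_c shrink, those above it
            grow (an illustrative potential, not the paper's expression)."""
            return r - r_c

        def diffusion(r):
            """Toy size-dependent diffusion coefficient D(r)."""
            return 0.15 * (1.0 + r)

        # Euler-Maruyama integration of dr = drift dt + sqrt(2 D) dW
        dt = 1e-3
        r = 0.8  # start as a subcritical cavity
        for step in range(200000):
            r += drift(r) * dt + np.sqrt(2.0 * diffusion(r) * dt) * rng.normal()
            r = max(r, 1e-6)       # a radius cannot go negative
            if r > 2.0:            # well past critical size: nucleation event
                print(f"cavity nucleated after {step + 1} steps")
                break
        else:
            print("no nucleation within the simulated window")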

  13. Toward an Empirical Multidimensional Structure of Anhedonia, Reward Sensitivity, and Positive Emotionality: An Exploratory Factor Analytic Study.

    PubMed

    Olino, Thomas M; McMakin, Dana L; Forbes, Erika E

    2016-11-20

    Positive emotionality, anhedonia, and reward sensitivity share motivational and experiential elements of approach motivation and pleasure. Earlier work has examined the interrelationships among these constructs from measures of extraversion. More recently, the Research Domain Criteria introduced the Positive Valence Systems as a primary dimension to better understand psychopathology. However, the suggested measures tapping this construct have not yet been integrated within the structural framework of personality, even at the level of self-report. Thus, this study conducted exploratory factor and exploratory bifactor analyses on 17 different dimensions relevant to approach motivation, spanning anhedonia, behavioral activation system functioning, and positive emotionality. Convergent validity of these dimensions is tested by examining associations with depressive symptoms. Relying on multiple indices of fit, our preferred model included a general factor along with specific factors of affiliation, positive emotion, assertiveness, and pleasure seeking. These factors demonstrated different patterns of association with depressive symptoms. We discuss the plausibility of this model and highlight important future directions for work on the structure of a broad Positive Valence Systems construct. © The Author(s) 2016.
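
    For readers unfamiliar with the method, the exploratory-factor-analysis step (though not the bifactor extension) can be sketched in a few lines. The data below are purely synthetic, six indicators driven by two latent factors, and scikit-learn's FactorAnalysis (recent versions support varimax rotation) stands in for the specialized software such studies typically use.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Synthetic data: 200 respondents x 6 scale scores generated from
        # two latent factors (a stand-in for the study's 17 dimensions)
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(200, 2))
        loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                             [0.1, 0.8], [0.0, 0.9], [0.2, 0.7]])
        X = latent @ loadings.T + 0.3 * rng.normal(size=(200, 6))

        fa = FactorAnalysis(n_components=2, rotation="varimax")
        fa.fit(X)
        print(np.round(fa.components_.T, 2))  # estimated loadings (items x factors)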

  14. Development of lightweight aluminum compression panels reinforced by boron-epoxy infiltrated extrusions

    NASA Technical Reports Server (NTRS)

    Roy, P. A.; Mcelman, J. A.; Henshaw, J.

    1973-01-01

    Analytical and experimental studies were performed to evaluate the structural efficiencies afforded by the selective reinforcement of conventional aluminum compression panels with unidirectional boron epoxy composite materials. A unique approach for selective reinforcement was utilized called boron/epoxy infiltration. This technique uses extruded metal sections with preformed hollow voids into which unidirectional boron filaments are drawn and subsequently infiltrated with resin to form an integral part. Simplified analytical models were developed to investigate the behavior of stiffener webs with reinforced flanges. Theoretical results are presented demonstrating the effects of transverse shear, of the reinforcement, flange eccentricity and torsional stiffness in such construction. A series of 55 tests were conducted on boron-infiltrated rods and extruded structural sections.

  15. Statistical process control in nursing research.

    PubMed

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
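
    A minimal individuals-chart sketch shows the core computation. The data are synthetic, the limits are the usual mean plus or minus three sigma estimated from a pre-intervention baseline, and refinements of real SPC practice (moving-range-based sigma estimates, run rules) are omitted.

        import numpy as np

        # Hypothetical weekly outcome rates before/after a practice change
        data = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2,
                         11.7, 12.0, 10.9, 10.6, 10.8, 10.4, 10.7])
        baseline = data[:9]  # pre-intervention points set the control limits

        center = baseline.mean()
        sigma = baseline.std(ddof=1)
        ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma limits

        for i, x in enumerate(data):
            flag = "special cause" if (x > ucl or x < lcl) else "common cause"
            print(f"week {i + 1:2d}: {x:5.1f}  {flag}")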

  16. Rayleigh approximation to ground state of the Bose and Coulomb glasses

    PubMed Central

    Ryan, S. D.; Mityushev, V.; Vinokur, V. M.; Berlyand, L.

    2015-01-01

    Glasses are rigid systems in which competing interactions prevent simultaneous minimization of local energies. This leads to frustration and highly degenerate ground states, the nature and properties of which are still far from being thoroughly understood. We report an analytical approach based on the method of functional equations that allows us to construct the Rayleigh approximation to the ground state of a two-dimensional (2D) random Coulomb system with logarithmic interactions. We realize a model for 2D Coulomb glass as a cylindrical type II superconductor containing randomly located columnar defects (CD) which trap superconducting vortices induced by applied magnetic field. Our findings break ground for analytical studies of glassy systems, marking an important step towards understanding their properties. PMID:25592417

  17. Structural and thermal testing of lightweight reflector panels

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Helms, R.; Hill, T.

    1992-01-01

    The paper describes the test facility developed for testing large lightweight reflective panels with very accurate and stable surfaces, such as the mirror panels of composite construction developed for the NASA's Precision Segmented Reflector (PSR). Special attention is given to the panel construction and the special problems posed by the characteristics of these panels; the design of the Optical/Thermal Vacuum test facility for structural and thermal testing, developed at the U.S. AFPL; and the testing procedure. The results of the PSR panel test program to date are presented. The test data showed that the analytical approaches used for the panel design and for the prediction of the on-orbit panel behavior were adequate.

  18. Derivation of general analytic gradient expressions for density-fitted post-Hartree-Fock methods: An efficient implementation for the density-fitted second-order Møller–Plesset perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozkaya, Uğur, E-mail: ugur.bozkaya@atauni.edu.tr

    General analytic gradient expressions (with the frozen-core approximation) are presented for density-fitted post-HF methods. An efficient implementation of frozen-core analytic gradients for second-order Møller–Plesset perturbation theory (MP2) with the density-fitting (DF) approximation (applied to both reference and correlation energies), denoted DF-MP2, is reported. The DF-MP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost of single-point analytic gradients with MP2 with the resolution-of-the-identity approach (RI-MP2) [F. Weigend and M. Häser, Theor. Chem. Acc. 97, 331 (1997); R. A. Distasio, R. P. Steele, Y. M. Rhee, Y. Shao, and M. Head-Gordon, J. Comput. Chem. 28, 839 (2007)]. In the RI-MP2 method, the DF approach is used only for the correlation energy. Our results demonstrate that the DF-MP2 method substantially accelerates the RI-MP2 method for analytic gradient computations due to the reduced input/output (I/O) time. Because in the DF-MP2 method the DF approach is used for both reference and correlation energies, the storage of 4-index electron repulsion integrals (ERIs) is avoided; 3-index ERI tensors are employed instead. Further, as in the case of the integrals, our gradient equation completely avoids construction or storage of the 4-index two-particle density matrix (TPDM); instead we use 2- and 3-index TPDMs. Hence, the I/O bottleneck of a gradient computation is significantly overcome. Therefore, the costs of the generalized-Fock matrix (GFM), the TPDM, the solution of the Z-vector equations, the back transformation of the TPDM, and the integral derivatives are substantially reduced when the DF approach is used for the entire energy expression. Further application results show that the DF approach introduces negligible errors for closed-shell reaction energies and equilibrium bond lengths.
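
    For orientation, a DF-MP2 analytic gradient of the kind benchmarked here can be requested from an open-source package such as Psi4 (whose DF-MP2 gradient code is associated with Bozkaya's modules). The snippet below assumes Psi4's standard Python interface and recent option names; the molecule and basis set are arbitrary choices, not those of the paper.

        import psi4

        # Water molecule (illustrative geometry)
        mol = psi4.geometry("""
        O
        H 1 0.96
        H 1 0.96 2 104.5
        """)

        psi4.set_options({
            "basis": "cc-pVDZ",
            "scf_type": "df",      # density fitting for the reference energy
            "mp2_type": "df",      # density fitting for the correlation energy
            "freeze_core": "true"  # frozen-core approximation, as in the paper
        })

        # Analytic DF-MP2 nuclear gradient
        grad = psi4.gradient("mp2")
        print(grad.to_array())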

  19. CONSTRUCTION PROGRESS PHOTO REMOTE ANALYTICAL FACILITY (CPP627) SHOWING EMPLACEMENT OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONSTRUCTION PROGRESS PHOTO REMOTE ANALYTICAL FACILITY (CPP-627) SHOWING EMPLACEMENT OF ROOF SLABS. INL PHOTO NUMBER NRTS-54-13463. R.G. Larsen, Photographer, 12/20/1954 - Idaho National Engineering Laboratory, Idaho Chemical Processing Plant, Fuel Reprocessing Complex, Scoville, Butte County, ID

  20. Ringo: Interactive Graph Analytics on Big-Memory Machines

    PubMed Central

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2016-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
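
    Ringo's own Python API sits on top of SNAP and is not reproduced here; the sketch below only illustrates the table-to-graph idea with generic tools (pandas for the relational table, networkx for the graph and two analytics functions of the kind such systems expose). All data are invented.

        import pandas as pd
        import networkx as nx

        # A small relational table of interactions (invented data)
        edges = pd.DataFrame({
            "src": ["alice", "alice", "bob", "carol", "dave"],
            "dst": ["bob", "carol", "carol", "dave", "alice"],
            "weight": [3, 1, 2, 5, 4],
        })

        # Manipulate the raw table into a graph
        G = nx.from_pandas_edgelist(edges, source="src", target="dst",
                                    edge_attr="weight",
                                    create_using=nx.DiGraph)

        # Apply graph analytics functions to the constructed graph
        print(nx.pagerank(G))
        print(nx.degree_centrality(G))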

  1. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight into the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
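
    The numerical counterpart of the paper's closed-form analysis is easy to sketch: given a streamflow series (equivalently, a flow duration curve), an environmental-flow release, and a minimum technical flow below which the turbine idles, annual energy becomes a non-monotonic function of capacity that can be scanned for its maximum. Everything below, the synthetic flow regime, the head, the efficiency, and the 20% cutoff, is an invented illustration; the paper derives the optimum analytically rather than by scanning.

        import numpy as np

        # Daily streamflow series for one year (synthetic regime, m3/s)
        rng = np.random.default_rng(7)
        Q = rng.lognormal(mean=1.0, sigma=0.8, size=365)

        Q_mef = 1.0                 # minimum environmental flow (m3/s)
        rho_g_h = 9810 * 25 * 0.9   # rho*g*head*efficiency, W per (m3/s); assumed

        def annual_energy(C, cutoff=0.2):
            """Energy (MWh/yr) for capacity C (m3/s). The turbine runs only
            when the available flow exceeds a minimum technical flow (here a
            fixed 20% of capacity), which is what makes oversizing costly."""
            avail = np.clip(Q - Q_mef, 0.0, None)
            usable = np.where(avail >= cutoff * C, np.minimum(avail, C), 0.0)
            return (rho_g_h * usable).sum() * 24 / 1e6  # daily means -> MWh

        capacities = np.linspace(0.1, 10.0, 200)
        energies = np.array([annual_energy(C) for C in capacities])
        print(f"energy-optimal capacity: "
              f"{capacities[np.argmax(energies)]:.2f} m3/s")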

  2. Ringo: Interactive Graph Analytics on Big-Memory Machines.

    PubMed

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2015-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads.

  3. ["This conversation is professional, [...] I am a painter!": resistence at a construction site].

    PubMed

    Landerdahl, Maria Celeste; Cortes, Laura Ferreira; Padoin, Stela Maris de Mello; Villela, Wilza Vieira

    2015-01-01

    To understand the working relationships between female workers and their male colleagues at a construction site in the municipality of Santa Maria, RS. This was an exploratory study with a qualitative approach, conducted with a female worker at a construction site in August 2012, using oral history as the device for data production and French-tradition discourse analysis as the analytical device. The working relationships revealed stances of resistance, with significant struggles against the power of the male order, pointing to both shifts and continuities in the trying-out of new gender behaviors. The conquest of space in paid work is not enough to achieve balance in gender relations; gender-sensitive public policies contribute to changes in the cultural field by establishing that equal rights and opportunities for men and women are a basic condition for achieving justice, citizenship and development.

  4. Empirical Approach for Determining Axial Strength of Circular Concrete Filled Steel Tubular Columns

    NASA Astrophysics Data System (ADS)

    Jayalekshmi, S.; Jegadesh, J. S. Sankar; Goel, Abhishek

    2018-06-01

    Concrete filled steel tubular (CFST) columns have been highly regarded in recent years as an attractive option in the construction field by designers and structural engineers, owing to their excellent structural performance, with enhanced load bearing capacity and energy absorption capacity. This study presents a new approach to simulating the capacity of circular CFST columns under axial loading, applying an artificial neural network (ANN) to a large database of experimental results. A well-trained network is established and used to simulate the axial capacity of CFST columns, and the validation and testing of the ANN are carried out. The study focuses on proposing a simplified equation that predicts the ultimate strength of axially loaded columns with a high level of accuracy. The predicted results are compared with five existing analytical models that estimate the strength of CFST columns. The ANN-based equation agrees well with the experimental data when compared with the analytical models.
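
    The ANN step can be sketched with generic tools. The code below trains a small multilayer perceptron on a synthetic stand-in for the experimental database (the capacity values come from a toy strength formula plus noise, not from real tests), so it illustrates only the workflow, input scaling, training, and held-out scoring, not the paper's network or its proposed equation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Synthetic "database": diameter D (mm), wall thickness t (mm),
        # length L (mm), steel yield fy (MPa), concrete strength fc (MPa)
        rng = np.random.default_rng(1)
        n = 400
        D  = rng.uniform(100, 500, n)
        t  = rng.uniform(3, 12, n)
        L  = rng.uniform(300, 3000, n)
        fy = rng.uniform(235, 460, n)
        fc = rng.uniform(20, 80, n)
        As = np.pi * (D * t - t**2)        # steel area (thin-wall approx.)
        Ac = np.pi * (D - 2 * t)**2 / 4    # concrete core area
        N  = (As * fy + Ac * fc) * rng.normal(1.0, 0.05, n) / 1e3  # kN, toy

        X = np.column_stack([D, t, L, fy, fc])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=3000, random_state=0))
        model.fit(X[:300], N[:300])
        print("R^2 on held-out toy data:",
              round(model.score(X[300:], N[300:]), 3))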

  5. QFD-ANP Approach for the Conceptual Design of Research Vessels: A Case Study

    NASA Astrophysics Data System (ADS)

    Venkata Subbaiah, Kambagowni; Yeshwanth Sai, Koneru; Suresh, Challa

    2016-10-01

    Conceptual design is a subset of concept art in which a new product idea is created, rather than a visual representation to be used directly in a final product. The purpose here is to understand the needs that conceptual design serves in engineering design and to clarify current conceptual design practice. Quality function deployment (QFD) is a customer-oriented design approach for developing new or improved products and services that enhance customer satisfaction. The house of quality (HOQ) has traditionally been used as the planning tool of QFD, translating customer requirements (CRs) into design requirements (DRs). Factor analysis is carried out to reduce the CR portion of the HOQ. The analytic hierarchy process is employed to obtain the priority ratings of the CRs used in constructing the HOQ. This paper discusses the conceptual design of an oceanographic research vessel using the analytic network process (ANP) technique. Finally, the integrated QFD-ANP methodology helps establish the importance ratings of the DRs.

  6. Empirical Approach for Determining Axial Strength of Circular Concrete Filled Steel Tubular Columns

    NASA Astrophysics Data System (ADS)

    Jayalekshmi, S.; Jegadesh, J. S. Sankar; Goel, Abhishek

    2018-03-01

    Concrete filled steel tubular (CFST) columns have been highly regarded in recent years as an attractive option in the construction field by designers and structural engineers, owing to their excellent structural performance, with enhanced load bearing capacity and energy absorption capacity. This study presents a new approach to simulating the capacity of circular CFST columns under axial loading, applying an artificial neural network (ANN) to a large database of experimental results. A well-trained network is established and used to simulate the axial capacity of CFST columns, and the validation and testing of the ANN are carried out. The study focuses on proposing a simplified equation that predicts the ultimate strength of axially loaded columns with a high level of accuracy. The predicted results are compared with five existing analytical models that estimate the strength of CFST columns. The ANN-based equation agrees well with the experimental data when compared with the analytical models.

  7. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economics and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explaining the puzzle of the risk-return tradeoff in China's stock market.

  8. Answer first: Applying the heuristic-analytic theory of reasoning to examine student intuitive thinking in the context of physics

    NASA Astrophysics Data System (ADS)

    Kryjevskaia, Mila; Stetzer, MacKenzie R.; Grosz, Nathaniel

    2014-12-01

    We have applied the heuristic-analytic theory of reasoning to interpret inconsistencies in student reasoning approaches to physics problems. This study was motivated by an emerging body of evidence that suggests that student conceptual and reasoning competence demonstrated on one task often fails to be exhibited on another. Indeed, even after instruction specifically designed to address student conceptual and reasoning difficulties identified by rigorous research, many undergraduate physics students fail to build reasoning chains from fundamental principles even though they possess the required knowledge and skills to do so. Instead, they often rely on a variety of intuitive reasoning strategies. In this study, we developed and employed a methodology that allowed for the disentanglement of student conceptual understanding and reasoning approaches through the use of sequences of related questions. We have shown that the heuristic-analytic theory of reasoning can be used to account for, in a mechanistic fashion, the observed inconsistencies in student responses. In particular, we found that students tended to apply their correct ideas in a selective manner that supported a specific and likely anticipated conclusion while neglecting to employ the same ideas to refute an erroneous intuitive conclusion. The observed reasoning patterns were consistent with the heuristic-analytic theory, according to which reasoners develop a "first-impression" mental model and then construct an argument in support of the answer suggested by this model. We discuss implications for instruction and argue that efforts to improve student metacognition, which serves to regulate the interaction between intuitive and analytical reasoning, is likely to lead to improved student reasoning.

  9. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
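
    The scheduling core of such a platform, a dependency graph of pipeline tasks executed in topological order with ready tasks dispatched in parallel, can be sketched with the Python standard library. The pipeline, task bodies, and thread-pool executor below are invented stand-ins for PARAMO's Map-Reduce machinery.

        import concurrent.futures
        import graphlib  # stdlib topological sorting (Python 3.9+)

        # A toy modeling pipeline, expressed as task -> set of prerequisites
        pipeline = {
            "features": {"cohort"},
            "cv_splits": {"cohort"},
            "selection": {"features", "cv_splits"},
            "classify": {"selection"},
        }

        def run(task):
            print(f"running {task}")  # placeholder for real work
            return task

        # Execute tasks in topological order, dispatching all currently
        # ready tasks in parallel
        sorter = graphlib.TopologicalSorter(pipeline)
        sorter.prepare()
        with concurrent.futures.ThreadPoolExecutor() as pool:
            while sorter.is_active():
                ready = list(sorter.get_ready())
                for done in pool.map(run, ready):
                    sorter.done(done)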

  10. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    PubMed

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Collective Phase in Resource Competition in a Highly Diverse Ecosystem.

    PubMed

    Tikhonov, Mikhail; Monasson, Remi

    2017-01-27

    Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the nonintuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.

  12. On the dispersion relations for an inhomogeneous waveguide with attenuation

    NASA Astrophysics Data System (ADS)

    Vatul'yan, A. O.; Yurlov, V. O.

    2016-09-01

    Some general laws concerning the structure of dispersion relations for solid inhomogeneous waveguides with attenuation are studied. An approach based on the analysis of a first-order matrix differential equation is presented in the framework of the concept of complex moduli. Some laws concerning the structure of components of the dispersion set for a viscoelastic inhomogeneous cylindrical waveguide are studied analytically and numerically, and the asymptotics of components of the dispersion set are constructed for arbitrary inhomogeneity laws in the low-frequency region.

  13. Rayleigh approximation to ground state of the Bose and Coulomb glasses

    DOE PAGES

    Ryan, S. D.; Mityushev, V.; Vinokur, V. M.; ...

    2015-01-16

    Glasses are rigid systems in which competing interactions prevent simultaneous minimization of local energies. This leads to frustration and highly degenerate ground states, the nature and properties of which are still far from being thoroughly understood. We report an analytical approach based on the method of functional equations that allows us to construct the Rayleigh approximation to the ground state of a two-dimensional (2D) random Coulomb system with logarithmic interactions. We realize a model for 2D Coulomb glass as a cylindrical type II superconductor containing randomly located columnar defects (CD) which trap superconducting vortices induced by applied magnetic field. Our findings break ground for analytical studies of glassy systems, marking an important step towards understanding their properties.

  14. Relation between delayed feedback and delay-coupled systems and its application to chaotic lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soriano, Miguel C., E-mail: miguel@ifisc.uib-csic.es; Flunkert, Valentin; Fischer, Ingo

    2013-12-15

    We present a systematic approach to identify the similarities and differences between a chaotic system with delayed feedback and two mutually delay-coupled systems. We consider the general case in which the coupled systems are either unsynchronized or in a generally synchronized state, in contrast to the mostly studied case of identical synchronization. We construct a new time series for each of the two coupling schemes, respectively, and present analytic evidence and numerical confirmation that these two constructed time series are statistically equivalent. From the construction, it then follows that the distribution of time-series segments that are small compared to the overall delay in the system is independent of the value of the delay and of the coupling scheme. By focusing on numerical simulations of delay-coupled chaotic lasers, we present a practical example of our findings.

  15. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
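
    The optimization step that the analytic decoders replace can be sketched concretely: standard NEF decoders solve a regularized least-squares problem over the population's tuning curves. The rectified-linear tuning curves, target function, and regularization constant below are invented for illustration and are much simpler than the theta-neuron rates the paper treats.

        import numpy as np

        rng = np.random.default_rng(3)

        # Tuning curves of N heterogeneous "neurons" over a represented value x
        N = 50
        x = np.linspace(-1, 1, 200)
        gains = rng.uniform(0.5, 2.0, N)
        biases = rng.uniform(-1.0, 1.0, N)
        A = np.maximum(0.0, gains[None, :] * x[:, None] + biases[None, :])

        # Target function the population should decode, e.g. f(x) = x**2
        f = x**2

        # Standard NEF decoders: regularized least squares (the large
        # matrix inversion the abstract mentions)
        reg = 0.1 * A.max()
        Gamma = A.T @ A + reg**2 * np.eye(N)
        Upsilon = A.T @ f
        d = np.linalg.solve(Gamma, Upsilon)

        print("decode RMS error:", np.sqrt(np.mean((A @ d - f) ** 2)))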

  16. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.

  17. Participatory flood vulnerability assessment: a multi-criteria approach

    NASA Astrophysics Data System (ADS)

    Madruga de Brito, Mariana; Evers, Mariele; Delos Santos Almoradie, Adrian

    2018-01-01

    This paper presents a participatory multi-criteria decision-making (MCDM) approach for flood vulnerability assessment while considering the relationships between vulnerability criteria. The applicability of the proposed framework is demonstrated in the municipalities of Lajeado and Estrela, Brazil. The model was co-constructed by 101 experts from governmental organizations, universities, research institutes, NGOs, and private companies. Participatory methods such as the Delphi survey, focus groups, and workshops were applied. A participatory problem structuration, in which the modellers work closely with end users, was used to establish the structure of the vulnerability index. The preferences of each participant regarding the criteria importance were spatially modelled through the analytical hierarchy process (AHP) and analytical network process (ANP) multi-criteria methods. Experts were also involved at the end of the modelling exercise for validation. The final product is a set of individual and group flood vulnerability maps. Both AHP and ANP proved to be effective for flood vulnerability assessment; however, ANP is preferred as it considers the dependences among criteria. The participatory approach enabled experts to learn from each other and acknowledge different perspectives towards social learning. The findings highlight that to enhance the credibility and deployment of model results, multiple viewpoints should be integrated without forcing consensus.

  18. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions

    PubMed Central

    Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter

    2016-01-01

    The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach working in a way other than the conventional, two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on this data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model. PMID:27148130

  19. A mixed volume grid approach for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Jorgenson, Philip C. E.

    1996-01-01

    An approach for solving the compressible Euler and Navier-Stokes equations upon meshes composed of nearly arbitrary polyhedra is described. Each polyhedron is constructed from an arbitrary number of triangular and quadrilateral face elements, allowing the unified treatment of tetrahedral, prismatic, pyramidal, and hexahedral cells, as well as the general cut cells produced by Cartesian mesh approaches. The basics behind the numerical approach and the resulting data structures are described. The accuracy of the mixed volume grid approach is assessed by performing a grid refinement study upon a series of hexahedral, tetrahedral, prismatic, and Cartesian meshes for an analytic inviscid problem. A series of laminar validation cases is presented, comparing the results upon differing grid topologies to each other, to theory, and to experimental data. A computation upon a prismatic/tetrahedral mesh is presented, simulating the laminar flow over a wall/cylinder combination.

  20. Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction

    ERIC Educational Resources Information Center

    Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna

    2015-01-01

    Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…

  1. Global Properties of Fully Convective Accretion Disks from Local Simulations

    NASA Astrophysics Data System (ADS)

    Bodo, G.; Cattaneo, F.; Mignone, A.; Ponzo, F.; Rossi, P.

    2015-08-01

    We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.

  2. Mapping copy number variation by population-scale genome sequencing.

    PubMed

    Mills, Ryan E; Walter, Klaudia; Stewart, Chip; Handsaker, Robert E; Chen, Ken; Alkan, Can; Abyzov, Alexej; Yoon, Seungtai Chris; Ye, Kai; Cheetham, R Keira; Chinwalla, Asif; Conrad, Donald F; Fu, Yutao; Grubert, Fabian; Hajirasouliha, Iman; Hormozdiari, Fereydoun; Iakoucheva, Lilia M; Iqbal, Zamin; Kang, Shuli; Kidd, Jeffrey M; Konkel, Miriam K; Korn, Joshua; Khurana, Ekta; Kural, Deniz; Lam, Hugo Y K; Leng, Jing; Li, Ruiqiang; Li, Yingrui; Lin, Chang-Yun; Luo, Ruibang; Mu, Xinmeng Jasmine; Nemesh, James; Peckham, Heather E; Rausch, Tobias; Scally, Aylwyn; Shi, Xinghua; Stromberg, Michael P; Stütz, Adrian M; Urban, Alexander Eckehart; Walker, Jerilyn A; Wu, Jiantao; Zhang, Yujun; Zhang, Zhengdong D; Batzer, Mark A; Ding, Li; Marth, Gabor T; McVean, Gil; Sebat, Jonathan; Snyder, Michael; Wang, Jun; Ye, Kenny; Eichler, Evan E; Gerstein, Mark B; Hurles, Matthew E; Lee, Charles; McCarroll, Steven A; Korbel, Jan O

    2011-02-03

    Genomic structural variants (SVs) are abundant in humans, differing from other forms of variation in extent, origin and functional impact. Despite progress in SV characterization, the nucleotide resolution architecture of most SVs remains unknown. We constructed a map of unbalanced SVs (that is, copy number variants) based on whole genome DNA sequencing data from 185 human genomes, integrating evidence from complementary SV discovery approaches with extensive experimental validations. Our map encompassed 22,025 deletions and 6,000 additional SVs, including insertions and tandem duplications. Most SVs (53%) were mapped to nucleotide resolution, which facilitated analysing their origin and functional impact. We examined numerous whole and partial gene deletions with a genotyping approach and observed a depletion of gene disruptions amongst high frequency deletions. Furthermore, we observed differences in the size spectra of SVs originating from distinct formation mechanisms, and constructed a map of SV hotspots formed by common mechanisms. Our analytical framework and SV map serves as a resource for sequencing-based association studies.

  3. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

    DOEpatents

    Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

    1990-01-01

    The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of the absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so that there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the sample. The differential absorption causes intensity variations of the infrared energy as a function of wavelength, and the concentration of the unknown analyte is determined by comparing the thus-derived intensity variations as a function of wavelength with the model's absorption-versus-wavelength function.
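
    In modern terms, the patent describes multivariate calibration: build a regression model from spectra of known samples, then read an unknown's concentration off its spectrum. The sketch below uses partial least squares, a standard chemometric choice (the patent itself does not commit to a specific regression method), on entirely synthetic spectra with one analyte band and one interferent band.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Synthetic calibration set: IR spectra of known samples (rows)
        # over wavelength channels (columns); absorption depends linearly
        # on analyte concentration plus an interferent and noise
        rng = np.random.default_rng(5)
        wavelengths = np.linspace(1000, 2500, 120)        # nm, illustrative
        pure = np.exp(-((wavelengths - 1600) / 60) ** 2)  # analyte band
        interferent = np.exp(-((wavelengths - 2100) / 90) ** 2)

        conc = rng.uniform(0.5, 5.0, 40)                  # known concentrations
        other = rng.uniform(0.0, 2.0, 40)
        spectra = (conc[:, None] * pure
                   + other[:, None] * interferent
                   + 0.01 * rng.normal(size=(40, 120)))

        # Build the calibration model from the known samples ...
        model = PLSRegression(n_components=3).fit(spectra, conc)

        # ... then predict an unknown sample's concentration from its spectrum
        unknown = 2.7 * pure + 1.1 * interferent + 0.01 * rng.normal(size=120)
        print("predicted concentration:",
              model.predict(unknown[None, :]).ravel()[0])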

  4. Novel approaches to analysis by flow injection gradient titration.

    PubMed

    Wójtowicz, Marzena; Kozak, Joanna; Kościelniak, Paweł

    2007-09-26

    Two novel procedures for flow injection gradient titration with the use of a single stock standard solution are proposed. In the multi-point single-line (MP-SL) method, the calibration graph is constructed on the basis of a set of standard solutions, which are generated in a standard reservoir and subsequently injected into the titrant. In the single-point multi-line (SP-ML) procedure, the standard solution and a sample are injected into the titrant stream from four loops of different capacities; four calibration graphs can thus be constructed, and the analytical result is calculated on the basis of a generalized slope of these graphs. Both approaches have been tested on the spectrophotometric acid-base titration of hydrochloric and acetic acids, using bromothymol blue and phenolphthalein as indicators, respectively, and sodium hydroxide as the titrant. Under optimized experimental conditions, analytical results with precision better than 1.8 and 2.5% (RSD) and accuracy better than 3.0 and 5.4% (relative error, RE) were obtained for the MP-SL and SP-ML procedures, respectively, in the ranges 0.0031-0.0631 mol L(-1) for samples of hydrochloric acid and 0.1680-1.7600 mol L(-1) for samples of acetic acid. The feasibility of both methods was illustrated by applying them to the determination of total acidity in vinegar samples, with precision better than 0.5 and 2.9% (RSD) for the MP-SL and SP-ML procedures, respectively.
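
    The SP-ML idea of pooling several injection volumes into one generalized slope can be sketched numerically. The responses below follow a toy linear model (signal proportional to loop volume times concentration), invented purely to show how the sample concentration falls out of the ratio of fitted slopes; the real method works with titration curves rather than a single scalar signal.

        import numpy as np

        # Hypothetical SP-ML data: one standard and one sample injected
        # from four loops of different volumes
        loop_volumes = np.array([50.0, 100.0, 150.0, 200.0])  # microlitres
        c_standard = 0.05                                     # mol/L

        # Toy responses: signal = k * volume * concentration + noise
        rng = np.random.default_rng(9)
        k = 2.0
        signal_std = k * loop_volumes * c_standard * rng.normal(1.0, 0.01, 4)
        c_sample_true = 0.032
        signal_smp = k * loop_volumes * c_sample_true * rng.normal(1.0, 0.01, 4)

        # Pool the four loops through a generalized slope fitted across them
        slope_std, _ = np.polyfit(loop_volumes, signal_std, 1)
        slope_smp, _ = np.polyfit(loop_volumes, signal_smp, 1)
        c_sample = c_standard * slope_smp / slope_std
        print(f"estimated sample concentration: {c_sample:.4f} mol/L")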

  5. Enrichment and characterization of ferritin for nanomaterial applications

    NASA Astrophysics Data System (ADS)

    Ghirlando, Rodolfo; Mutskova, Radina; Schwartz, Chad

    2016-01-01

    Ferritin is a ubiquitous iron storage protein utilized as a nanomaterial for labeling biomolecules and nanoparticle construction. Commercially available preparations of horse spleen ferritin, widely used as a starting material, contain a distribution of ferritins with different iron loads. We describe a detailed approach to the enrichment of differentially loaded ferritin molecules by common biophysical techniques such as size exclusion chromatography and preparative ultracentrifugation, and characterize these preparations by dynamic light scattering, and analytical ultracentrifugation. We demonstrate a combination of methods to standardize an approach for determining the chemical load of nearly any particle, including nanoparticles and metal colloids. Purification and characterization of iron content in monodisperse ferritin species is particularly critical for several applications in nanomaterial science.

  6. Fitting population models from field data

    USGS Publications Warehouse

    Emlen, J.M.; Freeman, D.C.; Kirchhoff, M.D.; Alados, C.L.; Escos, J.; Duda, J.J.

    2003-01-01

    The application of population and community ecology to solving real-world problems requires population and community dynamics models that reflect the myriad patterns of interaction among organisms and between the biotic and physical environments. Appropriate models are not hard to construct, but the experimental manipulations needed to evaluate their defining coefficients are often both time consuming and costly, and sometimes environmentally destructive, as well. In this paper we present an empirical approach for finding the coefficients of broadly inclusive models without the need for environmental manipulation, demonstrate the approach with both an animal and a plant example, and suggest possible applications. Software has been developed, and is available from the senior author, with a manual describing both field and analytic procedures.
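
    The paper's approach amounts to estimating model coefficients directly from observational time series, without experimental manipulation. As a minimal stand-in (logistic growth fitted with scipy's curve_fit to invented census data; the authors' models are considerably more inclusive), the sketch below shows the pattern.

        import numpy as np
        from scipy.optimize import curve_fit

        # Observed abundances over time (synthetic stand-in for field data)
        t_obs = np.arange(0, 12)
        n_obs = np.array([12, 18, 26, 38, 52, 66, 78, 86, 91, 95, 97, 98],
                         dtype=float)

        def logistic(t, r, K, n0):
            """Logistic growth, the simplest density-dependent model:
            r = intrinsic growth rate, K = carrying capacity, n0 = initial size."""
            return K / (1 + (K / n0 - 1) * np.exp(-r * t))

        # Estimate the coefficients directly from the field time series
        params, _ = curve_fit(logistic, t_obs, n_obs, p0=(0.5, 100.0, 10.0))
        r, K, n0 = params
        print(f"r = {r:.2f}, K = {K:.1f}, n0 = {n0:.1f}")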

  7. Designing Flavoprotein-GFP Fusion Probes for Analyte-Specific Ratiometric Fluorescence Imaging.

    PubMed

    Hudson, Devin A; Caplan, Jeffrey L; Thorpe, Colin

    2018-02-20

    The development of genetically encoded fluorescent probes for analyte-specific imaging has revolutionized our understanding of intracellular processes. Current classes of intracellular probes depend on the selection of binding domains that either undergo conformational changes on analyte binding or can be linked to thiol redox chemistry. Here we have designed novel probes by fusing a flavoenzyme, whose fluorescence is quenched on reduction by the analyte of interest, with a GFP domain to allow for rapid and specific ratiometric sensing. Two flavoproteins, Escherichia coli thioredoxin reductase and Saccharomyces cerevisiae lipoamide dehydrogenase, were successfully developed into thioredoxin- and NAD+/NADH-specific probes, respectively, and their performance was evaluated in vitro and in vivo. A flow cell format, which allowed dynamic measurements, was utilized in both bacterial and mammalian systems. In E. coli, the first reported intracellular steady state of the cytoplasmic thioredoxin pool was measured. In HEK293T mammalian cells, the steady-state cytosolic NAD+/NADH ratio induced by glucose was determined. These genetically encoded fluorescent constructs represent a modular approach to intracellular probe design that should extend the range of metabolites that can be quantitated in live cells.

  8. Assessing Measurement Invariance for Spanish Sentence Repetition and Morphology Elicitation Tasks.

    PubMed

    Kapantzoglou, Maria; Thompson, Marilyn S; Gray, Shelley; Restrepo, M Adelaida

    2016-04-01

    The purpose of this study was to evaluate evidence supporting the construct validity of two grammatical tasks (sentence repetition, morphology elicitation) included in the Spanish Screener for Language Impairment in Children (Restrepo, Gorin, & Gray, 2013). We evaluated if the tasks measured the targeted grammatical skills in the same way across predominantly Spanish-speaking children with typical language development and those with primary language impairment. A multiple-group, confirmatory factor analytic approach was applied to examine factorial invariance in a sample of 307 predominantly Spanish-speaking children (177 with typical language development; 130 with primary language impairment). The 2 newly developed grammatical tasks were modeled as measures in a unidimensional confirmatory factor analytic model along with 3 well-established grammatical measures from the Clinical Evaluation of Language Fundamentals-Fourth Edition, Spanish (Wiig, Semel, & Secord, 2006). Results suggest that both new tasks measured the construct of grammatical skills for both language-ability groups in an equivalent manner. There was no evidence of bias related to children's language status for the Spanish Screener for Language Impairment in Children Sentence Repetition or Morphology Elicitation tasks. Results provide support for the validity of the new tasks as measures of grammatical skills.

  9. Novel approaches to the construction of miniaturized analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Porter, Marc D.; Otoole, Ronald P.; Coldiron, Shelley J.; Deninger, William D.; Deinhammer, Randall S.; Burns, Stanley G.; Bastiaans, Glenn J.; Braymen, Steve D.; Shanks, Howard R.

    1992-01-01

    This paper focuses on the design, construction, preliminary testing, and potential applications of three forms of miniaturized analytical instrumentation. The first is an optical fiber instrument for monitoring pH and other cations in aqueous solutions. The instrument couples chemically selective indicators that were immobilized at porous polymeric films with a hardware package that provides the excitation light source, required optical components, and detection and data processing hardware. The second is a new form of a piezoelectric mass sensor. The sensor was fabricated by the deposition of a thin (5.5 micron) film of piezoelectric aluminum nitride (AlN). The completed deposition process yields a thin film resonator (TFR) that is shaped as a 400 micron square and supports a standing bulk acoustic wave in a longitudinal mode at frequencies of approx. 1 GHz. Various deposition and vapor sorption studies indicate that the mass sensitivity of the TFRs rivals that of the most sensitive mass sensors currently available, while offering such performance in a markedly smaller device. The third couples a novel form of liquid chromatography with microlithographic miniaturization techniques. The status of the miniaturization effort, the goal of which is to achieve chip-scale separations, is briefly discussed.

  10. Shear joint capability versus bolt clearance

    NASA Technical Reports Server (NTRS)

    Lee, H. M.

    1992-01-01

    The results of a conservative analysis approach into the determination of shear joint strength capability for typical space-flight hardware as a function of the bolt-hole clearance specified in the design are presented. These joints are comprised of high-strength steel fasteners and abutments constructed of aluminum alloys familiar to the aerospace industry. A general analytical expression was first arrived at which relates bolt-hole clearance to the bolt shear load required to place all joint fasteners into a shear transferring position. Extension of this work allowed the analytical development of joint load capability as a function of the number of fasteners, shear strength of the bolt, bolt-hole clearance, and the desired factor of safety. Analysis results clearly indicate that a typical space-flight hardware joint can withstand significant loading when less than ideal bolt hole clearances are used in the design.

  11. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

    The majority of practical acoustics problems require solving boundary problems in non-canonical domains. Therefore the construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is both attractive from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solving ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains that can be constructed as the union of canonically-shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this case implies some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined in the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  12. A complementary marriage of perspectives: understanding organizational social context using mixed methods.

    PubMed

    Beidas, Rinad S; Wolk, Courtney L Benjamin; Walsh, Lucia M; Evans, Arthur C; Hurford, Matthew O; Barg, Frances K

    2014-11-23

    Organizational factors impact the delivery of mental health services in community settings. Mixed-methods analytic approaches have been recommended, though little research within implementation science has explicitly compared inductive and deductive perspectives to assess their relative value in understanding the same constructs. The purpose of our study is to use two different paradigmatic approaches to deepen our understanding of organizational social context. We accomplish this by using a mixed-methods approach in an investigation of organizational social context in community mental health clinics. Nineteen agencies, representing 23 sites, participated. Enrolled participants included 130 therapists, 36 supervisors, and 22 executive administrators. Quantitative data were obtained via the Organizational Social Context (OSC) measure. Qualitative data, comprising direct observations with spot sampling generated from agency visits, were coded using content analysis and grounded theory. The present study examined elements of organizational social context that would have been missed if only quantitative data had been obtained and utilized mixed methods to investigate whether stratifying observations based on quantitative ratings from the OSC resulted in the emergence of differential themes. Four of the six OSC constructs were commonly observed in field observations (i.e., proficiency, rigidity, functionality, stress), while the remaining two constructs were not frequently observed (i.e., resistance, engagement). Constructs emerged related to organizational social context that may have been missed if only quantitative measurement was employed, including those around the physical environment, commentary about evidence-based practice initiatives, leadership, cultural diversity, distrust, and affect. Stratifying agencies by "best," "average," and "worst" organizational social context impacted interpretation for three constructs (affect, stress, and leadership). Results support the additive value of integrating inductive and deductive perspectives in implementation science research. This synthesis of approaches facilitated a more comprehensive understanding and interpretation of the findings than would have been possible if either methodology had been employed in isolation.

  13. Femtogram detection of explosive nitroaromatics: fluoranthene-based fluorescent chemosensors.

    PubMed

    Venkatramaiah, N; Kumar, Shiv; Patil, Satish

    2012-11-12

    Herein we report a novel fluoranthene-based fluorophore 7,10-bis(4-bromophenyl)-8,9-bis[4-(hexyloxy)phenyl]fluoranthene (S(3)) and its remarkable properties in applications of explosive detection. The sensitivity towards the detection of nitroaromatics (NACs) was evaluated through fluorescence quenching in solution, vapor, and contact mode approaches. The contact mode approach using thin-layer silica chromatographic plates exhibited a femtogram (1.15 fg cm(-2)) detection limit for trinitrotoluene (TNT) and picric acid (PA), whereas the solution-phase quenching showed PA detection at the 2-20 ppb level. Fluorescence lifetime measurements revealed that the quenching is static in nature and the quenching process is fully reversible. Binding energies between model binding sites of the S(3) and analyte compounds reveal that analyte molecules enter into the cavity created by substituted phenyl rings of fluoranthene and are stabilized by strong intermolecular interactions with alkyl chains. It is anticipated that the sensor S(3) could be a promising material for the construction of portable optical devices for the detection of onsite explosive nitroaromatics. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
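    As a rough illustration of the database-plus-regression idea described above, the sketch below builds a synthetic training set of (input, output) pairs for an assumed Laplace-like kernel, fits a ridge regression from outputs back to inputs, and applies a crude positivity-and-normalization projection. The kernel, grids, Gaussian parameterization of the inputs, and clip-and-renormalize projection are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_tau, n_omega, n_train = 64, 48, 5000
tau = np.linspace(0, 1, n_tau)
omega = np.linspace(0, 10, n_omega)

# Illustrative Fredholm kernel of the first kind: G(tau) = ∫ K(tau, omega) A(omega) domega
K = np.exp(-np.outer(tau, omega))          # Laplace-like kernel (assumed for this toy)
domega = omega[1] - omega[0]

def random_spectrum():
    """'Physically meaningful' input: a random mix of Gaussians, normalized to 1."""
    A = np.zeros(n_omega)
    for _ in range(rng.integers(1, 4)):
        c, w = rng.uniform(1, 9), rng.uniform(0.3, 1.5)
        A += rng.uniform(0.2, 1.0) * np.exp(-0.5 * ((omega - c) / w) ** 2)
    return A / (A.sum() * domega)

A_train = np.array([random_spectrum() for _ in range(n_train)])
G_train = A_train @ K.T * domega                       # stable forward problem
G_train += 1e-4 * rng.standard_normal(G_train.shape)   # small input noise

# Regression from outputs G back to inputs A regularizes the ill-conditioned inversion
model = Ridge(alpha=1e-3).fit(G_train, A_train)

A_true = random_spectrum()
G_obs = K @ A_true * domega + 1e-4 * rng.standard_normal(n_omega)
A_hat = model.predict(G_obs[None, :])[0]

# Crude stand-in for the projection step: enforce positivity and normalization
A_hat = np.clip(A_hat, 0, None)
A_hat /= A_hat.sum() * domega
print("max abs error:", np.abs(A_hat - A_true).max())
```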

  15. A Single-Molecule Barcoding System using Nanoslits for DNA Analysis

    NASA Astrophysics Data System (ADS)

    Jo, Kyubong; Schramm, Timothy M.; Schwartz, David C.

    Single DNA molecule approaches are playing an increasingly central role in the analytical genomic sciences because single molecule techniques intrinsically provide individualized measurements of selected molecules, free from the constraints of bulk techniques, which blindly average noise and mask the presence of minor analyte components. Accordingly, a principal challenge that must be addressed by all single molecule approaches aimed at genome analysis is how to immobilize and manipulate DNA molecules for measurements that foster construction of large, biologically relevant data sets. To meet this challenge, this chapter discusses an integrated approach for microfabricated and nanofabricated devices for the manipulation of elongated DNA molecules within nanoscale geometries. Ideally, large DNA coils stretch via nanoconfinement when channel dimensions are within tens of nanometers. Importantly, stretched, often immobilized, DNA molecules spanning hundreds of kilobase pairs are required by all analytical platforms working with large genomic substrates because imaging techniques acquire sequence information from molecules that normally exist in free solution as unrevealing random coils resembling floppy balls of yarn. However, fabricating nanoscale devices with dimensions small enough to foster molecular stretching has been impractical because it requires exotic fabrication technologies and costly materials and suffers from poor operational efficiencies. In this chapter, such problems are addressed by discussion of a new approach to DNA presentation and analysis that establishes scalable nanoconfinement conditions through reduction of ionic strength, stiffening DNA molecules and thus enabling their arraying for analysis using easily fabricated devices that can also be mass-produced. This new approach to DNA nanoconfinement is complemented by the development of a novel labeling scheme for reliable marking of individual molecules with fluorochrome labels, creating molecular barcodes, which are efficiently read using fluorescence resonance energy transfer techniques for minimizing noise from unincorporated labels. As such, our integrative approach for the realization of genomic analysis through nanoconfinement, named nanocoding, was demonstrated through the barcoding and mapping of bacterial artificial chromosomal molecules, thereby providing the basis for a high-throughput platform competent for whole genome investigations.

  16. Self-consistent construction of virialized wave dark matter halos

    NASA Astrophysics Data System (ADS)

    Lin, Shan-Chang; Schive, Hsi-Yu; Wong, Shing-Kwong; Chiueh, Tzihong

    2018-05-01

    Wave dark matter (ψDM), which satisfies the Schrödinger-Poisson equation, has recently attracted substantial attention as a possible dark matter candidate. Numerical simulations have, in the past, provided a powerful tool to explore this new territory of possibility. Despite their successes in revealing several key features of ψDM, further progress in simulations is limited, in that cosmological simulations so far can only address formation of halos below ~2 × 10^11 M⊙ and substantially more massive halos have become computationally very challenging to obtain. For this reason, the present work adopts a different approach in assessing massive halos by constructing wave-halo solutions directly from the wave distribution function. This approach bears certain similarities with the analytical construction of the particle halo (cold dark matter model). Instead of many collisionless particles, one deals with one single wave that has many noninteracting eigenstates. The key ingredient in the wave-halo construction is the distribution function of the wave power, and we use several halos produced by structure formation simulations as templates to determine the wave distribution function. Among different models, we find the fermionic King model presents the best fits and we use it for our wave-halo construction. We have devised an iteration method for constructing the nonlinear halo and demonstrate its stability by three-dimensional simulations. A Milky Way-sized halo has also been constructed, and the inner halo is found to be flatter than the NFW profile. These wave-halos have small-scale interferences both in space and time producing time-dependent granules. While the spatial scale of granules varies little, the correlation time is found to increase with radius by 1 order of magnitude across the halo.
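    The eigenstate-superposition idea can be caricatured in one dimension: the sketch below superposes particle-in-a-box eigenstates with squared amplitudes drawn from an assumed exponential power distribution (a simple stand-in for the fermionic King model, which is not reproduced here) and random phases, producing time-dependent interference granules over a smooth average profile. Everything about this setup is a toy assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
L_box, n_modes, n_x = 1.0, 60, 500
x = np.linspace(0, L_box, n_x)
n = np.arange(1, n_modes + 1)
E = n**2                                     # box eigenenergies (units dropped)

amp = np.sqrt(np.exp(-E / E[n_modes // 4]))  # assumed wave-power distribution f(E)
phases = rng.uniform(0, 2 * np.pi, n_modes)
modes = np.sqrt(2 / L_box) * np.sin(np.pi * np.outer(n, x))   # (n_modes, n_x)

def density(t):
    """|psi|^2 of the single wave built from many noninteracting eigenstates."""
    psi = (amp * np.exp(-1j * (E * t + phases))) @ modes
    return np.abs(psi) ** 2

# Granules fluctuate in time; the time average approaches a smooth profile
snapshots = np.array([density(t) for t in np.linspace(0, 5, 200)])
mid = snapshots[:, n_x // 2]
print("granule fluctuation / mean at box center:", mid.std() / mid.mean())
```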

  17. Characterizing student navigation in educational multiuser virtual environments: A case study using data from the River City project

    NASA Astrophysics Data System (ADS)

    Dukas, Georg

    Though research in emerging technologies is vital to fulfilling their incredible potential for educational applications, it is often fraught with analytic challenges related to large datasets. This thesis explores these challenges in researching multiuser virtual environments (MUVEs). In a MUVE, users assume a persona and traverse a virtual space often depicted as a physical world, interacting with other users and digital artifacts. As students participate in MUVE-based curricula, detailed records of their paths through the virtual world are typically collected in event logs. Although many studies have demonstrated the instructional power of MUVEs (e.g., Barab, Hay, Barnett, & Squire, 2001; Ketelhut, Dede, Clarke, Nelson, & Bowman, 2008), none have successfully quantified these student paths for analysis in the aggregate. This thesis constructs several frameworks for conducting research involving student navigational choices in MUVEs based on a case study of data generated from the River City project. After providing a context for the research and an introduction to the River City dataset, the first part of this thesis explores the issues associated with data compression and presents a grounded theory approach (Glaser & Strauss, 1967) to the cleaning, compacting, and coding of MUVE datasets. In closing this section, I discuss the implications of preparation choices for further analysis. Second, two conceptually different approaches to analyzing behavioral sequences are investigated. For each approach, a theoretical context, description of possible exploratory and confirmatory methods, and illustrative examples from River City are provided. The thesis then situates these specific analytic approaches within the constellation of possible research utilizing MUVE event log data. Finally, based on the lessons of River City and the investigation of a spectrum of possible event logs, a set of design heuristics for data collection in MUVEs is constructed and a possible future for research in these environments is envisioned.

  18. Semigroup theory and numerical approximation for equations in linear viscoelasticity

    NASA Technical Reports Server (NTRS)

    Fabiano, R. H.; Ito, K.

    1990-01-01

    A class of abstract integrodifferential equations used to model linear viscoelastic beams is investigated analytically, applying a Hilbert-space approach. The basic equation is rewritten as a Cauchy problem, and its well-posedness is demonstrated. Finite-dimensional subspaces of the state space and an estimate of the state operator are obtained; approximation schemes for the equations are constructed; and the convergence is proved using the Trotter-Kato theorem of linear semigroup theory. The actual convergence behavior of different approximations is demonstrated in numerical computations, and the results are presented in tables.

  19. Placing Families in Context: Challenges for Cross-National Family Research

    PubMed Central

    Yu, Wei-hsin

    2015-01-01

    Cross-national comparisons constitute a valuable strategy to assess how broader cultural, political, and institutional contexts shape family outcomes. One typical approach of cross-national family research is to use comparable data from a limited number of countries, fit similar regression models for each country, and compare results across country-specific models. Increasingly, researchers are adopting a second approach, which requires merging data from many more societies and testing multilevel models using the pooled sample. Although the second approach has the advantage of allowing direct estimates of the effects of nation-level characteristics, it is more likely to suffer from the problems of omitted-variable bias, influential cases, and measurement and construct nonequivalence. I discuss ways to improve the first approach's ability to infer macrolevel influences, as well as how to deal with challenges associated with the second one. I also suggest choosing analytical strategies according to whether the data meet multilevel models’ assumptions. PMID:25999603
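    A minimal sketch of the second (pooled multilevel) approach, using the statsmodels mixed-model API on synthetic data; the variable names (y, x, gdp, country) and all effect sizes are hypothetical. The point is that the nation-level predictor enters the model directly, which the country-by-country regression approach cannot provide.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy pooled cross-national data: 30 countries x 200 respondents
rng = np.random.default_rng(1)
countries = np.repeat(np.arange(30), 200)
gdp = np.repeat(rng.normal(0, 1, 30), 200)          # nation-level characteristic
x = rng.normal(0, 1, countries.size)                # individual-level predictor
u = np.repeat(rng.normal(0, 0.5, 30), 200)          # country random intercept
y = 1.0 + 0.4 * x + 0.3 * gdp + u + rng.normal(0, 1, countries.size)

df = pd.DataFrame({"y": y, "x": x, "gdp": gdp, "country": countries})

# Random-intercept model: the nation-level effect (gdp) is estimated directly
model = smf.mixedlm("y ~ x + gdp", df, groups=df["country"]).fit()
print(model.summary())
```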

  20. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory: relative to the Martian equator for surface beacons, and relative to the orbital plane for orbiters. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.
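    The flavor of such an error covariance analysis can be sketched with a linearized least-squares toy: formal position uncertainties follow from (H^T R^-1 H)^-1, where H holds measurement partials. The partials and noise values below are invented solely to illustrate how weak Doppler sensitivity to the downtrack component inflates that uncertainty until range data are added.

```python
import numpy as np

# Toy linearized orbit-determination covariance: x = [downtrack, crosstrack, radial].
# Rows of H are invented measurement partials; Doppler rows are nearly blind to
# downtrack position, while range rows constrain it directly.
H_doppler = np.array([[0.05, 0.8, 0.6],
                      [0.02, 0.7, 0.7],
                      [0.04, 0.6, 0.8]])
H_range = np.array([[0.9, 0.3, 0.3],
                    [0.8, 0.4, 0.4]])

def formal_sigma(H, noise):
    W = np.eye(H.shape[0]) / noise**2          # weight matrix R^{-1}
    P = np.linalg.inv(H.T @ W @ H)             # covariance (H^T R^-1 H)^-1
    return np.sqrt(np.diag(P))                 # 1-sigma formal uncertainties

print("Doppler only :", formal_sigma(H_doppler, 0.1))
print("Doppler+range:", formal_sigma(np.vstack([H_doppler, H_range]), 0.1))
```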

  1. RT-18: Value of Flexibility. Phase 1

    DTIC Science & Technology

    2010-09-25

    an analytical framework based on sound mathematical constructs. A review of the current state-of-the-art showed that there is little unifying theory...framework that is mathematically consistent, domain independent and applicable under varying information levels. This report presents our advances in...During this period, we also explored the development of an analytical framework based on sound mathematical constructs. A review of the current state

  2. Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions

    DTIC Science & Technology

    2014-12-05

    test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with...the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates...assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions

  3. Development of an efficient signal amplification strategy for label-free enzyme immunoassay using two site-specific biotinylated recombinant proteins.

    PubMed

    Tang, Jin-Bao; Tang, Ying; Yang, Hong-Ming

    2015-02-15

    Constructing a recombinant protein between a reporter enzyme and a detector protein to produce a homogeneous immunological reagent is advantageous over random chemical conjugation. However, the approach can hardly incorporate multiple enzymes into a difunctional fusion protein, which results in insufficient amplification of the enzymatic signal, thereby limiting its application in further enhancement of the analytical signal. In this study, two site-specific biotinylated recombinant proteins, namely, divalent biotinylated alkaline phosphatase (AP) and monovalent biotinylated ZZ domain, were produced by employing the Avitag-BirA system. Through the high-affinity streptavidin (SA)-biotin interaction, the divalent biotinylated APs were clustered in the SA-biotin complex and then incorporated with the biotinylated ZZ. This incorporation results in the formation of a functional macromolecule that involves numerous APs, thereby enhancing the enzymatic signal, and in the production of several ZZ molecules for the interaction with immunoglobulin G (IgG) antibody. The advantage of this signal amplification strategy is demonstrated through ELISA, in which the analytical signal was substantially enhanced, with a 32-fold increase in detection sensitivity compared with the ZZ-AP fusion protein approach. The proposed immunoassay without chemical modification can be an alternative strategy to enhance analytical signals in various applications involving immunosensors and diagnostic chips, given that the label-free IgG antibody is suitable for the ZZ protein. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A new oil/membrane approach for integrated sweat sampling and sensing: sample volumes reduced from μL's to nL's and reduction of analyte contamination from skin.

    PubMed

    Peng, R; Sonner, Z; Hauke, A; Wilder, E; Kasting, J; Gaillard, T; Swaille, D; Sherman, F; Mao, X; Hagen, J; Murdock, R; Heikenfeld, J

    2016-11-01

    Wearable sweat biosensing technology has dominantly relied on techniques which place planar sensors or fluid-capture materials directly onto the skin surface. This 'on-skin' approach can result in sample volumes in the μL regime, due to the roughness of skin and/or the presence of hair. Not only does this increase the required sampling time to tens of minutes or more, but it also increases the time that sweat spends on skin and therefore increases the amount of analyte contamination coming from the skin surface. Reported here is a first demonstration of a new paradigm in sweat sampling and sensing, where sample volumes are reduced from the μL to the nL regime, and where analyte contamination from skin is reduced or even eliminated. A micro-porous membrane is constructed such that it is porous to sweat only. To complete a working device, first placed onto skin is a cosmetic-grade oil, secondly this membrane, and thirdly the sensors. As a result, spreading of sweat is isolated to only regions above the sweat glands before it reaches the sensors. Best-case sampling intervals are on the order of several minutes, and the majority of hydrophilic (low oil solubility) contaminants from the skin surface are blocked. In vitro validation of this new approach is performed with an improved artificial skin including human hair. In vivo tests show strikingly consistent results, and reveal that the oil/membrane is robust enough to even allow horizontal sliding of a sensor.

  5. Reconceptualizing 'extremism' and 'moderation': from categories of analysis to categories of practice in the construction of collective identity.

    PubMed

    Hopkins, Nick; Kahani-Hopkins, Vered

    2009-03-01

    Much psychological research employs the categories of extremism and moderation as categories of analysis (e.g. to identify the psychological bases for, and consequences of, holding certain positions). This paper argues that these categorizations inevitably reflect one's values and taken-for-granted assumptions about social reality and that their use as analytic categories limits our ability to explore what is really important: social actors' own constructions of social reality. In turn, we argue that if we are to focus on the latter, there may be merit in exploring how social actors themselves use the categories of moderation and extremism to construct their own terms of reference. That is, we propose to re-conceptualize the categories of moderation and extremism as categories of practice rather than analysis. The utility of this approach is illustrated with qualitative data. We argue that these data illustrate the importance of respecting social actors' own constructions of social reality (rather than imposing our own). Moreover, we argue that categories of moderation and extremism may be employed by social actors in diverse ways to construct different terms of reference and so recruit support for different identity-related projects.

  6. Simple construct evaluation with latent class analysis: An investigation of Facebook addiction and the development of a short form of the Facebook Addiction Test (F-AT).

    PubMed

    Dantlgraber, Michael; Wetzel, Eunike; Schützenberger, Petra; Stieger, Stefan; Reips, Ulf-Dietrich

    2016-09-01

    In psychological research, there is a growing interest in using latent class analysis (LCA) for the investigation of quantitative constructs. The aim of this study is to illustrate how LCA can be applied to gain insights on a construct and to select items during test development. We show the added benefits of LCA beyond factor-analytic methods, namely being able (1) to describe groups of participants that differ in their response patterns, (2) to determine appropriate cutoff values, (3) to evaluate items, and (4) to evaluate the relative importance of correlated factors. As an example, we investigated the construct of Facebook addiction using the Facebook Addiction Test (F-AT), an adapted version of the Internet Addiction Test (I-AT). Applying LCA facilitates the development of new tests and short forms of established tests. We present a short form of the F-AT based on the LCA results and validate the LCA approach and the short F-AT with several external criteria, such as chatting, reading newsfeeds, and posting status updates. Finally, we discuss the benefits of LCA for evaluating quantitative constructs in psychological research.
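    A rough sketch of the model-selection step, using scikit-learn's GaussianMixture as a continuous stand-in for LCA (proper LCA would use categorical indicators); the simulated item scores, class structure, and BIC-based choice of the number of classes are all illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy item scores for an addiction-style test; two latent groups are simulated.
rng = np.random.default_rng(2)
low = rng.normal(1.5, 0.6, size=(700, 8))    # 8 items, low-use class
high = rng.normal(3.5, 0.7, size=(300, 8))   # high-use class
X = np.clip(np.vstack([low, high]), 1, 5)

# Pick the number of latent classes by BIC
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in (1, 2, 3, 4)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
model = fits[best_k]
print("classes chosen by BIC:", best_k)

# A data-driven cutoff candidate: the total-score boundary between classes
labels = model.predict(X)
totals = X.sum(axis=1)
for k in range(best_k):
    print(f"class {k}: mean total score {totals[labels == k].mean():.1f}")
```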

  7. Analytic model of a multi-electron atom

    NASA Astrophysics Data System (ADS)

    Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.

    2017-12-01

    A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single-effective charge parameter for all electrons of a given atom. The basis completeness allows us to employ the secondary-quantized representation for the construction of regular perturbation theory, which includes in a natural way correlation effects, converges fast and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function in a convolution of single-electronic Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system, provides accuracy, which is independent of the number of electrons and is important for applications where the Thomas-Fermi model is still utilized. In addition already in second-order perturbation theory our results become comparable with those via a multi-configuration Hartree-Fock approach.

  8. Instability of cooperative adaptive cruise control traffic flow: A macroscopic approach

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2013-10-01

    This paper proposes a macroscopic model to describe the operations of cooperative adaptive cruise control (CACC) traffic flow, which is an extension of adaptive cruise control (ACC) traffic flow. In CACC traffic flow a vehicle can exchange information with many preceding vehicles through wireless communication. Due to such communication the CACC vehicle can follow its leader at a closer distance than the ACC vehicle. The stability diagrams are constructed from the developed model based on the linear and nonlinear stability method for a certain model parameter set. It is found analytically that CACC vehicles enhance the stabilization of traffic flow with respect to both small and large perturbations compared to ACC vehicles. Numerical simulation is carried out to support our analytical findings. Based on the nonlinear stability analysis, we show analytically and numerically that the CACC system better improves the dynamic equilibrium capacity over the ACC system. We argue that, in parallel to microscopic models for CACC traffic flow, the newly developed macroscopic model will provide a complete insight into the dynamics of intelligent traffic flow.
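    The stabilizing effect of communication with vehicles further ahead can be illustrated with a microscopic toy (not the authors' macroscopic model): a linear car-following platoon in which a CACC-style gain on the second vehicle ahead damps the growth of a leader perturbation. Gains, time step, and platoon size are arbitrary assumptions.

```python
import numpy as np

def platoon_amplification(k2=0.0, n=30, dt=0.02, steps=6000):
    """Toy linear car-following platoon around equilibrium. k2 > 0 adds a
    CACC-style gain on the second vehicle ahead; k2 = 0 is the ACC baseline."""
    k1, kv = 0.5, 0.9                  # spacing / relative-speed gains (assumed)
    y = np.zeros(n)                    # spacing deviations
    v = np.zeros(n)                    # speed deviations
    peak = np.zeros(n)
    for t in range(steps):
        v_lead = 1.0 if t * dt < 1.0 else 0.0            # 1 s speed pulse on the leader
        v1 = np.concatenate(([v_lead], v[:-1]))          # immediate predecessor
        v2 = np.concatenate(([v_lead, v_lead], v[:-2]))  # second vehicle ahead
        a = k1 * y + kv * (v1 - v) + k2 * (v2 - v)
        y += dt * (v1 - v)
        v += dt * a
        peak = np.maximum(peak, np.abs(v))
    return peak

print("ACC  perturbation at platoon tail:", platoon_amplification(k2=0.0)[-1])
print("CACC perturbation at platoon tail:", platoon_amplification(k2=0.6)[-1])
```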

  9. On Conducting Construct Validity Meta-Analyses for the Rorschach: A Reply to Tibon Czopp and Zeligman (2016).

    PubMed

    Mihura, Joni L; Meyer, Gregory J; Dumitrascu, Nicolae; Bombel, George

    2016-01-01

    We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and reporting only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and that, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but neither described their methodology nor provided evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would have actually reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

  10. On approximately symmetric informationally complete positive operator-valued measures and related systems of quantum states

    NASA Astrophysics Data System (ADS)

    Klappenecker, Andreas; Rötteler, Martin; Shparlinski, Igor E.; Winterhof, Arne

    2005-08-01

    We address the problem of constructing positive operator-valued measures (POVMs) in finite dimension n consisting of n2 operators of rank one which have an inner product close to uniform. This is motivated by the related question of constructing symmetric informationally complete POVMs (SIC-POVMs) for which the inner products are perfectly uniform. However, SIC-POVMs are notoriously hard to construct and, despite some success of constructing them numerically, there is no analytic construction known. We present two constructions of approximate versions of SIC-POVMs, where a small deviation from uniformity of the inner products is allowed. The first construction is based on selecting vectors from a maximal collection of mutually unbiased bases and works whenever the dimension of the system is a prime power. The second construction is based on perturbing the matrix elements of a subset of mutually unbiased bases. Moreover, we construct vector systems in Cn which are almost orthogonal and which might turn out to be useful for quantum computation. Our constructions are based on results of analytic number theory.
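    The first construction can be checked numerically: in prime dimension p (a special case of the prime-power setting), the standard quadratic-phase MUB vectors give p^2 rank-one directions whose squared inner products take only the values 0 and 1/p, close to the perfectly uniform SIC target of 1/(p+1). A short numpy verification, with the dimension chosen arbitrarily:

```python
import numpy as np

p = 7                                    # prime dimension (assumed)
k = np.arange(p)

# Standard MUB construction in prime dimension: basis b, vector a has
# components exp(2*pi*i*(b*k^2 + a*k)/p) / sqrt(p).
vectors = np.array([
    np.exp(2j * np.pi * (b * k**2 + a * k) / p) / np.sqrt(p)
    for b in range(p) for a in range(p)
])                                        # p^2 rank-one directions

G = np.abs(vectors.conj() @ vectors.T) ** 2          # squared inner products
off_diag = G[~np.eye(p * p, dtype=bool)]
print("off-diagonal |<psi_i|psi_j>|^2 values:", np.unique(off_diag.round(6)))
print("uniform SIC target 1/(p+1) =", 1 / (p + 1))
```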

  11. New Approach to a Practical Quartz Crystal Microbalance Sensor Utilizing an Inkjet Printing System

    PubMed Central

    Fuchiwaki, Yusuke; Tanaka, Masato; Makita, Yoji; Ooie, Toshihiko

    2014-01-01

    The present work demonstrates a valuable approach to developing quartz crystal microbalance (QCM) sensor units inexpensively for reliable determination of analytes. This QCM sensor unit is constructed by inkjet printing equipment utilizing background noise removal techniques. Inkjet printing equipment was chosen as an alternative to an injection pump in conventional flow-mode systems to facilitate the commercial applicability of these practical devices. The results demonstrate minimization of fluctuations from external influences, determination of antigen-antibody interactions in an inkjet deposition, and quantification of C-reactive protein in the range of 50–1000 ng·mL−1. We thus demonstrate a marketable application of an inexpensive and easily available QCM sensor system. PMID:25360577

  12. Quasiperiodic one-dimensional photonic crystals with adjustable multiple photonic bandgaps.

    PubMed

    Vyunishev, Andrey M; Pankin, Pavel S; Svyakhovskiy, Sergey E; Timofeev, Ivan V; Vetrov, Stepan Ya

    2017-09-15

    We propose an elegant approach to produce photonic bandgap (PBG) structures with multiple photonic bandgaps by constructing quasiperiodic photonic crystals (QPPCs) composed of a superposition of photonic lattices with different periods. Generally, QPPC structures exhibit both aperiodicity and multiple PBGs due to their long-range order. They are described by a simple analytical expression, instead of quasiperiodic tiling approaches based on substitution rules. Here we describe the optical properties of QPPCs exhibiting two PBGs that can be tuned independently. PBG interband spacing and its depth can be varied by choosing appropriate reciprocal lattice vectors and their amplitudes. These effects are confirmed by the proof-of-concept measurements made for the porous silicon-based QPPC of the appropriate design.
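    The superposition idea admits a compact numerical illustration: an index profile built from two cosine modulations with different periods has two dominant reciprocal-lattice peaks, one per PBG, whose positions and amplitudes can be chosen independently. The profile parameters below are invented, and a full transfer-matrix calculation of the gaps is omitted.

```python
import numpy as np

# Superposition-type quasiperiodic index profile: two modulations with
# different periods produce two independent reciprocal-lattice peaks.
N = 2**14
L = 200.0                                          # window length in microns
x = np.linspace(0, L, N, endpoint=False)
G1, G2 = 2 * np.pi / 0.4, 2 * np.pi / 0.5          # reciprocal vectors (assumed periods)
n_profile = 1.8 + 0.10 * np.cos(G1 * x) + 0.06 * np.cos(G2 * x)

spectrum = np.abs(np.fft.rfft(n_profile - n_profile.mean()))
freqs = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)    # spatial frequency, rad/micron

# The two dominant peaks sit at G1 and G2; their amplitudes (0.10 vs 0.06)
# control the respective gap depths, their positions the gap wavelengths.
top2 = np.argsort(spectrum)[-2:]
print("recovered peaks (rad/um):", np.sort(freqs[top2]))
print("expected G1, G2 (rad/um):", sorted((G1, G2)))
```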

  13. Construct Meaning in Multilevel Settings

    ERIC Educational Resources Information Center

    Stapleton, Laura M.; Yang, Ji Seung; Hancock, Gregory R.

    2016-01-01

    We present types of constructs, individual- and cluster-level, and their confirmatory factor analytic validation models when data are from individuals nested within clusters. When a construct is theoretically individual level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases,…

  14. Large Engine Technology Program. Task 21: Rich Burn Liner for Near Term Experimental Evaluations

    NASA Technical Reports Server (NTRS)

    Hautman, D. J.; Padget, F. C.; Kwoka, D.; Siskind, K. S.; Lohmann, R. P.

    2005-01-01

    The objective of the task reported herein, which was conducted as part of the NASA sponsored Large Engine Technology program, was to define and evaluate a near-term rich-zone liner construction based on currently available materials and fabrication processes for a Rich-Quench-Lean combustor. This liner must be capable of operation at the temperatures and pressures of simulated HSCT flight conditions but only needs sufficient durability for limited duration testing in combustor rigs and demonstrator engines in the near future. This must be achieved at realistic cooling airflow rates since the approach must not compromise the emissions, performance, and operability of the test combustors, relative to the product engine goals. The effort was initiated with an analytical screening of three different liner construction concepts. These included a full cylinder metallic liner and one with multiple segments of monolithic ceramic, both of which incorporated convective cooling on the external surface using combustor airflow that bypassed the rich zone. The third approach was a metallic platelet construction with internal convective cooling. These three metal liner/jacket combinations were tested in a modified version of an existing Rich-Quench-Lean combustor rig to obtain data for heat transfer model refinement and durability verification.

  15. Modifying the photoelectric behavior of bacteriorhodopsin by site-directed mutagenesis: electrochemical and genetic engineering approaches to molecular devices

    NASA Astrophysics Data System (ADS)

    Hong, F. T.; Hong, F. H.; Needleman, R. B.; Ni, B.; Chang, M.

    1992-07-01

    Bacteriorhodopsins (bR's) modified by substitution of the chromophore with synthetic vitamin A analogues or by spontaneous mutation have been reported as successful examples of using biomaterials to construct molecular optoelectronic devices. The operation of these devices depends on desirable optical properties derived from molecular engineering. This report examines the effect of site-directed mutagenesis on the photoelectric behavior of bR thin films with an emphasis on their application to the construction of molecular devices based on their unique photoelectric behavior. We examine the photoelectric signals induced by a microsecond light pulse in thin films which contain reconstituted oriented purple membrane sheets isolated from several mutant strains of Halobacterium halobium. A recently developed expression system is used to synthesize mutant bR's in their natural host, H. halobium. We then use a unique analytical method (tunable voltage clamp method) to investigate the effect of pH on the relaxation of two components of the photoelectric signals, B1 and B2. We found that for the four mutant bR's examined, the pH dependence of the B2 component varies significantly. Our results suggest that genetic engineering approaches can produce mutant bR's with altered photoelectric characteristics that can be exploited in the construction of devices.

  16. Data analytics and parallel-coordinate materials property charts

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.

    2018-01-01

    It is often advantageous to display material properties relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases, these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy, known as parallel coordinates, to better represent multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials properties charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations, and underline distinctions among different materials classes.
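    For readers who want to experiment, pandas ships a parallel-coordinates plot that reproduces the basic chart type discussed here; the five materials and property values below are rough order-of-magnitude placeholders, not reference data.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Illustrative property chart: each line is one material, each vertical axis one property.
df = pd.DataFrame({
    "class":     ["metal", "metal", "ceramic", "ceramic", "polymer"],
    "density":   [7.8, 2.7, 3.9, 3.2, 1.2],        # g/cm^3
    "E":         [200, 70, 380, 310, 3],            # GPa
    "strength":  [400, 270, 350, 400, 60],          # MPa
    "k_thermal": [50, 235, 30, 100, 0.2],           # W/(m K)
})

# Normalize each property column so all axes share a comparable scale
norm = df.copy()
for col in df.columns[1:]:
    norm[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

parallel_coordinates(norm, "class", colormap="tab10")
plt.ylabel("normalized property value")
plt.show()
```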

  17. Solution of the exact equations for three-dimensional atmospheric entry using directly matched asymptotic expansions

    NASA Technical Reports Server (NTRS)

    Busemann, A.; Vinh, N. X.; Culp, R. D.

    1976-01-01

    The problem of determining the trajectories, partially or wholly contained in the atmosphere of a spherical, nonrotating planet, is considered. The exact equations of motion for three-dimensional, aerodynamically affected flight are derived. Modified Chapman variables are introduced and the equations are transformed into a set suitable for analytic integration using asymptotic expansions. The trajectory is solved in two regions: the outer region, where the force may be considered a gravitational field with aerodynamic perturbations, and the inner region, where the force is predominantly aerodynamic, with gravity as a perturbation. The two solutions are matched directly. A composite solution, valid everywhere, is constructed by additive composition. This approach of directly matched asymptotic expansions applied to the exact equations of motion couched in terms of modified Chapman variables yields an analytical solution which should prove to be a powerful tool for aerodynamic orbit calculations.
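    For reference, the additive composition invoked above takes the standard textbook form (written here in generic variables, not the paper's modified Chapman variables):

```latex
% Additive composition of matched asymptotic expansions: the composite solution
% is the outer solution plus the inner solution minus their common part in the
% overlap (matching) region, so the correction is not counted twice.
\[
  y_{\mathrm{composite}}(x) \;=\; y_{\mathrm{outer}}(x) \;+\; y_{\mathrm{inner}}(x)
  \;-\; y_{\mathrm{match}}(x),
  \qquad
  y_{\mathrm{match}} \;=\; \lim_{\xi \to \infty} y_{\mathrm{inner}}
  \;=\; \lim_{x \to x_{\mathrm{overlap}}} y_{\mathrm{outer}} .
\]
```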

  18. Assessing electronic health record systems in emergency departments: Using a decision analytic Bayesian model.

    PubMed

    Ben-Assuli, Ofir; Leshno, Moshe

    2016-09-01

    In the last decade, health providers have implemented information systems to improve accuracy in medical diagnosis and decision-making. This article evaluates the impact of an electronic health record on emergency department physicians' diagnosis and admission decisions. A decision analytic approach using a decision tree was constructed to model the admission decision process to assess the added value of medical information retrieved from the electronic health record. Using a Bayesian statistical model, this method was evaluated on two coronary artery disease scenarios. The results show that the cases of coronary artery disease were better diagnosed when the electronic health record was consulted and led to more informed admission decisions. Furthermore, the value of medical information required for a specific admission decision in emergency departments could be quantified. The findings support the notion that physicians and patient healthcare can benefit from implementing electronic health record systems in emergency departments. © The Author(s) 2015.
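    The structure of such a decision-analytic evaluation can be sketched in a few lines: Bayes-update the disease probability on the EHR-derived signal, act to maximize expected utility, and read the value of the information off the difference in expected utilities. All probabilities and utilities below are invented for illustration.

```python
# Toy decision-tree sketch of the admission decision with and without an EHR signal.
p_cad = 0.15                       # prior probability of coronary artery disease
sens, spec = 0.85, 0.80            # quality of the EHR-derived signal (assumed)

# Utilities: u[(action, disease present?)]
u = {("admit", True): 0.9, ("admit", False): 0.6,
     ("discharge", True): 0.1, ("discharge", False): 1.0}

def best_action(p):
    """Expected-utility-maximizing action at disease probability p."""
    eu = {a: p * u[(a, True)] + (1 - p) * u[(a, False)] for a in ("admit", "discharge")}
    return max(eu, key=eu.get), max(eu.values())

# Without the EHR: act on the prior alone
_, eu_prior = best_action(p_cad)

# With the EHR: Bayes-update on a positive/negative signal, then act
p_pos = sens * p_cad + (1 - spec) * (1 - p_cad)
post_pos = sens * p_cad / p_pos
post_neg = (1 - sens) * p_cad / (1 - p_pos)
eu_ehr = p_pos * best_action(post_pos)[1] + (1 - p_pos) * best_action(post_neg)[1]

print(f"expected utility without EHR: {eu_prior:.3f}")
print(f"expected utility with EHR:    {eu_ehr:.3f}")
print(f"value of EHR information:     {eu_ehr - eu_prior:.3f}")
```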

  19. Estimating estuarine salt intrusion using an analytical and a full hydrodynamic simulation - a comparison for the Ma Estuary

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc Anh; Cat Vu, Minh; Willems, Patrick; Monbaliu, Jaak

    2017-04-01

    Salt intrusion is the most acute problem for irrigation water quality in coastal regions during dry seasons. The use of numerical hydrodynamic models is widespread and has become the prevailing approach to simulate the salinity distribution in an estuary. Despite its power to estimate both spatial and temporal salinity variations along the estuary, this approach also has its drawbacks. The high computational cost and the need for detailed hydrological, bathymetric and tidal datasets put some limits on its usability in particular case studies. In poor data environments, analytical salt intrusion models are more widely used as they require less data and further reduce the computational effort. There are, however, few studies in which a more comprehensive comparison is made between the performance of a numerical hydrodynamic model and an analytical model. In this research the multi-channel Ma Estuary in Vietnam is considered as a case study. Both the analytical and the hydrodynamic simulation approaches have been applied and were found capable of mimicking the longitudinal salt distribution along the estuary. The data to construct the MIKE11 model include observations provided by a network of fixed hydrological stations and cross-section measurements along the estuary. The analytical model is developed in parallel but based on information obtained from the hydrological network only (typical of a poor data environment). Note that the two convergence length parameters of this simplified model are usually extracted from topography data, including cross-sectional area and width along the estuary. Furthermore, freshwater discharge data are needed, but these are gauged further upstream outside of the tidal region and cannot reflect the individual flows entering the multi-channel estuary. To tackle these poor-data limitations, a new approach was needed to calibrate the two estuary geometry parameters of the parsimonious salt intrusion model. The calibrated cross-sectional convergence length values agree closely with values based on a field survey of the estuary. By assuming a linear relation between the inverses of the individual flows entering the estuary and the inverses of the sum of flows gauged further upstream, the individual flows can be assessed. Evaluation at high water slack shows that the two modeling approaches give similar results. They explain the salinity distribution along the Ma Estuary reasonably well, with Nash-Sutcliffe efficiency values at gauging stations along the estuary of 0.50 or higher. These performances demonstrate the predictive power of the simplified salt intrusion model and of the proposed parameter/input estimation approach, even with poorer data.
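    The analytical model family referred to here can be caricatured with a constant-dispersion, steady-state variant for an exponentially converging channel (the actual Savenije-type model uses a Van der Burgh dispersion relation, which is not reproduced); all parameter values below are invented.

```python
import numpy as np

S0 = 30.0          # salinity at the estuary mouth (psu)
A0 = 8000.0        # cross-sectional area at the mouth (m^2)
a = 25e3           # cross-sectional convergence length (m) -- a calibration target
D0 = 250.0         # dispersion coefficient (m^2/s), held constant in this toy
Q = 400.0          # freshwater discharge (m^3/s)

x = np.linspace(0, 60e3, 300)                 # distance upstream from the mouth (m)
A = A0 * np.exp(-x / a)                       # exponentially converging geometry

# Steady balance of seaward advection and up-estuary dispersion,
# A*D*dS/dx = -Q*S, integrates with the exponential geometry to:
S = S0 * np.exp(-(Q * a) / (A0 * D0) * (np.exp(x / a) - 1.0))

L_intrusion = x[np.argmax(S < 0.1)] / 1e3     # length to the 0.1 psu limit
print(f"salt intrusion length ~ {L_intrusion:.0f} km")
```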

  20. Effect-Based Screening Methods for Water Quality Characterization Will Augment Conventional Analyte-by-Analyte Chemical Methods in Research As Well As Regulatory Monitoring

    EPA Science Inventory

    Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...

  1. Innovative residential floor construction: Structural evaluation of steel joists with pre-formed web openings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elhajj, N.R.

    1999-03-01

    Since 1992, the US Department of Housing and Urban Development has sponsored numerous studies to identify, evaluate, and implement innovative structural materials, such as cold-formed steel (CFS), in the residential market. The use of CFS is still very limited, partly because steel is not being effectively integrated into conventional home construction. One of the major barriers to the use of CFS floor joists is the impact it has on placement of large waste drains and ductwork installed in floor systems. This report provides an overview of tests conducted by the NAHB to integrate these systems with CFS. A brief literature review of relevant work, followed by a detailed overview of the experimental and analytical approach, is also provided. The report recommends adoption of the research findings in residential and commercial applications.

  2. Recursive linearization of multibody dynamics equations of motion

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Chieh; Yae, K. Harold

    1989-01-01

    The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem in linear control design. One approach is to obtain a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the steps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then, they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, thus requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
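    The contrast between analytical and numerical-perturbation linearization can be seen in miniature with a pendulum: the analytic Jacobian follows the structure of the equations, while the numerical one differentiates by finite perturbations and carries truncation error. A sketch under these toy assumptions:

```python
import numpy as np

g, l = 9.81, 1.0

def f(x):
    """State derivative for the pendulum, x = [angle, angular rate]."""
    return np.array([x[1], -(g / l) * np.sin(x[0])])

def jacobian_analytic(x):
    """Jacobian derived by hand from the equations of motion."""
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(x[0]), 0.0]])

def jacobian_numeric(x, eps=1e-6):
    """Central-difference perturbation of each state component."""
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x0 = np.array([0.7, 0.2])                      # linearization point (arbitrary)
print("max discrepancy:", np.abs(jacobian_analytic(x0) - jacobian_numeric(x0)).max())
```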

  3. GLOBAL PROPERTIES OF FULLY CONVECTIVE ACCRETION DISKS FROM LOCAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodo, G.; Ponzo, F.; Rossi, P.

    2015-08-01

    We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.

  4. Periodic wave, breather wave and travelling wave solutions of a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation in fluids or plasmas

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Qiang; Gao, Yi-Tian; Jia, Shu-Liang; Huang, Qian-Min; Lan, Zhong-Zhou

    2016-11-01

    In this paper, a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation is investigated, which has been presented as a model for the shallow water wave in fluids or the electrostatic wave potential in plasmas. By virtue of the binary Bell polynomials, the bilinear form of this equation is obtained. With the aid of the bilinear form, N -soliton solutions are obtained by the Hirota method, periodic wave solutions are constructed via the Riemann theta function, and breather wave solutions are obtained according to the extended homoclinic test approach. Travelling waves are constructed by the polynomial expansion method as well. Then, the relations between soliton solutions and periodic wave solutions are strictly established, which implies the asymptotic behaviors of the periodic waves under a limited procedure. Furthermore, we obtain some new solutions of this equation by the standard extended homoclinic test approach. Finally, we give a generalized form of this equation, and find that similar analytical solutions can be obtained from the generalized equation with arbitrary coefficients.
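    For orientation, Hirota's method for KP-type equations rests on a dependent-variable transformation of the schematic form below; the bilinear form of the specific B-type equation studied here differs in its details.

```latex
% Schematic of Hirota's method for KP-type equations: the transformation
% u = 2 (ln f)_{xx} converts the nonlinear PDE into a bilinear equation in f,
% and an N-soliton solution is built from exponentials, e.g. for N = 1:
\[
  u \;=\; 2\,\partial_x^2 \ln f, \qquad
  f \;=\; 1 + e^{\eta}, \qquad
  \eta \;=\; kx + ly + \omega t + \eta_0 ,
\]
% with the dispersion relation \omega(k, l) fixed by the bilinear equation.
```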

  5. Construct validity of the Beck Hopelessness Scale (BHS) among university students: A multitrait-multimethod approach.

    PubMed

    Boduszek, Daniel; Dhingra, Katie

    2016-10-01

    There is considerable debate about the underlying factor structure of the Beck Hopelessness Scale (BHS) in the literature. An established view is that it reflects a unitary or bidimensional construct in nonclinical samples. There are, however, reasons to reconsider this conceptualization. Based on previous factor analytic findings from both clinical and nonclinical studies, the aim of the present study was to compare 16 competing models of the BHS in a large university student sample (N = 1, 733). Sixteen distinct factor models were specified and tested using conventional confirmatory factor analytic techniques, along with confirmatory bifactor modeling. A 3-factor solution with 2 method effects (i.e., a multitrait-multimethod model) provided the best fit to the data. The reliability of this conceptualization was supported by McDonald's coefficient omega and the differential relationships exhibited between the 3 hopelessness factors ("feelings about the future," "loss of motivation," and "future expectations") and measures of goal disengagement, brooding rumination, suicide ideation, and suicide attempt history. The results provide statistical support for a 3-trait and 2-method factor model, and hence the 3 dimensions of hopelessness theorized by Beck. The theoretical and methodological implications of these findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Distribution factors for construction loads and girder capacity equations, final report.

    DOT National Transportation Integrated Search

    2017-03-01

    During the process of constructing a highway bridge, there are several construction stages that warrant consideration from a structural safety and design perspective. The first objective of the present study was to use analytical models of prestr...

  7. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
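    FIESTA itself is not reproduced here, but the parametric-bootstrap principle it accelerates is simple to sketch: re-simulate from the fitted model, re-estimate, and take quantiles. The toy below applies it to an ANOVA-style variance-ratio estimator in a balanced random-effects model, with all settings invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per = 200, 5

def simulate(h2):
    """Balanced one-way random-effects data with variance ratio h2."""
    g = rng.normal(0, np.sqrt(h2), n_groups)[:, None]
    return g + rng.normal(0, np.sqrt(1.0 - h2), (n_groups, n_per))

def estimate(y):
    """ANOVA-style moment estimator of sigma_g^2 / (sigma_g^2 + sigma_e^2)."""
    msb = n_per * y.mean(axis=1).var(ddof=1)      # between-group mean square
    msw = y.var(axis=1, ddof=1).mean()            # within-group mean square
    sg2 = max((msb - msw) / n_per, 0.0)           # bounded parameter space
    return sg2 / (sg2 + msw)

y_obs = simulate(h2=0.3)
h2_hat = estimate(y_obs)

# Parametric bootstrap: re-simulate at the fitted value, re-estimate, take quantiles
boot = np.array([estimate(simulate(h2_hat)) for _ in range(2000)])
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"estimate {h2_hat:.3f}, 95% CI ({lo_ci:.3f}, {hi_ci:.3f})")
```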

  8. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations

    PubMed Central

    Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.

    2013-01-01

    A novel variational method for the construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and can be used together with other methods to tackle the free energy estimation problem. PMID:23457427

  9. Beyond Authoritarian Personality: The Culture-Inclusive Theory of Chinese Authoritarian Orientation.

    PubMed

    Chien, Chin-Lung

    2016-01-01

    In a dyad interaction, respecting and obeying those with high status (authority) is highly valued in Chinese societies. Regarding explicit behaviors, Chinese people usually show respect to and obey authority, which we call authoritarian orientation. Previous literature has indicated that Chinese people have a high degree of authoritarian personality, which was considered a national character. However, under Confucian relationalism (Hwang, 2012a), authoritarian orientation is basically an ethical issue, and thus, should not be reduced to the contention of authoritarian personality. Based on Yang's (1993) indigenous conceptualization, Chien (2013) took an emic bottom-up approach to construct an indigenous model of Chinese authoritarian orientation; it represents a "culture-inclusive theory." However, Chien's model lacks the role of agency or intentionality. To resolve this issue and to achieve the epistemological goal of indigenous psychology (that is, "one mind, many mentalities"), this paper took the "cultural system approach" (Hwang, 2015b) to construct a culture-inclusive theory of authoritarian orientation in order to represent the universal mind of human beings as well as the mentalities of people in a particular culture. Two theories that reflect the universal mind, the "Face and Favor model" (Hwang, 1987) and the "Mandala Model of Self" (Hwang, 2011a,c), were used as analytical frameworks for interpreting Chien's original model. The process of constructing the culture-inclusive theory of authoritarian orientation may represent a paradigm for the construction of indigenous culture-inclusive theories while inspiring further development. Some future research directions are proposed herein.

  10. Statically determined slip-line field solution for the axial forming force estimation in the radial-axial ring rolling process

    NASA Astrophysics Data System (ADS)

    Quagliato, Luca; Berti, Guido A.

    2017-10-01

    In this paper, a statically determined slip-line solution algorithm is proposed for the calculation of the axial forming force in the radial-axial ring rolling process of flat rings. The developed solution is implemented in an Excel spreadsheet for the construction of the slip-line field and the calculation of the pressure factor to be used in the force model. Comparison between the analytical solution and the authors' FE simulation shows that the developed model improves on previous models in the literature and confirms the reliability of the proposed approach.

  11. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
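
    A numerical companion to the analytical result can be set up as a constrained quadratic minimization; the quadratic cost term, the sample covariance, and the coefficient below are assumptions for illustration rather than the paper's exact Hamiltonian:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Objective with a cost term (a numerical stand-in for the replica
    # calculation): H(w) = 0.5 w^T C w + 0.5 * gamma * ||w||^2,
    # minimized under the budget constraint sum(w) = N.
    N = 200                                   # number of assets
    X = rng.normal(size=(500, N))
    Cov = X.T @ X / 500                       # sample return covariance
    gamma = 0.5                               # cost coefficient (assumed)

    A = Cov + gamma * np.eye(N)
    u = np.linalg.solve(A, np.ones(N))
    w = N * u / u.sum()                       # Lagrange-multiplier solution

    risk = 0.5 * w @ Cov @ w / N              # investment risk per asset
    q = (w @ w) / N                           # investment concentration
    print(f"minimal risk per asset: {risk:.3f}, concentration: {q:.3f}")
    ```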

  12. Self-Powered Wireless Affinity-Based Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled RFID Antennas.

    PubMed

    Yuan, Mingquan; Alocilja, Evangelyn C; Chakrabartty, Shantanu

    2016-08-01

    This paper presents a wireless, self-powered, affinity-based biosensor that integrates paper-based microfluidics with our previously reported method for self-assembling radio-frequency (RF) antennas. At the core of the proposed approach is a silver-enhancement technique that grows portions of an RF antenna in regions where target antigens hybridize with target-specific affinity probes. The hybridization regions are defined by a network of nitrocellulose-based microfluidic channels, which implement a self-powered approach to sampling the reagent and controlling its flow and mixing. The integration substrate for the biosensor has been constructed using polyethylene, and the antenna has been patterned on the substrate using a low-cost ink-jet printing technique. The substrate has been integrated with passive radio-frequency identification (RFID) tags to demonstrate that the resulting sensor-tag can be used for continuous monitoring in a food supply chain, where direct measurement of analytes is typically considered impractical. We validate the proof-of-concept operation of the proposed sensor-tag using IgG as a model analyte and a 915 MHz ultra-high-frequency (UHF) RFID tagging technology.

  13. Predicting the behavior of microfluidic circuits made from discrete elements

    PubMed Central

    Bhargava, Krisna C.; Thompson, Bryant; Iqbal, Danish; Malmstadt, Noah

    2015-01-01

    Microfluidic devices can be used to execute a variety of continuous flow analytical and synthetic chemistry protocols with a great degree of precision. The growing availability of additive manufacturing has enabled the design of microfluidic devices with new functionality and complexity. However, these devices are prone to larger manufacturing variation than is typical of those made with micromachining or soft lithography. In this report, we demonstrate a design-for-manufacturing workflow that addresses performance variation at the microfluidic element and circuit level, in the context of mass manufacturing and additive manufacturing. Our approach relies on discrete microfluidic elements that are characterized by their terminal hydraulic resistance and associated tolerance. Network analysis is employed to construct simple analytical design rules for model microfluidic circuits. Monte Carlo analysis is employed at both the individual element and circuit level to establish expected performance metrics for several specific circuit configurations. A protocol based on osmometry is used to experimentally probe mixing behavior in circuits in order to validate these approaches. The overall workflow is applied to two application circuits with immediate use on the benchtop: series and parallel mixing circuits that are modularly programmable, virtually predictable, highly precise, and operable by hand. PMID:26516059
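
    The element-level Monte Carlo step is compact in Python; the resistances, tolerance, and driving pressure below are invented values, and a series channel network is used because its total hydraulic resistance is simply the sum of the elements:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Nominal hydraulic resistances and manufacturing tolerance (assumed values)
    R_nom = np.array([1.0, 2.0, 4.0])     # kPa*s/uL, three channels in series
    tol = 0.10                            # 10% relative standard deviation

    n_trials = 100_000
    R = rng.normal(R_nom, tol * R_nom, size=(n_trials, 3))
    R_total = R.sum(axis=1)               # series network: resistances add

    dP = 10.0                             # applied pressure, kPa (assumed)
    Q = dP / R_total                      # flow rate from the hydraulic Ohm's law
    print(f"Q = {Q.mean():.3f} +/- {Q.std():.3f} uL/s")
    ```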

  14. Ruptured thought: rupture as a critical attitude to nursing research.

    PubMed

    Beedholm, Kirsten; Lomborg, Kirsten; Frederiksen, Kirsten

    2014-04-01

    In this paper, we introduce the notion of ‘rupture’ from the French philosopher Michel Foucault, whose studies of discourse and governmentality have become prominent within nursing research during the last 25 years. We argue that a rupture perspective can be helpful for identifying and maintaining a critical potential within nursing research. The paper begins by introducing rupture as an inheritance from the French epistemological tradition. It then describes how rupture appears in Foucault's works, as both an overall philosophical approach and as an analytic tool in his historical studies. Two examples of analytical applications of rupture are elaborated. In the first example, rupture has inspired us to make an effort to seek alternatives to mainstream conceptions of the phenomenon under study. In the second example, inspired by Foucault's work on discontinuity, we construct a framework for historical epochs in nursing history. The paper concludes by discussing the potential of the notion of rupture as a response to the methodological concerns regarding the use of Foucault-inspired discourse analysis within nursing research. We agree with the critique of Cheek that the critical potential of discourse analysis is at risk of being undermined by research that tends to convert the approach into a fixed method.

  15. A Hamiltonian approach to Thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldiotti, M.C., E-mail: baldiotti@uel.br; Fresneda, R., E-mail: rodrigo.fresneda@ufabc.edu.br; Molina, C., E-mail: cmolina@usp.br

    In the present work we develop a strictly Hamiltonian approach to Thermodynamics. A thermodynamic description based on symplectic geometry is introduced, where all thermodynamic processes can be described within the framework of Analytic Mechanics. Our proposal is constructed on top of a usual symplectic manifold, where phase space is even dimensional and one has well-defined Poisson brackets. The main idea is the introduction of an extended phase space where thermodynamic equations of state are realized as constraints. We are then able to apply the canonical transformation toolkit to thermodynamic problems. Throughout this development, Dirac's theory of constrained systems is extensively used. To illustrate the formalism, we consider paradigmatic examples, namely, the ideal, van der Waals and Clausius gases. Highlights: • A strictly Hamiltonian approach to Thermodynamics is proposed. • Dirac's theory of constrained systems is extensively used. • Thermodynamic equations of state are realized as constraints. • Thermodynamic potentials are related by canonical transformations.
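
    A hedged sketch of the construction in our own notation (not necessarily the authors' conventions): conjugate pairs (V, P) and (S, T) span an even-dimensional phase space, and an equation of state such as the ideal gas law enters as a Dirac constraint that generates the thermodynamic processes:

    ```latex
    % Symplectic form on the extended phase space, the ideal-gas equation of
    % state as a constraint surface, and process evolution generated by the
    % constraint through the Poisson bracket (lambda selects the process).
    \begin{align}
      \omega &= \mathrm{d}P \wedge \mathrm{d}V + \mathrm{d}T \wedge \mathrm{d}S, \\
      \Phi &= PV - N k_{B} T \approx 0, \\
      \dot{F} &\approx \{F,\, \lambda \Phi\}.
    \end{align}
    ```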

  16. Testing the multidimensionality of the inventory of school motivation in a Dutch student sample.

    PubMed

    Korpershoek, Hanke; Xu, Kun; Mok, Magdalena Mo Ching; McInerney, Dennis M; van der Werf, Greetje

    2015-01-01

    A factor analytic and a Rasch measurement approach were applied to evaluate the multidimensional nature of the school motivation construct among more than 7,000 Dutch secondary school students. The Inventory of School Motivation (McInerney and Ali, 2006) was used, which intends to measure four motivation dimensions (mastery, performance, social, and extrinsic motivation), each comprising two first-order factors. One unidimensional model and three multidimensional models (4-factor, 8-factor, higher order) were fit to the data. Results of both approaches showed that the multidimensional models validly represented school motivation among Dutch secondary school pupils, whereas the model fit of the unidimensional model was poor. The differences in model fit between the three multidimensional models were small, although the two approaches favoured different models. The need to improve some of the items and to increase the measurement precision of several first-order factors is discussed.

  17. Estimation of treatment effect in a subpopulation: An empirical Bayes approach.

    PubMed

    Shen, Changyu; Li, Xiaochun; Jeong, Jaesik

    2016-01-01

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
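
    A minimal sketch of normal-normal empirical Bayes shrinkage, with invented subgroup numbers; the paper's actual estimator and interval construction may differ in detail:

    ```python
    import numpy as np

    # Observed subgroup treatment effects and their squared standard errors
    # (illustrative numbers, not from the paper)
    theta_hat = np.array([0.30, 0.10, 0.55, -0.05])
    se2 = np.array([0.04, 0.02, 0.09, 0.03])

    # Method-of-moments estimate of the between-subgroup variance tau^2
    overall = np.average(theta_hat, weights=1/se2)
    tau2 = max(0.0, np.var(theta_hat, ddof=1) - se2.mean())

    # Empirical Bayes posterior mean: shrink each subgroup toward the overall mean
    shrink = tau2 / (tau2 + se2)
    theta_eb = overall + shrink * (theta_hat - overall)
    se_eb = np.sqrt(shrink * se2)           # approximate posterior SD
    ci = np.stack([theta_eb - 1.96*se_eb, theta_eb + 1.96*se_eb], axis=1)
    print(ci.round(3))
    ```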

  18. Functional Interfaces Constructed by Controlled/Living Radical Polymerization for Analytical Chemistry.

    PubMed

    Wang, Huai-Song; Song, Min; Hang, Tai-Jun

    2016-02-10

    The high-value applications of functional polymers in analytical science generally require well-defined interfaces, including precisely synthesized molecular architectures and compositions. Controlled/living radical polymerization (CRP) has been developed as a versatile and powerful tool for the preparation of polymers with narrow molecular weight distributions and predetermined molecular weights. Among CRP systems, atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) polymerization are widely used to develop new materials for analytical science, such as surface-modified core-shell particles, monoliths, molecularly imprinted polymer (MIP) micro- or nanospheres, fluorescent nanoparticles, and multifunctional materials. In this review, we summarize the emerging functional interfaces constructed by RAFT and ATRP for applications in analytical science. Various polymers with precisely controlled architectures, including homopolymers, block copolymers, molecularly imprinted copolymers, and grafted copolymers, were synthesized by CRP methods for molecular separation, retention, or sensing. We expect that CRP methods will become the most popular technique for preparing functional polymers that can be broadly applied in analytical chemistry.

  19. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    PubMed Central

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Objectives: As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods: A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results: Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education are needed. A greater focus on data from social media, the quantified self-movement, and the application of analytics to “small data” would also be useful. PMID:25123717

  20. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives. Contribution of the IMIA Social Media Working Group.

    PubMed

    Hansen, M M; Miron-Shatz, T; Lau, A Y S; Paton, C

    2014-08-15

    As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Scientists and health-care providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and health-care. Future exploration of issues surrounding data privacy, confidentiality, and education are needed. A greater focus on data from social media, the quantified self-movement, and the application of analytics to "small data" would also be useful.

  1. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    PubMed

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

    In the research presented, we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time, and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope, and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles, a graphical decision-making tool, were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg/g in the sample) for both methyl and isopropyl p-toluenesulfonate. As proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
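
    The interval logic can be sketched as below for a single validation series; the recovery values are invented, and this uses the simplest one-series form of a β-expectation tolerance interval (equivalent to a prediction interval), whereas full accuracy profiles combine repeatability and intermediate-precision variance components across several series:

    ```python
    import numpy as np
    from scipy import stats

    # Recovery results (%) at one concentration level (illustrative numbers)
    recovery = np.array([99.1, 100.8, 98.7, 101.5, 100.2, 99.6])

    n = len(recovery)
    mean, sd = recovery.mean(), recovery.std(ddof=1)

    # Beta-expectation tolerance interval, simplest one-series form:
    # a 95%-expectation interval coincides with a prediction interval
    beta = 0.95
    k = stats.t.ppf(1 - (1 - beta)/2, df=n - 1) * np.sqrt(1 + 1/n)
    low, high = mean - k*sd, mean + k*sd
    print(f"bias = {mean - 100:+.1f}%, tolerance interval = [{low:.1f}, {high:.1f}]%")
    ```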

  2. Horizon-absorbed energy flux in circularized, nonspinning black-hole binaries, and its effective-one-body representation

    NASA Astrophysics Data System (ADS)

    Nagar, Alessandro; Akcay, Sarp

    2012-02-01

    We propose, within the effective-one-body approach, a new, resummed analytical representation of the gravitational-wave energy flux absorbed by a system of two circularized (nonspinning) black holes. This expression is well-behaved in the strong-field, fast-motion regime, notably up to the effective-one-body-defined last unstable orbit. Building conceptually upon the procedure adopted to resum the multipolar asymptotic energy flux, we introduce a multiplicative decomposition of the multipolar absorbed flux composed of three factors: (i) the leading-order contribution, (ii) an “effective source,” and (iii) a new residual amplitude correction, (ρ̃^H_{ℓm})^{2ℓ}. In the test-mass limit, we use a frequency-domain perturbative approach to accurately compute numerically the horizon-absorbed fluxes along a sequence of stable and unstable circular orbits, and we extract from them the functions ρ̃^H_{ℓm}. These quantities are then fitted via rational functions. The resulting analytically represented test-mass knowledge is then suitably hybridized with lower-order analytical information that is valid for any mass ratio. This yields a resummed representation of the absorbed flux for a generic, circularized, nonspinning black-hole binary. Our result adds new information to the state-of-the-art calculation of the absorbed flux at fractional 5th post-Newtonian order [S. Taylor and E. Poisson, Phys. Rev. D 78, 084016 (2008)], which is recovered by construction in the weak-field approximation.

  3. Nonlinear whistler wave model for lion roars in the Earth's magnetosheath

    NASA Astrophysics Data System (ADS)

    Dwivedi, N. K.; Singh, S.

    2017-09-01

    In the present study, we construct a nonlinear whistler wave model to explain the magnetic field spectra observed for lion roars in the Earth's magnetosheath region. We use two-fluid theory and a semi-analytical approach to derive the dynamical equation of a whistler wave propagating along the ambient magnetic field. We examine the magnetic field localization of the parallel-propagating whistler wave in the intermediate-beta plasma applicable to the Earth's magnetosheath. In addition, we investigate the spectral features of the magnetic field fluctuations and the spectral slope value. The magnetic field spectrum obtained by the semi-analytical approach shows a spectral break point and becomes steeper at higher wave numbers. Observations from the IMP 6 plasma wave and magnetometer experiments reveal the existence of short-period magnetic field fluctuations in the magnetosheath. The observations show a broadband spectrum with a spectral slope of -4.5 superimposed with a narrow-band peak. The broadband fluctuations appear due to energy cascades attributed to low-frequency magnetohydrodynamic modes, whereas the narrow-band peak is observed due to the short-period lion roar bursts. The energy spectrum predicted by the present theoretical model shows a similar broadband spectrum in the wave number domain with a spectral slope of -3.2; however, it does not show any narrow-band peak. Further, we present a comparison between the theoretical energy spectrum and the observed spectral slope in the frequency domain. The present semi-analytical model provides exposure to whistler wave turbulence in the Earth's magnetosheath.
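
    Extracting a spectral slope of this kind reduces to a straight-line fit in log-log space; the synthetic broken power-law spectrum below is an illustrative stand-in for the model output:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic magnetic-field spectrum with a spectral break (illustrative)
    k = np.logspace(-2, 1, 200)                     # wave number
    E = np.where(k < 1.0, k**(-1.7), k**(-3.2))     # steeper above the break
    E *= np.exp(rng.normal(0, 0.1, k.size))         # multiplicative noise

    # Slope above the break from a straight-line fit in log-log space
    mask = k > 1.0
    slope, _ = np.polyfit(np.log10(k[mask]), np.log10(E[mask]), 1)
    print(f"fitted spectral slope: {slope:.2f}")    # should be close to -3.2
    ```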

  4. Analytic energy gradients for the coupled-cluster singles and doubles method with the density-fitting approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozkaya, Uğur, E-mail: ugur.bozkaya@hacettepe.edu.tr; Department of Chemistry, Atatürk University, Erzurum 25240; Sherrill, C. David

    2016-05-07

    An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, which are designated as the "gradient terms": computation of particle density matrices (PDMs), the generalized Fock matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of the PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.
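
    The storage saving at the heart of the DF approach can be sketched in a few lines of numpy: with 3-index factors B such that (pq|rs) ≈ Σ_P B[P,p,q] B[P,r,s], Coulomb-type contractions never require the 4-index tensor. The sizes and random factors are toy assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, naux = 20, 60                 # orbital and auxiliary basis sizes (toy values)

    # 3-index DF factors B[P, p, q] such that (pq|rs) ~ sum_P B[P,p,q] B[P,r,s]
    B = rng.normal(size=(naux, n, n))
    B = 0.5 * (B + B.transpose(0, 2, 1))        # symmetrize in p, q

    # Contract with a density-like matrix without forming the 4-index tensor:
    D = rng.normal(size=(n, n)); D = 0.5 * (D + D.T)
    J = np.einsum('Ppq,Prs,rs->pq', B, B, D, optimize=True)  # Coulomb-type term

    # The same result via the explicit 4-index tensor (memory scales as n^4):
    eri = np.einsum('Ppq,Prs->pqrs', B, B)
    assert np.allclose(J, np.einsum('pqrs,rs->pq', eri, D))
    ```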

  5. Validity of Particle-Counting Method Using Laser-Light Scattering for Detecting Platelet Aggregation in Diabetic Patients

    NASA Astrophysics Data System (ADS)

    Nakadate, Hiromichi; Sekizuka, Eiichi; Minamitani, Haruyuki

    We aimed to study the validity of a new analytical approach that reflects the phase from platelet activation to the formation of small platelet aggregates. We hoped that this new approach would enable us to use the particle-counting method with laser-light scattering to measure platelet aggregation in healthy controls and in diabetic patients without complications. We measured agonist-induced platelet aggregation for 10 min. The agonist was added to the platelet-rich plasma 1 min after measurement started. We compared the total scattered light intensity from small aggregates over a 10-min period (established analytical approach) and that over a 2-min period from 1 to 3 min after measurement started (new analytical approach). Consequently, platelet aggregation in diabetics with HbA1c ≥ 6.5% was significantly greater than in healthy controls by both analytical approaches. However, platelet aggregation in diabetics with HbA1c < 6.5%, i.e., patients in the early stages of diabetes, was significantly greater than in healthy controls only by the new analytical approach, not by the established one. These results suggest that platelet aggregation detected by the particle-counting method using laser-light scattering could be applied in clinical examinations via our new analytical approach.

  6. Numerical approach to constructing the lunar physical libration: results of the initial stage

    NASA Astrophysics Data System (ADS)

    Zagidullin, A.; Petrova, N.; Nefediev, Yu.; Usanin, V.; Glushkov, M.

    2015-10-01

    The so-called "main problem" is taken as a model for developing the numerical approach in the theory of lunar physical libration. For the chosen model, there are both a good methodological basis and results obtained at the Kazan University as an outcome of the analytic theory construction. Results of the first stage of the numerical approach are presented in this report. Three main limitations define the main problem: independent consideration of the orbital and rotational motion of the Moon; a rigid-body model of the lunar body, whose dynamical figure is described by the inertia ellipsoid, which gives the mass distribution inside the Moon; and only gravitational interaction with the Earth and the Sun. The development of the selenopotential is limited at this stage to the second harmonic only; inclusion of the 3rd- and 4th-order harmonics is the nearest task for the next stage. The full solution of the libration problem consists of removing the limitations specified above: consideration of the fine effects caused by planetary perturbations, by the visco-elastic properties of the lunar body, by the presence of a two-layer lunar core, by the Earth's obliquity, and by the ecliptic rotation, if the ecliptic is taken as the reference plane.

  7. Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures.

    PubMed

    Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean; Cheung, Kenneth C

    2017-03-01

    We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures.

  8. Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures

    PubMed Central

    Jenett, Benjamin; Calisch, Sam; Cellucci, Daniel; Cramer, Nick; Gershenfeld, Neil; Swei, Sean

    2017-01-01

    We describe an approach for the discrete and reversible assembly of tunable and actively deformable structures using modular building block parts for robotic applications. The primary technical challenge addressed by this work is the use of this method to design and fabricate low density, highly compliant robotic structures with spatially tuned stiffness. This approach offers a number of potential advantages over more conventional methods for constructing compliant robots. The discrete assembly reduces manufacturing complexity, as relatively simple parts can be batch-produced and joined to make complex structures. Global mechanical properties can be tuned based on sub-part ordering and geometry, because local stiffness and density can be independently set to a wide range of values and varied spatially. The structure's intrinsic modularity can significantly simplify analysis and simulation. Simple analytical models for the behavior of each building block type can be calibrated with empirical testing and synthesized into a highly accurate and computationally efficient model of the full compliant system. As a case study, we describe a modular and reversibly assembled wing that performs continuous span-wise twist deformation. It exhibits high performance aerodynamic characteristics, is lightweight and simple to fabricate and repair. The wing is constructed from discrete lattice elements, wherein the geometric and mechanical attributes of the building blocks determine the global mechanical properties of the wing. We describe the mechanical design and structural performance of the digital morphing wing, including their relationship to wind tunnel tests that suggest the ability to increase roll efficiency compared to a conventional rigid aileron system. We focus here on describing the approach to design, modeling, and construction as a generalizable approach for robotics that require very lightweight, tunable, and actively deformable structures. PMID:28289574

  9. Rhetoric as Reality Construction.

    ERIC Educational Resources Information Center

    Kneupper, Charles W.

    This essay provides an analytic development of a philosophy of rhetoric which focuses its concern on social reality. According to this philosophy, the activity of the human mind invents symbolic constructions of reality. Primary socialization is interpreted as a rhetorical process which tends to maintain prevailing reality constructions.…

  10. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

    Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which could be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory analytical method and is usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the level of feasibility studies and never make the step to production-scale process applications. The present work focuses on the scale-up of an active coating process, a step of highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, the method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
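
    A minimal sketch of the multivariate calibration step with scikit-learn; the synthetic single-peak spectra stand in for real in-line Raman data, and the number of latent variables is an arbitrary choice:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)

    # Toy in-line data: rows are Raman spectra, y is the coated amount of API
    n_samples, n_wavenumbers = 120, 500
    y = rng.uniform(0, 10, n_samples)                     # mg API per tablet
    peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 250) / 8.0)**2)
    X = np.outer(y, peak) + rng.normal(0, 0.05, (n_samples, n_wavenumbers))

    pls = PLSRegression(n_components=3)
    r2 = cross_val_score(pls, X, y, cv=5, scoring='r2')
    print(f"cross-validated R^2: {r2.mean():.3f}")

    # Fit on all lab-scale data; in a scale-up, this model would then be
    # applied to (and challenged against) production-scale spectra.
    pls.fit(X, y)
    ```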

  11. Programming chemistry in DNA-addressable bioreactors

    PubMed Central

    Fellermann, Harold; Cardelli, Luca

    2014-01-01

    We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. PMID:25121647
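
    A toy rendering of the addressing idea in Python (our simplification, not the paper's calculus): compartments carry DNA tags, and fusion is permitted only between compartments with complementary tags, which pools their cargo:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Compartment:
        """Vesicular compartment with molecular cargo and DNA address tags
        (a toy rendering of the chemtainer idea, not the paper's calculus)."""
        tags: frozenset
        cargo: dict = field(default_factory=dict)

    def fuse(a, b):
        """Fuse two compartments when they share a complementary address tag,
        pooling their cargo so previously separated chemicals can react."""
        if a.tags & {t + "*" for t in b.tags}:          # toy complementarity rule
            cargo = {k: a.cargo.get(k, 0) + b.cargo.get(k, 0)
                     for k in set(a.cargo) | set(b.cargo)}
            return Compartment(a.tags | b.tags, cargo)
        return None

    v1 = Compartment(frozenset({"x*"}), {"A": 1})
    v2 = Compartment(frozenset({"x"}), {"B": 1})
    print(fuse(v1, v2))   # fusion succeeds: cargos A and B now share a compartment
    ```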

  12. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries.

    PubMed

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-09-01

    Individual and organizational factors influence traumatic occupational injuries. The aim of the present study was a short path analysis of the severity of occupational injuries based on individual and organizational factors. The present cross-sectional analytical study covered traumatic occupational injuries over a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational, and accident-type factors had significant effects on the severity of occupational injuries (P < 0.05). Path analysis of occupational injuries based on the SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries. These factors should therefore be considered to reduce the severity of occupational accidents in large construction industries.

  13. Teaching Theory Construction With Initial Grounded Theory Tools: A Reflection on Lessons and Learning.

    PubMed

    Charmaz, Kathy

    2015-12-01

    This article addresses criticisms of qualitative research for spawning studies that lack analytic development and theoretical import. It focuses on teaching initial grounded theory tools while interviewing, coding, and writing memos for the purpose of scaling up the analytic level of students' research and advancing theory construction. Adopting these tools can improve teaching qualitative methods at all levels although doctoral education is emphasized here. What teachers cover in qualitative methods courses matters. The pedagogy presented here requires a supportive environment and relies on demonstration, collective participation, measured tasks, progressive analytic complexity, and accountability. Lessons learned from using initial grounded theory tools are exemplified in a doctoral student's coding and memo-writing excerpts that demonstrate progressive analytic development. The conclusion calls for increasing the number and depth of qualitative methods courses and for creating a cadre of expert qualitative methodologists. © The Author(s) 2015.

  14. Robust quantum control using smooth pulses and topological winding

    NASA Astrophysics Data System (ADS)

    Barnes, Edwin; Wang, Xin

    2015-03-01

    Perhaps the greatest challenge in achieving control of microscopic quantum systems is the decoherence induced by the environment, a problem which pervades experimental quantum physics and is particularly severe in the context of solid state quantum computing and nanoscale quantum devices because of the inherently strong coupling to the surrounding material. We present an analytical approach to constructing intrinsically robust driving fields which automatically cancel the leading-order noise-induced errors in a qubit's evolution exactly. We address two of the most common types of non-Markovian noise that arise in qubits: slow fluctuations of the qubit energy splitting and fluctuations in the driving field itself. We demonstrate our method by constructing robust quantum gates for several types of spin qubits, including phosphorous donors in silicon and nitrogen-vacancy centers in diamond. Our results constitute an important step toward achieving robust generic control of quantum systems, bringing their novel applications closer to realization. Work supported by LPS-CMTC.

  15. The path dependency theory: analytical framework to study institutional integration. The case of France.

    PubMed

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-06-30

    The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. A qualitative approach was used. Analyses were based on semi-structured interviews with actors of all levels of decision-making, observations of advisory board meetings, and administrative documents. Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. The path dependency to the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France.

  16. Blade Tip Rubbing Stress Prediction

    NASA Technical Reports Server (NTRS)

    Davis, Gary A.; Clough, Ray C.

    1991-01-01

    An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. The model used a linearized approach to the problem after a parametric study found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, the normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to specify some of these parameters exactly, values were entered into the model which bracket the likely values. The form of the forcing function was another variable that was impossible to specify precisely, but a half-sine wave with a period equal to the duration of the rub was taken as a realistic assumption. The analytical model predicted resonances between harmonics of the forcing function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. A full-scale, full-speed test was conducted on the space shuttle main engine high-pressure fuel turbopump Whirligig tester at speeds between 33,000 and 28,000 RPM to confirm the analytical predictions.
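
    The harmonic content of such a half-sine rub pulse is easy to inspect numerically; the rub arc fraction below is an assumed value, and resonance is indicated wherever a strong harmonic of the rotation frequency lands near a blade natural frequency:

    ```python
    import numpy as np

    # Half-sine rub pulse of duration T_rub repeating once per revolution
    T_rev = 1.0 / 550.0        # rotor period at 33,000 RPM (s)
    T_rub = 0.2 * T_rev        # rub arc = 20% of a revolution (assumed)

    t = np.linspace(0.0, T_rev, 4096, endpoint=False)
    F = np.where(t < T_rub, np.sin(np.pi * t / T_rub), 0.0)   # forcing function

    # Harmonic content at multiples of the rotation frequency
    c = np.fft.rfft(F) / len(t)
    for k in range(1, 8):
        print(f"harmonic {k}: amplitude {2*abs(c[k]):.3f}")
    ```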

  17. ANALYTICAL ELEMENT MODELING OF COASTAL AQUIFERS

    EPA Science Inventory

    Four topics were studied concerning the modeling of groundwater flow in coastal aquifers with analytic elements: (1) practical experience was obtained by constructing a groundwater model of the shallow aquifers below the Delmarva Peninsula USA using the commercial program MVAEM; ...

  18. Analytical description of the ternary melt and solution crystallization with a non-linear phase diagram

    NASA Astrophysics Data System (ADS)

    Toropova, L. V.; Alexandrov, D. V.

    2018-05-01

    The directional solidification of a ternary system with an extended phase transition region is theoretically studied. A mathematical model is developed to describe quasi-stationary solidification, and its analytical solution is constructed with allowance for a nonlinear liquidus line equation. We demonstrate that the phase diagram nonlinearity leads to substantial changes in the analytical solutions.

  19. Learners' and Teachers' Perceptions of Learning Analytics (LA): A Case Study of Southampton Solent University (SSU)

    ERIC Educational Resources Information Center

    Khan, Osama

    2017-01-01

    This paper depicts a perceptual picture of learning analytics based on the understanding of learners and teachers at the SSU as a case study. The existing literature covers the technical challenges of learning analytics (LA) and how it creates a better social construct for enhanced learning support; however, there has not been adequate research on…

  20. The Construction of Pro-Science and Technology Discourse in Chinese Language Textbooks

    ERIC Educational Resources Information Center

    Liu, Yongbing

    2005-01-01

    This paper examines the pro-science and technology discourse constructed in Chinese language textbooks currently used for primary school students nationwide in China. By applying analytical techniques of critical discourse analysis (CDA), the paper critically investigates how the discourse is constructed and what ideological forces are manifested…

  1. Topics in elementary particle physics

    NASA Astrophysics Data System (ADS)

    Jin, Xiang

    The author of this thesis discusses two topics in elementary particle physics: n-ary algebras and their applications to M-theory (Part I), and functional evolution and Renormalization Group flows (Part II). In Part I, the Lie algebra is extended to four different n-ary algebraic structures: generalized Lie algebras, Filippov algebras, Nambu algebras, and Nambu-Poisson tensors, though there are still many other n-ary algebras. A natural property of generalized Lie algebras, the Bremner identity, is studied and proved with a totally different method from its original version. We extend the Bremner identity to n-bracket cases, where n is an arbitrary odd integer. Filippov algebras do not focus on associativity and are defined by the Fundamental identity. We add associativity to Filippov algebras and give examples of how to construct Filippov algebras from su(2), the bosonic oscillator, and the Virasoro algebra. We try to include fermionic charges in the ternary Virasoro-Witt algebra, but the attempt fails because the fermionic charges keep generating new charges that prevent the algebra from closing. We also study the Bremner identity restriction on Nambu algebras and Nambu-Poisson tensors. So far, the only example of a 3-algebra used in physics is the BLG model with the 3-algebra A4, describing the interactions of two M2-branes. Its extension with a Nambu algebra, the BLG-NB model, is believed to describe infinite M2-brane condensation. There is also another proposal for M2-brane interactions, the ABJM model, which is constructed with an ordinary Lie algebra. We compare the symmetry properties of these models and discuss possible approaches to including all three in a grand unification theory. In Part II, we give an approximate solution for Schroeder's equation, based on series and conjugation methods. We use the logistic map as an example and demonstrate that this approximate solution converges to known analytical solutions around the fixed point about which the approximate solution is constructed. Although closed-form solutions of Schroeder's equation cannot always be obtained analytically, fitting the approximate solutions sometimes still yields closed-form solutions. Based on Schroeder's theory, approximate solutions for trajectories, velocities, and potentials can also be constructed. The approximate solution is particularly useful for calculating the beta function along a renormalization group trajectory. By "wrapping" the series solutions with conjugations from different inverse functions, we generate different branches of the trajectory and construct a counterexample to a folk theorem about limit cycles.
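
    The series step for Schroeder's equation Ψ(f(x)) = s·Ψ(x) around a fixed point of the logistic map can be sketched with sympy as below; the parameter value and the truncation order are arbitrary illustrative choices:

    ```python
    import sympy as sp

    x, t = sp.symbols('x t')
    mu = sp.Rational(5, 2)                  # sample logistic parameter (assumption)
    f = mu * x * (1 - x)                    # logistic map
    xs = 1 - 1/mu                           # nontrivial fixed point
    s = sp.diff(f, x).subs(x, xs)           # multiplier s = f'(x*) = 2 - mu

    N = 6                                   # truncation order (assumption)
    a = list(sp.symbols(f'a2:{N + 1}'))     # unknown series coefficients a2..aN
    Psi = (x - xs) + sum(ai * (x - xs)**(k + 2) for k, ai in enumerate(a))

    # Residual of Schroeder's equation, re-expanded in t = x - x*
    res = sp.expand((Psi.subs(x, f) - s * Psi).subs(x, xs + t))

    # Orders t^2..t^N give a triangular linear system for a2..aN
    eqs = [sp.Eq(res.coeff(t, k), 0) for k in range(2, N + 1)]
    print(sp.solve(eqs, a, dict=True)[0])
    ```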

  2. Project Summary. ANALYTICAL ELEMENT MODELING OF COASTAL AQUIFERS

    EPA Science Inventory

    Four topics were studied concerning the modeling of groundwater flow in coastal aquifers with analytic elements: (1) practical experience was obtained by constructing a groundwater model of the shallow aquifers below the Delmarva Peninsula USA using the commercial program MVAEM; ...

  3. Sensors for detecting analytes in fluids

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor); Severin, Erik (Inventor)

    1998-01-01

    Chemical sensors for detecting analytes in fluids comprise first and second conductive elements (e.g., electrical leads) electrically coupled to and separated by a chemically sensitive resistor which provides an electrical path between the conductive elements. The resistor comprises a plurality of alternating nonconductive regions (comprising a nonconductive organic polymer) and conductive regions (comprising a conductive material) transverse to the electrical path. The resistor provides a difference in resistance between the conductive elements when contacted with a fluid comprising a chemical analyte at a first concentration, than when contacted with a fluid comprising the chemical analyte at a second different concentration. Arrays of such sensors are constructed with at least two sensors having different chemically sensitive resistors providing dissimilar such differences in resistance. Variability in chemical sensitivity from sensor to sensor is provided by qualitatively or quantitatively varying the composition of the conductive and/or nonconductive regions. An electronic nose for detecting an analyte in a fluid may be constructed by using such arrays in conjunction with an electrical measuring device electrically connected to the conductive elements of each sensor.

  4. Sensors for detecting analytes in fluids

    NASA Technical Reports Server (NTRS)

    Severin, Erik (Inventor); Lewis, Nathan S. (Inventor)

    2001-01-01

    Chemical sensors for detecting analytes in fluids comprise first and second conductive elements (e.g., electrical leads) electrically coupled to and separated by a chemically sensitive resistor which provides an electrical path between the conductive elements. The resistor comprises a plurality of alternating nonconductive regions (comprising a nonconductive organic polymer) and conductive regions (comprising a conductive material) transverse to the electrical path. The resistor provides a difference in resistance between the conductive elements when contacted with a fluid comprising a chemical analyte at a first concentration, than when contacted with a fluid comprising the chemical analyte at a second different concentration. Arrays of such sensors are constructed with at least two sensors having different chemically sensitive resistors providing dissimilar such differences in resistance. Variability in chemical sensitivity from sensor to sensor is provided by qualitatively or quantitatively varying the composition of the conductive and/or nonconductive regions. An electronic nose for detecting an analyte in a fluid may be constructed by using such arrays in conjunction with an electrical measuring device electrically connected to the conductive elements of each sensor.

  5. Sensors for detecting analytes in fluids

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor); Severin, Erik (Inventor)

    1999-01-01

    Chemical sensors for detecting analytes in fluids comprise first and second conductive elements (e.g., electrical leads) electrically coupled to and separated by a chemically sensitive resistor which provides an electrical path between the conductive elements. The resistor comprises a plurality of alternating nonconductive regions (comprising a nonconductive organic polymer) and conductive regions (comprising a conductive material) transverse to the electrical path. The resistor provides a difference in resistance between the conductive elements when contacted with a fluid comprising a chemical analyte at a first concentration, than when contacted with a fluid comprising the chemical analyte at a second different concentration. Arrays of such sensors are constructed with at least two sensors having different chemically sensitive resistors providing dissimilar such differences in resistance. Variability in chemical sensitivity from sensor to sensor is provided by qualitatively or quantitatively varying the composition of the conductive and/or nonconductive regions. An electronic nose for detecting an analyte in a fluid may be constructed by using such arrays in conjunction with an electrical measuring device electrically connected to the conductive elements of each sensor.

  6. Sensor arrays for detecting analytes in fluids

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor); Freund, Michael S. (Inventor)

    1996-01-01

    Chemical sensors for detecting analytes in fluids comprise first and second conductive elements (e.g. electrical leads) electrically coupled to and separated by a chemically sensitive resistor which provides an electrical path between the conductive elements. The resistor comprises a plurality of alternating nonconductive regions (comprising a nonconductive organic polymer) and conductive regions (comprising a conductive material) transverse to the electrical path. The resistor provides a difference in resistance between the conductive elements when contacted with a fluid comprising a chemical analyte at a first concentration, than when contacted with a fluid comprising the chemical analyte at a second different concentration. Arrays of such sensors are constructed with at least two sensors having different chemically sensitive resistors providing dissimilar such differences in resistance. Variability in chemical sensitivity from sensor to sensor is provided by qualitatively or quantitatively varying the composition of the conductive and/or nonconductive regions. An electronic nose for detecting an analyte in a fluid may be constructed by using such arrays in conjunction with an electrical measuring device electrically connected to the conductive elements of each sensor.
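
    The array principle these patents describe, many partially selective chemiresistors whose combined response pattern identifies the analyte, can be sketched as nearest-fingerprint matching; all calibration numbers and analyte names below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Fractional resistance changes dR/R for 8 chemically distinct sensors
    # against 3 reference analytes (illustrative calibration fingerprints)
    fingerprints = rng.uniform(0.01, 0.5, size=(3, 8))
    labels = ['acetone', 'ethanol', 'toluene']           # assumed analytes

    def classify(response):
        """Match a measured response pattern to the nearest calibrated
        fingerprint by cosine similarity across the sensor array."""
        f = fingerprints / np.linalg.norm(fingerprints, axis=1, keepdims=True)
        r = response / np.linalg.norm(response)
        return labels[int(np.argmax(f @ r))]

    # A noisy measurement of analyte 1 at a different concentration: the
    # pattern (not the magnitude) carries the identity
    measured = 2.7 * fingerprints[1] + rng.normal(0, 0.01, 8)
    print(classify(measured))                 # expected to recover 'ethanol'
    ```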

  7. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
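
    A Monte Carlo stand-in for the paper's analytical result (which gives the similarity-score distribution in closed form as a function of the noise level alone): estimate the score distribution for the normal shape at a given noise level, then place the detection threshold at a fixed false-alarm rate. The reference shape and numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Reference shape: one normalized, ECG-like cycle (illustrative)
    n = 256
    t = np.linspace(0, 1, n)
    ref = np.exp(-0.5 * ((t - 0.5) / 0.03)**2)           # a single sharp peak
    ref /= np.linalg.norm(ref)

    def cos_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Empirical distribution of the similarity score for the *normal* shape
    # under additive white noise of a given level
    sigma = 0.05
    scores = np.array([cos_sim(ref + rng.normal(0, sigma, n), ref)
                       for _ in range(20000)])

    # Adaptive threshold: fix the false-alarm rate at 1% for this noise level
    threshold = np.quantile(scores, 0.01)
    print(f"sigma={sigma}: flag shapes with similarity below {threshold:.4f}")
    ```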

  8. Evaluation of structural design concepts for an arrow-wing supersonic cruise aircraft

    NASA Technical Reports Server (NTRS)

    Sakata, I. F.; Davis, G. W.

    1977-01-01

    An analytical study was performed to determine the best structural approach for the design of the primary wing and fuselage structure of a Mach 2.7 arrow-wing supersonic cruise aircraft. Concepts were evaluated considering a near-term start of design. Emphasis was placed on the complex interactions between thermal stress, static aeroelasticity, flutter, fatigue and fail-safe design, static and dynamic loads, and the effects of variations in structural arrangements, concepts, and materials on these interactions. Results indicate that a hybrid wing structure incorporating low-profile convex-beaded and honeycomb sandwich surface panels of titanium alloy 6Al-4V was the most efficient. The substructure includes titanium alloy spar caps reinforced with boron polyimide composites. The fuselage shell consists of hat-stiffened skin and frame construction of titanium alloy 6Al-4V. A summary of the study effort is presented, along with a discussion of the overall logic, design philosophy, and interaction between the analytical methods for supersonic cruise aircraft design.

  9. A discrete event simulation model for evaluating the performances of an m/g/c/c state dependent queuing system.

    PubMed

    Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli

    2013-01-01

    M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates over which the simulation results fluctuate drastically across replications, causing the simulation and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both tabular and graphical forms, together with scientific justifications for these findings, are documented and discussed.
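
    For readers unfamiliar with M/G/C/C analytics, the sketch below computes the steady-state occupancy distribution, blocking probability and throughput from the product-form solution used in the pedestrian-flow literature (Yuhaski and Smith); the relative-rate function f and all numerical values are illustrative assumptions, not the authors' settings.

    ```python
    import math

    def mgcc_state_probs(lam, mean_service_alone, C, f):
        """Steady-state occupancy probabilities for an M/G/C/C state-dependent
        queue via the product-form solution: P(n) is proportional to
        (lam * E[S_1])**n / (n! * prod_{i=1..n} f(i)), where f(i) is the
        service rate with i occupants relative to a lone occupant (f(1) = 1)."""
        weights = []
        for n in range(C + 1):
            rate_prod = math.prod(f(i) for i in range(1, n + 1))
            weights.append((lam * mean_service_alone) ** n
                           / (math.factorial(n) * rate_prod))
        total = sum(weights)
        return [w / total for w in weights]

    # Hypothetical corridor: capacity 40, exponential speed-density decay.
    C = 40
    f = lambda n: math.exp(-(((n - 1) / 25.0) ** 2))
    probs = mgcc_state_probs(lam=2.0, mean_service_alone=1.5, C=C, f=f)
    blocking = probs[-1]                  # probability an arrival is lost
    throughput = 2.0 * (1.0 - blocking)   # accepted arrivals per unit time
    print(f"blocking={blocking:.4f}, throughput={throughput:.4f}")
    ```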

  10. Quantum spectral curve for arbitrary state/operator in AdS5/CFT4

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay; Kazakov, Vladimir; Leurent, Sébastien; Volin, Dmytro

    2015-09-01

    We give a derivation of the quantum spectral curve (QSC), a finite set of Riemann-Hilbert equations for the exact spectrum of planar N=4 SYM theory, proposed in our recent paper Phys. Rev. Lett. 112 (2014). We also generalize this construction to all local single-trace operators of the theory, in contrast to the TBA-like approaches worked out only for a limited class of states. We reveal a rich algebraic and analytic structure of the QSC in terms of a so-called Q-system, a finite set of Baxter-like Q-functions. This new point of view on the finite-size spectral problem is shown to be completely compatible, though in a far from trivial way, with already known exact equations (analytic Y-system/TBA, or FiNLIE). We use the knowledge of this underlying Q-system to demonstrate how the classical finite-gap solutions and the asymptotic Bethe ansatz emerge from our formalism in appropriate limits.

  11. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE PAGES

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...

    2018-03-28

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By use of a decision tree and two OPLS-DA models, the sample preparation methods of the test-set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.

  12. The Happy Culture: A Theoretical, Meta-Analytic, and Empirical Review of the Relationship Between Culture and Wealth and Subjective Well-Being

    PubMed Central

    Steel, Piers; Taras, Vasyl; Uggerslev, Krista; Bosco, Frank

    2017-01-01

    Do cultural values enhance financial and subjective well-being (SWB)? Taking a multidisciplinary approach, we meta-analytically reviewed the field, found it thinly covered, and focused on individualism. In response, we collected a broad array of individual-level data, specifically an Internet sample of 8,438 adult respondents. Individual SWB was most strongly associated with cultural values that foster relationships and social capital, which typically accounted for more unique variance in life satisfaction than an individual's salary. At a national level, we used mean-based meta-analysis to construct a comprehensive cultural and SWB database. Results show some reversals from the individual level, particularly for masculinity's facet of achievement orientation. In all, the happy nation has low power distance and low uncertainty avoidance, but is high in femininity and individualism, and these effects are interrelated but still partially independent of political and economic institutions. In short, culture matters for individual and national well-being. PMID:28770649

  13. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By use of a decision tree and two OPLS-DA models, the sample preparation methods of the test-set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.

  14. Biosensors and their applications in detection of organophosphorus pesticides in the environment.

    PubMed

    Hassani, Shokoufeh; Momtaz, Saeideh; Vakhshiteh, Faezeh; Maghsoudi, Armin Salek; Ganjali, Mohammad Reza; Norouzi, Parviz; Abdollahi, Mohammad

    2017-01-01

    This review discusses past and recent advances in biosensors for the detection of organophosphorus pesticides (OPs), given their extensive use over recent decades. Apart from their agricultural benefits, OPs also impose adverse toxicological effects on animal and human populations. Conventional approaches to pesticide detection, such as chromatographic techniques, are associated with several limitations. Biosensor technology is distinguished by its detection sensitivity and selectivity, remarkable performance capabilities, simplicity, on-site operation, and ease of fabrication and incorporation of nanomaterials. This review also provides specifications of most OP biosensors reported to date, organized by transducer system. In addition, we highlight the application of advanced complementary materials and analysis techniques in OP detection systems. The availability of these new materials, together with new sensing techniques, has led to easy-to-use analytical tools of high sensitivity and specificity in the design and construction of OP biosensors. We elaborate on the achievements in sensing systems concerning innovative nanomaterials and analytical techniques, with emphasis on OPs.

  15. The Happy Culture: A Theoretical, Meta-Analytic, and Empirical Review of the Relationship Between Culture and Wealth and Subjective Well-Being.

    PubMed

    Steel, Piers; Taras, Vasyl; Uggerslev, Krista; Bosco, Frank

    2018-05-01

    Do cultural values enhance financial and subjective well-being (SWB)? Taking a multidisciplinary approach, we meta-analytically reviewed the field, found it thinly covered, and focused on individualism. In response, we collected a broad array of individual-level data, specifically an Internet sample of 8,438 adult respondents. Individual SWB was most strongly associated with cultural values that foster relationships and social capital, which typically accounted for more unique variance in life satisfaction than an individual's salary. At a national level, we used mean-based meta-analysis to construct a comprehensive cultural and SWB database. Results show some reversals from the individual level, particularly for masculinity's facet of achievement orientation. In all, the happy nation has low power distance and low uncertainty avoidance, but is high in femininity and individualism, and these effects are interrelated but still partially independent of political and economic institutions. In short, culture matters for individual and national well-being.

  16. Ambient aerodynamic ionization source for remote analyte sampling and mass spectrometric analysis.

    PubMed

    Dixon, R Brent; Sampson, Jason S; Hawkridge, Adam M; Muddiman, David C

    2008-07-01

    The use of aerodynamic devices in ambient ionization source development has become increasingly prevalent in the field of mass spectrometry. In this study, an air ejector was constructed from inexpensive, commercially available components to incorporate an electrospray ionization emitter within the exhaust jet of the device. This novel aerodynamic device, herein termed the remote analyte sampling, transport, and ionization relay (RASTIR), was used to remotely sample neutral species in the ambient environment and entrain them into an electrospray plume, where they were subsequently ionized and detected using a linear ion trap Fourier transform mass spectrometer. Two sets of experiments were performed in the ambient environment to demonstrate the device's utility. The first involved the remote (approximately 1 ft) vacuum collection of pure sample particulates (i.e., dry powder) from a glass slide, entrainment and ionization at the ESI emitter, and mass spectrometric detection. The second involved the capture (vacuum collection) of matrix-assisted laser desorbed proteins, followed by entrainment in the ESI emitter plume, multiple charging, and mass spectrometric detection. This approach is in principle a RASTIR-assisted matrix-assisted laser desorption electrospray ionization source (Sampson, J. S.; Hawkridge, A. M.; Muddiman, D. C. J. Am. Soc. Mass Spectrom. 2006, 17, 1712-1716; Rapid Commun. Mass Spectrom. 2007, 21, 1150-1154). A detailed description of the device construction and operational parameters, along with preliminary small-molecule and protein data, is presented.

  17. Strategy for design NIR calibration sets based on process spectrum and model space: An innovative approach for process analytical technology.

    PubMed

    Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M

    2015-10-10

    The pharmaceutical industry is under stringent regulation of product quality control, which is critical for both the production process and consumer safety. Within the framework of process analytical technology (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proven efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set, so that the resulting models afford accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology, the "process spectrum", is proposed for selecting the calibration set; physical changes in the samples at each stage are algebraically incorporated into it. We also established a "model space", defined by Hotelling's T(2) and Q-residuals statistics, for outlier identification (inside/outside the defined space), in order to select objectively the factors to be used in calibration set construction. The results obtained confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for implementing this simple, fast methodology in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.
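
    As an illustration of the "model space" idea, the following Python sketch computes Hotelling's T(2) and Q-residual statistics for new spectra against a PCA model built on a calibration set; the component count, thresholds and synthetic data are placeholder assumptions, not the authors' procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def model_space_stats(X_cal, X_new, n_components=3):
        """Hotelling's T^2 (distance inside the PCA model space) and Q residuals
        (reconstruction error outside it) for new spectra, given a PCA model
        built on the calibration set."""
        pca = PCA(n_components=n_components).fit(X_cal)
        scores = pca.transform(X_new)
        t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
        resid = X_new - pca.inverse_transform(scores)
        q = np.sum(resid ** 2, axis=1)
        return t2, q

    # Samples whose T^2 or Q exceed calibration-set quantiles would be flagged
    # as lying outside the model space and excluded from the calibration set.
    rng = np.random.default_rng(0)
    X_cal = rng.normal(size=(50, 200))          # stand-in NIR spectra
    t2, q = model_space_stats(X_cal, X_cal + rng.normal(0.0, 0.1, X_cal.shape))
    ```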

  18. Modified locally weighted--partial least squares regression improving clinical predictions from infrared spectra of human serum samples.

    PubMed

    Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la

    2013-03-30

    Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For parameter selection in LW-PLSR modeling, a multi-parametric study was carried out employing the minimum root-mean-square error of cross validation (RMSCV) as the objective function. To overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed, based on: (i) principal component analysis (PCA) performed on an analyte-specific spectral region to identify the most similar sample spectra, and (ii) partial least squares regression (PLSR) constructed using the whole spectrum. Results obtained with this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors from both classical and modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches are useful for the determination of analytes present in a complex matrix such as human serum. Copyright © 2013 Elsevier B.V. All rights reserved.
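
    A minimal sketch of the modified strategy, under our own choices of neighbourhood size, component counts and synthetic data: PCA on an analyte-specific region selects the most similar calibration spectra, and a PLSR model on the whole spectrum of those neighbours makes the prediction.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    def modified_lw_pls(X_train, y_train, x_query, region, k=30, n_pca=5, n_lv=8):
        """(i) Find the k calibration spectra most similar to the query in a PCA
        score space built on an analyte-specific spectral region, then (ii) fit
        PLSR on the whole spectrum of those neighbours and predict the query."""
        pca = PCA(n_components=n_pca).fit(X_train[:, region])
        dist = np.linalg.norm(pca.transform(X_train[:, region])
                              - pca.transform(x_query[None, region]), axis=1)
        idx = np.argsort(dist)[:k]                 # most similar samples
        pls = PLSRegression(n_components=n_lv).fit(X_train[idx], y_train[idx])
        return float(pls.predict(x_query[None, :])[0, 0])

    # Hypothetical usage: 'region' marks, e.g., a glucose-specific band.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(120, 600)), rng.normal(size=120)
    print(modified_lw_pls(X, y, X[0], region=slice(200, 260)))
    ```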

  19. Impact of uncertainty in expected return estimation on stock price volatility

    NASA Astrophysics Data System (ADS)

    Kostanjcar, Zvonko; Jeren, Branko; Juretic, Zeljan

    2012-11-01

    We investigate the origin of volatility in financial markets by defining an analytical model for the time evolution of stock share prices. The model is similar to the GARCH class of models but can additionally exhibit bimodal behaviour in the supply-demand structure of the market. Moreover, it differs from existing Ising-type models. It turns out that the constructed model is a solution of a thermodynamic limit of a Gibbs probability measure when the number of traders and the number of stock shares approach infinity. The energy functional of the Gibbs probability measure is derived from the Nash equilibrium of the underlying game.

  20. A Genetically Modified Tobacco Mosaic Virus that can Produce Gold Nanoparticles from a Metal Salt Precursor

    PubMed Central

    Love, Andrew J.; Makarov, Valentine V.; Sinitsyna, Olga V.; Shaw, Jane; Yaminsky, Igor V.; Kalinina, Natalia O.; Taliansky, Michael E.

    2015-01-01

    We genetically modified tobacco mosaic virus (TMV) to surface display a characterized peptide with potent metal ion binding and reducing capacity (MBP TMV), and demonstrate that unlike wild type TMV, this construct can lead to the formation of discrete 10–40 nm gold nanoparticles when mixed with 3 mM potassium tetrachloroaurate. Using a variety of analytical physicochemical approaches it was found that these nanoparticles were crystalline in nature and stable. Given that the MBP TMV can produce metal nanomaterials in the absence of chemical reductants, it may have utility in the green production of metal nanomaterials. PMID:26617624

  1. Power-law scaling of extreme dynamics near higher-order exceptional points

    NASA Astrophysics Data System (ADS)

    Zhong, Q.; Christodoulides, D. N.; Khajavikhan, M.; Makris, K. G.; El-Ganainy, R.

    2018-02-01

    We investigate the extreme dynamics of non-Hermitian systems near higher-order exceptional points in photonic networks constructed using the bosonic algebra method. We show that strong power oscillations for certain initial conditions can occur as a result of the peculiar eigenspace geometry and its dimensionality collapse near these singularities. By using complementary numerical and analytical approaches, we show that, in the parity-time (PT ) phase near exceptional points, the logarithm of the maximum optical power amplification scales linearly with the order of the exceptional point. We focus in our discussion on photonic systems, but we note that our results apply to other physical systems as well.

  2. A meshless method using radial basis functions for numerical solution of the two-dimensional KdV-Burgers equation

    NASA Astrophysics Data System (ADS)

    Zabihi, F.; Saffarian, M.

    2016-07-01

    The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using a different approach based on collocation points. The solution uses the thin plate spline radial basis function, which builds an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
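
    To illustrate the building block, the sketch below assembles a thin plate spline RBF interpolant on scattered 2-D points; the actual scheme couples this spatial representation with time stepping and a predictor-corrector iteration, which we omit. Note also that production TPS interpolation normally augments the system with a low-order polynomial term, left out here for brevity.

    ```python
    import numpy as np

    def tps(r):
        """Thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0."""
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    def rbf_interpolate(centers, values, query):
        """Collocation on scattered 2-D points: solve for the RBF weights, then
        evaluate the interpolant at the query points."""
        r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        weights = np.linalg.solve(tps(r), values)
        rq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
        return tps(rq) @ weights

    pts = np.random.default_rng(0).uniform(size=(100, 2))
    vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
    print(rbf_interpolate(pts, vals, np.array([[0.5, 0.5]])))
    ```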

  3. Integrable models of quantum optics

    NASA Astrophysics Data System (ADS)

    Yudson, Vladimir; Makarov, Aleksander

    2017-10-01

    We give an overview of exactly solvable many-body models of quantum optics. Among them is a system of two-level atoms which interact with photons propagating in a one-dimensional (1D) chiral waveguide; exact eigenstates of this system can be explicitly constructed. This approach is also used for a system of closely located atoms in the usual (non-chiral) waveguide or in 3D space. Moreover, it is shown that for an arbitrary atomic system with a cascade spontaneous radiative decay, the fluorescence spectrum can be described by an exact analytic expression which accounts for the interference of emitted photons. Open questions related to broken integrability are discussed.

  4. On metric structure of ultrametric spaces

    NASA Astrophysics Data System (ADS)

    Nechaev, S. K.; Vasilyev, O. A.

    2004-03-01

    In our work we have reconsidered the old problem of diffusion at the boundary of an ultrametric tree from a 'number theoretic' point of view. Namely, we use the modular functions (in particular, the Dedekind η-function) to construct the 'continuous' analogue of the Cayley tree isometrically embedded in the Poincaré upper half-plane. Later we work with this continuous Cayley tree as with a standard function of a complex variable. In the framework of our approach, the results of Ogielsky and Stein on dynamics in ultrametric spaces are reproduced semi-analytically or semi-numerically. A speculation on a new 'geometrical' interpretation of the replica n → 0 limit is proposed.

  5. Direct Quantum Dynamics Using Grid-Based Wave Function Propagation and Machine-Learned Potential Energy Surfaces.

    PubMed

    Richings, Gareth W; Habershon, Scott

    2017-09-12

    We describe a method for performing nuclear quantum dynamics calculations using standard, grid-based algorithms, including the multiconfiguration time-dependent Hartree (MCTDH) method, where the potential energy surface (PES) is calculated "on-the-fly". The method of Gaussian process regression (GPR) is used to construct a global representation of the PES using values of the energy at points distributed in molecular configuration space during the course of the wavepacket propagation. We demonstrate this direct dynamics approach both for an analytical PES function describing 3-dimensional proton transfer dynamics in malonaldehyde and for 2- and 6-dimensional quantum dynamics simulations of proton transfer in salicylaldimine. In the case of salicylaldimine we also perform calculations in which the PES is constructed using Hartree-Fock calculations through an interface to an ab initio electronic structure code. In all cases, the results of the quantum dynamics simulations are in excellent agreement with previous simulations of both systems, yet do not require prior fitting of a PES at any stage. Our approach (implemented in a development version of the Quantics package) opens a route to performing accurate quantum dynamics simulations of many-dimensional molecular systems via wave function propagation in a direct and efficient manner.
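
    The core GPR-on-the-fly loop can be caricatured in a few lines: fit a Gaussian process to the energies sampled so far, and let a large predictive uncertainty trigger a new electronic-structure evaluation. The 1-D double-well "PES" and the kernel choices below are our own stand-ins, not the paper's systems.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def potential(x):                        # hypothetical 1-D double-well "PES"
        return (x ** 2 - 1.0) ** 2

    X = np.linspace(-2.0, 2.0, 25)[:, None]  # configurations sampled so far
    y = potential(X).ravel()                 # energies at those configurations
    gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                   normalize_y=True).fit(X, y)

    x_new = np.array([[0.3]])                # point visited by the wavepacket
    mean, std = gpr.predict(x_new, return_std=True)
    # A large predictive std here would trigger a new electronic-structure
    # calculation at x_new and a refit, mimicking on-the-fly PES construction.
    print(mean[0], std[0])
    ```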

  6. [Socio-ecological super-determination of health in rural areas in Humaitá, State of Amazonas, Brazil].

    PubMed

    Schütz, Gabriel Eduardo; Mello, Marcia Gomide da Silva; de Carvalho, Marcia Aparecida Ribeiro; Câmara, Volney de Magalhães

    2014-10-01

    The scope of this article is to apply a trans-disciplinary socio-ecological approach to discuss the super-determination of health in rural areas of the southern Amazon region, based on a case study developed in Humaitá in the State of Amazonas, Brazil. Field data were collected using ethnographic techniques applied during three expeditions in Humaitá's rural area between 2012 and 2014. Based on the 'socio-ecological metabolism' analytical category, a descriptive and theoretical analysis of four crucial components in the process of super-determination of local health is presented: (1) the composition of the local rural population; (2) fixed and changing territorial aspects; (3) construction of socio-ecological identities; (4) ethnic conflict between Indians and non-Indians. The conclusion reached is that the incorporation of a socio-ecological approach in territorial-based health research provides input for analyses of the local health situation through the systematization of information related to the process of super-determination of health. It also helps in the construction of trans-disciplinarity, which is a necessary epistemological condition for addressing the complex reality at the interfaces of social production, the environment and health.

  7. Merging Geometric Documentation with Materials Characterization and Analysis of the History of the Holy Aedicule in the Church of the Holy Sepulchre in Jerusalem

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Lambrou, E.; Pantazis, G.; Agrafiotis, P.; Papadaki, A.; Kotoula, L.; Lampropoulos, K.; Delegou, E.; Apostolopoulou, M.; Alexakis, M.; Moropoulou, A.

    2017-05-01

    The National Technical University of Athens undertook the compilation of an "Integrated Diagnostic Research Project and Strategic Planning for Materials, Interventions Conservation and Rehabilitation of the Holy Aedicule of the Church of the Holy Sepulchre in Jerusalem". This paper focuses on the work merging the geometric documentation with the characterization of materials, the identification of building phases, and the diagnosis of decay and pathology through the use of analytical and non-destructive techniques. Through this integrated approach, combining documentation and characterization of the building materials, diagnosis of decay and pathology, accurate geometric documentation of the building, and non-destructive prospection of its internal structure, it was feasible to identify the construction phases of the Holy Aedicule, including the remnants of the preserved earlier constructions and the original monolithic Tomb. This work thus demonstrates that adopting an interdisciplinary approach to integrated documentation is a powerful tool for better understanding monuments, in terms of both structural integrity and state of preservation, both of which are prerequisites for effective rehabilitation.

  8. Authentic Oral Language Production and Interaction in CALL: An Evolving Conceptual Framework for the Use of Learning Analytics within the SpeakApps Project

    ERIC Educational Resources Information Center

    Nic Giolla Mhichíl, Mairéad; van Engen, Jeroen; Ó Ciardúbháin, Colm; Ó Cléircín, Gearóid; Appel, Christine

    2014-01-01

    This paper sets out to construct and present the evolving conceptual framework of the SpeakApps projects to consider the application of learning analytics to facilitate synchronous and asynchronous oral language skills within this CALL context. Drawing from both the CALL and wider theoretical and empirical literature of learner analytics, the…

  9. Summoning Spectres: Crises and Their Construction

    ERIC Educational Resources Information Center

    Clarke, John; Newman, Janet

    2010-01-01

    The construction of crises is a key analytical and political issue. This paper examines what is at stake in the processes and practices of construction, responding to the arguments made in Andrew Gamble's "The spectres at the feast" (2009). We suggest that there are three areas of critical concern: first, that too little attention has…

  10. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  11. On the Construction and Dynamics of Knotted Fields

    NASA Astrophysics Data System (ADS)

    Kedia, Hridesh

    Representing a physical field in terms of its field lines has often enabled a deeper understanding of complex physical phenomena, from Faraday's law of magnetic induction, to the Helmholtz laws of vortex motion, to the free energy density of liquid crystals in terms of the distortions of the lines of the director field. At the same time, the application of ideas from topology--the study of properties that are invariant under continuous deformations--has led to robust insights into the nature of complex physical systems from defects in crystal structures, to the earth's magnetic field, to topological conservation laws. The study of knotted fields, physical fields in which the field lines encode knots, emerges naturally from the application of topological ideas to the investigation of the physical phenomena best understood in terms of the lines of a field. A knot--a closed loop tangled with itself which cannot be untangled without cutting the loop--is the simplest topologically non-trivial object constructed from a line. Remarkably, knots in the vortex (magnetic field) lines of a dissipationless fluid (plasma) persist forever as they are transported by the flow, stretching and rotating as they evolve. Moreover, deeply entwined with the topology-preserving dynamics of dissipationless fluids and plasmas is an additional conserved quantity--helicity, a measure of the average linking of the vortex (magnetic field) lines in a fluid (plasma)--which has had far-reaching consequences for fluids and plasmas. Inspired by the persistence of knots in dissipationless flows, and their far-reaching physical consequences, we seek to understand the interplay between the dynamics of a field and the topology of its field lines in a variety of systems. While it is easy to tie a knot in a shoelace, tying a knot in the lines of a space-filling field requires contorting the lines everywhere to match the knotted region. The challenge of analytically constructing knotted field configurations has impeded a deeper understanding of the interplay between topology and dynamics in fluids and plasmas. We begin by analytically constructing knotted field configurations which encode a desired knot in the lines of the field, and show that their helicity can be tuned independently of the encoded knot. The nonlinear nature of the physical systems in which these knotted field configurations arise makes their analytical study challenging. We ask if a linear theory such as electromagnetism can allow knotted field configurations to persist in time. We find analytical expressions for an infinite family of knotted solutions to Maxwell's equations in vacuum and elucidate their connections to dissipationless flows. We present a design rule for constructing such persistently knotted electromagnetic fields, which could possibly be used to transfer knottedness to matter such as quantum fluids and plasmas. An important consequence of the persistence of knots in classical dissipationless flows is the existence of an additional conserved quantity, helicity, which has had far-reaching implications. To understand the existence of analogous conserved quantities, we ask if superfluids, which flow without dissipation just like classical dissipationless flows, have an additional conserved quantity akin to helicity.
We address this question using an analytical approach based on defining the particle relabeling symmetry, the symmetry underlying helicity conservation, in superfluids, and find that an analogous conserved quantity exists but vanishes identically owing to the intrinsic geometry of complex scalar fields. Furthermore, to address the question of a "classical limit" of superfluid vortices which recovers classical helicity conservation, we perform numerical simulations of bundles of superfluid vortices, and find behavior akin to classical viscous flows.

  12. On pseudo-hyperkähler prepotentials

    NASA Astrophysics Data System (ADS)

    Devchand, Chandrashekar; Spiro, Andrea

    2016-10-01

    An explicit surjection from a set of (locally defined) unconstrained holomorphic functions on a certain submanifold of Sp_1(ℂ) × ℂ^{4n} onto the set HK_{p,q} of local isometry classes of real analytic pseudo-hyperkähler metrics of signature (4p, 4q) in dimension 4n is constructed. The holomorphic functions, called prepotentials, are analogues of Kähler potentials for Kähler metrics and provide a complete parameterisation of HK_{p,q}. In particular, there exists a bijection between HK_{p,q} and the set of equivalence classes of prepotentials. This affords the explicit construction of pseudo-hyperkähler metrics from specified prepotentials. The construction generalises one due to Galperin, Ivanov, Ogievetsky, and Sokatchev. Their work is given a coordinate-free formulation and complete, self-contained proofs are provided. The Appendix provides a vital tool for this construction: a reformulation of real analytic G-structures in terms of holomorphic frame fields on complex manifolds.

  13. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D.; Godtliebsen, F.; Rue, H.

    2012-04-01

    Detailed knowledge of past climate variations is of high importance for gaining better insight into possible future climate scenarios. The relative shortness of available high-quality instrumental climate records necessitates the use of various climate proxy archives in making inferences about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. The approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
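
    A toy version of this idea, assuming a constant expected accumulation and two absolutely dated tie points; the Monte Carlo loop below mimics how empirical confidence bands on the age-depth profile are obtained, and all parameter names are illustrative.

    ```python
    import numpy as np

    def simulate_age_depth(depths, total_age, shape_per_unit, n_draws=1000, seed=0):
        """Draw age-depth profiles from a Gamma accumulation process pinned to
        two absolutely dated tie points (age 0 at the top of the archive and
        total_age at its bottom)."""
        rng = np.random.default_rng(seed)
        thickness = np.diff(depths)
        ages = np.zeros((n_draws, depths.size))
        for i in range(n_draws):
            durations = rng.gamma(shape_per_unit * thickness, 1.0)
            cumulative = np.concatenate([[0.0], np.cumsum(durations)])
            ages[i] = cumulative / cumulative[-1] * total_age  # pin to tie points
        return ages

    depths = np.linspace(0.0, 10.0, 51)        # e.g., metres along a sediment core
    ages = simulate_age_depth(depths, total_age=12000.0, shape_per_unit=4.0)
    lo, hi = np.percentile(ages, [2.5, 97.5], axis=0)  # empirical confidence band
    ```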

  14. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models

    PubMed Central

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005

  15. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    PubMed

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

  16. Cross-Disciplinary Consultancy to Bridge Public Health Technical Needs and Analytic Developers: Asyndromic Surveillance Use Case

    PubMed Central

    Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura

    2015-01-01

    Introduction: We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists’ use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Materials and Methods: Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy’s focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Results: Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. Practice Implications: A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead. PMID:26834939

  17. Cross-Disciplinary Consultancy to Bridge Public Health Technical Needs and Analytic Developers: Asyndromic Surveillance Use Case.

    PubMed

    Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura; Burkom, Howard

    2015-01-01

    We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists' use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy's focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead.

  18. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases

    PubMed Central

    Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-01-01

    Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Exclusions of patients and observations were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary were executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria when using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757

  19. Using Multiple Lenses to Examine the Development of Beginning Biology Teachers' Pedagogical Content Knowledge for Teaching Natural Selection Simulations

    NASA Astrophysics Data System (ADS)

    Sickel, Aaron J.; Friedrichsen, Patricia

    2018-02-01

    Pedagogical content knowledge (PCK) has become a useful construct to examine science teacher learning. Yet, researchers conceptualize PCK development in different ways. The purpose of this longitudinal study was to use three analytic lenses to understand the development of three beginning biology teachers' PCK for teaching natural selection simulations. We observed three early-career biology teachers as they taught natural selection in their respective school contexts over two consecutive years. Data consisted of six interviews with each participant. Using the PCK model developed by Magnusson et al. (1999), we examined topic-specific PCK development utilizing three different lenses: (1) expansion of knowledge within an individual knowledge base, (2) integration of knowledge across knowledge bases, and (3) knowledge that explicitly addressed core concepts of natural selection. We found commonalities across the participants, yet each lens was also useful to understand the influence of different factors (e.g., orientation, subject matter preparation, and the idiosyncratic nature of teacher knowledge) on PCK development. This multi-angle approach provides implications for considering the quality of beginning science teachers' knowledge and future research on PCK development. We conclude with an argument that explicitly communicating lenses used to understand PCK development will help the research community compare analytic approaches and better understand the nature of science teacher learning.

  20. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided part support to RN and LH.
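
    The strategy can be miniaturized as follows: a cheap forward problem generates many (input, output) pairs, a regression model learns the inverse map, and predictions are projected onto the constraint set (non-negativity and normalization, as for a spectral function). The Laplace-type kernel and ridge regression below are placeholder choices of ours, not the authors' kernel or learner.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 80)
    # Laplace-type forward kernel: g = K f (placeholder for the actual kernel)
    K = np.exp(-np.outer(np.linspace(0.0, 5.0, 60), x)) * (x[1] - x[0])

    def random_spectrum():
        """Random smooth, normalized, non-negative 'spectral function'."""
        c, w = rng.uniform(0.2, 0.8), rng.uniform(0.05, 0.2)
        f = np.exp(-((x - c) / w) ** 2)
        return f / f.sum()

    F = np.array([random_spectrum() for _ in range(2000)])   # database of f's
    G = F @ K.T + rng.normal(0.0, 1e-4, (2000, K.shape[0]))  # noisy forward data
    model = Ridge(alpha=1e-3).fit(G, F)          # learn the inverse map g -> f

    f_hat = model.predict(G[:1])[0]
    f_hat = np.clip(f_hat, 0.0, None)            # projection step: non-negativity
    f_hat /= f_hat.sum()                         # ...and normalization
    ```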

  1. A coordinated set of ecosystem research platforms open to international research in ecotoxicology, AnaEE-France.

    PubMed

    Mougin, Christian; Azam, Didier; Caquet, Thierry; Cheviron, Nathalie; Dequiedt, Samuel; Le Galliard, Jean-François; Guillaume, Olivier; Houot, Sabine; Lacroix, Gérard; Lafolie, François; Maron, Pierre-Alain; Michniewicz, Radika; Pichot, Christian; Ranjard, Lionel; Roy, Jacques; Zeller, Bernd; Clobert, Jean; Chanzy, André

    2015-10-01

    The infrastructure for Analysis and Experimentation on Ecosystems (AnaEE-France) is an integrated network of the major French experimental, analytical, and modeling platforms dedicated to the biological study of continental ecosystems (aquatic and terrestrial). This infrastructure aims at understanding and predicting ecosystem dynamics under global change. AnaEE-France comprises complementary nodes offering access to the best experimental facilities and associated biological resources and data: Ecotrons, seminatural experimental platforms to manipulate terrestrial and aquatic ecosystems, and in natura sites equipped for large-scale and long-term experiments. AnaEE-France also provides shared instruments and analytical platforms dedicated to environmental (micro)biology. Finally, AnaEE-France provides users with databases and modeling tools designed to represent ecosystem dynamics and to go further in coupling ecological, agronomical, and evolutionary approaches. In particular, AnaEE-France offers adequate services to tackle the new challenges of research in ecotoxicology, positioning its various platform types within an ecologically advanced ecotoxicology approach. AnaEE-France is a leading international infrastructure, and it is pioneering the construction of the AnaEE (Europe) infrastructure in the field of ecosystem research. The AnaEE-France infrastructure is already open to the international community of scientists in the field of continental ecotoxicology.

  2. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it also allows for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model that uses exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction of numerical errors induced through a numerical solution using a statistical model. This error-correcting process models numerical errors that accumulate forward in time and the spatial variation of numerical errors between the dome, interior, and margin of a glacier.

  3. Two-dimensional fingerprinting approach for comparison of complex substances analysed by HPLC-UV and fluorescence detection.

    PubMed

    Ni, Yongnian; Liu, Ying; Kokot, Serge

    2011-02-07

    This work concerns the research and development of methodology for the analysis of complex mixtures, such as pharmaceutical or food samples, which contain many analytes. Variously treated samples (swill-washed, fried and scorched) of the Rhizoma atractylodis macrocephalae (RAM) traditional Chinese medicine (TCM), as well as the common substitute, Rhizoma atractylodis (RA) TCM, were chosen as examples for analysis. A combined data matrix of chromatographic 2-D HPLC-DAD-FLD (two-dimensional high performance liquid chromatography with diode array and fluorescence detectors) fingerprint profiles was constructed from the individual HPLC-DAD and HPLC-FLD data matrices; the purpose was to collect maximum information and to interpret this complex data with the use of various chemometrics methods, e.g., the rank-ordering multi-criteria decision making (MCDM) methods PROMETHEE and GAIA, K-nearest neighbours (KNN), partial least squares (PLS), and back propagation-artificial neural networks (BP-ANN). The chemometrics analysis demonstrated that the combined 2-D HPLC-DAD-FLD data matrix does indeed provide more information and facilitates better-performing classification/prediction models for the analysis of such complex samples as the RAM and RA ones noted above. It is suggested that this fingerprint approach is suitable for the analysis of other complex, multi-analyte substances.
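
    A skeletal example of the data-fusion step: autoscale the DAD and FLD fingerprint matrices separately, concatenate them sample-wise, and fit one of the classifiers mentioned above (KNN here). All shapes and labels are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_dad = rng.normal(size=(30, 400))   # 30 samples x 400 retention points (UV)
    X_fld = rng.normal(size=(30, 400))   # fluorescence fingerprints of the same runs
    y = rng.integers(0, 4, 30)           # processing classes (raw/swill-washed/...)

    # Autoscale each detector block separately, then concatenate sample-wise.
    X = np.hstack([StandardScaler().fit_transform(X_dad),
                   StandardScaler().fit_transform(X_fld)])
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    ```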

  4. Strategy for Improved Representation of Magnetospheric Electric Potential Structure on a Polar-Capped Ionosphere

    NASA Astrophysics Data System (ADS)

    Schulz, M.

    2016-12-01

    In some simple models of magnetospheric electrodynamics [e.g., Volland, Ann. Géophys., 31, 159-173, 1975], the normal component of the convection electric field is discontinuous across the boundary between closed and open magnetic field lines, and this discontinuity facilitates the formation of auroral arcs there. The requisite discontinuity in E is achieved by making the scalar potential proportional to a positive power (typically 1 or 2) of L on closed field lines and to a negative power (typically -1/2) of L on open (i.e., polar-cap) field lines. This suggests that it may be advantageous to construct more realistic (and thus more complicated) empirical magnetospheric and ionospheric electric-field models from superpositions of mutually orthogonal (or not) vector basis functions having this same analytical property (i.e., discontinuity at L = L*, the boundary surface between closed and open magnetic field lines). The present work offers a few examples of such constructions. A major challenge in this project has been to devise a coordinate system that simplifies the required analytical expansions of electric scalar potentials and accommodates the anti-sunward offset of each polar-cap boundary's centroid with respect to the corresponding magnetic pole. For circular northern and southern polar caps containing equal amounts of magnetic flux, one can imagine a geometrical construction of nested circular (but non-concentric) contours of constant quasi-latitude whose centers converge toward the magnetic poles as the contours themselves approach the magnetic equator. For more general polar-cap shapes and (in any case) to assure mutual orthogonality of respective coordinate surfaces on a spherical ionosphere, a formulation based on harmonic coordinates (expanded from eigen-solutions of the two-dimensional Laplace equation) may be preferable.
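
    The analytical property in question is easy to prototype: a scalar potential proportional to a positive power of L on closed field lines and to a negative power on open ones, so that the normal electric-field component is discontinuous at the boundary L = L*. Matching the potential's value at L* below is our simplifying assumption, and all numbers are hypothetical.

    ```python
    import numpy as np

    def volland_like_potential(L, phi0, L_star, k_closed=2.0, k_open=-0.5):
        """Scalar potential proportional to L**k_closed on closed field lines
        (L <= L_star) and to L**k_open on open ones. The value is matched at
        L_star, so the potential is continuous there while its L-derivative
        (hence the normal electric-field component) jumps."""
        L = np.asarray(L, dtype=float)
        closed = phi0 * (L / L_star) ** k_closed
        opened = phi0 * (L / L_star) ** k_open
        return np.where(L <= L_star, closed, opened)

    L = np.linspace(1.0, 15.0, 300)
    phi = volland_like_potential(L, phi0=50.0, L_star=8.0)  # units hypothetical
    ```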

  5. Analytical Sociology: A Bungean Appreciation

    ERIC Educational Resources Information Center

    Wan, Poe Yu-ze

    2012-01-01

    Analytical sociology, an intellectual project that has garnered considerable attention across a variety of disciplines in recent years, aims to explain complex social processes by dissecting them, accentuating their most important constituent parts, and constructing appropriate models to understand the emergence of what is observed. To achieve…

  6. Reverie and metaphor. Some thoughts on how I work as a psychoanalyst.

    PubMed

    Ogden, T

    1997-08-01

    In this paper, the author presents parts of an ongoing internal dialogue concerning how he works as an analyst. He describes the way in which he attempts to sense what is most alive and most real in each analytic encounter, as well as his use of his own reveries in his effort to locate himself in what is going on at an unconscious level in the analytic relationship. The author views each analytic situation as reflecting, to a large degree, a specific type of unconscious intersubjective construction. Since unconscious experience is by definition outside of conscious awareness, the analyst must make use of indirect (associational) methods such as the scrutiny of his own reverie experience in his efforts to 'catch the drift' (Freud, 1923, p. 239) of the unconscious intersubjective constructions being generated. Reveries (and all other derivatives of the unconscious) are viewed not as glimpses into the unconscious, but as metaphorical expressions of what the unconscious experience is like. In the author's experience, when an analysis is 'a going concern', the analytic dialogue often takes the form of a verbal 'squiggle game' (Winnicott, 1971a, p. 3) in which the analytic pair elaborates and modifies the metaphors that the other has unself-consciously introduced. The analytic use of reverie and of the role of metaphor in the analytic experience is clinically illustrated.

  7. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing analytically integrated trajectories in mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and the corresponding classes of analytical solutions are classified based on the propulsion system parameters and the performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  8. Programming chemistry in DNA-addressable bioreactors.

    PubMed

    Fellermann, Harold; Cardelli, Luca

    2014-10-06

    We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  9. A data-analytic strategy for protein biomarker discovery: profiling of high-dimensional proteomic data for cancer detection.

    PubMed

    Yasui, Yutaka; Pepe, Margaret; Thompson, Mary Lou; Adam, Bao-Ling; Wright, George L; Qu, Yinsheng; Potter, John D; Winget, Marcy; Thornquist, Mark; Feng, Ziding

    2003-07-01

    With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of 'signature' protein profiles specific to each pathologic state (e.g. normal vs. cancer) or differential profiles between experimental conditions (e.g. treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass per charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass per charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in SELDI output. After this pre-analysis processing of data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent from the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
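
    A minimal sketch of the binarize-then-boost idea described above: detect local intensity peaks, spread each into a small mass-per-charge window to tolerate x-axis shifts, and feed the resulting binary indicators to a boosted classifier. The spectra here are synthetic stand-ins for SELDI output, and the window size and prominence threshold are hypothetical.

      import numpy as np
      from scipy.signal import find_peaks
      from sklearn.ensemble import AdaBoostClassifier

      rng = np.random.default_rng(0)
      n_spec, n_mz = 60, 500                      # specimens x m/z grid (toy sizes)

      def synthetic_spectrum(diseased):
          y = rng.gamma(2.0, 1.0, n_mz)           # noisy baseline intensities
          if diseased:
              y[200:205] += 8.0                   # a "signature" peak for one class
          return y

      labels = np.arange(n_spec) % 2
      X_raw = np.array([synthetic_spectrum(lab == 1) for lab in labels])

      def binarize(spectrum, window=5):
          """1 wherever a local peak falls in the neighborhood of an m/z point."""
          peaks, _ = find_peaks(spectrum, prominence=4.0)
          out = np.zeros(len(spectrum), dtype=int)
          for p in peaks:                         # tolerate small x-axis shifts
              out[max(0, p - window):p + window + 1] = 1
          return out

      X_bin = np.array([binarize(s) for s in X_raw])
      clf = AdaBoostClassifier(n_estimators=50).fit(X_bin, labels)
      print("training accuracy:", clf.score(X_bin, labels))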

  10. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
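
    The standard-addition idea introduced above can be stated in a few lines: spike known increments of analyte into the biological matrix, regress response on the added amount, and recover the endogenous level from the x-intercept. The spike amounts and responses below are hypothetical; comparing this slope with the slope of a surrogate-matrix calibration line is one simple parallelism check.

      import numpy as np

      added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # hypothetical spikes, µM
      response = np.array([12.1, 17.0, 22.2, 31.9, 52.3])  # hypothetical peak areas

      slope, intercept = np.polyfit(added, response, 1)
      endogenous = intercept / slope       # |x-intercept| = endogenous concentration
      print(f"estimated endogenous concentration ≈ {endogenous:.1f} µM")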

  11. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary-aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.

  12. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries

    PubMed Central

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-01-01

    Background Individual and organizational factors influence traumatic occupational injuries. Objectives The aim of the present study was the short path analysis of the severity of occupational injuries based on individual and organizational factors. Materials and Methods The present cross-sectional analytical study was implemented on traumatic occupational injuries within a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. Results The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational and accident-type factors were significant predictors of occupational injury severity (P < 0.05). Conclusions Path analysis of occupational injuries based on the SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries, and they should therefore be considered in order to reduce the severity of occupational accidents in large construction industries. PMID:27800465

  13. The mechanism distinguishability problem in biochemical kinetics: the single-enzyme, single-substrate reaction as a case study.

    PubMed

    Schnell, Santiago; Chappell, Michael J; Evans, Neil D; Roussel, Marc R

    2006-01-01

    A theoretical analysis of the distinguishability problem of two rival models of the single enzyme-single substrate reaction, the Michaelis-Menten and Henri mechanisms, is presented. We also outline a general approach for analysing the structural indistinguishability between two mechanisms. The approach involves constructing, if possible, a smooth mapping between the two candidate models. Evans et al. [N.D. Evans, M.J. Chappell, M.J. Chapman, K.R. Godfrey, Structural indistinguishability between uncontrolled (autonomous) nonlinear analytic systems, Automatica 40 (2004) 1947-1953] have shown that if, in addition, either of the mechanisms satisfies a particular criterion then such a transformation always exists when the models are indistinguishable from their experimentally observable outputs. The approach is applied to the single enzyme-single substrate reaction mechanism. In principle, mechanisms can be distinguished using this analysis, but we show that our ability to distinguish mechanistic models depends both on the precise measurements made, and on our knowledge of the system prior to performing the kinetics experiments.
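
    The distinguishability question can be made concrete by simulating both candidate mechanisms. The sketch below uses one common rendering: Michaelis-Menten as E + S ⇌ C → E + P, and Henri with a dead-end complex and product formed directly from E + S. That rendering of the Henri scheme, the rate constants, and the initial conditions are all assumptions for illustration.

      import numpy as np
      from scipy.integrate import solve_ivp

      k1, km1, k2 = 1.0, 0.5, 0.3             # hypothetical rate constants

      def michaelis_menten(t, y):
          e, s, c, p = y                      # E + S <-> C -> E + P
          v1 = k1 * e * s - km1 * c
          v2 = k2 * c
          return [-v1 + v2, -v1, v1 - v2, v2]

      def henri(t, y):
          e, s, c, p = y                      # E + S <-> C (dead end); E + S -> E + P
          v1 = k1 * e * s - km1 * c
          v2 = k2 * e * s
          return [-v1, -v1 - v2, v1, v2]

      y0 = [1.0, 10.0, 0.0, 0.0]              # E0, S0, C0, P0
      for name, rhs in [("Michaelis-Menten", michaelis_menten), ("Henri", henri)]:
          sol = solve_ivp(rhs, (0.0, 50.0), y0, t_eval=np.linspace(0, 50, 200))
          print(f"{name}: product at t = 50 -> {sol.y[3, -1]:.3f}")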

  14. Integration of QFD, AHP, and LPP methods in supplier development problems under uncertainty

    NASA Astrophysics Data System (ADS)

    Shad, Zahra; Roghanian, Emad; Mojibian, Fatemeh

    2014-04-01

    Quality function deployment (QFD) is a customer-driven approach, widely used in developing or processing new products to maximize customer satisfaction. Previous research used the linear physical programming (LPP) procedure to optimize QFD; however, QFD involves uncertainty, or fuzziness, which must be taken into account for a more realistic study. In this paper, a set of fuzzy data is used to address linguistic values parameterized by triangular fuzzy numbers. We propose an integrated approach, including the analytic hierarchy process (AHP), QFD, and LPP, to maximize overall customer satisfaction under uncertain conditions, and apply it to the supplier development problem. The fuzzy AHP approach is adopted as a powerful method to obtain the relationship between the customer requirements and engineering characteristics (ECs) and to construct the house of quality in the QFD method. LPP is used to obtain the optimal achievement level of the ECs and subsequently the customer satisfaction level under different degrees of uncertainty. The effectiveness of the proposed method is illustrated by an example.
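
    For the AHP ingredient, the crisp (non-fuzzy) computation is compact: derive priority weights from a pairwise-comparison matrix via its principal eigenvector and check consistency. The comparison values below are hypothetical; the fuzzy variant used in the paper would replace the crisp entries with triangular fuzzy numbers.

      import numpy as np

      # Hypothetical pairwise comparisons among three customer requirements
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                            # priority weights

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
      cr = ci / 0.58                          # Saaty's random index for n = 3
      print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))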

  15. Symplectic approach to calculation of magnetic field line trajectories in physical space with realistic magnetic geometry in divertor tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Punjabi, Alkesh; Ali, Halima

    A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.

  16. Symplectic approach to calculation of magnetic field line trajectories in physical space with realistic magnetic geometry in divertor tokamaks

    NASA Astrophysics Data System (ADS)

    Punjabi, Alkesh; Ali, Halima

    2008-12-01

    A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
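
    The generic flavor of such field-line maps can be shown with the Chirikov standard map, an area-preserving twist map often used as a stand-in in tokamak field-line studies; it is not the EGF-based map constructed in these papers, and the kick strength here is hypothetical.

      import numpy as np

      def standard_map(theta, psi, k=0.6, n_steps=2000):
          """Iterate the Chirikov standard map, a generic symplectic twist map:
          psi plays the role of flux, theta the poloidal angle."""
          traj = np.empty((n_steps, 2))
          for i in range(n_steps):
              psi = psi + k * np.sin(theta)           # kick
              theta = (theta + psi) % (2 * np.pi)     # twist
              traj[i] = theta, psi
          return traj

      orbit = standard_map(theta=1.0, psi=0.3)
      print("first points:", np.round(orbit[:3], 3))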

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safari, L., E-mail: laleh.safari@ist.ac.at; Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu; Santos, J. P.

    Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartree-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
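
    For the simplest special case the closed form is easy to state: a single 1s electron with screened nuclear charge Z_eff has form factor F(q) = [1 + (q a0 / 2 Z_eff)^2]^(-2). This is only the textbook hydrogenic building block, not the paper's full independent-particle expression.

      import numpy as np

      def f1s(q, z_eff):
          """Form factor of a 1s screened-hydrogenic electron in atomic units
          (a0 = 1): the Fourier transform of the 1s charge density."""
          return 1.0 / (1.0 + (q / (2.0 * z_eff)) ** 2) ** 2

      q = np.linspace(0.0, 10.0, 6)                  # momentum transfer, a.u.
      print(np.round(f1s(q, z_eff=1.0), 4))          # decays from 1 at q = 0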

  18. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724
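
    The polyhedral result generalizes the familiar fact that simple shapes have closed-form Fourier transforms. As a one-dimensional stand-in (not the polygon/polyhedron formula itself), the sketch below checks the analytic transform of a centered box against the FFT of its rasterized version, mirroring the paper's analytic-versus-voxelized comparison.

      import numpy as np

      # Continuous FT of a centered box of width a: F(nu) = a * sinc(a * nu),
      # with np.sinc the normalized sinc. Compare against a rasterized box.
      n, length, a = 4096, 20.0, 3.0
      dx = length / n
      x = (np.arange(n) - n // 2) * dx
      box = (np.abs(x) <= a / 2).astype(float)

      nu = np.fft.fftfreq(n, d=dx)
      fft_est = np.fft.fft(np.fft.ifftshift(box)) * dx   # approximate continuous FT
      analytic = a * np.sinc(a * nu)

      err = np.max(np.abs(fft_est.real - analytic))
      print(f"max |FFT - analytic| = {err:.2e}")         # limited by rasterization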

  19. On Establishing Big Data Wave Breakwaters with Analytics (Invited)

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2013-12-01

    The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs of utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, reaching from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data, as opposed to the more traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of the underlying resources. To sum up, the aim of this talk is to provide pieces of information to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach. This talk will provide insights about big data analytics methods in the context of science within various communities and offer different views of how approaches to correlation and causality provide complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications, in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.

  20. Development of balanced key performance indicators for emergency departments strategic dashboards following analytic hierarchical process.

    PubMed

    Safdari, Reza; Ghazisaeedi, Marjan; Mirzaee, Mahboobeh; Farzi, Jebrail; Goodini, Azadeh

    2014-01-01

    Dynamic reporting tools, such as dashboards, should be developed to measure emergency department (ED) performance. However, choosing an effective, balanced set of performance measures and key performance indicators (KPIs) is a major challenge in accomplishing this. The aim of this study was to develop a balanced set of KPIs for use in ED strategic dashboards following an analytic hierarchical process. The study was carried out in two phases: constructing ED performance measures based on balanced scorecard perspectives, and incorporating them into an analytic hierarchical process framework to select the final KPIs. The respondents placed most importance on the ED internal-processes perspective, especially on measures related to timeliness and accessibility of care in the ED. Some measures from the financial, customer, and learning and growth perspectives were also selected as top KPIs, and measures of care effectiveness and care safety were ranked as the next priorities. The respondents placed least importance on disease-/condition-specific "time to" measures. The methodology can serve as a reference model for developing KPIs in various performance-related areas based on a consistent and fair approach. Dashboards designed on such a balanced set of KPIs will help establish comprehensive performance measurements and fair benchmarks and comparisons.

  1. The path dependency theory: analytical framework to study institutional integration. The case of France

    PubMed Central

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-01-01

    Background The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. Purpose PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. Methods A qualitative approach was used. Analyses were based on semi-structured interviews with actors of all levels of decision-making, observations of advisory board meetings, and administrative documents. Results Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. The path dependency to the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Conclusion Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France. PMID:20689740

  2. A fully analytic treatment of resonant inductive coupling in the far field

    NASA Astrophysics Data System (ADS)

    Sedwick, Raymond J.

    2012-02-01

    For the application of resonant inductive coupling for wireless power transfer, fabrication of flat spiral coils using ribbon wire allows for analytic expressions of the capacitance and inductance of the coils and therefore the resonant frequency. The expressions can also be used in an approximate way for the analysis of coils constructed from cylindrical wire. Ribbon wire constructed from both standard metals as well as high temperature superconducting material is commercially available, so using these derived expressions as a basis, a fully analytic treatment is presented that allows for design trades to be made for hybrid designs incorporating either technology. The model is then extended to analyze the performance of the technology as applied to inductively coupled communications, which has been demonstrated as having an advantage in circumstances where radiated signals would suffer unacceptable levels of attenuation.

  3. Understanding the P×S Aspect of Within-Person Variation: A Variance Partitioning Approach

    PubMed Central

    Lakey, Brian

    2016-01-01

    This article reviews a variance partitioning approach to within-person variation based on Generalizability Theory and the Social Relations Model. The approach conceptualizes an important part of within-person variation as Person × Situation (P×S) interactions: differences among persons in their profiles of responses across the same situations. The approach provided the first quantitative method for capturing within-person variation and demonstrated very large P×S effects for a wide range of constructs, including anxiety, five-factor personality traits, perceived social support, leadership, and task performance. Although P×S effects are commonly very large, conceptual and analytic obstacles have thwarted consistent progress. For example, how does one develop a psychological, versus purely statistical, understanding of P×S effects? How does one forecast future behavior when the criterion is a P×S effect? How can understanding P×S effects contribute to psychological theory? This review describes potential solutions to these and other problems developed in the course of conducting research on the P×S aspect of social support. Additional problems that need resolution are identified. PMID:26858661
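
    The core computation behind the approach is a two-way decomposition of a persons × situations score matrix; with one observation per cell, the P×S interaction is what remains after removing the person and situation main effects. The scores below are synthetic, with effect sizes chosen arbitrarily.

      import numpy as np

      rng = np.random.default_rng(1)
      persons, situations = 50, 8
      scores = (rng.normal(0, 1.0, (persons, 1))             # person main effects
                + rng.normal(0, 0.5, (1, situations))        # situation main effects
                + rng.normal(0, 0.8, (persons, situations))) # P×S interaction

      grand = scores.mean()
      row = scores.mean(axis=1, keepdims=True)          # person means
      col = scores.mean(axis=0, keepdims=True)          # situation means
      resid = scores - row - col + grand                # P×S (plus error) component

      parts = {"person": ((row - grand) ** 2).mean(),
               "situation": ((col - grand) ** 2).mean(),
               "PxS": (resid ** 2).mean()}
      total = sum(parts.values())
      print({k: round(v / total, 2) for k, v in parts.items()})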

  4. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
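
    A Monte Carlo rendering of the linear-accumulation case: treat accumulation between two absolutely dated tie points as a Gamma process over equal time slices, and read empirical confidence intervals for the age at an intermediate depth off the simulated profiles. The tie-point ages, gamma shape, and slice count are hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)
      t_top, t_bottom = 0.0, 1000.0        # tie-point ages (years), absolutely dated
      n_slices, shape, n_sim = 100, 1.5, 5000

      # Each simulation: gamma thickness increments -> normalized depth profile
      incr = rng.gamma(shape, 1.0, size=(n_sim, n_slices))
      depth_frac = np.cumsum(incr, axis=1) / incr.sum(axis=1, keepdims=True)

      target = 0.5                         # fractional depth of interest
      idx = (depth_frac >= target).argmax(axis=1)       # first slice reaching it
      ages = t_top + (t_bottom - t_top) * (idx + 0.5) / n_slices

      lo, mid, hi = np.percentile(ages, [2.5, 50.0, 97.5])
      print(f"age at mid-depth: {mid:.0f} yr (95% interval {lo:.0f}-{hi:.0f})")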

  5. Ranking the effects of urban development projects on social determinants of health: health impact assessment.

    PubMed

    Shojaei, Parisa; Karimlou, Masoud; Nouri, Jafar; Mohammadi, Farahnaz; Malek Afzali, Hosein; Forouzan, Ameneh Setareh

    2014-05-30

    Health impact assessment (HIA) offers a very logical and interesting approach for those aiming to integrate health issues into planning processes. With much work waiting to be done (e.g., developing and updating plans, counseling planning commissions, cooperating with other organizations), planners find it difficult to prioritize health among the variety of possible issues and solutions they confront. In the present article, the social determinants of health associated with the Chitgar man-made lake were first identified using a qualitative method with a content-analysis approach, and then prioritized using the analytic hierarchy process. Twenty-eight social determinants of health, comprising "intermediary" and "structural" determinants, were extracted. Regarding the positive effects of the lake on these determinants, "recreational services" and "traffic" received the highest and lowest weights (0.895 and 0.638, respectively) among structural determinants under the "construction" option. Furthermore, among intermediary determinants for the "construction" option, the sub-criteria "physical activity" and "air quality" both received the highest final weight (0.889), while "pathogenesis" showed the lowest (0.617). Moreover, among "structural" determinants the lake demonstrated its strongest negative effect on "housing", which takes the highest weight (0.476) in the "non-construction" option. Additionally, among "intermediary" determinants the strongest negative effect was on "noise pollution", which takes the highest weight (0.467) in the "non-construction" option. It has been shown that urban development projects such as green spaces, man-made lakes … have a huge range of effects on a community's health, and failure by urban planners and managers to consider these effects will confront urban health with many challenges.

  6. Distribution factors for construction loads and girder capacity equations [project summary].

    DOT National Transportation Integrated Search

    2017-03-01

    This project focused on the use of Florida I-beams (FIBs) in bridge construction. University of Florida researchers used analytical models and finite element analysis to update equations used in the design of bridges using FIBs. They were particularl...

  7. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
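
    The grey relational part of the procedure fits in a few lines: normalize the decision matrix, compute relational coefficients against an ideal reference series, and average them into grades (unweighted here; the hierarchy part of the paper supplies AHP-style weights). The matrix entries are hypothetical.

      import numpy as np

      # Rows: alternatives (e.g., four treatment schemes); columns: criteria,
      # oriented so that larger is better. Hypothetical scores.
      X = np.array([[0.7, 0.8, 0.6, 0.9],
                    [0.9, 0.6, 0.7, 0.8],
                    [0.8, 0.9, 0.9, 0.7],
                    [0.6, 0.7, 0.8, 0.6]])

      Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
      ref = Xn.max(axis=0)                 # ideal reference series

      delta = np.abs(Xn - ref)
      rho = 0.5                            # distinguishing coefficient
      xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

      grades = xi.mean(axis=1)             # unweighted grey relational grades
      print("grades:", np.round(grades, 3), "best alternative:", int(grades.argmax()))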

  8. [Argumentation and construction of validity in Carlos Matus' situational strategic planning].

    PubMed

    Rivera, Francisco Javier Uribe

    2011-09-01

    This study analyzes the process of producing a situational plan from a frame of reference based in the philosophy of language and argumentation theory. The basic approach analyzed is the one developed by Carlos Matus. Specifically, the study seeks to identify the argumentative structure and patterns inherent in the situational explanation and in the regulatory design of a plan's operations, taking argumentative approaches from pragma-dialectics and informal logic as the analytical parameters. The explanation of a health problem is used as an illustration. Methodologically, the study is based on the existing literature on the subject and on case analyses. The study concludes by proposing that the use of these specific frames of reference introduces greater rigor into both the analysis of the validity of causal arguments and the design of proposals for interventions, making them more conclusive in achieving a plan's objectives.

  9. Nonlinear relativistic plasma resonance: Renormalization group approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metelskii, I. I., E-mail: metelski@lebedev.ru; Kovalev, V. F., E-mail: vfkvvfkv@gmail.com; Bychenkov, V. Yu., E-mail: bychenk@lebedev.ru

    An analytical solution to the nonlinear set of equations describing the electron dynamics and electric field structure in the vicinity of the critical density in a nonuniform plasma is constructed using the renormalization group approach with allowance for relativistic effects of electron motion. It is demonstrated that the obtained solution describes two regimes of plasma oscillations in the vicinity of the plasma resonance: stationary and nonstationary. For the stationary regime, the spatiotemporal and spectral characteristics of the resonantly enhanced electric field are investigated in detail and the effect of the relativistic nonlinearity on the spatial localization of the energy of the plasma relativistic field is considered. The applicability limits of the obtained solution, which are determined by the conditions of plasma wave breaking in the vicinity of the resonance, are established and analyzed in detail for typical laser and plasma parameters. The applicability limits of the earlier developed nonrelativistic theories are refined.

  10. Numerical solution of open string field theory in Schnabl gauge

    NASA Astrophysics Data System (ADS)

    Arroyo, E. Aldo; Fernandes-Silva, A.; Szitas, R.

    2018-01-01

    Using traditional Virasoro L0 level-truncation computations, we evaluate the open bosonic string field theory action up to level (10, 30). Extremizing this level-truncated potential, we construct a numerical solution for tachyon condensation in Schnabl gauge. We find that the energy associated to the numerical solution overshoots the expected value -1 at level L = 6. Extrapolating the level-truncation data for L ≤ 10 to estimate the vacuum energies for L > 10, we predict that the energy reaches a minimum value at L ≈ 12, and then turns back to approach -1 asymptotically as L → ∞. Furthermore, we analyze the tachyon vacuum expectation value (vev), for which by extrapolating its corresponding level-truncation data, we predict that the tachyon vev reaches a minimum value at L ≈ 26, and then turns back to approach the expected analytical result as L → ∞.
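
    The extrapolation step can be illustrated generically: fit a polynomial in 1/L to level-truncated values and evaluate the limit 1/L → 0. The "data" below are generated from an assumed toy asymptotic form, not from the actual string-field-theory computations.

      import numpy as np

      # Toy level-truncation data from an assumed form E(L) = -1 + 2/L - 3/L**2
      levels = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
      energies = -1.0 + 2.0 / levels - 3.0 / levels**2

      coeffs = np.polyfit(1.0 / levels, energies, deg=2)   # fit in powers of 1/L
      e_inf = np.polyval(coeffs, 0.0)                      # value at 1/L -> 0
      print(f"extrapolated L -> infinity energy: {e_inf:.4f}")   # recovers -1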

  11. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
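
    A compact EGO loop on a toy one-dimensional function (a stand-in for the expensive crash simulation): fit a Gaussian-process (kriging) model, then pick the next sample where expected improvement is largest. Generating several infill points per iteration, as the article proposes for parallel evaluation, would replace the single argmax below.

      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def f(x):                                   # "expensive" black box (toy)
          return np.sin(3 * x) + 0.5 * x

      X = np.array([[0.2], [1.1], [2.5], [3.7]])  # initial design samples
      y = f(X).ravel()

      for _ in range(5):                          # EGO infill iterations
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                        normalize_y=True).fit(X, y)
          cand = np.linspace(0.0, 4.0, 400).reshape(-1, 1)
          mu, sd = gp.predict(cand, return_std=True)
          z = (y.min() - mu) / np.maximum(sd, 1e-12)
          ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
          x_new = cand[ei.argmax()]
          X = np.vstack([X, [x_new]])
          y = np.append(y, f(x_new))

      print(f"best design x = {X[y.argmin()][0]:.3f}, f = {y.min():.3f}")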

  12. TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with a singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the D-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.

  13. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, which is not possible to model accurately with analytical approaches.

  14. Must We Embody Context?

    PubMed

    Hahn, Barbara

    The essays in this forum brace this meditation on the historiography of technology. Understanding devices incorporates the context of any particular hardware, as John Staudenmaier showed by quantifying the contents of the first decades of Technology and Culture. As contextualist approaches have widened from systems theory through social construction and into the assemblages of actor-network theory, the discipline has kept artifacts at the analytical center: it is the history of technology that scholars seek to understand. Even recognizing that the machine only embodies the technology, the discipline has long sought to explain the machine. These essays invite consideration of how the history of technology might apply to non-corporeal things-methods as well as machines, and all the worldly phenomena that function in technological ways even without physicality. Materiality is financial as well as corporeal, the history of capitalism reminds us, and this essay urges scholars to apply history-of-technology approaches more broadly.

  15. Principles for the dynamic maintenance of cortical polarity

    PubMed Central

    Marco, Eugenio; Wedlich-Soldner, Roland; Li, Rong; Altschuler, Steven J.; Wu, Lani F.

    2007-01-01

    Summary Diverse cell types require the ability to dynamically maintain polarized membrane protein distributions through balancing transport and diffusion. However, design principles underlying dynamically maintained cortical polarity are not well understood. Here we constructed a mathematical model for characterizing the morphology of dynamically polarized protein distributions. We developed analytical approaches for measuring all model parameters from single-cell experiments. We applied our methods to a well-characterized system for studying polarized membrane proteins: budding yeast cells expressing activated Cdc42. We found that balanced diffusion and colocalized transport to and from the plasma membrane were sufficient for accurately describing polarization morphologies. Surprisingly, the model predicts that polarized regions are defined with a precision that is nearly optimal for measured transport rates, and that polarity can be dynamically stabilized through positive feedback with directed transport. Our approach provides a step towards understanding how biological systems shape spatially precise, unambiguous cortical polarity domains using dynamic processes. PMID:17448998
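
    A one-dimensional caricature of the balance the model formalizes: membrane diffusion and uniform internalization everywhere, with delivery from a conserved cytoplasmic pool concentrated in a small window. The profile stays polarized only while directed transport balances diffusion. All rates and sizes are hypothetical.

      import numpy as np

      n, dx, dt = 200, 0.1, 0.001
      D, k_off, h = 0.5, 0.5, 5.0           # diffusion, internalization, delivery
      total = 40.0                          # total protein (conserved)
      u = np.full(n, 0.5)                   # membrane concentration, periodic domain
      win = np.zeros(n); win[95:105] = 1.0  # directed-transport window

      for _ in range(50000):
          pool = total - u.sum() * dx       # cytoplasmic pool feeding delivery
          lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
          u += dt * (D * lap - k_off * u + h * pool * win / (win.sum() * dx))

      print(f"polarization (peak / background): {u.max() / np.median(u):.1f}")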

  16. Identifying Mother-Child Interaction Styles Using a Person-Centered Approach.

    PubMed

    Nelson, Jackie A; O'Brien, Marion; Grimm, Kevin J; Leerkes, Esther M

    2014-05-01

    Parent-child conflict in the context of a supportive relationship has been discussed as a potentially constructive interaction pattern; the current study is the first to test this using a holistic analytic approach. Interaction styles, defined as mother-child conflict in the context of maternal sensitivity, were identified and described with demographic and stress-related characteristics of families. Longitudinal associations were tested between interaction styles and children's later social competence. Participants included 814 partnered mothers with a first-grade child. Latent profile analysis identified agreeable , dynamic , and disconnected interaction styles. Mothers' intimacy with a partner, depressive symptoms, and authoritarian childrearing beliefs, along with children's later conflict with a best friend and externalizing problems, were associated with group membership. Notably, the dynamic style, characterized by high sensitivity and high conflict, included families who experienced psychological and relational stressors. Findings are discussed with regard to how family stressors shape parent-child interaction patterns.

  17. Identifying Mother-Child Interaction Styles Using a Person-Centered Approach

    PubMed Central

    Nelson, Jackie A.; O’Brien, Marion; Grimm, Kevin J.; Leerkes, Esther M.

    2016-01-01

    Parent-child conflict in the context of a supportive relationship has been discussed as a potentially constructive interaction pattern; the current study is the first to test this using a holistic analytic approach. Interaction styles, defined as mother-child conflict in the context of maternal sensitivity, were identified and described with demographic and stress-related characteristics of families. Longitudinal associations were tested between interaction styles and children’s later social competence. Participants included 814 partnered mothers with a first-grade child. Latent profile analysis identified agreeable, dynamic, and disconnected interaction styles. Mothers’ intimacy with a partner, depressive symptoms, and authoritarian childrearing beliefs, along with children’s later conflict with a best friend and externalizing problems, were associated with group membership. Notably, the dynamic style, characterized by high sensitivity and high conflict, included families who experienced psychological and relational stressors. Findings are discussed with regard to how family stressors shape parent-child interaction patterns. PMID:28751818
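
    Latent profile analysis is commonly implemented as a Gaussian mixture over the interaction measures, with the profile count chosen by an information criterion. The sketch below does this on synthetic two-feature data loosely echoing the three styles; the cluster locations are invented for illustration and are not the study's estimates.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      # Hypothetical per-dyad features: [maternal sensitivity, conflict frequency]
      X = np.vstack([rng.normal([0.8, 0.2], 0.1, (300, 2)),   # "agreeable"-like
                     rng.normal([0.8, 0.7], 0.1, (250, 2)),   # "dynamic"-like
                     rng.normal([0.3, 0.6], 0.1, (260, 2))])  # "disconnected"-like

      bic = {k: GaussianMixture(k, random_state=0).fit(X).bic(X) for k in range(1, 6)}
      best_k = min(bic, key=bic.get)
      fit = GaussianMixture(best_k, random_state=0).fit(X)
      print("profiles chosen:", best_k)
      print("profile means:\n", np.round(fit.means_, 2))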

  18. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
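
    The essence of the construction can be reproduced on a toy closed domain: build the heat kernel from the eigendecomposition of a graph Laplacian on a ring (standing in for the Laplace-Beltrami operator on the mandible surface) and smooth noisy scalar data analytically, with the bandwidth t playing the role of the kernel weight.

      import numpy as np

      n = 200                                    # vertices on a ring graph
      L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      L[0, -1] = L[-1, 0] = -1                   # periodic boundary (closed surface)

      evals, evecs = np.linalg.eigh(L)           # discrete Laplacian spectrum

      theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
      signal = np.cos(theta) + 0.4 * np.random.default_rng(3).normal(size=n)

      t = 5.0                                    # heat-kernel bandwidth
      coeffs = evecs.T @ signal                  # expand data in eigenfunctions
      smoothed = evecs @ (np.exp(-t * evals) * coeffs)   # heat kernel regression
      print("residual std:", np.round(np.std(smoothed - np.cos(theta)), 3))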

  19. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, F. D.; Chen, Y.; Singha, K.

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.

  20. Moment inference from tomograms

    USGS Publications Warehouse

    Day-Lewis, Frederick D.; Chen, Yongping; Singha, Kamini

    2007-01-01

    Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error.
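
    The moment computation itself is elementary once a tomogram is in hand; assessing its reliability is the hard part these papers address. The sketch computes zeroth, first, and second spatial moments of a gridded two-dimensional "plume" (synthetic here, standing in for an inverted radar or resistivity image).

      import numpy as np

      x, z = np.meshgrid(np.linspace(0, 10, 80), np.linspace(0, 5, 40))
      plume = np.exp(-((x - 4.0) ** 2 / 2.0 + (z - 2.5) ** 2 / 0.5))   # synthetic

      m0 = plume.sum()                           # zeroth moment: total mass proxy
      xc = (x * plume).sum() / m0                # first moments: center of mass
      zc = (z * plume).sum() / m0
      sxx = ((x - xc) ** 2 * plume).sum() / m0   # second central moments: spread
      szz = ((z - zc) ** 2 * plume).sum() / m0
      print(f"center = ({xc:.2f}, {zc:.2f}), spread = ({sxx:.2f}, {szz:.2f})")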

  1. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education.

    PubMed

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

    No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution's curriculum, including tools for unveiling relationships inside curricular datasets. We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adoption to curriculum innovations. We adopted data exploration and visualization approaches in the context of medical curriculum innovations in higher education institutions domain. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom's taxonomy and direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. Moreover, we introduce deep content analysis of a dataset that was captured with the use of the curriculum mapping platform; this may assist in detecting any potentially problematic areas, and hence it may help to construct a comprehensive overview for the subsequent global in-depth medical curriculum inspection. We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovations and harmonization, including: planning model, mapping model, and selected academic analytics extracted with the use of data mining.

  2. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education

    PubMed Central

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

    Background No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution’s curriculum, including tools for unveiling relationships inside curricular datasets. Objective We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adoption to curriculum innovations. Methods We adopted data exploration and visualization approaches in the context of medical curriculum innovations in higher education institutions domain. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom’s taxonomy and direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. Results We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. Moreover, we introduce deep content analysis of a dataset that was captured with the use of the curriculum mapping platform; this may assist in detecting any potentially problematic areas, and hence it may help to construct a comprehensive overview for the subsequent global in-depth medical curriculum inspection. Conclusions We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovations and harmonization, including: planning model, mapping model, and selected academic analytics extracted with the use of data mining. PMID:26624281

  3. Learning Analytics Considered Harmful

    ERIC Educational Resources Information Center

    Dringus, Laurie P.

    2012-01-01

    This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…

  4. Teaching Analytical Chemistry to Pharmacy Students: A Combined, Iterative Approach

    ERIC Educational Resources Information Center

    Masania, Jinit; Grootveld, Martin; Wilson, Philippe B.

    2018-01-01

    Analytical chemistry has often been a difficult subject to teach in a classroom or lecture-based context. Numerous strategies for overcoming the inherently practical-based difficulties have been suggested, each with differing pedagogical theories. Here, we present a combined approach to tackling the problem of teaching analytical chemistry, with…

  5. Analytical study of the heat loss attenuation by clothing on thermal manikins under radiative heat loads.

    PubMed

    Den Hartog, Emiel A; Havenith, George

    2010-01-01

    For wearers of protective clothing in radiation environments there are no quantitative guidelines available for the effect of a radiative heat load on heat exchange. Under the European Union-funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. As much information from thermal manikin experiments in thermal radiation environments has become available within the ThermProtect project, these experimental data sets are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical approach is pragmatic and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.
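
    The kind of balance such an analytical treatment solves can be sketched simply: the clothing surface exchanges heat with the environment by convection and thermal radiation while conducting heat from the skin through the clothing layer, and solving the surface energy balance yields the dry heat loss. The temperatures, insulation, and coefficients below are illustrative and are not the ThermProtect model.

      import numpy as np
      from scipy.optimize import brentq

      SIGMA = 5.67e-8                            # Stefan-Boltzmann constant, W/m2K4
      T_skin, T_air, T_rad = 307.0, 293.0, 353.0 # K; hot radiant source (illustrative)
      R_cl, h_c, eps = 0.155, 8.0, 0.9           # clothing m2K/W, convection, emissivity

      def surface_balance(T_s):
          # conduction through clothing = convection + net radiation at the surface
          q_cond = (T_skin - T_s) / R_cl
          q_conv = h_c * (T_s - T_air)
          q_rad = eps * SIGMA * (T_s**4 - T_rad**4)
          return q_cond - q_conv - q_rad

      T_s = brentq(surface_balance, 250.0, 400.0)
      q_loss = (T_skin - T_s) / R_cl             # dry heat loss from the skin, W/m2
      print(f"clothing surface {T_s - 273.15:.1f} °C, skin heat loss {q_loss:.0f} W/m2")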

  6. Free-form surface design method for a collimator TIR lens.

    PubMed

    Tsai, Chung-Yu

    2016-04-01

    A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.

  7. Use of measurement theory for operationalization and quantification of psychological constructs in systems dynamics modelling

    NASA Astrophysics Data System (ADS)

    Fitkov-Norris, Elena; Yeghiazarian, Ara

    2016-11-01

    The analytical tools available to social scientists have traditionally been adapted from tools originally designed for the analysis of natural science phenomena. This article discusses the applicability of systems dynamics, a qualitatively based modelling approach, as a possible analysis and simulation tool that bridges the gap between the social and natural sciences. After a brief overview of the systems dynamics modelling methodology, the advantages as well as the limiting factors of systems dynamics for potential applications in the field of social sciences and human interactions are discussed. Issues arise with regard to the operationalization and quantification of latent constructs at the simulation-building stage of the systems dynamics methodology, and measurement theory is proposed as a ready solution to the problem of dynamic model calibration, with a view to improving simulation model reliability and validity and encouraging the development of standardised, modular system dynamics models that can be used in social science research.

  8. Impact of anxiety on prefrontal cortex encoding of cognitive flexibility

    PubMed Central

    Park, Junchol; Moghaddam, Bita

    2016-01-01

    Anxiety often is studied as a stand-alone construct in laboratory models. But in the context of coping with real-life anxiety, its negative impacts extend beyond aversive feelings and involve disruptions in ongoing goal-directed behaviors and cognitive functioning. Critical examples of cognitive constructs affected by anxiety are cognitive flexibility and decision making. In particular, anxiety impedes the ability to shift flexibly between strategies in response to changes in task demands, as well as the ability to maintain a strategy in the presence of distractors. The brain region most critically involved in behavioral flexibility is the prefrontal cortex (PFC), but little is known about how anxiety impacts PFC encoding of internal and external events that are critical for flexible behavior. Here we review animal and human neurophysiological and neuroimaging studies implicating PFC neural processing in anxiety-induced deficits in cognitive flexibility. We then suggest experimental and analytical approaches for future studies to gain a better mechanistic understanding of impaired cognitive flexibility in anxiety and related disorders. PMID:27316551

  9. Accurate adiabatic singlet-triplet gaps in atoms and molecules employing the third-order spin-flip algebraic diagrammatic construction scheme for the polarization propagator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefrancois, Daniel; Dreuw, Andreas, E-mail: dreuw@uni-heidelberg.de; Rehn, Dirk R.

    For the calculation of adiabatic singlet-triplet gaps (STGs) in diradicaloid systems, the spin-flip (SF) variant of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator in third-order perturbation theory (SF-ADC(3)) has been applied. Due to the methodology of the SF approach, the singlet and triplet states are treated on an equal footing, since they are part of the same determinant subspace. This leads to a systematically more accurate description of, e.g., diradicaloid systems than with the corresponding non-SF single-reference methods. Furthermore, using analytical excited-state gradients at the ADC(3) level, geometry optimizations of the singlet and triplet states were performed, leading to a fully consistent description of the systems and to only small errors in the calculated STGs, ranging between 0.6 and 2.4 kcal/mol with respect to experimental references.

  10. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves, are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of a certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach for constructing and recursively solving for the connection coefficients between two Jacobi polynomial families is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and the Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
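    The recurrences themselves are derived in the paper; as an illustrative numerical cross-check, connection coefficients between two Jacobi families can also be obtained by projecting one family onto the other with Gauss-Jacobi quadrature. The sketch below (Python with NumPy/SciPy; the function name is our own) takes this projection view rather than the paper's recursive algorithm.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def connection_coefficients(n, a1, b1, a2, b2):
    """c[k, j] such that P_k^(a1,b1)(x) = sum_j c[k, j] * P_j^(a2,b2)(x).

    Projection with Gauss-Jacobi quadrature for the target weight
    (1-x)^a2 (1+x)^b2; exact here because an (n+1)-point rule integrates
    polynomials up to degree 2n+1 and the integrands have degree <= 2n.
    """
    x, w = roots_jacobi(n + 1, a2, b2)
    c = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        pj = eval_jacobi(j, a2, b2, x)
        norm_j = np.sum(w * pj * pj)   # squared norm of the target polynomial
        for k in range(n + 1):
            pk = eval_jacobi(k, a1, b1, x)
            c[k, j] = np.sum(w * pk * pj) / norm_j
    return c

# Ultraspherical-to-Legendre example: Jacobi(-1/2,-1/2) expanded in Jacobi(0,0)
print(np.round(connection_coefficients(4, -0.5, -0.5, 0.0, 0.0), 6))
```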

  11. Autistic phenomena in The Adventures of Pinocchio.

    PubMed

    Smith, Adrian

    2017-04-01

    This paper seeks to demonstrate that the protagonist of Carlo Collodi's The Adventures of Pinocchio illustrates numerous autistic phenomena such as communication difficulties, sensory and perceptual distortions and mindblindness. While Pinocchio is viewed as a literary construct with contraindications of autism, it will be argued that his autistic traits are sufficient to suggest the possibility that Collodi had a partial intuition of the syndrome 60 years before it was identified by Leo Kanner. Approaching Collodi's text in this manner is taken as an opportunity to survey and reflect upon the psychoanalytic literature on autism and to position it in relation to contemporary theories from cognitive neuroscience. © 2017, The Society of Analytical Psychology.

  12. Process for Selecting System Level Assessments for Human System Technologies

    NASA Technical Reports Server (NTRS)

    Watts, James; Park, John

    2006-01-01

    The integration of the many life support systems necessary to construct a stable habitat is difficult. The correct identification of the appropriate technologies and corresponding interfaces is an exhaustive process. Once technologies are selected, secondary issues such as mechanical and electrical interfaces must be addressed. The required analytical and testing work must be approached in a piecewise fashion to achieve timely results. A repeatable process has been developed to identify and prioritize system-level assessments and testing needs. This Assessment Selection Process has been defined to assess cross-cutting integration issues on topics at the system or component levels. Assessments are used to identify risks, encourage future actions to mitigate risks, or spur further studies.

  13. Measurement Models for Reasoned Action Theory.

    PubMed

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-03-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  14. Phases and stability of non-uniform black strings

    NASA Astrophysics Data System (ADS)

    Emparan, Roberto; Luna, Raimon; Martínez, Marina; Suzuki, Ryotaku; Tanabe, Kentaro

    2018-05-01

    We construct solutions of non-uniform black strings in dimensions from D ≈ 9 all the way up to D = ∞, and investigate their thermodynamics and dynamical stability. Our approach employs the large-D perturbative expansion beyond the leading order, including corrections up to 1/D^4. Combining both analytical techniques and relatively simple numerical solution of ODEs, we map out the ranges of parameters in which non-uniform black strings exist in each dimension and compute their thermodynamics and quasinormal modes with accuracy. We establish with very good precision the existence of Sorkin's critical dimension and we prove that not only the thermodynamic stability, but also the dynamic stability of the solutions changes at it.

  15. Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions

    NASA Astrophysics Data System (ADS)

    Ufimtcev, E. M.

    2017-11-01

    The first part of the article presents the main provisions of the analytical approach - the time analysis method (TAM) - developed for the calculation of the elastic dynamic response of rod structures as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in the construction of the mathematical model of structural oscillations, as well as the features of calculating and recording seismic forces based on the data of earthquake accelerograms, are given. A system of resolving equations is given for determining the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.

  16. Emergent FDA biodefense issues for microarray technology: process analytical technology.

    PubMed

    Weinberg, Sandy

    2004-11-01

    A successful biodefense strategy relies upon any combination of four approaches. A nation can protect its troops and citizenry first by advanced mass vaccination, second, by responsive ring vaccination, and third, by post-exposure therapeutic treatment (including vaccine therapies). Finally, protection can be achieved by rapid detection followed by exposure limitation (suits and air filters) or immediate treatment (e.g., antibiotics, rapid vaccines and iodine pills). All of these strategies rely upon or are enhanced by microarray technologies. Microarrays can be used to screen, engineer and test vaccines. They are also used to construct early detection tools. While effective biodefense utilizes a variety of tactical tools, microarray technology is a valuable arrow in that quiver.

  17. Impedance approach to designing efficient vibration energy absorbers

    NASA Astrophysics Data System (ADS)

    Bobrovnitskii, Y. I.; Morozov, K. D.; Tomilina, T. M.

    2017-03-01

    The concept of the best sound absorber, one having the maximum allowable efficiency in absorbing the energy of an incident sound field, introduced previously by the authors, has been extended to arbitrary linear elastic media and structures. Analytic relations have been found for the input impedance characteristics that the best vibrational energy absorber should have. The implementation of these relations is the basis of the proposed impedance method of designing efficient vibration and noise absorbers. We present the results of a laboratory experiment that confirms the validity of the obtained theoretical relations, and we construct the simplest best vibration absorber. We also calculate the parameters and demonstrate the efficiency of a dynamic vibration absorber as the best absorber.

  18. Advances in the Control System for a High Precision Dissolved Organic Carbon Analyzer

    NASA Astrophysics Data System (ADS)

    Liao, M.; Stubbins, A.; Haidekker, M.

    2017-12-01

    Dissolved organic carbon (DOC) is a master variable in aquatic ecosystems. DOC in the ocean is one of the largest carbon stores on earth. Studies of the dynamics of DOC in the ocean and other low DOC systems (e.g. groundwater) are hindered by the lack of high precision (sub-micromolar) analytical techniques. Results are presented from efforts to construct and optimize a flow-through, wet chemical DOC analyzer. This study focused on the design, integration and optimization of high precision components and control systems required for such a system (mass flow controller, syringe pumps, gas extraction, reactor chamber with controlled UV and temperature). Results of the approaches developed are presented.

  19. Dynamic thermoregulation of the sample in flow cytometry.

    PubMed

    Graves, Steven W; Habbersett, Robert C; Nolan, John P

    2002-05-01

    Fine control of temperature is an important capability for any analytical platform. A circulating water bath has been the traditional means of maintaining constant temperature in the sample chamber of a flow cytometer, but this approach does not permit rapid changes in sample temperature. This unit explains the use of Peltier modules for regulation of sample temperature. The heat pumping generated by the passage of current through properly matched semiconductors, known as the Peltier effect, makes it possible for these thermoelectric modules to both heat and cool. The authors describe the construction of a Peltier module based thermoregulation unit in step-by-step detail and present a demonstration of flow cytometry measurements as a function of temperature.
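    The unit pairs the Peltier module with a temperature sensor in a feedback loop. As a rough illustration of the control side, the sketch below simulates a PID controller driving a signed Peltier current against a first-order thermal plant; all gains and plant constants are invented for the example and are not taken from the unit described here.

```python
class PID:
    """Minimal PID controller with output clamping (illustrative only)."""
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral, self.prev_error = 0.0, None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, u))  # clamp the drive signal

# Toy thermal plant: signed Peltier drive u heats (u > 0) or cools (u < 0)
# the sample chamber against losses to ambient.
T, ambient, dt = 25.0, 25.0, 0.1
pid = PID(kp=0.8, ki=0.05, kd=0.2)
for _ in range(600):
    u = pid.update(setpoint=37.0, measured=T, dt=dt)
    T += dt * (4.0 * u - 0.1 * (T - ambient))
print(f"temperature after 60 s: {T:.2f} C")  # settles near the 37 C setpoint
```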

  20. "Have you seen your aura lately?" examining boundary-work in holistic health pamphlets.

    PubMed

    Ho, Evelyn Y

    2007-01-01

    An increasing number of people in the United States are using holistic therapies. Both encouraging and informing this trend in growth, printed leaflets are a popular and important medium for holistic health practitioners. Using a discourse analytic approach, the author analyzed pamphlets and printed texts distributed at a holistic health fair. These texts reflect and construct specific understandings of holistic health and proper health care. Understood through the notion of boundary-work, pamphlets demarcated holism as the proper way of conceptualizing health and health care. However, holistic medicine's boundaries are quite porous, as these practices are also legitimized through the use of scientific conventions and the practice of integration, both commonly associated with biomedicine.

  1. Exploring General Versus Task-Specific Assessments of Metacognition in University Chemistry Students: A Multitrait-Multimethod Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chia-Yu

    2015-08-01

    The purpose of this study was to use multiple assessments to investigate the general versus task-specific characteristics of metacognition in dissimilar chemistry topics. This mixed-method approach investigated the nature of undergraduate general chemistry students' metacognition using four assessments: a self-report questionnaire, assessment of concurrent metacognitive skills, confidence judgment, and calibration accuracy. Data were analyzed using a multitrait-multimethod correlation matrix, supplemented with regression analyses and qualitative interpretation. Significant correlations among task performance, calibration accuracy, and concurrent metacognition within a task suggest a converging relationship. Confidence judgment, however, was not associated with task performance or the other metacognitive measurements. The results partially support hypotheses of both general and task-specific metacognition. However, general and task-specific properties of metacognition were detected using different assessments. Case studies were constructed for two participants to illustrate how concurrent metacognition varied with different task demands. Considerations of how each assessment may tap different metacognitive constructs and the importance of aligning analytical constructs when using multiple assessments are discussed. These results may help lead to improvements in metacognition assessment and may provide insights into designs of effective metacognitive instruction.

  2. Safety risk assessment using analytic hierarchy process (AHP) during planning and budgeting of construction projects.

    PubMed

    Aminbakhsh, Saman; Gunduz, Murat; Sonmez, Rifat

    2013-09-01

    The inherent and unique risks on construction projects quite often present key challenges to contractors. Health and safety risks are among the most significant risks in construction projects since the construction industry is characterized by a relatively high injury and death rate compared to other industries. In construction project management, safety risk assessment is an important step toward identifying potential hazards and evaluating the risks associated with the hazards. Adequate prioritization of safety risks during risk assessment is crucial for planning, budgeting, and management of safety related risks. In this paper, a safety risk assessment framework is presented based on the theory of cost of safety (COS) model and the analytic hierarchy process (AHP). The main contribution of the proposed framework is that it presents a robust method for prioritization of safety risks in construction projects to create a rational budget and to set realistic goals without compromising safety. The framework provides a decision tool for the decision makers to determine the adequate accident/injury prevention investments while considering the funding limits. The proposed safety risk framework is illustrated using a real-life construction project and the advantages and limitations of the framework are discussed. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
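    At the core of any AHP-based prioritization is the derivation of priority weights from a pairwise comparison matrix plus a check of judgment consistency. The sketch below shows the standard principal-eigenvector computation with Saaty's consistency ratio; the risk names and pairwise judgments are hypothetical, and the paper's COS-based framework layers further steps on top of this.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio (CR). Generic AHP sketch."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.24)  # Saaty's random index
    return w, ci / ri

# Hypothetical judgments for three safety risks (falls, struck-by, electrocution)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))  # weights sum to 1; CR < 0.1 is acceptable
```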

  3. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior.

    PubMed

    Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D

    2016-08-01

    Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health. Copyright © 2016 Elsevier Inc. All rights reserved.
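    The computational core of a meta-analytic path analysis is fitting regressions to a pooled correlation matrix. The sketch below illustrates this for theory of planned behavior constructs by solving the normal equations on a correlation matrix to obtain standardized paths; the correlations are invented for illustration and are not the values reported by the authors.

```python
import numpy as np

# Hypothetical pooled (meta-analytic) correlations among TPB constructs:
# attitude (ATT), subjective norm (SN), perceived behavioral control (PBC),
# and intention (INT) -- illustrative values only.
names = ["ATT", "SN", "PBC", "INT"]
R = np.array([[1.00, 0.35, 0.30, 0.50],
              [0.35, 1.00, 0.25, 0.35],
              [0.30, 0.25, 1.00, 0.40],
              [0.50, 0.35, 0.40, 1.00]])

# Standardized paths for INT regressed on ATT, SN, PBC:
# beta = Rxx^{-1} rxy (normal equations on a correlation matrix).
Rxx, rxy = R[:3, :3], R[:3, 3]
beta = np.linalg.solve(Rxx, rxy)
r2 = rxy @ beta                       # variance in INT explained by the predictors
print(dict(zip(names[:3], np.round(beta, 3))), "R^2 =", round(r2, 3))
```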

  4. Cotinine analytical workshop report: consideration of analytical methods for determining cotinine in human body fluids as a measure of passive exposure to tobacco smoke.

    PubMed Central

    Watts, R R; Langone, J J; Knight, G J; Lewtas, J

    1990-01-01

    A two-day technical workshop was convened November 10-11, 1986, to discuss analytical approaches for determining trace amounts of cotinine in human body fluids resulting from passive exposure to environmental tobacco smoke (ETS). The workshop, jointly sponsored by the U.S. Environmental Protection Agency and Centers for Disease Control, was attended by scientists with expertise in cotinine analytical methodology and/or conduct of human monitoring studies related to ETS. The workshop format included technical presentations, separate panel discussions on chromatography and immunoassay analytical approaches, and group discussions related to the quality assurance/quality control aspects of future monitoring programs. This report presents a consensus of opinion on general issues before the workshop panel participants and also a detailed comparison of several analytical approaches being used by the various represented laboratories. The salient features of the chromatography and immunoassay analytical methods are discussed separately. PMID:2190812

  5. Capture of free-floating planets by planetary systems

    NASA Astrophysics Data System (ADS)

    Goulinski, Nadav; Ribak, Erez N.

    2018-01-01

    Evidence of exoplanets with orbits that are misaligned with the spin of the host star may suggest that not all bound planets were born in the protoplanetary disc of their current planetary system. Observations have shown that free-floating Jupiter-mass objects can exceed the number of stars in our Galaxy, implying that capture scenarios may not be so rare. To address this issue, we construct a three-dimensional simulation of a three-body scattering between a free-floating planet and a star accompanied by a Jupiter-mass bound planet. We distinguish between three different possible scattering outcomes, where the free-floating planet may get weakly captured after the brief interaction with the binary, remain unbound or 'kick out' the bound planet and replace it. The simulation was performed for different masses of the free-floating planets and stars, as well as different impact parameters, inclination angles and approach velocities. The outcome statistics are used to construct an analytical approximation of the cross-section for capturing a free-floating planet by fitting their dependence on the tested variables. The analytically approximated cross-section is used to predict the capture rate for these kinds of objects, and to estimate that about 1 per cent of all stars are expected to experience a temporary capture of a free-floating planet during their lifetime. Finally, we propose additional physical processes that may increase the capture statistics and whose contribution should be considered in future simulations in order to determine the fate of the temporarily captured planets.
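    The logic of turning outcome statistics into a cross-section is simple: sample impact parameters uniformly over a disc, record the capture fraction, and scale by the disc area. The sketch below illustrates this estimator with a trivial stand-in for the full three-body integration (the `capture_test` callback is a hypothetical placeholder, not the authors' simulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_cross_section(b_max, n_trials, capture_test):
    """sigma = pi * b_max^2 * P(capture), with impact parameters sampled
    uniformly over a disc of radius b_max. `capture_test(b)` stands in for
    the three-body scattering integration."""
    b = b_max * np.sqrt(rng.random(n_trials))  # uniform over the disc area
    captured = np.array([capture_test(bi) for bi in b])
    return np.pi * b_max**2 * captured.mean()

# Toy stand-in: pretend every encounter inside b = 1 leads to capture,
# so the estimate should recover sigma = pi * 1^2.
sigma = capture_cross_section(b_max=5.0, n_trials=10_000,
                              capture_test=lambda b: b < 1.0)
print(round(sigma, 3), "vs analytic", round(np.pi, 3))
```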

  6. Toward a definition of intolerance of uncertainty: a review of factor analytical studies of the Intolerance of Uncertainty Scale.

    PubMed

    Birrell, Jane; Meares, Kevin; Wilkinson, Andrew; Freeston, Mark

    2011-11-01

    Since its emergence in the early 1990s, a narrow but concentrated body of research has developed examining the role of intolerance of uncertainty (IU) in worry, and yet we still know little about its phenomenology. In an attempt to clarify our understanding of this construct, this paper traces the way in which our understanding and definition of IU have evolved throughout the literature. This paper also aims to further our understanding of IU by exploring the latent variables measured by the Intolerance of Uncertainty Scale (IUS; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994). A review of the literature surrounding IU confirmed that current definitions are categorical and lack specificity. A critical review of existing factor analytic studies was carried out in order to determine the underlying factors measured by the IUS. Systematic searches yielded 9 papers for review. Two factors with 12 consistent items emerged throughout the exploratory studies, and the stability of models containing these two factors was demonstrated in subsequent confirmatory studies. It is proposed that these factors represent (i) desire for predictability and an active engagement in seeking certainty, and (ii) paralysis of cognition and action in the face of uncertainty. It is suggested that these factors may represent approach and avoidance responses to uncertainty. Further research is required to confirm the construct validity of these factors and to determine the stability of this structure within clinical samples. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. An analytic-numerical method for the construction of the reference law of operation for a class of mechanical controlled systems

    NASA Astrophysics Data System (ADS)

    Mizhidon, A. D.; Mizhidon, K. A.

    2017-04-01

    An analytic-numerical method for the construction of a reference law of operation for a class of dynamic systems describing vibrations in controlled mechanical systems is proposed. By the reference law of operation of a system, we mean a law of the system motion that satisfies all the requirements for the quality and design features of the system under permanent external disturbances. As disturbances, we consider polyharmonic functions with known amplitudes and frequencies of the harmonics but unknown initial phases. For constructing the reference law of motion, an auxiliary optimal control problem is solved in which the cost function depends on a weighting coefficient. The choice of the weighting coefficient ensures the design of the reference law. Theoretical foundations of the proposed method are given.

  8. Predicting the Development of Analytical and Creative Abilities in Upper Elementary Grades

    ERIC Educational Resources Information Center

    Gubbels, Joyce; Segers, Eliane; Verhoeven, Ludo

    2017-01-01

    In some models, intelligence has been described as a multidimensional construct comprising both analytical and creative abilities. In addition, intelligence is considered to be dynamic rather than static. A structural equation model was used to examine the predictive role of cognitive (visual short-term memory, verbal short-term memory, selective…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendron, R.; Engebrecht, C.

    The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses under standard operating conditions.

  10. Promising Ideas for Collective Advancement of Communal Knowledge Using Temporal Analytics and Cluster Analysis

    ERIC Educational Resources Information Center

    Lee, Alwyn Vwen Yen; Tan, Seng Chee

    2017-01-01

    Understanding ideas in a discourse is challenging, especially in textual discourse analysis. We propose using temporal analytics with unsupervised machine learning techniques to investigate promising ideas for the collective advancement of communal knowledge in an online knowledge building discourse. A discourse unit network was constructed and…

  11. Triple Helix Systems: An Analytical Framework for Innovation Policy and Practice in the Knowledge Society

    ERIC Educational Resources Information Center

    Ranga, Marina; Etzkowitz, Henry

    2013-01-01

    This paper introduces the concept of Triple Helix systems as an analytical construct that synthesizes the key features of university--industry--government (Triple Helix) interactions into an "innovation system" format, defined according to systems theory as a set of components, relationships and functions. Among the components of Triple…

  12. Defining and Measuring Engagement and Learning in Science: Conceptual, Theoretical, Methodological, and Analytical Issues

    ERIC Educational Resources Information Center

    Azevedo, Roger

    2015-01-01

    Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring…

  13. The European Gender Equality Index: Conceptual and Analytical Issues

    ERIC Educational Resources Information Center

    Bericat, Eduardo

    2012-01-01

    This article presents a composite indicator designed to measure and compare existing structural gender equality in the countries of the European Union. The construction of an index is always a complex task which requires making a great many important conceptual, analytical and empirical decisions. This complexity explains the wide variety of…

  14. Construction of a Polyaniline Nanofiber Gas Sensor

    ERIC Educational Resources Information Center

    Virji, Shabnam; Weiller, Bruce H.; Huang, Jiaxing; Blair, Richard; Shepherd, Heather; Faltens, Tanya; Haussmann, Philip C.; Kaner, Richard B.; Tolbert, Sarah H.

    2008-01-01

    The electrical properties of polyaniline change by orders of magnitude upon exposure to analytes such as acids or bases, making it a useful material for detection of these analytes in the gas phase. The objectives of this lab are to synthesize polyaniline nanofibers of different diameters and compare them as sensor materials. In this experiment…

  15. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  16. Data Acquisition Programming (LabVIEW): An Aid to Teaching Instrumental Analytical Chemistry.

    ERIC Educational Resources Information Center

    Gostowski, Rudy

    A course was developed at Austin Peay State University (Tennessee) which offered an opportunity for hands-on experience with the essential components of modern analytical instruments. The course aimed to provide college students with the skills necessary to construct a simple model instrument, including the design and fabrication of electronic…

  17. Understanding Customer Product Choices: A Case Study Using the Analytical Hierarchy Process

    Treesearch

    Robert L. Smith; Robert J. Bush; Daniel L. Schmoldt

    1996-01-01

    The Analytical Hierarchy Process (AHP) was used to characterize the bridge material selection decisions of highway officials across the United States. Understanding product choices by utilizing the AHP allowed us to develop strategies for increasing the use of timber in bridge construction. State Department of Transportation engineers, private consulting engineers, and...

  18. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases.

    PubMed

    Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-05-01

    To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
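    Once data are in the CDM, cohort logic can be written once against the standard tables and reused across databases. The sketch below illustrates this idea with pandas on the standard `condition_occurrence` and `drug_exposure` tables; the concept ids and the specific inclusion rule are hypothetical and are not those of the replicated protocol.

```python
import pandas as pd

def build_cohort(condition_occurrence, drug_exposure,
                 target_condition_ids, index_drug_ids):
    """Reusable cohort query on OMOP CDM tables: patients with a qualifying
    condition who subsequently start an index drug. Table and column names
    follow the CDM; the inclusion rule itself is illustrative only."""
    cond = (condition_occurrence
            [condition_occurrence.condition_concept_id.isin(target_condition_ids)]
            .groupby("person_id", as_index=False).condition_start_date.min())
    drug = (drug_exposure
            [drug_exposure.drug_concept_id.isin(index_drug_ids)]
            .groupby("person_id", as_index=False).drug_exposure_start_date.min())
    cohort = cond.merge(drug, on="person_id")
    return cohort[cohort.drug_exposure_start_date >= cohort.condition_start_date]

# Tiny worked example with hypothetical concept ids: patient 1 qualifies,
# patient 2 started the drug before the condition and is excluded.
conditions = pd.DataFrame({
    "person_id": [1, 2], "condition_concept_id": [201826, 201826],
    "condition_start_date": pd.to_datetime(["2010-01-01", "2012-06-01"])})
drugs = pd.DataFrame({
    "person_id": [1, 2], "drug_concept_id": [1503297, 1503297],
    "drug_exposure_start_date": pd.to_datetime(["2010-03-01", "2012-01-01"])})
print(build_cohort(conditions, drugs, {201826}, {1503297}))
```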

  19. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    NASA Astrophysics Data System (ADS)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (−x, −y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r−1)^3 at the nonzero equilibria, and the mean of x y^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.

  20. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    NASA Astrophysics Data System (ADS)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with the rigorous account for the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, in standard finite-element software, such a coupling with the external field, moreover, in the case of multimode expansion, is generally not provided. This work proposes a hybrid computational scheme that allows realization of such a conjugation using a standard software. The latter is used to construct a set of numerical solutions used as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  1. Modelling shoreline evolution in the vicinity of a groyne and a river

    NASA Astrophysics Data System (ADS)

    Valsamidis, Antonios; Reeve, Dominic E.

    2017-01-01

    Analytical solutions to the equations governing shoreline evolution are well-known and have value both as pedagogical tools and for conceptual design. Nevertheless, solutions have been restricted to a fairly narrow class of conditions with limited applicability to real-life situations. We present a new analytical solution for a widely encountered situation where a groyne is constructed close to a river to control sediment movement. The solution, which employs Laplace transforms, has the advantage that a solution for time-varying conditions may be constructed from the solution for constant conditions by means of the Heaviside procedure. Solutions are presented for various combinations of wave conditions and sediment supply/removal by the river. An innovation introduced in this work is the capability to provide an analytical assessment of the accretion or erosion caused near the groyne due to its proximity to the river which may act either as a source or a sink of sediment material.
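    The governing model behind such solutions is the one-line equation, a diffusion equation for shoreline position with flux boundary conditions at the groyne and at the river. For readers who want a numerical counterpart to the analytical solution, the sketch below integrates that equation with explicit finite differences; all parameter values and the boundary treatment are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# One-line shoreline model: dy/dt = eps * d2y/dx2 on 0 <= x <= L, with a
# no-flux (groyne) boundary at x = 0 and a sediment-supplying river at x = L.
eps = 1.0e-3            # shoreline diffusivity, m^2/s (illustrative)
L, nx = 1000.0, 101     # domain length (m) and grid points
dt, nsteps = 500.0, 2000
dx = L / (nx - 1)
assert eps * dt / dx**2 < 0.5     # explicit-scheme stability criterion

y = np.zeros(nx)                  # initially straight shoreline
slope_river = 0.2                 # dy/dx imposed by the river's sediment supply
for _ in range(nsteps):
    y[1:-1] += dt * eps * (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
    y[0] = y[1]                        # zero alongshore flux at the groyne
    y[-1] = y[-2] + slope_river * dx   # sediment source at the river end
print(f"accretion at the river mouth after {nsteps * dt / 86400:.1f} days: {y[-1]:.2f} m")
```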

  2. A qualitative approach to assessing work ability.

    PubMed

    Tengland, Per-Anders

    2013-01-01

    We often need to be able to assess the extent to which individuals have or lack work ability. For this we need instruments. Most of the instruments available have flaws. They either lack validity or they use roundabout methods when collecting information about the individual's work ability. The aim of this paper is to present a conceptual model for constructing a questionnaire that can be used for assessing work ability. The methods used are philosophical, i.e. analytical and deductive. A conceptual theory is provided, and based on the resulting definition of the concept of "work ability" conclusions are drawn regarding how to assess work ability. When constructing quantitative instruments, we can increase validity through using a more strict definition of work ability. However, such an approach will only solve some of the problems noted above. The proposal is, instead, to create a qualitative questionnaire, founded on a definition of "work ability", which focuses on the concrete problems concerning the work ability of the individual. Finally, a sketch of such an instrument is provided, with questions covering all the relevant aspects of work ability. The qualitative questionnaire proposed is believed to be superior to more traditional (quantitative) instruments for assessing a person's work ability, as well as for finding solutions to her problems concerning work ability.

  3. Evidence for a nonplanar amplituhedron

    DOE PAGES

    Bern, Zvi; Herrmann, Enrico; Litsey, Sean; ...

    2016-06-17

    The scattering amplitudes of planar N = 4 super-Yang-Mills exhibit a number of remarkable analytic structures, including dual conformal symmetry and logarithmic singularities of integrands. The amplituhedron is a geometric construction of the integrand that incorporates these structures. This geometric construction further implies the amplitude is fully specified by constraining it to vanish on spurious residues. By writing the amplitude in a dlog basis, we provide nontrivial evidence that these analytic properties and “zero conditions” carry over into the nonplanar sector. Finally, this suggests that the concept of the amplituhedron can be extended to the nonplanar sector of N = 4 super-Yang-Mills theory.

  4. Studies on the chemical stability and functional group compatibility of the benzoin photolabile safety-catch linker using an analytical construct.

    PubMed

    Cano, Montserrat; Ladlow, Mark; Balasubramanian, Shankar

    2002-01-01

    A chemical stability study of the benzoin photolabile safety-catch linker (BPSC) has been carried out using a dual-linker analytical construct to establish its compatibility with a range of commonly employed solid-phase reaction conditions. As a result of this study, the dithiane-protected benzoin linker was shown to be reactive only toward strong acids and fluoride nucleophiles. Furthermore, a scan of diverse functional groups thought to be unstable toward the safety-catch removal conditions has also been carried out. These data should assist in future utilization of the BPSC for syntheses.

  5. Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.

    PubMed

    Corcoran, Timothy C

    2018-03-01

    In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
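    The essence of the approach can be reproduced in a few lines: build binary filters from rows of a Hadamard matrix, record only the filtered measurements, and recover the scores by least squares against the known loadings. The sketch below does this with synthetic Gaussian loadings; it omits the genetic-algorithm optimization of the filter combinations described in the paper.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)

# Synthetic, heavily overlapped spectral loadings on 64 channels.
n = 64
grid = np.arange(n)
loadings = np.stack([np.exp(-0.5 * ((grid - c) / 6.0) ** 2)
                     for c in (24, 30, 36, 42)])

H = hadamard(n)                       # rows are Walsh-Hadamard functions
filters = (H[1:9] > 0).astype(float)  # 8 binary filters from low-order rows

true_scores = rng.random(4)
spectrum = true_scores @ loadings
measurements = filters @ spectrum     # all the spectrometer records (8 numbers)

# Recover the 4 scores from the 8 compressed measurements by least squares.
A = filters @ loadings.T              # maps scores -> measurements
est = np.linalg.lstsq(A, measurements, rcond=None)[0]
print(np.round(true_scores, 3), np.round(est, 3))
```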

  6. The Timing and Construction of Preference: A Quantitative Study

    ERIC Educational Resources Information Center

    Kendrick, Kobin H.; Torreira, Francisco

    2015-01-01

    Conversation-analytic research has argued that the timing and construction of preferred responding actions (e.g., acceptances) differ from that of dispreferred responding actions (e.g., rejections), potentially enabling early response prediction by recipients. We examined 195 preferred and dispreferred responding actions in telephone corpora and…

  7. Focal Event, Contextualization, and Effective Communication in the Mathematics Classroom

    ERIC Educational Resources Information Center

    Nilsson, Per; Ryve, Andreas

    2010-01-01

    The aim of this article is to develop analytical tools for studying mathematical communication in collaborative activities. The theoretical construct of contextualization is elaborated methodologically in order to study diversity in individual thinking in relation to effective communication. The construct of contextualization highlights issues of…

  8. Quantifying Acoustic Impacts on Marine Mammals and Sea Turtles: Methods and Analytical Approach for Phase III Training and Testing

    DTIC Science & Technology

    2017-06-16

    Quantifying Acoustic Impacts on Marine Mammals and Sea Turtles: Methods and Analytical Approach for Phase III Training and Testing. Sarah A. Blackstock, Joseph O…, December 2017. The report covers the Navy's Phase III Study Areas as described in each Environmental Impact Statement/Overseas Environmental Impact Statement and describes the methods used to quantify acoustic impacts on marine mammals and sea turtles.

  9. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for the self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results to the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
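    For a flavor of the kind of closed-form correction involved, the classic slab self-absorption factor f = (1 − e^(−μt))/(μt) estimates the mean escape fraction of photons born uniformly in a slab. The sketch below evaluates it; the attenuation coefficient is only a rough illustrative value, and the paper's treatment is more detailed than this textbook first-order estimate.

```python
import numpy as np

def slab_self_absorption(mu, t):
    """Mean escape fraction for photons emitted uniformly within a slab of
    thickness t (cm) and linear attenuation coefficient mu (1/cm), viewed
    normal to the face: f = (1 - exp(-mu*t)) / (mu*t)."""
    x = mu * t
    return (1.0 - np.exp(-x)) / x

# Illustrative: Cs-137 (662 keV) photons in concrete, mu ~ 0.17 1/cm (rough value)
for t_cm in (1.0, 5.0, 20.0):
    f = slab_self_absorption(0.17, t_cm)
    print(f"t = {t_cm:5.1f} cm  ->  self-absorption factor {f:.3f}")
```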

  10. Construction of Hamiltonians by supervised learning of energy and entanglement spectra

    NASA Astrophysics Data System (ADS)

    Fujita, Hiroyuki; Nakagawa, Yuya O.; Sugiura, Sho; Oshikawa, Masaki

    2018-02-01

    Correlated many-body problems ubiquitously appear in various fields of physics, such as condensed matter, nuclear, and statistical physics. However, due to the interplay of the large number of degrees of freedom, it is generically impossible to treat these problems from first principles. Thus the construction of a proper model, namely an effective Hamiltonian, is essential. Here, we propose a simple supervised learning algorithm for constructing Hamiltonians from given energy or entanglement spectra. We apply the proposed scheme to the Hubbard model at half-filling, and compare the obtained effective low-energy spin model with several analytic results based on high-order perturbation theory, which have been inconsistent with each other. We also show that our approach can be used to construct the entanglement Hamiltonian of a quantum many-body state from its entanglement spectrum. We exemplify this using the ground states of the S = 1/2 two-leg Heisenberg ladders. We observe a qualitative difference between the entanglement Hamiltonians of the two phases (the Haldane phase and the rung-singlet phase) of the model due to the different origins of the entanglement. In the Haldane phase, we find that the entanglement Hamiltonian is nonlocal by nature, and that locality can be restored by introducing anisotropy and turning the ground state into the large-D phase. Possible applications to model construction from experimental data and to various problems of strongly correlated systems are discussed.

  11. The Global Experience of Deployment of Energy-Efficient Technologies in High-Rise Construction

    NASA Astrophysics Data System (ADS)

    Potienko, Natalia D.; Kuznetsova, Anna A.; Solyakova, Darya N.; Klyueva, Yulia E.

    2018-03-01

    The objective of this research is to examine issues related to the increasing importance of energy-efficient technologies in high-rise construction. The aim of the paper is to investigate modern approaches to building design that involve implementation of various energy-saving technologies in diverse climates and at different structural levels, including the levels of urban development, functionality, planning, construction and engineering. The research methodology is based on the comprehensive analysis of the advanced global expertise in the design and construction of energy-efficient high-rise buildings, with the examination of their positive and negative features. The research also defines the basic principles of energy-efficient architecture. Besides, it draws parallels between the climate characteristics of countries that lead in the field of energy-efficient high-rise construction, on the one hand, and the climate in Russia, on the other, which makes it possible to use the vast experience of many countries, wholly or partially. The paper also gives an analytical review of the results arrived at by implementing energy efficiency principles into high-rise architecture. The study findings determine the impact of energy-efficient technologies on high-rise architecture and planning solutions. In conclusion, the research states that, apart from aesthetic and compositional interpretation of architectural forms, an architect nowadays has to address the task of finding a synthesis between technological and architectural solutions, which requires knowledge of advanced technologies. The study findings reveal that the implementation of modern energy-efficient technologies into high-rise construction is of immediate interest and is sure to bring long-term benefits.

  12. Selectively Sized Graphene-Based Nanopores for in Situ Single Molecule Sensing

    PubMed Central

    2015-01-01

    The use of nanopore biosensors is set to be extremely important in developing precise single-molecule detectors and providing highly sensitive advanced analysis of biological molecules. The precise tailoring of nanopore size is a significant step toward achieving this, as it would allow a nanopore to be tuned to a corresponding analyte. The work presented here details a methodology for selectively opening nanopores in real time. The tunable nanopores on a quartz nanopipette platform are fabricated using the electroetching of a graphene-based membrane constructed from individual graphene nanoflakes (ø ∼30 nm). The device design allows for in situ opening of the graphene membrane, from fully closed to fully opened (ø ∼25 nm), a feature that has yet to be reported in the literature. The translocation of DNA is studied as the pore size is varied, allowing subfeatures of DNA to be detected with slower DNA translocations at smaller pore sizes, and trends to be observed as the pore is opened. This approach opens the door to creating a device that can be targeted to detect specific analytes. PMID:26204996

  13. End-point detection in potentiometric titration by continuous wavelet transform.

    PubMed

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfection, as well as random noise or spikes, has no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. In the case of noisy or badly shaped curves, however, the presented approach performs well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
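    A minimal stand-in for the idea: convolving the titration curve with a first-derivative-of-Gaussian kernel (a single CWT scale) produces a response that peaks at the curve's inflection point, i.e., the end-point. The sketch below uses this generic kernel rather than the authors' dedicated mother wavelet, on synthetic noisy data.

```python
import numpy as np

# Synthetic sigmoidal titration curve with noise; true end-point at 12.50 mL.
v = np.linspace(0.0, 20.0, 401)                  # titrant volume, mL
rng = np.random.default_rng(2)
emf = 120.0 * np.tanh((v - 12.5) / 0.4) + rng.normal(0.0, 1.0, v.size)

# First-derivative-of-Gaussian kernel: one fixed wavelet scale.
scale = 10                                       # kernel half-width, in samples
k = np.arange(-3 * scale, 3 * scale + 1)
kernel = -k * np.exp(-0.5 * (k / scale) ** 2)

response = np.convolve(emf, kernel, mode="same")
interior = slice(3 * scale, v.size - 3 * scale)  # ignore convolution edge effects
end_point = v[interior][np.argmax(np.abs(response[interior]))]
print(f"detected end-point: {end_point:.2f} mL (true: 12.50 mL)")
```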

  14. A ricin forensic profiling approach based on a complex set of biomarkers.

    PubMed

    Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister

    2018-08-15

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data was collected by use of a range of analytical methods and robust orthogonal partial least squares-discriminant analysis- models (OPLS-DA) were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
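    As a simplified analogue of the discriminant-modeling step, the sketch below fits a PLS-DA model on synthetic biomarker profiles with scikit-learn (which provides plain PLS rather than OPLS); the class structure, marker count, and decision threshold are all invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Two hypothetical preparation methods, separated in a 12-marker panel.
n_per_class, n_markers = 20, 12
X = np.vstack([rng.normal(mu, 1.0, (n_per_class, n_markers))
               for mu in (0.0, 1.5)])
y = np.repeat([0, 1], n_per_class)   # class labels coded 0/1

pls = PLSRegression(n_components=2).fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)  # threshold the PLS score
print("training accuracy:", (pred == y).mean())
```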

  15. A Discrete Event Simulation Model for Evaluating the Performances of an M/G/C/C State Dependent Queuing System

    PubMed Central

    Khalid, Ruzelan; M. Nawawi, Mohd Kamal; Kawsar, Luthful A.; Ghani, Noraida A.; Kamil, Anton A.; Mustafa, Adli

    2013-01-01

    M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern Discrete Event Simulation (DES) software. We designed an approach to overcome this limitation and used it to construct the M/G/C/C state-dependent queuing model in Arena software. Using the model, we have evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates where the simulation results fluctuate drastically across replications, causing the simulation results and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both tabular and graphical forms, together with scientific justifications, are documented and discussed. PMID:23560037
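    For intuition on the analytical side of such comparisons, the M/M/c/c loss system (the classic special case without state-dependent service rates) admits a stable recursion for its blocking probability. A sketch, assuming Poisson arrivals, exponential service, and c servers:

```python
def erlang_b(c, offered_load):
    """Blocking probability of an M/M/c/c loss system via the Erlang-B
    recursion B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1. The paper's
    M/G/C/C state-dependent networks generalize this special case."""
    b = 1.0
    for k in range(1, c + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Example: 10 servers, offered load a = lambda/mu = 7 Erlangs
print(round(erlang_b(10, 7.0), 4))
```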

  16. Steady-state groundwater recharge in trapezoidal-shaped aquifers: A semi-analytical approach based on variational calculus

    NASA Astrophysics Data System (ADS)

    Mahdavi, Ali; Seyyedian, Hamid

    2014-05-01

    This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to areal diffusive recharge. The aquifer is homogeneous, anisotropic, and interacts with four surrounding streams of constant head. The flow field in this laterally bounded aquifer system is efficiently constructed by means of variational calculus, accomplished by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of the boundary lines. Extension to the case of a triangular-shaped aquifer with or without an impervious boundary line is also demonstrated through a hypothetical example problem. The present solution benefits from an extremely simple mathematical expression and exhibits close agreement with numerical results obtained from MODFLOW. Overall, the solution may be used to conduct sensitivity analyses on the various hydrogeological parameters that affect water table variation in aquifers defined on trapezoidal or triangular-shaped domains.

  17. Selecting Health Care Improvement Projects: A Methodology Integrating Cause-and-Effect Diagram and Analytical Hierarchy Process.

    PubMed

    Testik, Özlem Müge; Shaygan, Amir; Dasdemir, Erdi; Soydan, Guray

    It is often vital to identify, prioritize, and select quality improvement projects in a hospital, yet a methodology that utilizes the opinions of experts with different points of view is needed for better decision making. The proposed methodology utilizes the cause-and-effect diagram to identify improvement projects and construct a project hierarchy for a problem. The right improvement projects are then prioritized and selected using a weighting scheme of the analytical hierarchy process, aggregating the experts' opinions. An approach for collecting data from experts and a graphical display for summarizing the obtained information are also provided. The methodology is implemented for improving a hospital appointment system. The two top-ranked major project categories for improvement were identified to be system- and accessibility-related causes (45%) and capacity-related causes (28%), respectively. For each major project category, subprojects were then ranked to select the improvement needs. The methodology is useful in cases where an aggregate decision based on experts' opinions is expected. Some suggestions for practical implementation are provided.
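
    The weighting step of the analytic hierarchy process reduces to an eigenvector computation. The sketch below derives a priority vector and a consistency check from a pairwise comparison matrix; the judgments in the matrix are hypothetical, not the hospital data from the paper.

        import numpy as np

        # Hypothetical pairwise comparison matrix for three candidate
        # projects on Saaty's 1-9 scale (A[i, j] = importance of i over j).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                         # priority vector (project weights)

        n = A.shape[0]
        CI = (vals[k].real - n) / (n - 1)    # consistency index
        CR = CI / 0.58                       # random index RI = 0.58 for n = 3
        print(w, CR)                         # CR < 0.10 is conventionally acceptable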

  18. Recursively constructing analytic expressions for equilibrium distributions of stochastic biochemical reaction networks.

    PubMed

    Meng, X Flora; Baetica, Ania-Ariadna; Singhal, Vipul; Murray, Richard M

    2017-05-01

    Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model of Kolmogorov forward equations that describe how the probability distribution of a chemically reacting system varies with time. Finding analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterize state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing the stationary behaviour of stochastic biochemical reaction networks. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and biochemical reactions with two-dimensional state spaces. © 2017 The Author(s).
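
    For one of the simplest state spaces the paper covers, the analytic stationary solution of the CME can be checked numerically. The sketch below treats a reversible monomolecular reaction A <-> B with N total molecules, whose state space is a path; rate constants are hypothetical, and the known binomial stationary distribution serves as the analytic reference.

        import numpy as np
        from math import comb

        N, k1, k2 = 10, 2.0, 3.0   # total molecules, A->B and B->A rate constants

        # CME generator for A <-> B; state n = copies of A (a path-shaped space)
        Q = np.zeros((N + 1, N + 1))
        for n in range(N + 1):
            if n > 0:                  # reaction A -> B fires at rate k1*n
                Q[n, n - 1] = k1 * n
            if n < N:                  # reaction B -> A fires at rate k2*(N - n)
                Q[n, n + 1] = k2 * (N - n)
            Q[n, n] = -Q[n].sum()

        # Stationary distribution: null vector of Q^T, normalized
        w, v = np.linalg.eig(Q.T)
        pi = np.abs(v[:, np.argmin(np.abs(w))].real)
        pi /= pi.sum()

        # Analytic answer for this path: binomial(N, p) with p = k2/(k1 + k2)
        p = k2 / (k1 + k2)
        pi_exact = np.array([comb(N, n) * p**n * (1 - p)**(N - n)
                             for n in range(N + 1)])
        print(np.allclose(pi, pi_exact))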

  19. Recursively constructing analytic expressions for equilibrium distributions of stochastic biochemical reaction networks

    PubMed Central

    Baetica, Ania-Ariadna; Singhal, Vipul; Murray, Richard M.

    2017-01-01

    Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model of Kolmogorov forward equations that describe how the probability distribution of a chemically reacting system varies with time. Finding analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterize state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing the stationary behaviour of stochastic biochemical reaction networks. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and biochemical reactions with two-dimensional state spaces. PMID:28566513

  20. Visual analytics for aviation safety: A collaborative approach to sensemaking

    NASA Astrophysics Data System (ADS)

    Wade, Andrew

    Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis combines subject matter experts (SMEs) and tool experts (TEs) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.

  1. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    NASA Astrophysics Data System (ADS)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the acoustic medium effect on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages under the so-called “thin layer hypothesis”. Thus, the shell–wave interaction model, defined by integro-differential dynamic equations with analytically determined kernels of the integral operators, becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here the modified theory of the I. N. Vekua–A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, Lagrangian density, and constraint equations following from the boundary conditions “shifted” from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and the surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton–de Donder–Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is briefly described here.

  2. Ultrasonic assessment of service life of concrete structures subject to reinforcing steel corrosion

    NASA Astrophysics Data System (ADS)

    Udegbunam, Ogechukwu Christian

    Over half of the bridges in the United States were built before 1970. Such bridges and the network of roads that they carry include the Interstate system, which was built as part of the great public works program following the end of the Second World War. During that era, the emphasis was on strength design and economical construction of new structures, and not much premium was placed on durability and maintainability concerns. Since the end of this construction boom in the early 1970s, concern for the durability of transportation infrastructure has steadily gained prominence among the agencies that must secure, program, and administer funds for maintaining highway networks. The objective of this research was to develop a nondestructive method of assessing the durability of concrete bridge decks susceptible to damage from corrosion of embedded reinforcing steel. This was accomplished by formulating a holistic approach that accounts for the major factors that influence corrosion-based deterioration of reinforced concrete. In this approach, the assessment of the durability of concrete bridge decks is based on a model that estimates the time it takes for the cover concrete to fail as a result of stresses caused by the expansion of reinforcing steel bars due to corrosion. This time to failure comprises two distinct periods that must be evaluated before the problem can be solved. The research consisted of an experimental program and an analytical study. In the experimental program, concrete specimens were cast and tested to determine their diffusivity and mechanical properties. The diffusivity was used to evaluate the period it takes for corrosion of the reinforcing bars to commence. In the analytical study, the resistance of the concrete structure against the internal forces caused by corrosion was evaluated with finite element techniques. This resistance was used to evaluate the period defining the failure of the cover concrete. These two periods were then used to determine the service life of the structure.
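
    The first of the two periods, the corrosion initiation time, is commonly estimated from Fickian chloride diffusion. A minimal sketch under that standard assumption (the paper's own diffusivity values are not given; the numbers below are purely illustrative):

        from scipy.special import erfcinv

        def initiation_years(cover_cm, D_cm2_s, Cs, Ccr):
            # Fick's second law gives C(x, t) = Cs * erfc(x / (2*sqrt(D*t)));
            # corrosion initiates when the chloride level at the rebar depth
            # reaches the critical threshold Ccr.
            z = erfcinv(Ccr / Cs)
            t_s = cover_cm ** 2 / (4.0 * D_cm2_s * z ** 2)
            return t_s / (365.25 * 24 * 3600)

        # Illustrative values: 5 cm cover, D = 1e-8 cm^2/s, Ccr/Cs = 0.1
        print(initiation_years(5.0, 1e-8, Cs=1.0, Ccr=0.1))   # roughly 15 years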

  3. Climate Proxies: An Inquiry-Based Approach to Discovering Climate Change on Antarctica

    NASA Astrophysics Data System (ADS)

    Wishart, D. N.

    2016-12-01

    An attractive way to advance climate literacy in higher education is to emphasize its relevance while teaching climate change across the curriculum to science majors and non-science majors. An inquiry-based pedagogical approach was used to engage five groups of students on a "Polar Discovery Project" aimed at interpreting the paleoclimate history of ice cores from Antarctica. Learning objectives and student learning outcomes were clearly defined. Students were assigned several exercises ranging from examination of Antarctic topography to the application of physical and chemical measurements as proxies for climate change. Required materials included base and topographic maps of Antarctica; graph sheets for construction of topographic cross-sectional profiles from profile lines of the Western Antarctica Ice Sheet (WAIS) Divide and East Antarctica; high-resolution photographs of Antarctic ice cores; stratigraphic columns of ice cores; borehole and glaciochemical data (i.e., anions, cations, δ18O, δD, etc.); and isotope data on greenhouse gases (CH4, O2, N2) extracted from gas bubbles in ice cores. The methodology was to engage students in (1) constructing topographic profiles; (2) suggesting directions for ice flow based on simple physics; (3) formulating decisions on suitable locations for drilling ice cores; (4) examining visual ice stratigraphy, including ice-layer counting; (5) observing any insoluble particles (i.e., meteoritic and volcanic material); (6) analyzing borehole temperature profiles; and (7) interpreting several datasets to derive a paleoclimate history of these areas of the continent. The overall goal of the project was to improve the students' analytical and quantitative skills, their ability to evaluate relationships between physical and chemical properties in ice cores, and their understanding of the impending consequences of climate change while engaging science, technology, engineering and mathematics (STEM). Student learning outcomes were assessed at the completion of the "Polar Discovery Project" for curiosity, analytical strength, creativity, group collaboration, problem-solving, innovation, and interest in climate change and the implications of its effects on polar regions.

  4. On the Gibbs phenomenon 1: Recovering exponential accuracy from the Fourier partial sum of a non-periodic analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve

    1992-01-01

    It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated series converges only slowly, with O(1) oscillations near the boundary; this is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
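
    The slow convergence is easy to reproduce numerically. The fragment below sums the truncated Fourier series of f(x) = x, which is analytic but whose periodic extension jumps at the endpoints, and shows that the maximum-norm error does not decay with N (the paper's Gegenbauer reconstruction, not sketched here, is what recovers exponential accuracy):

        import numpy as np

        # f(x) = x is analytic on [-pi, pi] but its periodic extension jumps,
        # so the truncated Fourier series converges slowly in the max norm.
        x = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, 4001)
        for N in (8, 32, 128):
            # Fourier series of x: 2 * sum_{k>=1} (-1)^(k+1) sin(kx) / k
            S = sum(2 * (-1) ** (k + 1) * np.sin(k * x) / k
                    for k in range(1, N + 1))
            print(N, np.max(np.abs(S - x)))   # O(1) error persists near the ends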

  5. The Separation of Between-person and Within-person Components of Individual Change Over Time: A Latent Curve Model with Structured Residuals

    PubMed Central

    Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.

    2014-01-01

    Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that is not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798

  6. Constructing the collective unconscious.

    PubMed

    Gullatz, Stefan

    2010-11-01

    Innovative attempts at collating Jungian analytical psychology with a range of 'post-modern' theories have yielded significant results. This paper adopts an alternative strategy: a Lacanian vantage point on Jungian theory that eschews an attempt at reconciling Jung with post-structuralism. A focused Lacanian gaze on Jung will establish an irreducible tension between Jung's view of archetypes as factors immanent to the psyche and a Lacanian critique that lays bare the contingent structures and mechanisms of their constitution, unveiling the supposed archetypes' a posteriori production through the efficacy of a discursive field. Theories of ideology developed in the wake of Lacan provide a powerful methodological tool that allows us to bring this distinction into focus. An assembly of Lacan's fragmentary accounts of Jung will be supplemented with an approach to Jungian theory via Žižek's Lacan-oriented theory of the signifying mechanism underpinning 'ideology'. Accordingly, the Jungian archetype of the self, which is considered in some depth, can begin to be seen in a new light, namely as a 'master signifier', not only of Jung's academic edifice, but also, and initially, of the discursive strategies that establish his own subjectivity. A discussion of Jung's approach to mythology reveals how the 'quilting point' of his discourse comes to be coupled with a correlate in the Real, a non-discursive 'sublime object' conferring upon archetypes their fascinating aura. © 2010, The Society of Analytical Psychology.

  7. The generation of criteria for selecting analytical tools for landscape management

    Treesearch

    Marilyn Duffey-Armstrong

    1979-01-01

    This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...

  8. Measuring physical and mental health using the SF-12: implications for community surveys of mental health.

    PubMed

    Windsor, Timothy D; Rodgers, Bryan; Butterworth, Peter; Anstey, Kaarin J; Jorm, Anthony F

    2006-09-01

    The effects of using different approaches to scoring the SF-12 summary scales of physical and mental health were examined with a view to informing the design and interpretation of community-based survey research. Data from a population-based study of 7485 participants in three cohorts aged 20-24, 40-44 and 60-64 years were used to examine relationships among measures of physical and mental health calculated from the same items using the SF-12 and RAND-12 approaches to scoring, and other measures of chronic physical conditions and psychological distress. A measure of physical health constructed using the RAND-12 scoring showed a monotonic negative association with psychological distress as measured by the Goldberg depression and anxiety scales. However, a non-monotonic association was evident in the relationship between SF-12 physical health scores and distress, with very high SF-12 physical health scores corresponding with high levels of distress. These relationships highlight difficulties in interpretation that can arise when using the SF-12 summary scales in some analytical contexts. It is recommended that community surveys that measure physical and mental functioning using the SF-12 items generate summary scores using the RAND-12 protocol in addition to the SF-12 approach. In general, researchers should be wary of using factor scores based on orthogonal rotation, which assumes that measures are uncorrelated, to represent constructs that have an actual association.

  9. An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.

    PubMed

    Zhang, Qiang; Shi, Shengjun; Chen, Weishan

    2016-03-01

    An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin-type transducer composed of an exponential horn, four groups of PZT ceramics, and a back beam. The exponential horn can focus the vibration energy and can efficiently enlarge the vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and an electromechanical coupling model is subsequently constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches the peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. Finite element (FEM) and experimental methods are used to validate the developed analytical model. Two FEM models (in Group A the center bolt is not considered, while in Group B it is) are constructed and separately compared with the analytical model and the experimental model. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical, FEM, and experimental results are presented, and the results show good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.
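
    The quadratic interpolation step amounts to fitting a parabola through three sampled points and taking its vertex. A minimal sketch (the coupling-coefficient samples below are hypothetical, chosen only to land near the optimum reported in the abstract):

        def quad_peak(x, y):
            # Vertex of the parabola through (x1,y1), (x2,y2), (x3,y3) --
            # one step of successive parabolic interpolation.
            (x1, x2, x3), (y1, y2, y3) = x, y
            num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
            den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
            return x2 - 0.5 * num / den

        # Hypothetical coupling-coefficient samples vs. excitation position L (mm)
        print(quad_peak((20.0, 22.5, 25.0), (0.420, 0.426, 0.419)))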

  10. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.

  11. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims to show that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it a control strategy can be set.

  12. A Shoebox Polarimeter: An Inexpensive Analytical Tool for Teachers and Students

    ERIC Educational Resources Information Center

    Mehta, Akash; Greenbowe, Thomas J.

    2011-01-01

    A polarimeter can determine the optical activity of an organic or inorganic compound by providing information about the optical rotation of plane-polarized light when transmitted through that compound. This "Journal" has reported various construction methods for polarimeters. We report a unique construction using a shoebox, recycled office…

  13. Testing Crites' Model of Career Maturity: A Hierarchical Strategy.

    ERIC Educational Resources Information Center

    Wallbrown, Fred H.; And Others

    1986-01-01

    Investigated the construct validity of Crites' model of career maturity and the Career Maturity Inventory (CMI). Results from a nationwide sample of adolescents, using hierarchical factor analytic methodology, indicated confirmatory support for the multidimensionality of Crites' model of career maturity, and the construct validity of the CMI as a…

  14. Language and Social Identity Construction: A Study of a Russian Heritage Language Orthodox Christian School

    ERIC Educational Resources Information Center

    Moore, Ekaterina Leonidovna

    2012-01-01

    Grounded in discourse analytic and language socialization paradigms, this dissertation examines issues of language and social identity construction in children attending a Russian Heritage Language Orthodox Christian Saturday School in California. By conducting micro-analysis of naturally-occurring talk-in-interaction combined with longitudinal…

  15. The Construction of the Teacher's Authority in Pedagogic Discourse

    ERIC Educational Resources Information Center

    Wenren, Xing

    2014-01-01

    This article examines the discursive construction of the authoritative identity of teachers in relation to a number of issues in the classroom context, including identity negotiation, pedagogic discourse and teacher-student power relationship. A variety of classroom teacher talks are analyzed from a discourse analytical perspective, revealing the…

  16. Metaphor, Multiplicative Meaning and the Semiotic Construction of Scientific Knowledge

    ERIC Educational Resources Information Center

    Liu, Yu; Owyong, Yuet See Monica

    2011-01-01

    Scientific discourse is characterized by multi-semiotic construction and the resultant semantic expansions. To date, there remains a lack of analytical methods to explicate the multiplicative nature of meaning. Drawing on the theories of systemic functional linguistics, this article examines the meaning-making processes across language and…

  17. The Teachers' Perspective on Teacher Professional Development Evaluation

    ERIC Educational Resources Information Center

    Chen, Yu-Fen

    2013-01-01

    This study constructs indicators and weights that can be used in the professional development evaluation (PDE) of elementary school teachers. The indicators were constructed using data collected from literature reviews, interviews with experts, and questionnaire surveys. The Fuzzy Analytic Hierarchy Process (FAHP) was used to analyze the collected…

  18. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various kinds of physicochemical data, owing to the rapid development of advanced analytical techniques that deliver much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, e.g., the forensic sciences. In the present study, microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g., bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in the forensic sciences, using the likelihood ratio (LR) approach. However, for the proper construction of LR models for highly multivariate data such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of the data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and of the variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
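
    Once the spectra are compressed to one or a few scores, a univariate LR has a simple normal form. The sketch below shows one common simplification (same-source numerator vs. background-population denominator); the score values and variance parameters are hypothetical, not from the paper's paint or plastic data.

        from scipy.stats import norm

        def lr_univariate(y_control, y_recovered, sd_within, mu_pop, sd_between):
            # Numerator: both items share a source (their difference reflects
            # only within-source variation); denominator: the recovered item
            # is drawn from the background population.
            num = norm.pdf(y_recovered, loc=y_control,
                           scale=(2 * sd_within ** 2) ** 0.5)
            den = norm.pdf(y_recovered, loc=mu_pop,
                           scale=(sd_between ** 2 + sd_within ** 2) ** 0.5)
            return num / den

        # Hypothetical LDA-compressed spectral score for a paint comparison
        print(lr_univariate(y_control=1.20, y_recovered=1.25,
                            sd_within=0.05, mu_pop=0.0, sd_between=1.0))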

  19. Formal and physical equivalence in two cases in contemporary quantum physics

    NASA Astrophysics Data System (ADS)

    Fraser, Doreen

    2017-08-01

    The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation, a mathematical transformation that takes the time variable t to negative imaginary time (t → −it), was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.

  20. A case history: from traumatic repetition towards psychic representability.

    PubMed

    Bichi, Estela L

    2008-06-01

    This paper is devoted principally to a case history concerning an analytic process extending over a period of almost ten years. The patient is B, who consulted the author after a traumatic episode. Although that was her reason for commencing treatment, a history of previous traumatogenic situations, including a rape during her adolescence, subsequently came to light. The author describes three stages of the treatment, reflected in three different settings in accordance with the work done by both patient and analyst in enabling B to own and work through her infantile and adult traumatic experiences. The process of transformation of traumatic traces lacking psychic representation, which was undertaken by both members of the analytic couple from the beginning of the treatment, was eventually approached in a particular way on the basis of their respective creative capacities, which facilitated the patient's psychic progress towards representability and the possibility of working through the experiences of the past. Much of the challenge of this case involved the analyst's capacity to maintain and at the same time consolidate her analytic posture within her internal setting, while doing her best to overcome any possible misfit (Balint, 1968) between her own technique and the specific complexities of the individual patient. The account illustrates the alternation of phases, at the beginning of the analysis, of remembering and interpretation on the one hand and of the representational void and construction on the other. In the case history proper and in her detailed summing up, the author refers to the place of the analyst during the analytic process, the involvement of her psychic functioning, and the importance of her capacity to work on and make use of her countertransference and self-analytic introspection, with a view to neutralizing any influence that aspects of her 'real person' might have had on the analytic field and on the complex processes taking place within it.

  1. Required, Practical, or Unnecessary? An Examination and Demonstration of Propensity Score Matching Using Longitudinal Secondary Data

    ERIC Educational Resources Information Center

    Padgett, Ryan D.; Salisbury, Mark H.; An, Brian P.; Pascarella, Ernest T.

    2010-01-01

    The sophisticated analytical techniques available to institutional researchers give them an array of procedures to estimate a causal effect using observational data. But as many quantitative researchers have discovered, access to a wider selection of statistical tools does not necessarily ensure construction of a better analytical model. Moreover,…
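
    For readers who want the mechanics behind the demonstration, a compact propensity score matching sketch on simulated data follows; the helper name, caliper value, and data-generating process are hypothetical, and the true treatment effect is set to 0.5 so the estimate can be checked.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        def psm_att(X, treated, y, caliper=0.05):
            # 1:1 nearest-neighbor matching on the estimated propensity score;
            # returns the average treatment effect on the treated (ATT).
            ps = LogisticRegression(max_iter=1000).fit(X, treated)
            ps = ps.predict_proba(X)[:, 1]
            t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
            nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
            dist, idx = nn.kneighbors(ps[t].reshape(-1, 1))
            keep = dist.ravel() <= caliper   # caliper on score distance
            return np.mean(y[t[keep]] - y[c[idx.ravel()[keep]]])

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 3))
        treated = (rng.random(2000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
        y = 0.5 * treated + X @ np.array([0.4, 0.2, 0.0]) + rng.normal(size=2000)
        print(psm_att(X, treated, y))   # should be near the true effect of 0.5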

  2. Expanding Students' Analytical Frameworks through the Study of Graphic Novels

    ERIC Educational Resources Information Center

    Connors, Sean P.

    2015-01-01

    When teachers work with students to construct a metalanguage that they can draw on to describe and analyze graphic novels, and then invite students to apply that metalanguage in the service of composing multimodal texts of their own, teachers broaden students' analytical frameworks. In the process of doing so, teachers empower students. In this…

  3. A Comprehensive Microfluidics Device Construction and Characterization Module for the Advanced Undergraduate Analytical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Piunno, Paul A. E.; Zetina, Adrian; Chu, Norman; Tavares, Anthony J.; Noor, M. Omair; Petryayeva, Eleonora; Uddayasankar, Uvaraj; Veglio, Andrew

    2014-01-01

    An advanced analytical chemistry undergraduate laboratory module on microfluidics that spans 4 weeks (4 h per week) is presented. The laboratory module focuses on comprehensive experiential learning of microfluidic device fabrication and the core characteristics of microfluidic devices as they pertain to fluid flow and the manipulation of samples.…

  4. Teachers as Producers of Data Analytics: A Case Study of a Teacher-Focused Educational Data Science Program

    ERIC Educational Resources Information Center

    McCoy, Chase; Shih, Patrick C.

    2016-01-01

    Educational data science (EDS) is an emerging, interdisciplinary research domain that seeks to improve educational assessment, teaching, and student learning through data analytics. Teachers have been portrayed in the EDS literature as users of pre-constructed data dashboards in educational technologies, with little consideration given to them as…

  5. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  6. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the factors affecting the capacity factor; moreover, several other merits of wind turbine performance may be derived from it. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve and requires only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
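
    The core computation in all three approaches is the integral of the power curve against the wind-speed density. A minimal sketch for Rayleigh winds with a normalized cubic-below-rated power curve (the cut-in, rated, and cut-out speeds are illustrative, not the paper's case-study turbine):

        import numpy as np

        def capacity_factor(v_mean, v_in=3.0, v_rated=12.0, v_out=25.0):
            # Integrate a normalized power curve (cubic below rated power)
            # against the Rayleigh wind-speed density with the given mean.
            v = np.linspace(0.0, 40.0, 4001)
            sigma2 = (2.0 / np.pi) * v_mean ** 2
            pdf = (v / sigma2) * np.exp(-v ** 2 / (2 * sigma2))
            p = np.where((v >= v_in) & (v < v_rated),
                         (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3), 0.0)
            p = np.where((v >= v_rated) & (v <= v_out), 1.0, p)
            return np.sum(p * pdf) * (v[1] - v[0])   # mean normalized power

        for vm in (5.0, 7.0, 9.0):
            print(vm, round(capacity_factor(vm), 3))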

  7. A New Analytic Framework for Moderation Analysis --- Moving Beyond Analytic Interactions

    PubMed Central

    Tang, Wan; Yu, Qin; Crits-Christoph, Paul; Tu, Xin M.

    2009-01-01

    Conceptually, a moderator is a variable that modifies the effect of a predictor on a response. Analytically, the common approach used in most moderation analyses is to add analytic interactions involving the predictor and moderator, in the form of cross-variable products, and to test the significance of such terms. The narrow scope of this procedure is inconsistent with the broader conceptual definition of moderation, leading to confusion in the interpretation of study findings. In this paper, we develop a new approach to the analytic procedure that is consistent with the concept of moderation. The proposed framework defines moderation as a process that modifies an existing relationship between the predictor and the outcome, rather than simply as a test of a predictor-by-moderator interaction. The approach is illustrated with data from a real study. PMID:20161453
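
    For reference, the classical product-term procedure that the paper argues is too narrow looks like the following sketch; the simulated effect sizes are arbitrary, chosen only so the interaction test has something to detect.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        x = rng.normal(size=n)          # predictor
        m = rng.normal(size=n)          # moderator
        y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.normal(size=n)

        # Classical moderation test: significance of the cross-product term
        X = sm.add_constant(np.column_stack([x, m, x * m]))
        fit = sm.OLS(y, X).fit()
        print(fit.params)       # intercept, x, m, and the x*m interaction
        print(fit.pvalues[3])   # p-value of the interaction term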

  8. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

    In this paper, a new analytical estimator for light flicker in the frequency domain is proposed, one that is able to take into account the frequency components neglected by the classical methods proposed in the literature. The analytical solutions apply to any stationary signal affected by interharmonic distortion. The proposed estimator is applied to numerous numerical case studies with the goal of showing (i) the correctness of, and the improvements offered by, the analytical approach with respect to other methods in the literature, and (ii) the accuracy of the results compared with those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal-processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.

  9. Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long-duration science missions such as lunar applications, Mars rovers, and deep space missions. For the inherently long lifetimes required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material as well as the virtual absence of understanding of the effects on creep properties of very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministically to evaluate the median creep life using nonlinear finite element analysis, and probabilistically to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. This last element provides high fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes by describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.

  10. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

    This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels, whose out-of-plane shear characteristics depend on these moduli. The moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed using a unit cell model, and three analytical approaches were considered. Analytical calculations showed that two of the approaches provided reasonable predictions of the transverse shear modulus as compared with experimental results. However, the approach based upon classical lamination theory showed large deviations from the experimental data. Numerical simulations also showed a trend similar to that resulting from the analytical models.

  11. How Should We Screen for Depression Following a Natural Disaster? An ROC Approach to Post-Disaster Screening in Adolescents and Adults

    PubMed Central

    Cohen, Joseph R.; Adams, Zachary W.; Menon, Suvarna V.; Youngstrom, Eric A.; Bunnell, Brian E.; Acierno, Ron; Ruggiero, Kenneth J.; Danielson, Carla Kmett

    2016-01-01

    Background The present study’s aim was to provide the foundation for an efficient, empirically based protocol for depression screening following a natural disaster. Utilizing a Receiver Operating Characteristic (ROC) analytic approach, the study tested a) what specific disaster-related stressors (i.e., property damage, loss of basic services) and individual-related constructs (i.e., PTSD symptoms, trauma history, social support) conveyed the greatest risk for post-natural disaster depression, b) specific cutoff scores across these measures, and c) whether the significance or cutoff scores for each construct varied between adolescents and adults. Methods Structured phone-based clinical interviews were conducted with 2,000 adolescents who lived through a tornado and 1,543 adults who survived a hurricane. Results Findings suggested that in both adolescents and adults, individual-related constructs forecasted greater risk for depressive symptoms following a natural disaster compared to disaster-related stressors. Furthermore, trauma history and PTSD symptoms were particularly strong indicators for adolescent depressive symptoms compared to adult depressive symptoms. Adolescents and adults who reported vulnerable scores for social support, trauma history, and lifetime PTSD symptoms were approximately twice as likely to present as depressed following the natural disaster. Limitations Findings from the present study were limited to post-disaster assessments and based on self-reported functioning 6–12 months following the natural disaster. Conclusions The present study synthesizes the extensive body of research on post-disaster functioning by providing a clear framework for which questions may be most important to ask when screening for depression following a natural disaster. PMID:27259082
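
    A small sketch of how ROC-based cutoff selection works in practice: the snippet below picks the threshold maximizing Youden's J, a common criterion (the paper does not state that it used this exact criterion), on simulated screening scores rather than the study's clinical data.

        import numpy as np
        from sklearn.metrics import roc_curve

        def best_cutoff(scores, depressed):
            # Cutoff maximizing Youden's J = sensitivity + specificity - 1
            fpr, tpr, thr = roc_curve(depressed, scores)
            k = np.argmax(tpr - fpr)
            return thr[k], tpr[k], 1 - fpr[k]

        # Simulated post-disaster screening scores (e.g., PTSD symptom counts)
        rng = np.random.default_rng(2)
        dep = rng.random(1000) < 0.15
        scores = rng.normal(loc=np.where(dep, 9.0, 5.0), scale=2.5)
        print(best_cutoff(scores, dep))   # cutoff, sensitivity, specificity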

  12. How should we screen for depression following a natural disaster? An ROC approach to post-disaster screening in adolescents and adults.

    PubMed

    Cohen, Joseph R; Adams, Zachary W; Menon, Suvarna V; Youngstrom, Eric A; Bunnell, Brian E; Acierno, Ron; Ruggiero, Kenneth J; Danielson, Carla Kmett

    2016-09-15

    The present study's aim was to provide the foundation for an efficient, empirically based protocol for depression screening following a natural disaster. Utilizing a Receiver Operating Characteristic (ROC) analytic approach, the study tested a) what specific disaster-related stressors (i.e., property damage, loss of basic services) and individual-related constructs (i.e., PTSD symptoms, trauma history, social support) conveyed the greatest risk for post-natural disaster depression, b) specific cutoff scores across these measures, and c) whether the significance or cutoff scores for each construct varied between adolescents and adults. Structured phone-based clinical interviews were conducted with 2000 adolescents who lived through a tornado and 1543 adults who survived a hurricane. Findings suggested that in both adolescents and adults, individual-related constructs forecasted greater risk for depressive symptoms following a natural disaster compared to disaster-related stressors. Furthermore, trauma history and PTSD symptoms were particularly strong indicators for adolescent depressive symptoms compared to adult depressive symptoms. Adolescents and adults who reported vulnerable scores for social support, trauma history, and lifetime PTSD symptoms were approximately twice as likely to present as depressed following the natural disaster. Findings from the present study were limited to post-disaster assessments and based on self-reported functioning 6-12 months following the natural disaster. The present study synthesizes the extensive body of research on post-disaster functioning by providing a clear framework for which questions may be most important to ask when screening for depression following a natural disaster. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Adaptable Detection Strategies in Membrane-Based Immunoassays: Calibration-Free Quantitation with Surface-Enhanced Raman Scattering Readout.

    PubMed

    Skuratovsky, Aleksander; Soto, Robert J; Porter, Marc D

    2018-06-19

    This paper presents a method for immunometric biomarker quantitation that uses standard flow-through assay reagents and obviates the need for constructing a calibration curve. The approach relies on a nitrocellulose immunoassay substrate with multiple physical addresses for analyte capture, each modified with different amounts of an analyte-specific capture antibody. As such, each address generates a distinctly different readout signal that is proportional to the analyte concentration in the sample. To establish the feasibility of this concept, equations derived from antibody-antigen binding equilibrium were first applied in modeling experiments. Next, nitrocellulose membranes with multiple capture antibody addresses were fabricated for detection of a model analyte, human Immunoglobulin G (hIgG), by a heterogeneous sandwich immunoassay using antibody-modified gold nanoparticles (AuNPs) as the immunolabel. Counting the number of colored capture addresses visible to the unassisted eye enabled semiquantitative hIgG determination. We then demonstrated that, by leveraging the localized surface plasmon resonance of the AuNPs, surface-enhanced Raman spectroscopy (SERS) can be used for quantitative readout. By comparing the SERS signal intensities from each capture address with values predicted using immunoassay equilibrium theory, the concentration of hIgG can be determined (∼30% average absolute deviation) without reference to a calibration curve. This work also demonstrates the ability to manipulate the dynamic range of the assay over ∼4 orders of magnitude (from 2 ng mL⁻¹ to 10 μg mL⁻¹). The potential prospects in applying this concept to point-of-need diagnostics are also discussed.
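
    One idealized way to see why varying the capture-antibody loading can pin down concentration without a calibration curve is sketched below: with the exact binding quadratic (which allows analyte depletion), the pattern of signals across addresses changes shape with concentration, so a candidate concentration can be recovered by profiling out the unknown instrument scale. This is a toy equilibrium model of our own construction, not the paper's fitted equations, and all loadings and constants are hypothetical.

        import numpy as np

        def bound(A0, T, Kd):
            # Exact equilibrium bound complex for total analyte A0 and
            # capture-site amount T (binding quadratic; allows depletion).
            s = A0 + T + Kd
            return (s - np.sqrt(s ** 2 - 4 * A0 * T)) / 2

        T = np.array([1.0, 5.0, 25.0, 125.0])   # capture loading per address (nM)
        Kd = 10.0

        # Simulated SERS readout: unknown instrument scale times bound complex
        k_true, A_true = 7.3, 3.0
        signal = k_true * bound(A_true, T, Kd)

        # Calibration-free recovery: scan candidate concentrations and profile
        # out the scale factor by least squares at each candidate.
        def sse(A0):
            model = bound(A0, T, Kd)
            k = signal @ model / (model @ model)
            return np.sum((signal - k * model) ** 2)

        A_grid = np.logspace(-2, 2, 400)
        print(A_grid[np.argmin([sse(A) for A in A_grid])])   # close to 3.0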

  14. The Identification and Significance of Intuitive and Analytic Problem Solving Approaches Among College Physics Students

    ERIC Educational Resources Information Center

    Thorsland, Martin N.; Novak, Joseph D.

    1974-01-01

    Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…

  15. Intersubjectivity and the creation of meaning in the analytic process.

    PubMed

    Maier, Christian

    2014-11-01

    By means of a clinical illustration, the author describes how the intersubjective exchanges involved in an analytic process facilitate the representation of affects and memories which have been buried in the unconscious or indeed have never been available to consciousness. As a result of projective identificatory processes in the analytic relationship, in this example the analyst falls into a situation of helplessness which connects with his own traumatic experiences. He then enters a formal regression of the ego and responds with a so-to-speak hallucinatory reaction: an internal image which enables him to keep the analytic process on track and, later on, to construct an early traumatic experience of the analysand. © 2014, The Society of Analytical Psychology.

  16. Analytics that Inform the University: Using Data You Already Have

    ERIC Educational Resources Information Center

    Dziuban, Charles; Moskal, Patsy; Cavanagh, Thomas; Watts, Andre

    2012-01-01

    The authors describe the University of Central Florida's top-down/bottom-up action analytics approach to using data to inform decision-making at the University of Central Florida. The top-down approach utilizes information about programs, modalities, and college implementation of Web initiatives. The bottom-up approach continuously monitors…

  17. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach.

    PubMed

    Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy

    2016-03-01

    Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). In this proof-of-concept study, a local big data-driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. © 2015 by the Society for Academic Emergency Medicine.
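
    The modeling comparison itself is straightforward to reproduce in outline. The sketch below fits a random forest and a logistic regression on synthetic stand-in data (no EHR fields or variables from the study; the sample size and mortality rate merely echo the abstract) and compares validation AUCs with scikit-learn.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for EHR data: many variables, few informative
        rng = np.random.default_rng(3)
        X = rng.normal(size=(5278, 50))
        logit = X[:, :10] @ rng.normal(0.4, 0.1, 10) - 3.3
        y = rng.random(5278) < 1 / (1 + np.exp(-logit))   # ~5% mortality

        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        models = [("random forest",
                   RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=2000))]
        for name, model in models:
            proba = model.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
            print(name, round(roc_auc_score(y_va, proba), 3))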

  18. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data–Driven, Machine Learning Approach

    PubMed Central

    Taylor, R. Andrew; Pare, Joseph R.; Venkatesh, Arjun K.; Mowafi, Hani; Melnick, Edward R.; Fleischman, William; Hall, M. Kennedy

    2018-01-01

    Objectives Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data–driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. Methods This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. Results There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). Conclusions In this proof-of-concept study, a local big data–driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. PMID:26679719

  19. A Fixed-point Scheme for the Numerical Construction of Magnetohydrostatic Atmospheres in Three Dimensions

    NASA Astrophysics Data System (ADS)

    Gilchrist, S. A.; Braun, D. C.; Barnes, G.

    2016-12-01

    Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin (Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the approach in general and our code implementation in particular.
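
    For readers unfamiliar with the fixed-point pattern that Grad-Rubin-type schemes iterate, the toy sketch below solves a scalar equation x = g(x) by repeated substitution; the actual solver applies an analogous update to three-dimensional field variables, which this sketch does not attempt.

    ```python
    # Toy illustration of the fixed-point iteration pattern underlying
    # Grad-Rubin-type schemes: solve x = g(x) by repeated substitution.
    import math

    def fixed_point(g, x0, tol=1e-12, max_iter=100):
        """Iterate x_{k+1} = g(x_k) until successive iterates converge."""
        x = x0
        for k in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new, k + 1
            x = x_new
        raise RuntimeError("fixed-point iteration did not converge")

    # Example: x = cos(x) has a unique fixed point (the Dottie number).
    root, iters = fixed_point(math.cos, 1.0)
    print(f"x = {root:.12f} after {iters} iterations")
    ```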

  20. Squeezed states and Hermite polynomials in a complex variable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, S. Twareque, E-mail: twareque.ali@concordia.ca; Górska, K., E-mail: katarzyna.gorska@ifj.edu.pl; Horzela, A., E-mail: andrzej.horzela@ifj.edu.pl

    2014-01-15

    Following the lines of the recent paper of J.-P. Gazeau and F. H. Szafraniec [J. Phys. A: Math. Theor. 44, 495201 (2011)], we construct here three types of coherent states, related to the Hermite polynomials in a complex variable which are orthogonal with respect to a non-rotationally invariant measure. We investigate relations between these coherent states and obtain the relationship between them and the squeezed states of quantum optics. We also obtain a second realization of the canonical coherent states in the Bargmann space of analytic functions, in terms of a squeezed basis. All this is done in the flavor of the classical approach of V. Bargmann [Commun. Pure Appl. Math. 14, 187 (1961)].

  1. Optimal consensus algorithm integrated with obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Wang, Jianan; Xin, Ming

    2013-01-01

    This article proposes a new consensus algorithm for networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control effort. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law requires only local information based on the communication topology, rather than all agents' information, to guarantee the proposed behaviours. The consensus and obstacle avoidance are validated through simulations.
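
    A hedged sketch of the combined behaviour follows: standard single-integrator consensus plus a repulsive obstacle term. The gains, the inverse-cube repulsion, and the topology below are illustrative assumptions, not the paper's inverse-optimal cost construction.

    ```python
    # Single-integrator consensus with an ad hoc repulsive obstacle term.
    import numpy as np

    A = np.array([[0, 1, 1],            # adjacency matrix of the
                  [1, 0, 1],            # communication topology
                  [1, 1, 0]], float)
    x = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0]])  # agent positions
    obstacle, radius = np.array([2.0, 2.0]), 1.0

    dt = 0.01
    for _ in range(2000):
        u = np.zeros_like(x)
        for i in range(len(x)):
            # consensus term: move toward communicating neighbors
            for j in range(len(x)):
                u[i] += A[i, j] * (x[j] - x[i])
            # repulsive term: push away inside the obstacle's influence zone
            d = x[i] - obstacle
            dist = np.linalg.norm(d)
            if dist < 2 * radius:
                u[i] += 5.0 * d / dist**3
        x += dt * u

    print("final positions:\n", x.round(3))
    ```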

  2. [Marital life experiences: women's positioning].

    PubMed

    Souto, Cláudia Maria Ramos Medeiros; Braga, Violante Augusta Batista

    2009-01-01

    A qualitative study carried out with eleven women in situations of marital violence. Empirical data were produced in workshops focused on understanding the experience of violence through the women's speech. Analytic categories were composed using thematic content analysis, and the analysis was grounded in the gender constructs present in the daily lives of these women. Results showed that, for these women, marital violence represents fear and imprisonment, and that within marriage a woman is more susceptible to unequal power relations marked by male dominance and the legitimation of violence. The women's speech made evident behaviors and attributes that sustain the feminine condition of subjection to the spouse and to violence.

  3. Measurement Models for Reasoned Action Theory

    PubMed Central

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach. PMID:23243315

  4. The advancement of the built environment research through employment of structural equation modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Wasilah, S.; Fahmyddin, T.

    2018-03-01

    The employment of structural equation modeling (SEM) in research has attracted increasing attention among researchers in the built environment. There is a gap in understanding the attributes, application, and importance of this approach to data analysis in built environment studies. This paper provides a fundamental comprehension of the SEM method in data analysis, unveiling its attributes, employment, and significance, and presents cases that assess associations among variables and constructs. The study draws on key literature to grasp the essence of SEM with regard to built environment research. A better understanding of this analytical tool may assist researchers in the built environment in analyzing data under complex research questions and in testing multivariate models in a single study.

  5. Mass-imbalanced Hubbard model in optical lattice with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Le, Duc-Anh; Tran, Thi-Thu-Trang; Hoang, Anh-Tuan; Nguyen, Toan-Thang; Tran, Minh-Tien

    2018-03-01

    We study the half-filled mass-imbalanced Hubbard model with spatially alternating interactions on an optical bipartite lattice by means of dynamical mean-field theory. The Mott transition is investigated via the spin-dependent density of states and double occupancies. The phase diagrams for the homogeneous phases at zero temperature are constructed numerically. The boundary between metallic and insulating phases at zero temperature is derived analytically within dynamical mean-field theory using the equation-of-motion approach as the impurity solver. We find that the metallic region shrinks with increasing interaction anisotropy or mass imbalance. Our results are closely relevant to current research on ultracold fermions and can be verified through experimental observation.

  6. Power-law weighted networks from local attachments

    NASA Astrophysics Data System (ADS)

    Moriano, P.; Finke, J.

    2012-07-01

    This letter introduces a mechanism for constructing, through a process of distributed decision-making, substrates for the study of collective dynamics on extended power-law weighted networks with both a desired scaling exponent and a fixed clustering coefficient. The analytical results show that the connectivity distribution converges to the scaling behavior often found in social and engineering systems. To illustrate the proposed framework, we generate network substrates that resemble steady-state properties of the empirical citation distributions of (i) publications indexed by the Institute for Scientific Information from 1981 to 1997; (ii) patents granted by the U.S. Patent and Trademark Office from 1975 to 1999; and (iii) opinions written by the Supreme Court and the cases they cite from 1754 to 2002.
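
    For contrast with the letter's local mechanism, the sketch below uses the classic global preferential-attachment generator to show how a power-law degree distribution emerges; it illustrates the scaling behaviour only, not the authors' distributed decision-making process or their weighted construction.

    ```python
    # Classic Barabasi-Albert preferential attachment (illustration only).
    import random
    from collections import Counter

    def barabasi_albert(n, m, seed=0):
        """New nodes link to m existing nodes chosen with probability
        proportional to degree."""
        random.seed(seed)
        edges = []
        repeated = []                    # node multiset weighted by degree
        targets = list(range(m))         # first new node attaches to seed core
        for new in range(m, n):
            edges.extend((new, t) for t in targets)
            repeated.extend(targets)
            repeated.extend([new] * m)
            chosen = set()
            while len(chosen) < m:       # degree-proportional sampling
                chosen.add(random.choice(repeated))
            targets = list(chosen)
        return edges

    edges = barabasi_albert(10_000, 3)
    degree = Counter(v for edge in edges for v in edge)
    tail = Counter(degree.values())
    print("degree -> count:", sorted(tail.items())[:8])   # heavy-tailed
    ```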

  7. Multiparticle instability in a spin-imbalanced Fermi gas

    NASA Astrophysics Data System (ADS)

    Whitehead, T. M.; Conduit, G. J.

    2018-01-01

    Weak attractive interactions in a spin-imbalanced Fermi gas induce a multiparticle instability, binding multiple fermions together. The maximum binding energy per particle is achieved when the ratio of the number of up- and down-spin particles in the instability is equal to the ratio of the up- and down-spin densities of states in momentum at the Fermi surfaces, to utilize the variational freedom of all available momentum states. We derive this result using an analytical approach, and verify it using exact diagonalization. The multiparticle instability extends the Cooper pairing instability of balanced Fermi gases to the imbalanced case, and could form the basis of a many-body state, analogously to the construction of the Bardeen-Cooper-Schrieffer theory of superconductivity out of Cooper pairs.

  8. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the BSC and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods for ranking the alternatives: Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods; weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
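
    Of the four rankers combined here, TOPSIS is the most compact to illustrate. The sketch below implements it on a made-up decision matrix; the weights stand in for the ANP-derived ones and carry no connection to the study's data.

    ```python
    # Minimal TOPSIS sketch (one of the four MCDM rankers the paper combines).
    import numpy as np

    X = np.array([[7.0, 9.0, 9.0],      # alternatives (rows) scored on
                  [8.0, 7.0, 8.0],      # criteria (columns)
                  [9.0, 6.0, 8.0],
                  [6.0, 7.0, 6.0]])
    w = np.array([0.5, 0.3, 0.2])       # criteria weights (e.g., from ANP)

    V = w * X / np.linalg.norm(X, axis=0)       # weighted, vector-normalized
    ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria as benefits
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    closeness = d_worst / (d_best + d_worst)    # relative closeness to ideal

    print("ranking (best first):", np.argsort(-closeness) + 1)
    ```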

  9. A pertinent approach to solve nonlinear fuzzy integro-differential equations.

    PubMed

    Narayanamoorthy, S; Sathiyapriya, S P

    2016-01-01

    Fuzzy integro-differential equations are an important part of fuzzy analysis theory, holding theoretical as well as practical value in analytical dynamics, so an appropriate computational algorithm to solve them is essential. In this article, we use parametric forms of fuzzy numbers and suggest an applicable approach for solving nonlinear fuzzy integro-differential equations using the homotopy perturbation method. A clear and detailed description of the proposed method is provided. Our main objective is to illustrate that constructing an appropriate convex homotopy in a proper way leads to highly accurate solutions with less computational work. The efficiency of the approximation technique is established via stability and convergence analysis so as to guarantee the performance of the methodology. Numerical examples verify the convergence and reveal the validity of the presented technique. Numerical results are tabulated and examined by comparing the obtained approximate solutions with the known exact solutions, and graphical representations of the exact and approximate fuzzy solutions clarify the accuracy of the approach.
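
    The flavour of the homotopy perturbation recursion can be seen on a crisp (non-fuzzy) toy problem with a known exact solution; the fuzzy case applies the same recursion to the parametric lower/upper forms. The equation below is an assumed example, not one from the paper.

    ```python
    # Worked toy example of the homotopy perturbation idea on a linear
    # Fredholm integral equation chosen so the exact solution is known:
    #   u(x) = x + (1/2) * integral_0^1 x*t*u(t) dt,  exact: u(x) = 6x/5.
    # HPM recursion for this linear problem: v0 = f, v_{n+1} = lam*Int(K*v_n).
    import sympy as sp

    x, t = sp.symbols("x t")
    f = x                       # source term
    lam = sp.Rational(1, 2)     # kernel multiplier
    K = x * t                   # separable kernel

    v, total = f, f
    for _ in range(10):
        v = sp.integrate(lam * K * v.subs(x, t), (t, 0, 1))
        total = sp.simplify(total + v)

    print(total)                # -> approaches 6*x/5
    ```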

  10. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results, owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT).

  11. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results, owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT). PMID:25722723

  12. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u^-1 to 450 MeV u^-1 or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
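
    The paper's own 6-parameter model is not reproduced in the abstract, so as a hedged stand-in the sketch below shows the same kind of simple analytical approach using the classic Bragg-Kleeman rule R = alpha * E^p, fitted to two approximate proton-range reference points in water.

    ```python
    # Bragg-Kleeman range rule as an example of a simple analytical model
    # (a classic published rule, not the paper's 6-parameter model).
    import math

    def fit_bragg_kleeman(E1, R1, E2, R2):
        """Fit R = alpha * E**p through two (energy, range) points."""
        p = math.log(R2 / R1) / math.log(E2 / E1)
        alpha = R1 / E1**p
        return alpha, p

    # Illustrative proton ranges in water (approximate textbook values, cm)
    alpha, p = fit_bragg_kleeman(100.0, 7.7, 200.0, 26.0)
    for E in (50.0, 150.0, 250.0):
        print(f"E = {E:5.1f} MeV -> R ~ {alpha * E**p:5.1f} cm")
    ```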

  13. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u^-1 to 450 MeV u^-1 or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  14. The Joint Construction of Meaning in Writing Conferences

    ERIC Educational Resources Information Center

    Haneda, Mari

    2004-01-01

    Using an analytical framework that combines insights from a variety of previous studies, the current paper aims to contribute to the description of the joint construction of meaning in pedagogical discourse, in particular in one-on-one teacher-student interaction in writing conferences in a Japanese-as-a-foreign language (JFL) class. The argument…

  15. An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.

    ERIC Educational Resources Information Center

    Skakun, Ernest N.; Hakstian, A. Ralph

    Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…

  16. HyperText MARCup: A Conceptualization for Encoding, De-Constructing, Searching, Retrieving, and Using Traditional Knowledge Tools.

    ERIC Educational Resources Information Center

    Wall, C. Edward; And Others

    1995-01-01

    Discusses the integration of Standard Generalized Markup Language, Hypertext Markup Language, and MARC format to parse classified analytical bibliographies. Use of the resulting electronic knowledge constructs in local library systems as maps of a specified subset of resources is discussed, and an example is included. (LRW)

  17. Construct Validity Examination of Critical Thinking Dispositions for Undergraduate Students in University Putra Malaysia

    ERIC Educational Resources Information Center

    Ghadi, Ibrahim; Alwi, Nor Hayati; Bakar, Kamariah Abu; Talib, Othman

    2012-01-01

    This research aims to evaluate the psychometric properties of the construct validity for the Critical Thinking Disposition (CTD) instrument. The CTD instrument consists of 39 Likert-type items measuring seven dispositions, namely analyticity, open-mindedness, truth-seeking, systematicity, self-confidence, inquisitiveness, and maturity. The study involves…

  18. Towards an Analytical Framework for Understanding the Development of a Quality Assurance System in an International Joint Programme

    ERIC Educational Resources Information Center

    Zheng, Gaoming; Cai, Yuzhuo; Ma, Shaozhuang

    2017-01-01

    This paper intends to construct an analytical framework for understanding quality assurance in international joint programmes and to test it in a case analysis of a European--Chinese joint doctoral degree programme. The development of a quality assurance system for an international joint programme is understood as an institutionalization process…

  19. A Meta-Analytic Review of the Role of Child Anxiety Sensitivity in Child Anxiety

    ERIC Educational Resources Information Center

    Noel, Valerie A.; Francis, Sarah E.

    2011-01-01

    Conflicting findings exist regarding (1) whether anxiety sensitivity (AS) is a construct distinct from anxiety in children and (2) the specific nature of the role of AS in child anxiety. This study uses meta-analytic techniques to (1) determine whether youth (ages 6-18 years) have been reported to experience AS, (2) examine whether AS…

  20. Building America House Simulation Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendron, Robert; Engebrecht, Cheryn

    2010-09-01

    The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses using standard operating conditions.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binotti, M.; Zhu, G.; Gray, A.

    An analytical approach, an extension of the newly developed First-principle OPTical Intercept Calculation (FirstOPTIC) method, is proposed to treat the geometrical impact of three-dimensional (3-D) effects on parabolic trough optical performance. The mathematical steps of this analytical approach are presented and implemented numerically as part of the FirstOPTIC code suite. In addition, the new code has been carefully validated against ray-tracing simulation results and available numerical solutions. This new analytical approach to treating 3-D effects will facilitate further understanding and analysis of the optical performance of trough collectors as a function of incidence angle.

  2. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach to landing task incorporating appropriate models for UH-1H aircraft, the environmental disturbances and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.

  3. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma M.; Hendrix, Val; Chertkov, Michael

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant, with more data and multi-objective concerns. Efficient applications of analysis and the machine learning field are being considered in the loop.

  4. Unifying Approach to Analytical Chemistry and Chemical Analysis: Problem-Oriented Role of Chemical Analysis.

    ERIC Educational Resources Information Center

    Pardue, Harry L.; Woo, Jannie

    1984-01-01

    Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)

  5. Complete characterization of fourth-order symplectic integrators with extended-linear coefficients.

    PubMed

    Chin, Siu A

    2006-02-01

    The structure of symplectic integrators up to fourth order can be completely and analytically understood when the factorization (split) coefficients are related linearly but with a uniform nonlinear proportional factor. The analytic form of these extended-linear symplectic integrators greatly simplified proofs of their general properties and allowed easy construction of both forward and nonforward fourth-order algorithms with an arbitrary number of operators. Most fourth-order forward integrators can now be derived analytically from this extended-linear formulation without the use of symbolic algebra.
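
    As a concrete example of the class of schemes discussed, the sketch below implements the well-known fourth-order Forest-Ruth integrator, whose drift and kick coefficients are all generated by the single factor theta = 1/(2 - 2^(1/3)); it is a standard published scheme, not the paper's extended-linear family itself.

    ```python
    # Fourth-order Forest-Ruth symplectic integrator for H = p^2/2 + V(q).
    import numpy as np

    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    c = [theta / 2, (1 - theta) / 2, (1 - theta) / 2, theta / 2]  # drifts
    d = [theta, 1 - 2 * theta, theta, 0.0]                        # kicks

    def forest_ruth_step(q, p, force, dt):
        """One 4th-order step; force(q) = -dV/dq."""
        for ci, di in zip(c, d):
            q = q + ci * dt * p
            p = p + di * dt * force(q)
        return q, p

    # Harmonic oscillator test: energy error stays bounded (symplectic).
    q, p = 1.0, 0.0
    for _ in range(10000):
        q, p = forest_ruth_step(q, p, lambda x: -x, 0.1)
    print("energy drift:", 0.5 * (p * p + q * q) - 0.5)
    ```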

  6. Experimental, Numerical and Analytical Characterization of Slosh Dynamics Applied to In-Space Propellant Storage, Management and Transfer

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Gutierrez, Hector; Marsell, Brandon; Schallhorn, Paul; Lapilli, Gabriel D.

    2015-01-01

    Experimental and numerical results are presented from a new cryogenic fluid slosh program at the Florida Institute of Technology (FIT). Water and cryogenic liquid nitrogen are used in various ground-based tests with an approximately 30 cm diameter spherical tank to characterize damping, slosh mode frequencies, and slosh forces. The experimental results are compared to a computational fluid dynamics (CFD) model for validation. An analytical model is constructed from prior work for comparison. Good agreement is seen between experimental, numerical, and analytical results.

  7. Animal Construction as a Free Boundary Problem: Evidence of Fractal Scaling Laws

    NASA Astrophysics Data System (ADS)

    Nicolis, S. C.

    2014-12-01

    We suggest that the main features of animal construction can be understood as the sum of locally independent actions of non-interacting individuals subjected to the global constraints imposed by the nascent structure. We first formulate an analytically tractable description of construction which predicts a 1/3 power law for how the length of the structure grows with time. We further show how the power law is modified when biases in the random walk performed by the constructors, as well as halting times between consecutive construction steps, are included.

  8. Cocontraction of pairs of antagonistic muscles: analytical solution for planar static nonlinear optimization approaches.

    PubMed

    Herzog, W; Binding, P

    1993-11-01

    It has been stated in the literature that static, nonlinear optimization approaches cannot predict coactivation of pairs of antagonistic muscles; however, numerical solutions of such approaches have predicted coactivation of pairs of one-joint and multijoint antagonists. Analytical support for either finding is not available in the literature for systems containing more than one degree of freedom. The purpose of this study was to investigate analytically the possibility of cocontraction of pairs of antagonistic muscles using a static nonlinear optimization approach for a multidegree-of-freedom, two-dimensional system. Analytical solutions were found using the Karush-Kuhn-Tucker conditions, which were necessary and sufficient for optimality in this problem. The results show that cocontraction of pairs of one-joint antagonistic muscles is not possible, whereas cocontraction of pairs of multijoint antagonists is. These findings suggest that cocontraction of pairs of antagonistic muscles may be an "efficient" way to accomplish many movement tasks.
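
    The problem class analyzed here can be reproduced numerically. The sketch below minimizes a cubed-muscle-stress cost subject to a single joint-moment constraint with nonnegative forces; the moment arms, areas, and cost exponent are illustrative assumptions, and, consistent with the paper's analytical result, one-joint antagonists receive zero force at the optimum.

    ```python
    # Static nonlinear optimization sketch for muscle-force sharing.
    import numpy as np
    from scipy.optimize import minimize

    r = np.array([3.0, -2.5, 4.0, -3.5])      # moment arms: +flexor, -extensor
    pcsa = np.array([10.0, 8.0, 12.0, 9.0])   # cross-sectional areas (cm^2)
    M_target = 20.0                           # required net joint moment

    cost = lambda F: np.sum((F / pcsa) ** 3)  # cubed muscle stress
    cons = {"type": "eq", "fun": lambda F: r @ F - M_target}
    res = minimize(cost, x0=np.ones(4), constraints=[cons],
                   bounds=[(0, None)] * 4)

    # With one degree of freedom, the antagonists (negative moment arms)
    # end up with zero force at the optimum.
    print("muscle forces:", res.x.round(3))
    ```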

  9. Constructing core competency indicators for clinical teachers in Taiwan: a qualitative analysis and an analytic hierarchy process

    PubMed Central

    2014-01-01

    Background The objective of this study was to construct a framework of core competency indicators of medical doctors who teach in the clinical setting in Taiwan and to evaluate the relative importance of the indicators among these clinical teachers. Methods The preliminary framework of the indicators was developed from an in-depth interview conducted with 12 clinical teachers who had previously been recognized and awarded for their teaching excellence in university hospitals. The framework was categorized into 4 dimensions: 1) Expertise (i.e., professional knowledge and skill); 2) Teaching Ability; 3) Attitudes and Traits; and 4) Beliefs and Values. These areas were further divided into 11 sub-dimensions and 40 indicators. Subsequently, a questionnaire built upon this qualitative analysis was distributed to another group of 17 clinical teachers. Saaty’s eigenvector approach, or the so-called analytic hierarchy process (AHP), was applied to perform the pairwise comparisons between indicators and to determine the ranking and relative importance of the indicators. Results Fourteen questionnaires were deemed valid for AHP assessment due to completeness of data input. The relative contribution of the four main dimensions was 31% for Attitudes and Traits, 30% for Beliefs and Values, 22% for Expertise, and 17% for Teaching Ability. Specifically, 9 out of the 10 top-ranked indicators belonged to the “Attitudes and Traits” or “Beliefs and Values” dimensions, indicating that inner characteristics (i.e., attitudes, traits, beliefs, and values) were perceived as more important than surface ones (i.e., professional knowledge, skills, and teaching competency). Conclusion We performed a qualitative analysis and developed a questionnaire based upon an interview with experienced clinical teachers in Taiwan, and used this tool to construct the key features for the role model. The application has also demonstrated the relative importance in the dimensions of the core competencies for clinical teachers in Taiwan. PMID:24726054

  10. Constructing core competency indicators for clinical teachers in Taiwan: a qualitative analysis and an analytic hierarchy process.

    PubMed

    Li, Ai-Tzu; Lin, Jou-Wei

    2014-04-11

    The objective of this study was to construct a framework of core competency indicators of medical doctors who teach in the clinical setting in Taiwan and to evaluate the relative importance of the indicators among these clinical teachers. The preliminary framework of the indicators was developed from an in-depth interview conducted with 12 clinical teachers who had previously been recognized and awarded for their teaching excellence in university hospitals. The framework was categorized into 4 dimensions: 1) Expertise (i.e., professional knowledge and skill); 2) Teaching Ability; 3) Attitudes and Traits; and 4) Beliefs and Values. These areas were further divided into 11 sub-dimensions and 40 indicators. Subsequently, a questionnaire built upon this qualitative analysis was distributed to another group of 17 clinical teachers. Saaty's eigenvector approach, or the so-called analytic hierarchy process (AHP), was applied to perform the pairwise comparisons between indicators and to determine the ranking and relative importance of the indicators. Fourteen questionnaires were deemed valid for AHP assessment due to completeness of data input. The relative contribution of the four main dimensions was 31% for Attitudes and Traits, 30% for Beliefs and Values, 22% for Expertise, and 17% for Teaching Ability. Specifically, 9 out of the 10 top-ranked indicators belonged to the "Attitudes and Traits" or "Beliefs and Values" dimensions, indicating that inner characteristics (i.e., attitudes, traits, beliefs, and values) were perceived as more important than surface ones (i.e., professional knowledge, skills, and teaching competency). We performed a qualitative analysis and developed a questionnaire based upon an interview with experienced clinical teachers in Taiwan, and used this tool to construct the key features for the role model. The application has also demonstrated the relative importance in the dimensions of the core competencies for clinical teachers in Taiwan.
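
    Saaty's eigenvector method, used in both records above, reduces to a short computation: extract the principal eigenvector of a pairwise-comparison matrix and check a consistency ratio. The comparison matrix below is hypothetical, not the study's data.

    ```python
    # Minimal AHP sketch: priority weights plus a consistency check.
    import numpy as np

    labels = ["Expertise", "Teaching Ability", "Attitudes/Traits",
              "Beliefs/Values"]
    A = np.array([[1.0, 2.0, 1/2, 1/2],
                  [1/2, 1.0, 1/2, 1/2],
                  [2.0, 2.0, 1.0, 1.0],
                  [2.0, 2.0, 1.0, 1.0]])   # A[i, j]: importance of i over j

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)             # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                            # normalized priority weights

    n = len(A)
    CI = (eigvals[k].real - n) / (n - 1)    # consistency index
    CR = CI / 0.90                          # random index RI = 0.90 for n = 4
    for lab, wi in zip(labels, w):
        print(f"{lab:18s} {wi:.3f}")
    print(f"consistency ratio = {CR:.3f} (acceptable if < 0.10)")
    ```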

  11. Analytic Steering: Inserting Context into the Information Dialog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohn, Shawn J.; Calapristi, Augustin J.; Brown, Shyretha D.

    2011-10-23

    An analyst's intrinsic domain knowledge is a primary asset in almost any analysis task. Unstructured text analysis systems that apply unsupervised content analysis approaches can be more effective if they can leverage this domain knowledge in a manner that augments the information discovery process without obfuscating new or unexpected content. Current unsupervised approaches rely upon the prowess of the analyst to submit the right queries or observe generalized document and term relationships from ranked or visual results. We propose a new approach which allows the user to control or steer the analytic view within the unsupervised space. This process is controlled through the data characterization process via user-supplied context in the form of a collection of key terms. We show that steering with an appropriate choice of key terms can provide better relevance to the analytic domain and still enable the analyst to uncover unexpected relationships; this paper discusses cases where various analytic steering approaches can provide enhanced analysis results and cases where analytic steering can have a negative impact on the analysis process.

  12. Accurate inspiral-merger-ringdown gravitational waveforms for nonspinning black-hole binaries including the effect of subdominant modes

    NASA Astrophysics Data System (ADS)

    Mehta, Ajit Kumar; Mishra, Chandra Kant; Varma, Vijay; Ajith, Parameswaran

    2017-12-01

    We present an analytical waveform family describing gravitational waves (GWs) from the inspiral, merger, and ringdown of nonspinning black-hole binaries including the effect of several nonquadrupole modes [(ℓ=2 ,m =±1 ),(ℓ=3 ,m =±3 ),(ℓ=4 ,m =±4 ) apart from (ℓ=2 ,m =±2 )]. We first construct spin-weighted spherical harmonics modes of hybrid waveforms by matching numerical-relativity simulations (with mass ratio 1-10) describing the late inspiral, merger, and ringdown of the binary with post-Newtonian/effective-one-body waveforms describing the early inspiral. An analytical waveform family is constructed in frequency domain by modeling the Fourier transform of the hybrid waveforms making use of analytical functions inspired by perturbative calculations. The resulting highly accurate, ready-to-use waveforms are highly faithful (unfaithfulness ≃10-4- 10-2 ) for observation of GWs from nonspinning black-hole binaries and are extremely inexpensive to generate.

  13. Construction and electrochemical characterization of microelectrodes for improved sensitivity in paper-based analytical devices

    PubMed Central

    Santhiago, Murilo; Wydallis, John B.; Kubota, Lauro T.; Henry, Charles S.

    2013-01-01

    This work presents a simple, low cost method for creating microelectrodes for electrochemical paper-based analytical devices (ePADs). The microelectrodes were constructed by backfilling small holes made in polyester sheets using a CO2 laser etching system. To make electrical connections, the working electrodes were combined with silver screen-printed paper in a sandwich type two-electrode configuration. The devices were characterized using linear sweep voltammetry and the results are in good agreement with theoretical predictions for electrode size and shape. As a proof-of-concept, cysteine was measured using cobalt phthalocyanine as a redox mediator. The rate constant (kobs) for the chemical reaction between cysteine and the redox mediator was obtained by chronoamperometry and found to be on the order of 10^5 s^-1 M^-1. Using a microelectrode array, it was possible to reach a limit of detection of 4.8 μM for cysteine. The results show that carbon paste microelectrodes can be easily integrated with paper-based analytical devices. PMID:23581428

  14. Construction and electrochemical characterization of microelectrodes for improved sensitivity in paper-based analytical devices.

    PubMed

    Santhiago, Murilo; Wydallis, John B; Kubota, Lauro T; Henry, Charles S

    2013-05-21

    This work presents a simple, low cost method for creating microelectrodes for electrochemical paper-based analytical devices (ePADs). The microelectrodes were constructed by backfilling small holes made in polyester sheets using a CO2 laser etching system. To make electrical connections, the working electrodes were combined with silver screen-printed paper in a sandwich type two-electrode configuration. The devices were characterized using linear sweep voltammetry, and the results are in good agreement with theoretical predictions for electrode size and shape. As a proof-of-concept, cysteine was measured using cobalt phthalocyanine as a redox mediator. The rate constant (kobs) for the chemical reaction between cysteine and the redox mediator was obtained by chronoamperometry and found to be on the order of 10^5 s^-1 M^-1. Using a microelectrode array, it was possible to reach a limit of detection of 4.8 μM for cysteine. The results show that carbon paste microelectrodes can be easily integrated with paper-based analytical devices.

  15. Rapid convergence of optimal control in NMR using numerically-constructed toggling frames

    NASA Astrophysics Data System (ADS)

    Coote, Paul; Anklin, Clemens; Massefski, Walter; Wagner, Gerhard; Arthanari, Haribabu

    2017-08-01

    We present a numerical method for rapidly solving the Bloch equation for an arbitrary time-varying spin-1/2 Hamiltonian. The method relies on fast, vectorized computations such as summation and quaternion multiplication, rather than slow computations such as matrix exponentiation. A toggling frame is constructed in which the Hamiltonian is time-invariant, and therefore has a simple analytical solution. The key insight is that constructing this frame is faster than solving the system dynamics in the original frame. Rapidly solving the Bloch equations for an arbitrary Hamiltonian is particularly useful in the context of NMR optimal control. Optimal control theory can be used to design pulse shapes for a range of tasks in NMR spectroscopy. However, it requires multiple simulations of the Bloch equations at each stage of the algorithm, and for each relevant set of parameters (e.g. chemical shift frequencies). This is typically time consuming. We demonstrate that by working in an appropriate toggling frame, optimal control pulses can be generated much faster. We present a new alternative to the well-known GRAPE algorithm to continuously update the toggling-frame as the optimal pulse is generated, and demonstrate that this approach is extremely fast. The use and benefit of rapid optimal pulse generation is demonstrated for 19F fragment screening experiments.
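
    The computational core the paper exploits, composing piecewise-constant spin rotations with quaternion products rather than matrix exponentials, can be sketched briefly; the pulse shape and field values below are arbitrary examples, not an optimal-control output.

    ```python
    # Composing spin rotations via quaternion products (illustrative fields).
    import numpy as np

    def quat_mul(a, b):
        """Hamilton product of quaternions (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotation_quat(axis, angle):
        axis = axis / np.linalg.norm(axis)
        return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

    # Effective field (rad/s) at each time slice of a shaped pulse (example)
    dt = 1e-5
    fields = [np.array([2e4, 0.0, 1e3 * k]) for k in range(200)]

    q = np.array([1.0, 0.0, 0.0, 0.0])        # identity rotation
    for B in fields:                          # accumulate slice rotations
        q = quat_mul(rotation_quat(B, np.linalg.norm(B) * dt), q)

    # Apply the net rotation to the initial magnetization +z
    v = np.array([0.0, 0.0, 0.0, 1.0])        # pure quaternion (0, 0, 0, 1)
    q_conj = q * np.array([1, -1, -1, -1])
    m = quat_mul(quat_mul(q, v), q_conj)[1:]
    print("final magnetization:", m.round(4))
    ```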

  16. Compact and cost effective instrument for detecting drug precursors in different environments based on fluorescence polarization

    NASA Astrophysics Data System (ADS)

    Antolín-Urbaneja, J. C.; Eguizabal, I.; Briz, N.; Dominguez, A.; Estensoro, P.; Secchi, A.; Varriale, A.; Di Giovanni, S.; D'Auria, S.

    2013-05-01

    Several techniques for detecting chemical drug precursors have been developed in the last decade. Most of them are able to identify molecules at very low concentrations under lab conditions. Other commercial devices can detect a fixed number and type of target substances based on a single detection technique, offering no flexibility with respect to target compounds. The construction of compact, easy-to-use detection systems that screen for a large number of compounds and discriminate among them with a low false-alarm rate and a high probability of detection is still an open concern. Under the CUSTOM project, funded by the European Commission within FP7, a stand-alone portable sensing device based on multiple techniques is being developed. One of these techniques is based on LED-induced fluorescence polarization and, as a first approach, targets ephedrine and benzyl methyl ketone (BMK). This technique is highly selective with respect to the target compounds because it relies on properly engineered fluorescent proteins that bind the target analytes, much as in an "immune-type reaction". This paper deals with advances in the design, construction, and validation of the LED-induced fluorescence sensor for detecting BMK analytes. The sensor includes an analysis module based on a high-performance LED and PMT detector, a fluidic system to dose suitable quantities of reagents, and several printed circuit boards, all fixed in a small structure (167 mm × 193 mm × 228 mm) capable of working as a stand-alone application.

  17. Compilation of a near-infrared library for the construction of quantitative models of amoxicillin and potassium clavulanate oral dosage forms

    NASA Astrophysics Data System (ADS)

    Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin

    2018-05-01

    The accuracy of NIR quantitative models depends on calibration samples with adequate concentration variability. Conventional sample collection methods have shortcomings, especially their time consumption, which remains a bottleneck in the application of NIR models for process analytical technology (PAT) control. A study was performed to solve the problem of sample selection and collection for the construction of NIR quantitative models, using amoxicillin and potassium clavulanate oral dosage forms as examples. The aim was to find a general approach for rapidly constructing NIR quantitative models from an NIR spectral library, based on the idea of a universal model [2021]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. Calibration sets were selected from the spectral library according to the median rT of the samples to be analyzed; the rT of the selected samples was close to the median, differing by only 1.0% to 1.5%. We concluded that, compared with conventional methods of determining universal models, sample selection is no longer a problem when constructing NIR quantitative models from a spectral library: sample spectra spanning a suitable concentration range are collected quickly, and the models constructed through this method are more easily targeted.
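
    The library-based selection idea can be sketched as follows; the array shapes, the synthetic spectra, and the tolerance window are assumptions chosen to mirror the numbers quoted in the abstract.

    ```python
    # Sketch: pick calibration spectra whose correlation with the query
    # spectrum lies near the median rT (synthetic stand-in data).
    import numpy as np

    rng = np.random.default_rng(1)
    library = rng.normal(size=(377, 700))   # 377 library spectra (stand-ins)
    query = rng.normal(size=700)            # spectrum of the sample to analyze

    # correlation of each library spectrum with the query spectrum
    rT = np.array([np.corrcoef(s, query)[0, 1] for s in library])

    median_rT = np.median(rT)
    window = 0.015                          # ~1.0-1.5% band around the median
    chosen = np.where(np.abs(rT - median_rT) <= window)[0]
    print(f"median rT = {median_rT:.3f}; calibration set size = {chosen.size}")
    ```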

  18. Defeat and entrapment: more than meets the eye? Applying network analysis to estimate dimensions of highly correlated constructs.

    PubMed

    Forkmann, Thomas; Teismann, Tobias; Stenzel, Jana-Sophie; Glaesmer, Heide; de Beurs, Derek

    2018-01-25

    Defeat and entrapment have been shown to be of central relevance to the development of different disorders. However, it remains unclear whether they represent two distinct constructs or one overall latent variable. One reason for this lack of clarity is that traditional factor-analytic techniques have trouble estimating the right number of clusters in highly correlated data. In this study, we applied a novel approach based on network analysis that can deal with correlated data to establish whether defeat and entrapment are best thought of as one or multiple constructs. Exploratory graph analysis was used to estimate the number of dimensions within the 32 items that make up the defeat and entrapment scales in two samples: an online community sample of 480 participants, and a clinical sample of 147 inpatients admitted to a psychiatric hospital after a suicide attempt or severe suicidal crisis. Confirmatory factor analysis (CFA) was used to test whether the proposed structure fits the data. In both samples, bootstrapped exploratory graph analysis suggested that the defeat and entrapment items belonged to different dimensions. Within the entrapment items, two separate dimensions were detected, labelled internal and external entrapment. Defeat appeared to be multifaceted only in the online sample. When comparing the CFA outcomes of the one-, two-, three- and four-factor models, the one-factor model was preferred. Defeat and entrapment can thus be viewed as distinct, yet highly associated, constructs. Although replication is needed, the results are in line with theories differentiating between these two constructs.
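
    The essence of exploratory graph analysis, estimating dimensionality by community detection on a network of item correlations, can be roughed out as below. The sketch substitutes simple correlation thresholding for the regularized partial-correlation network used in practice, and the data are synthetic with two planted dimensions.

    ```python
    # Rough sketch of the exploratory-graph-analysis idea (not EGAnet).
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(2)
    n = 500
    latent = rng.normal(size=(n, 2))                   # two true dimensions
    items = np.hstack([latent[:, [0]] + 0.6 * rng.normal(size=(n, 8)),
                       latent[:, [1]] + 0.6 * rng.normal(size=(n, 8))])

    R = np.corrcoef(items, rowvar=False)
    G = nx.Graph()
    G.add_nodes_from(range(items.shape[1]))
    for i in range(items.shape[1]):
        for j in range(i + 1, items.shape[1]):
            if abs(R[i, j]) > 0.3:                     # keep strong edges only
                G.add_edge(i, j, weight=abs(R[i, j]))

    communities = greedy_modularity_communities(G, weight="weight")
    print("estimated number of dimensions:", len(communities))
    ```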

  19. Grade 8 students' capability of analytical thinking and attitude toward science through teaching and learning about soil and its pollution based on a science, technology and society (STS) approach

    NASA Astrophysics Data System (ADS)

    Boonprasert, Lapisarin; Tupsai, Jiraporn; Yuenyong, Chokchai

    2018-01-01

    This study reports Grade 8 students' analytical thinking and attitude toward science during teaching and learning about soil and its pollution through a science, technology and society (STS) approach. The participants were 36 Grade 8 students in Naklang, Nongbualumphu, Thailand. The teaching and learning about soil and its pollution through the STS approach was carried out over 6 weeks. The unit was developed based on the framework of Yuenyong (2006), which consists of five stages: (1) identification of social issues, (2) identification of potential solutions, (3) need for knowledge, (4) decision-making, and (5) socialization. Students' analytical thinking and attitude toward science were assessed during their learning by participant observation, an analytical thinking test, students' tasks, and journal writing. The findings revealed that students developed their capability for analytical thinking: they could generate ideas and display characteristics of analytical thinking such as classifying, comparing and contrasting, reasoning, interpreting, collecting data, and decision making. Students' journal writing reflected that the STS unit on soil and its pollution motivated them. The paper discusses the implications of these findings for science teaching and learning through STS in Thailand.

  20. Solutions of conformal Israel-Stewart relativistic viscous fluid dynamics

    NASA Astrophysics Data System (ADS)

    Marrochio, Hugo; Noronha, Jorge; Denicol, Gabriel S.; Luzum, Matthew; Jeon, Sangyong; Gale, Charles

    2015-01-01

    We use symmetry arguments developed by Gubser to construct the first radially expanding explicit solutions of the Israel-Stewart formulation of hydrodynamics. Along with a general semi-analytical solution, an exact analytical solution is given which is valid in the cold plasma limit where viscous effects from shear viscosity and the relaxation time coefficient are important. The radially expanding solutions presented in this paper can be used as nontrivial checks of numerical algorithms employed in hydrodynamic simulations of the quark-gluon plasma formed in ultrarelativistic heavy ion collisions. We show this explicitly by comparing such analytic and semi-analytic solutions with the corresponding numerical solutions obtained using the MUSIC viscous hydrodynamics simulation code.

  1. Nanotechnology in glucose monitoring: advances and challenges in the last 10 years.

    PubMed

    Scognamiglio, Viviana

    2013-09-15

    In the last decades, a large body of research has focused on the development of biosensors for glucose monitoring, aimed at overcoming the challenges of combining strong analytical performance with commercial viability. Crucial issues such as sensitivity, stability, miniaturisation, and continuous in situ monitoring in complex matrices still keep biosensors from entering the market. A noteworthy tendency of biosensor technology is the push towards nanotechnology, which allows dimensions to be reduced to the nanoscale, permits the construction of arrays for high-throughput analysis with integrated microfluidics, and enhances the performance of the biological components through new nanomaterials. This review aims to highlight current trends in biosensors for glucose monitoring based on nanotechnology, reporting representative examples of nanobiosensor approaches from the past 10 years. Progress in nanotechnology for the development of biosensing systems for blood glucose monitoring is discussed, in view of their design and construction on the basis of the new materials offered by nanotechnology. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Chemical-Specific Representation of Air-Soil Exchange and Soil Penetration in Regional Multimedia Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, T.E.; Bennett, D.H.

    2002-08-01

    In multimedia mass-balance models, the soil compartment is an important sink as well as a conduit for transfers to vegetation and shallow groundwater. Here a novel approach for constructing soil transport algorithms for multimedia fate models is developed and evaluated. The resulting algorithms account for diffusion in gas and liquid components; advection in gas, liquid, or solid phases; and multiple transformation processes. They also provide an explicit quantification of the characteristic soil penetration depth. We construct a compartment model using three and four soil layers to replicate with high reliability the flux and mass distribution obtained from the exact analytical solution describing the transient dispersion, advection, and transformation of chemicals in soil with fixed properties and boundary conditions. Unlike the analytical solution, which requires fixed boundary conditions, the soil compartment algorithms can be dynamically linked to other compartments (air, vegetation, ground water, surface water) in multimedia fate models. We demonstrate and evaluate the performance of the algorithms in a model with applications to benzene, benzo(a)pyrene, MTBE, TCDD, and tritium.
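
    A generic layered compartment model of the kind described can be written down directly: each layer exchanges chemical diffusively with its neighbours, is advected downward, and degrades first-order. All rate coefficients below are illustrative, not the paper's chemical-specific parameters.

    ```python
    # Generic layered soil compartment model (illustrative rates only).
    import numpy as np
    from scipy.integrate import solve_ivp

    n_layers = 4
    k_diff = 0.05    # inter-layer diffusive exchange rate (1/day)
    k_adv = 0.02     # downward advection (leaching) rate (1/day)
    k_deg = 0.01     # first-order degradation rate (1/day)

    def dmdt(t, m):
        dm = np.zeros_like(m)
        for i in range(n_layers):
            if i > 0:                    # exchange with the layer above
                dm[i] += k_diff * (m[i - 1] - m[i]) + k_adv * m[i - 1]
            if i < n_layers - 1:         # exchange with the layer below
                dm[i] += k_diff * (m[i + 1] - m[i])
            dm[i] -= (k_adv + k_deg) * m[i]
        return dm

    m0 = np.array([1.0, 0.0, 0.0, 0.0])  # unit mass deposited in top layer
    sol = solve_ivp(dmdt, (0.0, 365.0), m0, t_eval=[30, 90, 365])
    print(np.round(sol.y, 4))             # mass per layer at 30, 90, 365 days
    ```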

  3. Development of moving spars for active aeroelastic structures

    NASA Astrophysics Data System (ADS)

    Amprikidis, Michael; Cooper, Jonathan E.

    2003-08-01

    This paper describes a research program investigating the development of "moving spars" to enable active aeroelastic control of aerospace structures. A number of different concepts have been considered as part of the EU-funded Active Aeroelastic Aircraft Structures (3AS) project that enable the control of the bending and torsional stiffness of aircraft wings through changes in the internal aircraft structure. The aeroelastic behaviour, in particular static deflections, can be controlled as desired through changes in the position, orientation and stiffness of the spars. The concept described in this paper is based upon translational movement of the spars. This results in changes in the torsional stiffness and shear centre position whilst leaving the bending stiffness unaffected. An analytical study of the aeroelastic behaviour demonstrates the benefits of using such an approach. An experimental investigation involving construction and bench testing of the concepts was undertaken to demonstrate its feasibility. Finally, a wind tunnel test of simple wing models constructed using these concepts was performed. The simulated and experimental results show that it is possible to control the wing twist in practice.

  4. Amorphous topological insulators constructed from random point sets

    NASA Astrophysics Data System (ADS)

    Mitchell, Noah P.; Nash, Lisa M.; Hexner, Daniel; Turner, Ari M.; Irvine, William T. M.

    2018-04-01

    The discovery that the band structure of electronic insulators may be topologically non-trivial has revealed distinct phases of electronic matter with novel properties [1, 2]. Recently, mechanical lattices have been found to have similarly rich structure in their phononic excitations [3, 4], giving rise to protected unidirectional edge modes [5-7]. In all of these cases, however, as well as in other topological metamaterials [3, 8], the underlying structure was finely tuned, be it through periodicity, quasi-periodicity or isostaticity. Here we show that amorphous Chern insulators can be readily constructed from arbitrary underlying structures, including hyperuniform, jammed, quasi-crystalline and uniformly random point sets. While our findings apply to mechanical and electronic systems alike, we focus on networks of interacting gyroscopes as a model system. Local decorations control the topology of the vibrational spectrum, endowing amorphous structures with protected edge modes with a chirality of choice. Using a real-space generalization of the Chern number, we investigate the topology of our structures numerically, analytically and experimentally. The robustness of our approach enables the topological design and self-assembly of non-crystalline topological metamaterials on the micro and macro scale.

  5. Technosocial Modeling of IED Threat Scenarios and Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Brothers, Alan J.; Coles, Garill A.

    2009-03-23

    This paper describes an approach for integrating sociological and technical models to develop a more complete threat assessment. Current approaches to analyzing and addressing threats tend to focus on the technical factors; this paper addresses the development of predictive models that encompass behavioral as well as technical factors. Using improvised explosive device (IED) attacks as motivation, this model supports identification of intervention activities 'left of boom' as well as prioritizing attack modalities. We show how Bayes nets integrate social factors associated with IED attacks into a general threat model containing technical and organizational steps from planning through obtaining the IED to initiation of the attack. The social models are computationally based representations of the relevant social science literature on human decision making and physical factors. When combined with technical models, the resulting model provides improved knowledge integration for threat assessment and monitoring. This paper discusses the construction of IED threat scenarios, integration of diverse factors into an analytical framework for threat assessment, indicator identification for future threats, and future research directions.

  6. Generation of 2N + 1-scroll existence in new three-dimensional chaos systems.

    PubMed

    Liu, Yue; Guan, Jian; Ma, Chunyang; Guo, Shuxu

    2016-08-01

    We propose a systematic methodology for creating 2N + 1-scroll chaotic attractors from a simple three-dimensional system, named the translation chaotic system. It satisfies the condition a12a21 = 0, while the Chua system satisfies a12a21 > 0. In this paper, we also propose an effective design and an analytical approach for constructing 2N + 1-scrolls: the translation transformation principle. The dynamical properties of the system are studied in detail. MATLAB simulation results show very sophisticated dynamical behaviors and unique chaotic behaviors of the system, providing a new approach to 2N + 1-scroll attractors. Finally, to explore the potential use in technological applications, a novel block circuit diagram is designed for the hardware implementation of 1-, 3-, 5-, and 7-scroll attractors via switching. The translation chaotic system has the merits of convenience and high sensitivity to initial values, showing potential for future engineering chaos design.
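
    A hedged illustration of this family of systems: the abstract does not spell out the translation system's equations, so the sketch below integrates the classical Chua circuit (the comparison system named above), whose double-scroll attractor is the simplest member of the multi-scroll class.

    ```python
    # Minimal sketch: integrate the classical Chua circuit with standard
    # double-scroll parameters; this stands in for the multi-scroll class only,
    # not for the translation chaotic system itself.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, beta = 15.6, 28.0           # standard double-scroll parameters
    m0, m1 = -8.0 / 7.0, -5.0 / 7.0    # slopes of the piecewise-linear nonlinearity

    def chua(t, state):
        x, y, z = state
        f = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
        return [alpha * (y - x - f), x - y + z, -beta * y]

    sol = solve_ivp(chua, (0.0, 100.0), [0.7, 0.0, 0.0], dense_output=True, rtol=1e-8)
    x, y, z = sol.sol(np.linspace(20.0, 100.0, 20000))  # discard the transient
    print(f"x range: [{x.min():.2f}, {x.max():.2f}]  (two scrolls around x = +/-1.5)")
    ```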

  7. Process-based network decomposition reveals backbone motif structure

    PubMed Central

    Wang, Guanyu; Du, Chenghang; Chen, Hao; Simha, Rahul; Rong, Yongwu; Xiao, Yi; Zeng, Chen

    2010-01-01

    A central challenge in systems biology today is to understand the network of interactions among biomolecules and, especially, the organizing principles underlying such networks. Recent analysis of known networks has identified small motifs that occur ubiquitously, suggesting that larger networks might be constructed in the manner of electronic circuits by assembling groups of these smaller modules. Using a unique process-based approach to analyzing such networks, we show for two cell-cycle networks that each of these networks contains a giant backbone motif spanning all the network nodes that provides the main functional response. The backbone is in fact the smallest network capable of providing the desired functionality. Furthermore, the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. The process-based approach used in the above analysis has additional benefits: It is scalable, analytic (resulting in a single analyzable expression that describes the behavior), and computationally efficient (all possible minimal networks for a biological process can be identified and enumerated). PMID:20498084

  8. Correlation analysis of targeted proteins and metabolites to assess and engineer microbial isopentenol production.

    PubMed

    George, Kevin W; Chen, Amy; Jain, Aakriti; Batth, Tanveer S; Baidoo, Edward E K; Wang, George; Adams, Paul D; Petzold, Christopher J; Keasling, Jay D; Lee, Taek Soon

    2014-08-01

    The ability to rapidly assess and optimize heterologous pathway function is critical for effective metabolic engineering. Here, we develop a systematic approach to pathway analysis based on correlations between targeted proteins and metabolites and apply it to the microbial production of isopentenol, a promising biofuel. Starting with a seven-gene pathway, we performed a correlation analysis to reduce pathway complexity and identified two pathway proteins as the primary determinants of efficient isopentenol production. Aided by the targeted quantification of relevant pathway intermediates, we constructed and subsequently validated a conceptual model of isopentenol pathway function. Informed by our analysis, we assembled a strain which produced isopentenol at a titer of 1.5 g/L, or 46% of theoretical yield. Our engineering approach allowed us to accurately identify bottlenecks and determine appropriate pathway balance. Paired with high-throughput cloning techniques and analytics, this strategy should prove useful for the analysis and optimization of increasingly complex heterologous pathways. © 2014 Wiley Periodicals, Inc.
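
    The correlation step lends itself to a compact sketch. The data below are synthetic stand-ins (hypothetical strains, protein levels, and titers), not the paper's measurements; the point is only the mechanics of ranking pathway proteins by their correlation with product titer.

    ```python
    # Minimal sketch: Pearson correlation of targeted protein levels with titer
    # across strains, used to rank which proteins most determine production.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 12  # hypothetical strains
    data = pd.DataFrame({
        "ProteinA": rng.uniform(0.1, 1.0, n),
        "ProteinB": rng.uniform(0.1, 1.0, n),
    })
    # Hypothetical titer dominated by ProteinA, plus measurement noise.
    data["titer_g_per_L"] = 1.5 * data["ProteinA"] + 0.1 * rng.normal(size=n)

    # Correlation of each targeted protein with titer; the largest value flags
    # the primary determinant in this synthetic example.
    print(data.corr()["titer_g_per_L"].drop("titer_g_per_L"))
    ```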

  9. Universality in survivor distributions: Characterizing the winners of competitive dynamics

    NASA Astrophysics Data System (ADS)

    Luck, J. M.; Mehta, A.

    2015-11-01

    We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.

  10. The Peabody Treatment Progress Battery: history and methods for developing a comprehensive measurement battery for youth mental health.

    PubMed

    Riemer, Manuel; Athay, M Michele; Bickman, Leonard; Breda, Carolyn; Kelley, Susan Douglas; Vides de Andrade, Ana R

    2012-03-01

    There is increased need for comprehensive, flexible, and evidence-based approaches to measuring the process and outcomes of youth mental health treatment. This paper introduces a special issue dedicated to the Peabody Treatment Progress Battery (PTPB), a battery of measures created to meet this need. The PTPB is an integrated set of brief, reliable, and valid instruments that can be administered efficiently at low cost and can provide systematic feedback for use in treatment planning. It includes eleven measures completed by youth, caregivers, and/or clinicians that assess clinically relevant constructs such as symptom severity, therapeutic alliance, life satisfaction, motivation for treatment, hope, treatment expectations, caregiver strain, and service satisfaction. This introductory article describes the rationale for the PTPB and its development and evaluation, detailing the specific analytic approaches utilized by the different papers in the special issue and describing the study and samples from which the participants were drawn.

  11. P3: a practice focused learning environment

    NASA Astrophysics Data System (ADS)

    Irving, Paul W.; Obsniuk, Michael J.; Caballero, Marcos D.

    2017-09-01

    There has been an increased focus on the integration of practices into physics curricula, with a particular emphasis on integrating computation into the undergraduate curriculum of scientists and engineers. In this paper, we present a university-level, introductory physics course for science and engineering majors at Michigan State University called P3 (projects and practices in physics) that is centred around providing introductory physics students with the opportunity to appropriate various science and engineering practices. The P3 design integrates computation with analytical problem solving and is built upon a curriculum foundation of problem-based learning, the principles of constructive alignment and the theoretical framework of community of practice. The design includes an innovative approach to computational physics instruction, instructional scaffolds, and a unique approach to assessment that enables instructors to guide students in the development of the practices of a physicist. We present the very positive student-related outcomes of the design, gathered via attitudinal and conceptual inventories and research interviews of students reflecting on their experiences in the P3 classroom.

  12. Magnetic exchange couplings from constrained density functional theory: an efficient approach utilizing analytic derivatives.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2011-11-14

    We introduce a method for evaluating magnetic exchange couplings based on the constrained density functional theory (C-DFT) approach of Rudra, Wu, and Van Voorhis [J. Chem. Phys. 124, 024103 (2006)]. Our method shares the same physical principles as C-DFT but makes use of the fact that the electronic energy changes quadratically and bilinearly with respect to the constraints in the range of interest. This allows us to use coupled perturbed Kohn-Sham spin density functional theory to determine approximately the corrections to the energy of the different spin configurations and construct a priori the relevant energy-landscapes obtained by constrained spin density functional theory. We assess this methodology in a set of binuclear transition-metal complexes and show that it reproduces very closely the results of C-DFT. This demonstrates a proof-of-concept for this method as a potential tool for studying a number of other molecular phenomena. Additionally, routes to improving upon the limitations of this method are discussed. © 2011 American Institute of Physics
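
    Schematically, and under a common (not necessarily the authors') convention for the Heisenberg mapping, the idea can be summarized as:

    ```latex
    % Schematic, not the paper's exact derivation: the energy is expanded to
    % second order in the constraint lambda, so that coupled-perturbed Kohn-Sham
    % derivatives suffice to reconstruct the constrained-energy landscape,
    E(\lambda) \approx E(0) + \frac{\partial E}{\partial \lambda}\,\lambda
               + \frac{1}{2}\,\frac{\partial^2 E}{\partial \lambda^2}\,\lambda^2 ,
    % and the exchange coupling then follows from energy differences of spin
    % configurations. In the Ising (non-projected) limit of the mapping
    % \hat{H} = -2 J\, \hat{S}_1 \cdot \hat{S}_2, one common convention gives
    J = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}{4\, S_1 S_2} ,
    % where the prefactor depends on the spin-projection convention adopted.
    ```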

  13. Construction of avulsion potential zone model for Kulik River of Barind Tract, India and Bangladesh.

    PubMed

    Sarkar, Debabrata; Pal, Swades

    2018-04-21

    Avulsion is a natural fluvial process, but it is considered a hazard in populated regions because of the potential for immense loss of life and property. Early warning of the zones at risk of avulsion can therefore help the people living there. About 317 local and regional historical imprints of channel cutoff along the river Kulik demonstrate the need for this work. The present study identifies the avulsion potential zone (APZ) of the Kulik river of Indo-Bangladesh using a multi-parametric weighted combination approach. The analytic hierarchy process (AHP) is applied for weighting the parameters used. The avulsion potential model clearly shows that a 9.51-km stream segment of the middle and lower catchment is highly susceptible to avulsion, especially during sudden high-discharge and earthquake events. There is also a high chance of channel avulsion following the existing paleo-avulsion courses and left channels. Hard points can be erected alongside the main channel to resist the channel avulsion propensity.
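
    The AHP weighting step can be sketched as follows; the 3x3 pairwise-comparison matrix is a hypothetical example on the Saaty scale, not the matrix used in the study.

    ```python
    # Minimal sketch of AHP weighting: priority weights from the principal
    # eigenvector of a pairwise-comparison matrix, plus Saaty's consistency check.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])   # hypothetical comparisons of 3 parameters

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                               # normalized priority weights

    # Consistency ratio: CR < 0.1 is conventionally acceptable.
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # random consistency index
    print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))
    ```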

  14. A set of parallel, implicit methods for a reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids

    DOE PAGES

    Xia, Yidong; Luo, Hong; Frisbey, Megan; ...

    2014-07-01

    A set of implicit methods is proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the use of a Jacobian matrix based on the P1 element approximation, resulting in a large reduction in memory requirements compared with DG(P2). Three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented to construct the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence property of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
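
    Of the three Jacobian-construction routes named above, divided differencing is the easiest to sketch. The residual below is a stand-in for the discretized flow equations, not the WENO/DG operator itself.

    ```python
    # Minimal sketch of the divided-differencing route: columns of the Jacobian
    # are one-sided difference quotients of the residual vector.
    import numpy as np

    def numerical_jacobian(residual, u, eps=1e-7):
        """Approximate J[i, j] = d residual_i / d u_j by forward differences."""
        r0 = residual(u)
        J = np.empty((r0.size, u.size))
        for j in range(u.size):
            du = np.zeros_like(u)
            du[j] = eps * max(1.0, abs(u[j]))   # scale the step to the state
            J[:, j] = (residual(u + du) - r0) / du[j]
        return J

    # Hypothetical nonlinear residual standing in for the discretized equations.
    resid = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] - np.sin(u[1])])
    print(numerical_jacobian(resid, np.array([1.0, 2.0])))
    ```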

  15. Advance Directive in End of Life Decision-Making among the Yoruba of South-Western Nigeria

    PubMed Central

    Jegede, Ayodele Samuel; Adegoke, Olufunke Olufunsho

    2017-01-01

    End-of-life decision making is value-laden within the context of culture and bioethics. The role of ethics committees in this area is also difficult to understand, hence the need for an ethnomethodological perspective in an expanding bioethical age. An anthropological approach was utilized to document the Yoruba definition and perspective of death, cultural beliefs about end-of-life decision making, the factors influencing it, and the role of ethics committees. Interviews were conducted among selected Yoruba residents in Akinyele LGA, Oyo State, Nigeria. A content analytical approach was used for data analysis. In Yoruba culture, death is socially constructed, having spiritual, physical and social significance. The relationship between the dying and significant others influences decision making. A hierarchy of authority informs the implementation of traditional advance directives. Socialization, gender, patriarchy, religious belief and tradition are major considerations in end-of-life decision making. Awareness, resource allocation and advocacy are important roles for ethics committees. Further research into the cultural diversity of end-of-life decision making will strengthen ethical practice in health care delivery. PMID:28344984

  16. Advance Directive in End of Life Decision-Making among the Yoruba of South-Western Nigeria.

    PubMed

    Jegede, Ayodele Samuel; Adegoke, Olufunke Olufunsho

    2016-11-01

    End-of-life decision making is value-laden within the context of culture and bioethics. The role of ethics committees in this area is also difficult to understand, hence the need for an ethnomethodological perspective in an expanding bioethical age. An anthropological approach was utilized to document the Yoruba definition and perspective of death, cultural beliefs about end-of-life decision making, the factors influencing it, and the role of ethics committees. Interviews were conducted among selected Yoruba residents in Akinyele LGA, Oyo State, Nigeria. A content analytical approach was used for data analysis. In Yoruba culture, death is socially constructed, having spiritual, physical and social significance. The relationship between the dying and significant others influences decision making. A hierarchy of authority informs the implementation of traditional advance directives. Socialization, gender, patriarchy, religious belief and tradition are major considerations in end-of-life decision making. Awareness, resource allocation and advocacy are important roles for ethics committees. Further research into the cultural diversity of end-of-life decision making will strengthen ethical practice in health care delivery.

  17. Solving an inverse eigenvalue problem with triple constraints on eigenvalues, singular values, and diagonal elements

    NASA Astrophysics Data System (ADS)

    Wu, Sheng-Jhih; Chu, Moody T.

    2017-08-01

    An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.
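
    For orientation, the three classical results being generalized can be paraphrased as follows (arrays ordered by decreasing modulus; the paper should be consulted for the precise statements):

    ```latex
    % Hedged paraphrase of the three classical majorization results.
    % Mirsky (eigenvalues vs. diagonal): a matrix with eigenvalues \lambda_i and
    % diagonal entries d_i exists iff the traces agree,
    \sum_{i=1}^{n} d_i = \sum_{i=1}^{n} \lambda_i .
    % Weyl--Horn (eigenvalues vs. singular values \sigma_i):
    \prod_{i=1}^{k} |\lambda_i| \le \prod_{i=1}^{k} \sigma_i \ (k < n), \qquad
    \prod_{i=1}^{n} |\lambda_i| = \prod_{i=1}^{n} \sigma_i .
    % Sing--Thompson (diagonal vs. singular values):
    \sum_{i=1}^{k} |d_i| \le \sum_{i=1}^{k} \sigma_i \ (\forall k), \qquad
    \sum_{i=1}^{n-1} |d_i| - |d_n| \le \sum_{i=1}^{n-1} \sigma_i - \sigma_n .
    ```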

  18. Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia

    The problem of stochastic nonlinear analysis of neuronal activity is studied using the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into a bursting one. This stochastic phenomenon is specified by qualitative changes in the distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains, which allows us to describe geometrically the distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimating the critical values of the noise intensity corresponding to the qualitative changes in the stochastic dynamics. We show that the obtained estimations are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
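
    A minimal sketch of the setting (not of the SSF machinery itself): Euler-Maruyama integration of the HR model with additive noise on the fast variable. The parameter values are common textbook choices and are illustrative only.

    ```python
    # Minimal sketch: stochastic Hindmarsh-Rose model integrated by
    # Euler-Maruyama, with ISIs extracted from threshold crossings.
    import numpy as np

    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x0, I = 0.006, 4.0, -1.6, 4.0   # I = 4.0: often quoted as tonic spiking
    sigma = 0.05                           # noise intensity (illustrative)
    dt, steps = 0.01, 200_000

    rng = np.random.default_rng(1)
    x, y, z = -1.0, 0.0, 2.0
    xs = np.empty(steps)
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x += (y - a * x**3 + b * x**2 - z + I) * dt + sigma * dW
        y += (c - d * x**2 - y) * dt
        z += r * (s * (x - x0) - z) * dt
        xs[k] = x

    # Interspike intervals from upward threshold crossings of the fast variable.
    crossings = np.flatnonzero((xs[:-1] < 1.0) & (xs[1:] >= 1.0))
    print("mean ISI:", np.diff(crossings).mean() * dt)
    ```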

  19. The Interactional Construction of Identity: An Adolescent with Autism in Interaction with Peers

    ERIC Educational Resources Information Center

    Bottema-Beutel, Kristen; Smith, Nevin

    2013-01-01

    Using discourse analytic methodology, this study examines video data collected during a social group intervention designed to promote engagement between teens with autism spectrum disorders (ASDs) and their peers. The analysis focuses on the interactive means by which the participants construct the identity of the group member with an ASD,…

  20. By the Book: An Analysis of Adolescents with Autism Spectrum Condition Co-Constructing Fictional Narratives with Peers

    ERIC Educational Resources Information Center

    Bottema-Beutel, Kristen; White, Rachael

    2016-01-01

    In this discourse analytic study, we examine interactions between adolescents with autism spectrum condition (ASC) and their typically developing (TD) peers during the construction of fictional narratives within a group intervention context. We found participants with ASC contributed fewer narrative-related turns at talk than TD participants. The…

  1. An Updated Typology of Causative Constructions: Form-Function Mappings in Hupa (California Athabaskan), Chungli Ao (Tibeto-Burman) and Beyond

    ERIC Educational Resources Information Center

    Escamilla, Ramon Matthew, Jr.

    2012-01-01

    Taking up analytical issues raised primarily in Dixon (2000) and Dixon & Aikhenvald (2000), this dissertation combines descriptive work with a medium-sized (50-language) typological study. Chapter 1 situates the dissertation against a concise survey of typological-functional work on causative constructions from the last few decades, and…

  2. A Meta-Analytic Examination of the Construct Validity of the Michigan Organizational Assessment Questionnaire Job Satisfaction Subscale

    ERIC Educational Resources Information Center

    Bowling, Nathan A.; Hammond, Gregory D.

    2008-01-01

    Although several different measures have been developed to assess job satisfaction, large-scale examinations of the psychometric properties of most satisfaction scales are generally lacking. In the current study we used meta-analysis to examine the construct validity of the Michigan Organizational Assessment Questionnaire Job Satisfaction Subscale…

  3. The Role of Gender in Distance Learning: A Meta-Analytic Review of Gender Differences in Academic Performance and Self-Efficacy in Distance Learning

    ERIC Educational Resources Information Center

    Perkowski, Justine

    2013-01-01

    This meta-analytic review was performed to determine the relationship between gender and two constructs measuring success in distance learning--academic performance and self-efficacy--with a particular interest in identifying whether females or males have an advantage in distance learning environments. Data from 15 studies resulted in 18 effect…

  4. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
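
    The representation idea admits a very small sketch: a molecule (or isomer set) becomes a vector of structural-group counts from which properties follow by group contribution. The two groups and increments below are illustrative, not Quann's actual set.

    ```python
    # Minimal sketch of the structure-oriented representation: molecules as
    # vectors of structural-group counts, with a property computed by group
    # contribution. Groups and increments are illustrative only.
    from collections import Counter

    GROUP_MW = {"CH3": 15.035, "CH2": 14.027}   # group formula weights, g/mol

    def n_alkane(n_carbons: int) -> Counter:
        """Structure-oriented representation of a linear alkane."""
        return Counter({"CH3": 2, "CH2": n_carbons - 2})

    def mol_weight(groups: Counter) -> float:
        return sum(GROUP_MW[g] * k for g, k in groups.items())

    hexane = n_alkane(6)
    print(dict(hexane), f"MW = {mol_weight(hexane):.2f} g/mol")  # ~86.18 for C6H14
    ```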

  5. Chemical imaging of drug delivery systems with structured surfaces-a combined analytical approach of confocal raman microscopy and optical profilometry.

    PubMed

    Kann, Birthe; Windbergs, Maike

    2013-04-01

    Confocal Raman microscopy is an analytical technique with a steadily increasing impact in the field of pharmaceutics, as the instrumental setup allows for nondestructive visualization of component distribution within drug delivery systems. Here, attention is mainly focused on classic solid carrier systems like tablets, pellets, or extrudates. Due to the opacity of these systems, Raman analysis is restricted either to exterior surfaces or cross sections. As Raman spectra are only recorded from one focal plane at a time, the sample is usually altered to create a smooth and even surface. However, this manipulation can lead to misinterpretation of the analytical results. Here, we present a trendsetting approach to overcome these analytical pitfalls with a combination of confocal Raman microscopy and optical profilometry. By acquiring a topography profile of the sample area of interest prior to Raman spectroscopy, the profile height information made it possible to level the focal plane to the sample surface for each spectrum acquisition. We first demonstrated the basic principle of this complementary approach in a case study using a tilted silica wafer. In a second step, we successfully adapted the two techniques to investigate an extrudate and a lyophilisate as two exemplary solid drug carrier systems. Component distribution analysis with the novel analytical approach was hampered neither by the curvature of the cylindrical extrudate nor by the highly structured surface of the lyophilisate. Therefore, the combined analytical approach bears great potential to be implemented in diversified fields of pharmaceutical sciences.

  6. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is illustrated with hydrothermodynamics and atmospheric chemistry models [1,2]. Developing the existing methods for constructing numerical schemes that possess the property of total approximation for operators of multiscale process models, we have devised a new variational technique which uses the concept of adjoint integrating factors.

    The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations, and we obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within a finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed; these solutions of the adjoint equations serve as integrating factors.

    The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables, and they are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction they have tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of the derivatives at the boundaries of the finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable to problems involving convection-diffusion-reaction operators.

    The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482.

    References:
    1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies. Computers and Mathematics with Applications, 2014, V. 67, Issue 12, P. 2240-2256.
    2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models. Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220.
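
    The sweep method mentioned above is the standard Thomas algorithm for tridiagonal systems; a minimal sketch (not the authors' code) follows.

    ```python
    # Minimal sketch of the sweep (Thomas) algorithm for tridiagonal systems:
    # a, b, c are the sub-, main and super-diagonals and d the right-hand side.
    import numpy as np

    def thomas(a, b, c, d):
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward sweep
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: a 1D diffusion-like system, -x_{i-1} + 3 x_i - x_{i+1} = 1.
    n = 5
    print(thomas(np.full(n, -1.0), np.full(n, 3.0), np.full(n, -1.0), np.ones(n)))
    ```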

  7. BLUES function method in computational physics

    NASA Astrophysics Data System (ADS)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
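
    Symbolically, the definition given above can be restated as follows (schematic; see the paper for the precise construction):

    ```latex
    % Schematic restatement of the definition above: the BLUES function B(x)
    % satisfies both the nonlinear equation with a delta source and the related
    % linear equation,
    \mathcal{N}_x\,[B(x)] = \delta(x) \qquad\text{and}\qquad
    \mathcal{L}_x\,[B(x)] = \delta(x).
    % The convolution with an arbitrary source f,
    u(x) = (B \ast f)(x) = \int B(x - y)\, f(y)\, \mathrm{d}y,
    % then solves the nonlinear equation exactly for a different but related
    % source, or seeds a piecewise-analytical approximation for f itself.
    ```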

  8. Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model

    NASA Astrophysics Data System (ADS)

    Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.

    2017-10-01

    Using a remarkable mapping from the original (3 + 1)-dimensional Skyrme model to the sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3 + 1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.

  9. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.

  10. Microarray gene-expression study in fibroblast and lymphoblastoid cell lines from antipsychotic-naïve first-episode schizophrenia patients.

    PubMed

    Gassó, Patricia; Mas, Sergi; Rodríguez, Natalia; Boloc, Daniel; García-Cerro, Susana; Bernardo, Miquel; Lafuente, Amalia; Parellada, Eduard

    2017-12-01

    Schizophrenia (SZ) is a chronic psychiatric disorder whose onset of symptoms occurs in late adolescence and early adulthood. The etiology is complex and involves important gene-environment interactions. Microarray gene-expression studies on SZ have identified alterations in several biological processes. The heterogeneity in the results can be attributed to the use of different sample types and other important confounding factors including age, illness chronicity and antipsychotic exposure. The aim of the present microarray study was to analyze, for the first time to our knowledge, differences in gene expression profiles in 18 fibroblast (FCLs) and 14 lymphoblastoid cell lines (LCLs) from antipsychotic-naïve first-episode schizophrenia (FES) patients and healthy controls. We used an analytical approach based on protein-protein interaction network construction and functional annotation analysis to identify the biological processes that are altered in SZ. Significant differences in the expression of 32 genes were found when LCLs were assessed. The network and gene set enrichment approach revealed the involvement of similar biological processes in FCLs and LCLs, including apoptosis and related biological terms such as cell cycle, autophagy, cytoskeleton organization and response to stress and stimulus. Metabolism and other processes, including signal transduction, kinase activity and phosphorylation, were also identified. These results were replicated in two independent cohorts using the same analytical approach. This provides more evidence for altered apoptotic processes in antipsychotic-naïve FES patients and other important biological functions such as cytoskeleton organization and metabolism. The convergent results obtained in both peripheral cell models support their usefulness for transcriptome studies on SZ. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Calibration approach for extremely variable laser induced plasmas and a strategy to reduce the matrix effect in general

    NASA Astrophysics Data System (ADS)

    Lazic, V.; De Ninno, A.

    2017-11-01

    Laser-induced plasma spectroscopy was applied to particles attached to a substrate, a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, here detected by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and an initial image calibration for different sample loadings. Consequently, the spectral lines from the particles show extreme intensity fluctuations from one sampling point to another, between the detection threshold and the detector's saturation in some cases. Under such conditions the common calibration approach based on averaged spectra, even when considering ratios of the element lines (i.e., concentrations), produces errors too large for measuring the sample compositions. On the other hand, the intensities of an analytical and a reference line from single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature inside the data set. Using the slopes to construct the calibration graphs significantly reduces the error bars, but it does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying couples of transitions less sensitive to variations of the plasma temperature, and this was achieved by simple theoretical simulations. Such a selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, makes it possible to measure the relative element concentrations even in highly unstable laser-induced plasmas.
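
    The slope-based calibration can be sketched compactly; all intensities below are synthetic stand-ins for measured single-shot spectra.

    ```python
    # Minimal sketch: for each reference sample, fit a line through the origin
    # to single-shot (analyte, reference) line intensities; the slopes then
    # calibrate against the known concentration ratios.
    import numpy as np

    rng = np.random.default_rng(2)

    def shot_slope(conc_ratio, n_shots=200, mass_spread=10.0):
        ref = rng.uniform(1.0, mass_spread, n_shots)             # reference line
        ana = conc_ratio * ref * rng.normal(1.0, 0.05, n_shots)  # analyte line
        return np.sum(ana * ref) / np.sum(ref**2)                # LSQ slope

    ratios = np.array([0.1, 0.2, 0.5, 1.0])    # known concentration ratios
    slopes = np.array([shot_slope(r) for r in ratios])
    coef = np.polyfit(ratios, slopes, 1)       # calibration line
    print("slope, intercept:", np.round(coef, 3))
    ```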

  12. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  13. Fabrication of antibody microarrays by light-induced covalent and oriented immobilization.

    PubMed

    Adak, Avijit K; Li, Ben-Yuan; Huang, Li-De; Lin, Ting-Wei; Chang, Tsung-Che; Hwang, Kuo Chu; Lin, Chun-Cheng

    2014-07-09

    Antibody microarrays have important applications for the sensitive detection of biologically important target molecules and as biosensors for clinical applications. Microarrays produced by oriented immobilization of antibodies generally have higher antigen-binding capacities than those in which antibodies are immobilized with random orientations. Here, we present a UV photo-cross-linking approach that utilizes boronic acid to achieve oriented immobilization of an antibody on a surface while retaining the antigen-binding activity of the immobilized antibody. A photoactive boronic acid probe was designed and synthesized in which boronic acid provided good affinity and specificity for the recognition of glycan chains on the Fc region of the antibody, enabling covalent tethering to the antibody upon exposure to UV light. Once irradiated with optimal UV exposure (16 mW/cm(2)), significant antibody immobilization on a boronic acid-presenting surface with maximal antigen detection sensitivity in a single step was achieved, thus obviating the necessity of prior antibody modifications. The developed approach is highly modular, as demonstrated by its implementation in sensitive sandwich immunoassays for the protein analytes Ricinus communis agglutinin 120, human prostate-specific antigen, and interleukin-6 with limits of detection of 7.4, 29, and 16 pM, respectively. Furthermore, the present system enabled the detection of multiple analytes in samples without any noticeable cross-reactivities. Antibody coupling via the use of boronic acid and UV light represents a practical, oriented immobilization method with significant implications for the construction of a large array of immunosensors for diagnostic applications.

  14. The receiver operational characteristic for binary classification with multiple indices and its application to the neuroimaging study of Alzheimer's disease.

    PubMed

    Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei

    2013-01-01

    Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis.
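
    The logical-combination idea reduces to a few lines of array logic. The sketch below uses synthetic indices and labels, not the AD datasets, and thresholds chosen only for illustration.

    ```python
    # Minimal sketch of multiV-ROC-style combination: binary calls from several
    # thresholded indices merged with AND / OR / at-least-n rules and scored
    # against labels for sensitivity and specificity.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    labels = rng.integers(0, 2, n).astype(bool)          # True = patient
    # Three hypothetical indices, each weakly separating the groups.
    idx = rng.normal(0.0, 1.0, (3, n)) + 0.8 * labels

    calls = idx > 0.4                                    # per-index binary calls

    def sens_spec(pred):
        return np.mean(pred[labels]), np.mean(~pred[~labels])

    for name, rule in [("AND", calls.all(axis=0)),
                       ("OR", calls.any(axis=0)),
                       ("at least 2", calls.sum(axis=0) >= 2)]:
        sens, spec = sens_spec(rule)
        print(f"{name:>10}: sens={sens:.2f}, spec={spec:.2f}")
    ```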

  15. The Receiver Operational Characteristic for Binary Classification with Multiple Indices and Its Application to the Neuroimaging Study of Alzheimer’s Disease

    PubMed Central

    Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei

    2014-01-01

    Given a single index, the receiver operational characteristic (ROC) curve analysis is routinely utilized for characterizing performances in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis. PMID:23702553

  16. Modern Analytical Chemistry in the Contemporary World

    ERIC Educational Resources Information Center

    Šíma, Jan

    2016-01-01

    Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of the sorcery where analytical chemists working as modern wizards handle magical black boxes able to provide fascinating results. However, this approach is evidently improper and misleading. Therefore, the position of modern analytical chemistry among…

  17. Combining Imagery and Models to Understand River Dynamics

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Mied, R. P.; Linzell, R. S.

    2014-12-01

    Rivers pose one of the most challenging environments to characterize. Their geometric complexity and continually changing position and character are difficult to measure under optimal circumstances. Further compounding the problem is the frequent inaccessibility of these areas around the globe. Yet details of the river bank position and bed elevation are essential elements in the construction of accurate predictive river models. To meet this challenge, remote sensing imagery is first used to initialize the construction of advanced high resolution river circulation models. In turn, such models are applied to dynamically interpret remotely sensed surface features. A method has been developed to automatically extract water and shoreline locations from arbitrarily sourced high resolution (~1 m gsd) visual spectrum imagery without recourse to the spectral or color information. The approach relies on quantifying the difference in image texture between the relatively smooth water surface and the comparatively rough surface of surrounding land. Processing the segmented land/water interface results in ordered, continuous shoreline coordinates that bound river model construction. In the absence of observed bed elevations, one of several available analytic bathymetry cross-sectional relations is applied to complete the river model configuration. Successful application of this approach to the Snohomish River, WA and the Pearl River, MS is demonstrated. Once constructed, the hydrodynamic river model can also be applied to unravel the dynamics responsible for surface features observed in the imagery. At a creek-river confluence in the Potomac River, MD, an ebb-tide front observed in the imagery is analyzed using the model. The result is the knowledge that an ebb shoal located just outside of the creek must be present and is essential for front formation. Furthermore, the front is found to be persistent throughout the tidal cycle, although it changes sign between ebb and flood phases. The presence of the creek only minimally modifies the underlying currents.
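
    The texture-based land/water segmentation can be sketched with a windowed-variance filter; the image below is synthetic, and the window size and threshold are illustrative, not the values used in the study.

    ```python
    # Minimal sketch: local variance is low over smooth water and high over
    # rough land, so thresholding a windowed variance map separates the two
    # without using color. The image here is synthetic.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(4)
    img = rng.normal(0.5, 0.02, (200, 200))             # smooth 'water'
    img[:, 100:] += rng.normal(0.0, 0.2, (200, 100))    # rough 'land' at right

    def local_variance(x, size=9):
        mean = uniform_filter(x, size)
        return uniform_filter(x * x, size) - mean * mean

    water = local_variance(img) < 0.005    # threshold picked for this example
    print("water fraction:", water.mean())  # ~0.5 for this synthetic scene
    ```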

  18. Development and optimization of SPE-HPLC-UV/ELSD for simultaneous determination of nine bioactive components in Shenqi Fuzheng Injection based on Quality by Design principles.

    PubMed

    Wang, Lu; Qu, Haibin

    2016-03-01

    A method combining solid phase extraction, high performance liquid chromatography, and ultraviolet/evaporative light scattering detection (SPE-HPLC-UV/ELSD) was developed according to Quality by Design (QbD) principles and used to assay nine bioactive compounds within a botanical drug, Shenqi Fuzheng Injection. Risk assessment and a Plackett-Burman design were utilized to evaluate the impact of 11 factors on the resolutions and signal-to-noise of chromatographic peaks. Multiple regression and Pareto ranking analysis indicated that the sorbent mass, sample volume, flow rate, column temperature, evaporator temperature, and gas flow rate were statistically significant (p < 0.05) in this procedure. Furthermore, a Box-Behnken design combined with response surface analysis was employed to study the relationships between the quality of SPE-HPLC-UV/ELSD analysis and four significant factors, i.e., flow rate, column temperature, evaporator temperature, and gas flow rate. An analytical design space of SPE-HPLC-UV/ELSD was then constructed from calculated Monte Carlo probabilities. In the presented approach, the operating parameters of sample preparation, chromatographic separation, and compound detection were investigated simultaneously. Eight aspects of method validation, i.e., system-suitability tests, method robustness/ruggedness, sensitivity, precision, repeatability, linearity, accuracy, and stability, were assessed at a selected working point. These results revealed that the QbD principles were suitable in the development of analytical procedures for samples in complex matrices. Meanwhile, the analytical quality and method robustness were validated by the analytical design space. The presented strategy provides a tutorial on the development of a robust QbD-compliant quantitative method for samples in complex matrices.
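
    The Monte Carlo design-space step can be sketched as follows; the quadratic response surface, noise levels, and acceptance limit are hypothetical placeholders for the fitted models.

    ```python
    # Minimal sketch of a Monte Carlo design-space probability: sample operating
    # parameters around a working point, push them through a fitted response
    # model, and report the probability that the quality criterion is met.
    import numpy as np

    rng = np.random.default_rng(5)

    def resolution(flow, temp):
        """Hypothetical fitted response surface for a critical peak resolution."""
        return 2.0 - 1.5 * (flow - 1.0) ** 2 - 0.8 * ((temp - 30.0) / 10.0) ** 2

    def success_probability(flow0, temp0, n=10_000):
        flow = rng.normal(flow0, 0.02, n)    # mL/min, with operating noise
        temp = rng.normal(temp0, 0.5, n)     # column temperature, deg C
        return np.mean(resolution(flow, temp) >= 1.5)

    print("P(Rs >= 1.5) at (1.0 mL/min, 30 C):", success_probability(1.0, 30.0))
    ```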

  19. Atomic density functional and diagram of structures in the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankudinov, V. E., E-mail: vladimir@ankudinov.org; Galenko, P. K.; Kropotin, N. V.

    2016-02-15

    The phase field crystal model provides a continual description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.

  20. Analytical approach for collective diffusion: One-dimensional lattice with the nearest neighbor and the next nearest neighbor lateral interactions

    NASA Astrophysics Data System (ADS)

    Tarasenko, Alexander

    2018-01-01

    Diffusion of particles adsorbed on a homogeneous one-dimensional lattice is investigated using a theoretical approach and MC simulations. The analytical dependencies calculated in the framework of this approach are tested against the numerical data. The perfect coincidence of the data obtained by these two different methods demonstrates the correctness of the approach, which is based on the theory of the non-equilibrium statistical operator.

  1. Multi-scale modeling of solids as a composite of quantum mechanical (QM) and classical mechanical (CM) domains

    NASA Astrophysics Data System (ADS)

    Mallik, Aditi

    2005-11-01

    This thesis combines a creative project with an analytical discussion of that project. The creative project involves the creation of an approach to intermedia which I call "photonarrative." That is, photonarrative is a unique combining of various color photographs with quotations drawn from one particular work of literature. The work of literature to which the photographs are connected is not identified; nor do the photographs "illustrate" events from the work. In photonarrative, as I will show, the combining leads the viewer to a heightened level of interpretation and "seeing." The analytical portion of the thesis will include not only a recreation of the artistic process by which I designed and constructed this specific example of photonarrative, but also a theoretical consideration of the implications of this joining of the visual with the verbal for a general theory of aesthetics. In accordance with the interdisciplinary philosophy of the School of Arts and Humanities, this thesis involves a fusion of creative and critical acuity. The creative aspect involves a merging of verbal with visual art. The critical analysis touches on such disparate disciplines as literary theory, psychology, and aesthetics. (Abstract shortened by UMI.)

  2. A novel control algorithm for interaction between surface waves and a permeable floating structure

    NASA Astrophysics Data System (ADS)

    Tsai, Pei-Wei; Alsaedi, A.; Hayat, T.; Chen, Cheng-Wu

    2016-04-01

    An analytical solution is developed to describe the wave-induced flow field and the surge motion of a permeable platform structure with fuzzy controllers in an oceanic environment. In the design procedure of the controller, a parallel distributed compensation (PDC) scheme is utilized to construct a global fuzzy logic controller by blending all local state feedback controllers. A stability analysis is carried out for a real structure system by using the Lyapunov method. The corresponding boundary value problems are then incorporated into scattering and radiation problems. They are analytically solved, based on separation of variables, to obtain series solutions in terms of the harmonic incident wave motion and surge motion. The dependence of the wave-induced flow field and its resonant frequency on wave characteristics and structure properties, including platform width, thickness and mass, has thus been drawn with a parametric approach, from which mathematical models are applied for the wave-induced displacement of the surge motion. A nonlinearly inverted pendulum system is employed to demonstrate that the controller tuned by a swarm intelligence method can not only stabilize the nonlinear system but also has robustness against external disturbance.

  3. An Integrated Model for Supplier Selection for a High-Tech Manufacturer

    NASA Astrophysics Data System (ADS)

    Lee, Amy H. I.; Kang, He-Yau; Lin, Chun-Yu

    2011-11-01

    Global competitiveness has become the biggest concern of manufacturing companies, especially in high-tech industries. Improving competitive edges in an environment with rapidly changing technological innovations and dynamic customer needs is essential for a firm to survive and to acquire a decent profit. Thus, the introduction of successful new products is a source of new sales and profits and is a necessity in the intensely competitive international market. After a product is developed, a firm needs the cooperation of upstream suppliers to provide satisfactory components and parts for manufacturing final products. Therefore, the selection of suitable suppliers has also become a very important decision. In this study, an analytical approach is proposed to select the most appropriate critical-part suppliers in order to maintain a high reliability of the supply chain. A fuzzy analytic network process (FANP) model, which incorporates the benefits, opportunities, costs and risks (BOCR) concept, is constructed to evaluate various aspects of suppliers. The proposed model is adopted in a TFT-LCD manufacturer in Taiwan in evaluating the expected performance of suppliers with respect to each important factor, and an overall ranking of the suppliers can be generated as a result.

  4. An efficient and numerically stable procedure for generating sextic force fields in normal mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz

    In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3 mode representation quartic force fields in curvilinear normal mode coordinates to 4 mode representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm⁻¹ in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.

  5. Virtual reality and the psyche. Some psychoanalytic approaches to media addiction.

    PubMed

    Weisel, Anja

    2015-04-01

    This paper explores the ramifications of excessive use of media on personality development, the development of symbolic and thinking functions and on psychic reality. In doing so, the questions of whether there are specific media objects possessing an intrinsic symbolic quality, and which attachments in the inner world of a child/adolescent can be mobilized or destroyed are discussed. By selecting specific material, computer gamers use their game to activate the field of a personal psychic reality. In this way, they attempt a kind of self-healing. However, after leaving the game, conflicts and traumata re-enacted but unresolved in the game disappear from their temporary representation without generating any resonance in the gamer's psychic experience. Consequently, although states of mind and affects are activated in the computer game, their processing and integration fail; the game results in a compulsive repetition. The construction and consolidation of retrievable maturation and structural development, the representation of the unrepresentable, succeed in the context of the triangulating analytic relationship, initially through a jointly performed symbolic and narrative re-experience or the recreation of the game. Theoretical considerations are illustrated by means of clinical vignettes. © 2015, The Society of Analytical Psychology.

  6. An Experiential Research-Focused Approach: Implementation in a Nonlaboratory-Based Graduate-Level Analytical Chemistry Course

    ERIC Educational Resources Information Center

    Toh, Chee-Seng

    2007-01-01

    A project is described which incorporates nonlaboratory research skills in a graduate level course on analytical chemistry. This project will help students to grasp the basic principles and concepts of modern analytical techniques and also help them develop relevant research skills in analytical chemistry.

  7. Analytical Thinking, Analytical Action: Using Prelab Video Demonstrations and e-Quizzes to Improve Undergraduate Preparedness for Analytical Chemistry Practical Classes

    ERIC Educational Resources Information Center

    Jolley, Dianne F.; Wilson, Stephen R.; Kelso, Celine; O'Brien, Glennys; Mason, Claire E.

    2016-01-01

    This project utilizes visual and critical thinking approaches to develop a higher-education synergistic prelab training program for a large second-year undergraduate analytical chemistry class, directing more of the cognitive learning to the prelab phase. This enabled students to engage in more analytical thinking prior to engaging in the…

  8. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, defined as minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific-heat calculations in associated Ising models, and a minimum-weight decoding threshold of approximately 7%.
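
    The construction behind these codes is compact enough to state directly. Below is a minimal sketch (our own illustration): two classical parity-check matrices are combined via Kronecker products into CSS stabilizer matrices satisfying the commutation condition over GF(2); a toy repetition-code check matrix stands in for the random (3,4)-regular Gallager codes used in the paper.

    ```python
    # Hypergraph-product construction: classical checks H1 (m1 x n1) and
    # H2 (m2 x n2) yield CSS matrices Hx, Hz on n1*n2 + m1*m2 qubits with
    # Hx @ Hz.T = 0 (mod 2), since both terms equal H1 (x) H2.T over GF(2).
    import numpy as np

    def hypergraph_product(H1, H2):
        m1, n1 = H1.shape
        m2, n2 = H2.shape
        Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                        np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
        Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                        np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
        return Hx, Hz

    # Toy stand-in for the (3,4)-regular Gallager codes of the paper.
    H = np.array([[1, 1, 0],
                  [0, 1, 1]])
    Hx, Hz = hypergraph_product(H, H)
    assert not ((Hx @ Hz.T) % 2).any()   # CSS commutation condition
    print(Hx.shape, Hz.shape)            # checks x (n1*n2 + m1*m2) qubits
    ```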

  9. Simple analytical model of a thermal diode

    NASA Astrophysics Data System (ADS)

    Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul

    2018-05-01

    Recently, much attention has been devoted to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors, and logic gates. Many of the proposed models feature an asymmetry that produces the desired effect, and the presence of non-linear interactions among the particles is also essential; however, such models lack analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to obtain the heat-rectification effect.

  10. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source that has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
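
    For the simplest member of this family, the ideal dipole Halbach cylinder, the interior field reduces to the well-known closed form B = Br ln(Ro/Ri), uniform across the bore. The sketch below (our own numeric illustration of that textbook special case, not the authors' general multipole solution) makes the point concrete:

    ```python
    # Uniform interior flux density of an ideal *dipole* Halbach cylinder:
    # B = Br * ln(Ro / Ri). Higher multipoles and the demagnetization analysis
    # in the paper require the full field solution.
    import math

    def halbach_dipole_interior_field(Br, Ri, Ro):
        """Br: remanence (T); Ri, Ro: inner/outer radii (consistent units)."""
        if not 0 < Ri < Ro:
            raise ValueError("require 0 < Ri < Ro")
        return Br * math.log(Ro / Ri)

    # NdFeB-like remanence with a 2:1 radius ratio already exceeds 0.9 T.
    print(halbach_dipole_interior_field(Br=1.3, Ri=0.01, Ro=0.02))
    ```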

  11. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    NASA Technical Reports Server (NTRS)

    Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
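
    To make the client-side pattern concrete, here is a purely illustrative sketch of how an application might call such a service through an API library; the class, method, endpoint, and parameter names are hypothetical and are not taken from the actual distribution package.

    ```python
    # Hypothetical client wrapper for a remote climate data analytic service.
    # ClimateAnalyticsClient, submit(), the /analytics endpoint, and all
    # parameter names are invented for illustration only.
    import json
    import urllib.request

    class ClimateAnalyticsClient:
        def __init__(self, base_url):
            self.base_url = base_url.rstrip("/")

        def submit(self, operation, **params):
            """POST an analytic request and return the service's JSON reply."""
            payload = json.dumps({"operation": operation,
                                  "params": params}).encode()
            req = urllib.request.Request(
                f"{self.base_url}/analytics", data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

    if __name__ == "__main__":
        client = ClimateAnalyticsClient("https://example.org/cds")  # placeholder
        print(client.submit("average", variable="tas",
                            start="2000-01", end="2000-12"))
    ```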

  12. New method to design stellarator coils without the winding surface

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-01-01

    Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized, which includes both physical requirements and engineering constraints, is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X, and LHD vacuum fields are presented.
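
    The key ingredients are thus a Fourier representation of each closed curve and analytic (rather than finite-difference) derivatives of the objective with respect to the Fourier coefficients. The sketch below illustrates that pattern on a deliberately simple stand-in objective, the coil length; it is not the FOCUS target function or implementation.

    ```python
    # Closed 3-D curve as a Fourier series per Cartesian component, with the
    # gradient of a toy objective (curve length) computed analytically via
    # the chain rule. Illustrative stand-in for the FOCUS approach.
    import numpy as np

    NF = 4                                  # Fourier modes (assumed)
    t = np.linspace(0, 2*np.pi, 512, endpoint=False)
    k = np.arange(NF + 1)[:, None]          # mode numbers as a column

    def d_dt(cos_c, sin_c):
        """x'(t) for x(t) = sum_k cos_c[k] cos(kt) + sin_c[k] sin(kt)."""
        return (-cos_c[:, None]*k*np.sin(k*t)
                + sin_c[:, None]*k*np.cos(k*t)).sum(0)

    def length_and_gradient(coeffs):
        """coeffs: (3, 2, NF+1) array -> component, (cos|sin), mode."""
        dr = np.array([d_dt(coeffs[i, 0], coeffs[i, 1]) for i in range(3)])
        speed = np.linalg.norm(dr, axis=0)
        L = speed.mean() * 2*np.pi          # exact trapezoid on a periodic grid
        grad = np.empty_like(coeffs)
        for i in range(3):                  # chain rule, exactly
            w = dr[i] / speed
            grad[i, 0] = ((-k*np.sin(k*t)) * w).mean(1) * 2*np.pi
            grad[i, 1] = (( k*np.cos(k*t)) * w).mean(1) * 2*np.pi
        return L, grad

    c = np.zeros((3, 2, NF + 1))
    c[0, 0, 1] = 1.0                        # x = cos t
    c[1, 1, 1] = 1.0                        # y = sin t  -> planar unit circle
    L, g = length_and_gradient(c)
    print(L)                                # ~ 2*pi
    ```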

  13. Constructing and deriving reciprocal trigonometric relations: a functional analytic approach

    PubMed Central

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509

  15. Stress-strain state on non-thin plates and shells. Generalized theory (survey)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemish, Yu.N.; Khoma, I.Yu.

    1994-05-01

    In the first part of this survey, we examined exact and approximate analytic solutions of specific problems for thick shells and plates obtained on the basis of three-dimensional equations of the mathematical theory of elasticity. The second part of the survey, presented here, is devoted to systematization and analysis of studies made in regard to a generalized theory of plates and shells based on expansion of the sought functions into Fourier series in Legendre polynomials of the thickness coordinate. Methods are described for constructing systems of differential equations in the coefficients of the expansions (as functions of two independent variables and time), along with the corresponding boundary and initial conditions. Matters relating to substantiation of the given approach and its generalizations are also discussed.
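
    Schematically, the expansion at the core of this generalized theory has the following form (notation assumed here for illustration; the survey itself should be consulted for the precise conventions):

    ```latex
    % Legendre-polynomial expansion of the displacements through the thickness
    % coordinate z \in [-h/2, h/2]; the u_i^{(k)} are the 2-D coefficient
    % functions referred to in the abstract. Notation assumed.
    u_i(x_1, x_2, z, t) = \sum_{k=0}^{N} u_i^{(k)}(x_1, x_2, t)\,
                          P_k\!\left(\frac{2z}{h}\right), \qquad i = 1, 2, 3.
    ```

    Substituting this expansion into the three-dimensional elasticity equations and projecting onto each Legendre polynomial yields the systems of two-dimensional differential equations in the coefficients described above.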

  16. A multitrait-multisource confirmatory factor analytic approach to the construct validity of ADHD and ODD rating scales with Malaysian children.

    PubMed

    Gomez, Rapson; Burns, G Leonard; Walsh, James A; Hafetz, Nina

    2005-04-01

    Confirmatory factor analysis (CFA) was used to model a multitrait by multisource matrix to determine the convergent and discriminant validity of measures of attention-deficit hyperactivity disorder (ADHD)-inattention (IN), ADHD-hyperactivity/impulsivity (HI), and oppositional defiant disorder (ODD) in 917 Malaysian elementary school children. The three trait factors were ADHD-IN, ADHD-HI, and ODD. The two source factors were parents and teachers. Similar to earlier studies with Australian and Brazilian children, the parent and teacher measures failed to show convergent and discriminant validity with Malaysian children. The study outlines the implications of such strong source effects in ADHD-IN, ADHD-HI, and ODD measures for the use of such parent and teacher scales to study the symptom dimensions.

  17. Slow-roll approximation in loop quantum cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luc, Joanna; Mielczarek, Jakub, E-mail: joanna.luc@uj.edu.pl, E-mail: jakub.mielczarek@uj.edu.pl

    The slow-roll approximation is an analytical approach to studying the dynamical properties of the inflationary universe. In this article, a systematic construction of the slow-roll expansion for effective loop quantum cosmology is presented. The analysis is performed up to the fourth order in both the slow-roll parameters and the parameter controlling the strength of the deviation from the classical case. The expansion is performed for three types of slow-roll parameters: Hubble slow-roll parameters, Hubble flow parameters, and potential slow-roll parameters. The accuracy of the approximation is verified by comparison with numerical phase-space trajectories for the case with a massive potential term. The results obtained in this article may be helpful in the search for subtle quantum gravitational effects using cosmological data.
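
    For orientation, the basic quantities involved can be sketched as follows (standard effective-LQC and slow-roll definitions; identifying the deviation parameter with the energy density in units of the critical density is our assumption, not necessarily the authors' convention):

    ```latex
    % Effective LQC Friedmann equation, first Hubble slow-roll parameter, and
    % a natural choice of the deviation-from-classicality parameter.
    H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),
    \qquad
    \epsilon_H = -\frac{\dot H}{H^2},
    \qquad
    \delta = \frac{\rho}{\rho_c}.
    ```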

  18. Fabricating microfluidic valve master molds in SU-8 photoresist

    NASA Astrophysics Data System (ADS)

    Dy, Aaron J.; Cosmanescu, Alin; Sluka, James; Glazier, James A.; Stupack, Dwayne; Amarie, Dragos

    2014-05-01

    Multilayer soft lithography has become a powerful tool in analytical chemistry, biochemistry, material and life sciences, and medical research. Complex fluidic micro-circuits require reliable components that integrate easily into microchips. We introduce two novel approaches to master mold fabrication for constructing in-line micro-valves using SU-8. Our fabrication techniques enable robust and versatile integration of many lab-on-a-chip functions including filters, mixers, pumps, stream focusing and cell-culture chambers, with in-line valves. SU-8 created more robust valve master molds than the conventional positive photoresists used in multilayer soft lithography, but maintained the advantages of biocompatibility and rapid prototyping. As an example, we used valve master molds made of SU-8 to fabricate PDMS chips capable of precisely controlling beads or cells in solution.

  19. Analytical Protocol (GC/ECNIMS) for OSWER's Response to OIG Report (2005-P-00022) on Toxaphene Analysis

    EPA Science Inventory

    The research approached the large number and complexity of the analytes as four separate groups: technical toxaphene, toxaphene congeners (eight in number), chlordane, and organochlorine pesticides. This approach was advantageous because it eliminated potential interferences amon...

  20. Algebraic approach to small-world network models

    NASA Astrophysics Data System (ADS)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression for its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
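
    A numerical companion is easy to set up for comparison with such exact results. The sketch below builds a directed Watts-Strogatz-style adjacency matrix and measures a simple global clustering statistic; the rewiring scheme and the clustering definition are common illustrative choices, not necessarily those of the paper.

    ```python
    # Directed ring lattice with random rewiring, plus a global transitivity
    # measure computed directly from powers of the adjacency matrix.
    import numpy as np

    rng = np.random.default_rng(0)

    def directed_ws_adjacency(n, k, p):
        """n nodes, k outgoing nearest-neighbour edges each, rewired w.p. p."""
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(1, k + 1):
                target = (i + j) % n
                if rng.random() < p:      # rewire to a random non-neighbour
                    free = np.flatnonzero((A[i] == 0) & (np.arange(n) != i))
                    target = rng.choice(free)
                A[i, target] = 1
        return A

    def global_clustering(A):
        """Fraction of directed 2-paths i->j->k (i != k) that are closed."""
        A2 = A @ A
        paths = A2.sum() - np.trace(A2)
        return np.trace(A2 @ A) / paths if paths else 0.0

    A = directed_ws_adjacency(n=200, k=4, p=0.1)
    print(global_clustering(A))
    ```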
