Sample records for physical modeling techniques

  1. The relevance of Newton's laws and selected principles of physics to dance techniques: Theory and application

    NASA Astrophysics Data System (ADS)

    Lei, Li

    1999-07-01

    In this study the researcher develops and presents a new model, founded on the laws of physics, for analyzing dance technique. Based on a pilot study of four advanced dance techniques, she creates a new model for diagnosing, analyzing and describing basic, intermediate and advanced dance techniques. The name for this model is "PED," which stands for Physics of Expressive Dance. The research design consists of five phases: (1) Conduct a pilot study to analyze several advanced dance techniques chosen from Chinese dance, modern dance, and ballet; (2) Based on learning obtained from the pilot study, create the PED Model for analyzing dance technique; (3) Apply this model to eight categories of dance technique; (4) Select two advanced dance techniques from each category and analyze these sample techniques to demonstrate how the model works; (5) Develop an evaluation framework and use it to evaluate the effectiveness of the model, taking into account both scientific and artistic aspects of dance training. In this study the researcher presents new solutions to three problems highly relevant to dance education: (1) Dancers attempting to learn difficult movements often fail because they are unaware of physics laws; (2) Even those who do master difficult movements can suffer injury due to incorrect training methods; (3) Even the best dancers can waste time learning by trial and error, without scientific instruction. In addition, the researcher discusses how the application of the PED model can benefit dancers, allowing them to avoid inefficient and ineffective movements and freeing them to focus on the artistic expression of dance performance. This study is unique, presenting the first comprehensive system for analyzing dance techniques in terms of physics laws. The results of this study are useful, allowing a new level of awareness about dance techniques that dance professionals can utilize for more effective and efficient teaching and learning. 
The approach utilized in this study is universal, and can be applied to any dance movement and to any dance style.

  2. Discrete-time modelling of musical instruments

    NASA Astrophysics Data System (ADS)

    Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
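    A minimal illustration of the digital waveguide idea surveyed above is the Karplus-Strong form of the vibrating string: a delay line whose length sets the pitch, with an averaging loop standing in for the loss/reflection filter at the termination. This sketch is not taken from the article; the sample rate, pitch, and loss factor are assumed values.

```python
import random

def waveguide_string(fs=44100.0, f0=440.0, dur=0.05, loss=0.996):
    """Minimal digital waveguide string (Karplus-Strong form).

    A single delay line of N = round(fs/f0) samples models the round trip
    of a transverse wave on the string; averaging two adjacent samples
    acts as a crude low-pass loss/reflection filter.
    """
    n = int(round(fs / f0))                            # delay-line length
    line = [random.uniform(-1, 1) for _ in range(n)]   # "pluck" excitation
    out = []
    for _ in range(int(fs * dur)):
        out.append(line[0])
        # reflected sample: averaged and slightly attenuated, re-enters line
        line.append(loss * 0.5 * (line[0] + line[1]))
        del line[0]
    return out

samples = waveguide_string()
```

    The output's fundamental is approximately fs / round(fs/f0); fuller waveguide models replace the simple average with separate loss and dispersion filters, as the article's string example discusses.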

  3. Novel Plasmonic and Hyperbolic Optical Materials for Control of Quantum Nanoemitters

    DTIC Science & Technology

    2016-12-08

    ...properties, metal ion implantation techniques, and multi-physics modeling to produce hyperbolic quantum nanoemitters. During the course of this project we studied plasmonic

  4. A Comparison between Physics-based and Polytropic MHD Models for Stellar Coronae and Stellar Winds of Solar Analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.

    The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics using solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated with models that capture more detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated, but they can be wrongly used by others.

  5. Risk assessments using the Strain Index and the TLV for HAL, Part II: Multi-task jobs and prevalence of CTS.

    PubMed

    Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun

    2018-02-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. 
Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance. However, the utility of TWA, typical, and peak techniques for job design and intervention is dubious.
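    As a concrete reading of the three multi-task aggregation techniques named above, a whole-job exposure score can be derived from per-task scores and durations roughly as follows. The task scores and hours are invented for illustration, and the "typical" technique is simplified here to the score of the longest-duration task; the published definitions are more involved.

```python
def job_exposure(tasks):
    """Aggregate per-task exposure scores into whole-job values.

    `tasks` is a list of (score, hours_per_day) tuples for one worker's
    multi-task job; scores could come from the SI or TLV for HAL models.
    """
    total_h = sum(h for _, h in tasks)
    twa = sum(s * h for s, h in tasks) / total_h   # time-weighted average
    peak = max(s for s, _ in tasks)                # worst single task
    typical = max(tasks, key=lambda t: t[1])[0]    # task done the longest
    return {"TWA": twa, "peak": peak, "typical": typical}

# hypothetical job: three tasks with exposure scores and daily hours
job = [(3.0, 4.0), (13.5, 1.0), (6.0, 3.0)]
scores = job_exposure(job)
```

    Note how the techniques diverge on the same job: TWA dilutes the brief high-exposure task, peak ignores duration entirely, and typical ignores all but one task, which is the disagreement the study quantifies.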

  6. Risk assessments using the Strain Index and the TLV for HAL, Part I: Task and multi-task job exposure classifications.

    PubMed

    Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun

    2017-12-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. 
However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
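    The agreement statistics reported above (percent agreement plus kappa) can be reproduced from paired task classifications. A minimal Cohen's kappa sketch, with made-up low/medium/high labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels (paired lists)."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical task classifications by the two models
si  = ["high", "high", "medium", "low", "high",   "medium", "low", "low"]
tlv = ["low",  "high", "low",    "low", "medium", "medium", "low", "low"]
kappa = cohens_kappa(si, tlv)
```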

  7. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    ERIC Educational Resources Information Center

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  8. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

    Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
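    A toy version of the statistics-based feature family, paired with a deliberately simple learner (nearest centroid, rather than the stronger classifiers the paper evaluates). The windows, counts, and labels below are fabricated for illustration only.

```python
import math

def features(window):
    """Simple statistical features for one window of accelerometer counts."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    frac_above = sum(x > mean for x in window) / n   # crude shape cue
    return [mean, math.sqrt(var), frac_above]

def nearest_centroid(train, x):
    """Classify feature vector x by the closest class centroid.

    `train` maps label -> list of feature vectors.
    """
    centroids = {
        lab: [sum(col) / len(vecs) for col in zip(*vecs)]
        for lab, vecs in train.items()
    }
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

# hypothetical windows: low counts = sitting, high variable counts = walking
train = {
    "sitting": [features([0, 1, 0, 2, 1, 0]),
                features([1, 0, 1, 1, 0, 0])],
    "walking": [features([30, 55, 20, 60, 35, 50]),
                features([40, 25, 65, 30, 55, 45])],
}
label = nearest_centroid(train, features([28, 50, 22, 58, 33, 48]))
```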

  9. Next generation initiation techniques

    NASA Technical Reports Server (NTRS)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. 
The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
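    The Newtonian-relaxation ("nudging") technique described above amounts to adding a relaxation term toward the observations to the model tendency. A one-variable sketch, with a toy model and assumed coefficients rather than anything from an operational system:

```python
def nudge(x0, obs, f, g, dt, steps):
    """Newtonian relaxation ('nudging'): integrate
        dx/dt = f(x) + g * (obs - x)
    with forward Euler, so the model state x is continuously relaxed
    toward the observed value obs with relaxation coefficient g.
    """
    x = x0
    for _ in range(steps):
        x += dt * (f(x) + g * (obs - x))
    return x

decay = lambda x: -0.1 * x                                   # toy model dynamics
free   = nudge(10.0, 5.0, decay, g=0.0, dt=0.1, steps=200)   # no nudging
nudged = nudge(10.0, 5.0, decay, g=2.0, dt=0.1, steps=200)   # strong nudging
```

    With g = 0 the state simply decays; with strong nudging it settles near the observed value, which is the sense in which data and model dynamics are "fitted" to each other during the assimilation window.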

  10. Effects of preprocessing Landsat MSS data on derived features

    NASA Technical Reports Server (NTRS)

    Parris, T. M.; Cicone, R. C.

    1983-01-01

    Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions, while extracting physically meaningful features from the data. In general, the approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and the Normalized Difference.

  11. Relations Between Autonomous Motivation and Leisure-Time Physical Activity Participation: The Mediating Role of Self-Regulation Techniques.

    PubMed

    Nurmi, Johanna; Hagger, Martin S; Haukkala, Ari; Araújo-Soares, Vera; Hankonen, Nelli

    2016-04-01

    This study tested the predictive validity of a multitheory process model in which the effect of autonomous motivation from self-determination theory on physical activity participation is mediated by the adoption of self-regulatory techniques based on control theory. Finnish adolescents (N = 411, aged 17-19) completed a prospective survey including validated measures of the predictors and physical activity, at baseline and after one month (N = 177). A subsample used an accelerometer to objectively measure physical activity and further validate the physical activity self-report assessment tool (n = 44). Autonomous motivation statistically significantly predicted action planning, coping planning, and self-monitoring. Coping planning and self-monitoring mediated the effect of autonomous motivation on physical activity, although self-monitoring was the most prominent. Controlled motivation had no effect on self-regulation techniques or physical activity. Developing interventions that support autonomous motivation for physical activity may foster increased engagement in self-regulation techniques and positively affect physical activity behavior.

  12. Spectral Analysis and Experimental Modeling of Ice Accretion Roughness

    NASA Technical Reports Server (NTRS)

    Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.

    1996-01-01

    A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.

  13. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    NASA Technical Reports Server (NTRS)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.
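    The residual-search idea above can be illustrated on a toy light curve: subtract the background model's prediction and flag bins whose excess exceeds a significance threshold. The counts, background level, and threshold below are invented, not instrument values.

```python
import math

def residual_excesses(counts, background, k=3.0):
    """Flag time bins whose background-subtracted residual exceeds k sigma,
    assuming Poisson errors (sigma ~ sqrt(background)).  Candidate bins are
    weak/long events that would not fire an on-board rate trigger.
    """
    flagged = []
    for i, (c, b) in enumerate(zip(counts, background)):
        sigma = math.sqrt(b) if b > 0 else 1.0
        if (c - b) / sigma > k:
            flagged.append(i)
    return flagged

# hypothetical light curve: flat background of 100 counts per bin with a
# weak, long excess in bins 4-7
bkg = [100.0] * 10
cts = [102, 97, 101, 99, 135, 140, 138, 136, 103, 98]
candidates = residual_excesses(cts, bkg, k=3.0)
```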

  14. Nonlinear ultrasonics for material state awareness

    NASA Astrophysics Data System (ADS)

    Jacobs, L. J.

    2014-02-01

    Predictive health monitoring of structural components will require the development of advanced sensing techniques capable of providing quantitative information on the damage state of structural materials. By focusing on nonlinear acoustic techniques, it is possible to measure absolute, strength based material parameters that can then be coupled with uncertainty models to enable accurate and quantitative life prediction. Starting at the material level, this review will present current research that involves a combination of sensing techniques and physics-based models to characterize damage in metallic materials. In metals, these nonlinear ultrasonic measurements can sense material state, before the formation of micro- and macro-cracks. Typically, cracks of a measurable size appear quite late in a component's total life, while the material's integrity in terms of toughness and strength gradually decreases due to the microplasticity (dislocations) and associated change in the material's microstructure. This review focuses on second harmonic generation techniques. Since these nonlinear acoustic techniques are acoustic wave based, component interrogation can be performed with bulk, surface and guided waves using the same underlying material physics; these nonlinear ultrasonic techniques provide results which are independent of the wave type used. Recent physics-based models consider the evolution of damage due to dislocations, slip bands, interstitials, and precipitates in the lattice structure, which can lead to localized damage.
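    A quantitative core of the second harmonic generation technique discussed above is the acoustic nonlinearity parameter beta, recovered from the fundamental and second-harmonic amplitudes. The formula is the standard plane-wave relation; the amplitude values, wavenumber, and propagation distance below are invented for illustration.

```python
def beta_parameter(a1, a2, k, x):
    """Acoustic nonlinearity parameter from second harmonic generation:
        beta = 8 * A2 / (A1**2 * k**2 * x)
    where A1, A2 are the fundamental and second-harmonic displacement
    amplitudes, k the fundamental wavenumber, x the propagation distance.
    """
    return 8.0 * a2 / (a1 ** 2 * k ** 2 * x)

# hypothetical measurements at 0% and 40% of fatigue life (same k, x)
k, x = 2300.0, 0.02                                     # rad/m, m (assumed)
beta_0  = beta_parameter(a1=1.0e-9, a2=2.0e-14, k=k, x=x)
beta_40 = beta_parameter(a1=1.0e-9, a2=2.6e-14, k=k, x=x)
ratio = beta_40 / beta_0
```

    The growth of beta with fatigue life, driven by dislocation accumulation, is what makes the measurement sensitive to material state before macroscopic cracks form.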

  15. Model based Computerized Ionospheric Tomography in space and time

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite - receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.

  16. Playful Physics

    NASA Technical Reports Server (NTRS)

    Weaver, David

    2008-01-01

    Effectively communicate qualitative and quantitative information orally and in writing. Explain the application of fundamental physical principles to various physical phenomena. Apply appropriate problem-solving techniques to practical and meaningful problems using graphical, mathematical, and written modeling tools. Work effectively in collaborative groups.

  17. Printing Space: Using 3D Printing of Digital Terrain Models in Geosciences Education and Research

    ERIC Educational Resources Information Center

    Horowitz, Seth S.; Schultz, Peter H.

    2014-01-01

    Data visualization is a core component of every scientific project; however, generation of physical models previously depended on expensive or labor-intensive molding, sculpting, or laser sintering techniques. Physical models have the advantage of providing not only visual but also tactile modes of inspection, thereby allowing easier visual…

  18. Employment of adaptive learning techniques for the discrimination of acoustic emissions

    NASA Astrophysics Data System (ADS)

    Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.

    1983-11-01

    The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.

  19. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of an algorithm for physical replication of patient-specific human bone and construction of corresponding implant/insert RP models, using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone from various CAD modeling techniques, developed using 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting a surface between these curve networks using swept blend techniques. The same result can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for generation of the 3D RP model. A 3D physical model of the human bone is produced on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.

  20. Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.

    PubMed

    Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias

    2013-04-01

    Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
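    In one dimension, the space-warping idea above reduces to remapping coordinates between matched landmarks on the physical prop and on each virtual patient; the actual simulator warps 3D geometry and mechanics. A piecewise-linear sketch with invented landmark values:

```python
from bisect import bisect_right

def space_warp(x, phys_landmarks, virt_landmarks):
    """Piecewise-linear space warp: map a 1D position x on the physical
    mock-up to the corresponding position on a (differently sized) virtual
    organ, given matched landmark coordinates on both.  The physical prop
    never changes; only this mapping changes per virtual patient.
    """
    assert len(phys_landmarks) == len(virt_landmarks) >= 2
    i = bisect_right(phys_landmarks, x) - 1
    i = max(0, min(i, len(phys_landmarks) - 2))   # clamp to outer segments
    p0, p1 = phys_landmarks[i], phys_landmarks[i + 1]
    v0, v1 = virt_landmarks[i], virt_landmarks[i + 1]
    t = (x - p0) / (p1 - p0)
    return v0 + t * (v1 - v0)

# one physical prop (0..10 cm) mapped onto two virtual patients
phys = [0.0, 5.0, 10.0]
small_patient = [0.0, 3.0, 6.0]
large_patient = [0.0, 7.0, 12.0]
a = space_warp(2.5, phys, small_patient)
b = space_warp(2.5, phys, large_patient)
```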

  1. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  2. Impacts of spectral nudging on the simulated surface air temperature in summer compared with the selection of shortwave radiation and land surface model physics parameterization in a high-resolution regional atmospheric model

    NASA Astrophysics Data System (ADS)

    Park, Jun; Hwang, Seung-On

    2017-11-01

    The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets of physical parameterization combinations of two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data by securing the consistency with large-scale forcing over a regional domain. This consequently indirectly helps two physical parameterizations to produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in a high-resolution downscaled climate.

  3. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
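    The sensitivity-matrix step described above can be illustrated for a two-parameter model: relate response residuals to parameter changes through a linear sensitivity matrix and solve the resulting least-squares system. The matrix entries and residuals below are fabricated, and the closed-form 2x2 normal-equation solve stands in for the paper's constrained optimization with singular value decomposition.

```python
def update_parameters(S, residual, p):
    """One Gauss-Newton update for sensitivity-based model updating:
    solve  S @ dp ~= residual  in the least-squares sense via the normal
    equations (closed form for the 2-parameter case), return p + dp.
    S is m x 2: d(response)/d(parameter) at the current model.
    """
    a11 = sum(row[0] * row[0] for row in S)
    a12 = sum(row[0] * row[1] for row in S)
    a22 = sum(row[1] * row[1] for row in S)
    b1 = sum(row[0] * r for row, r in zip(S, residual))
    b2 = sum(row[1] * r for row, r in zip(S, residual))
    det = a11 * a22 - a12 * a12
    dp1 = (a22 * b1 - a12 * b2) / det
    dp2 = (a11 * b2 - a12 * b1) / det
    return [p[0] + dp1, p[1] + dp2]

# hypothetical residuals between measured and predicted frequencies, with
# sensitivities of each frequency to a stiffness and a mass parameter
S = [[1.0, 0.2], [0.5, 1.0], [0.3, 0.4]]
r = [0.12, 0.05, 0.04]
p_new = update_parameters(S, r, p=[1.0, 1.0])
```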

  4. The space shuttle payload planning working groups. Volume 8: Earth and ocean physics

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The findings and recommendations of the Earth and Ocean Physics working group of the space shuttle payload planning activity are presented. The requirements for the space shuttle mission are defined as: (1) precision measurement for earth and ocean physics experiments, (2) development and demonstration of new and improved sensors and analytical techniques, (3) acquisition of surface truth data for evaluation of new measurement techniques, (4) conduct of critical experiments to validate geophysical phenomena and instrumental results, and (5) development and validation of analytical/experimental models for global ocean dynamics and solid earth dynamics/earthquake prediction. Tables of data are presented to show the flight schedule estimated costs, and the mission model.

  5. Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992

    DTIC Science & Technology

    1992-01-01

previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applications for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers

  6. Bayesian component separation: The Planck experience

    NASA Astrophysics Data System (ADS)

    Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2018-05-01

    Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
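The core diagnostic described here, fitting a parametric model to the data and inspecting the data-minus-model residual, can be sketched in a toy 1-D setting (illustrative assumptions throughout; the actual Planck pipeline fits many components over the full sky):

```python
import numpy as np

# Toy goodness-of-fit check: fit a one-amplitude parametric model
# d = a * template + noise by least squares, then inspect the
# "data minus model" residual via a reduced chi-square.

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 2 * np.pi, 200))
a_true, sigma = 3.0, 0.1
data = a_true * template + rng.normal(scale=sigma, size=200)

a_hat = (template @ data) / (template @ template)  # linear least squares
residual = data - a_hat * template                 # data-minus-model map

chi2_per_dof = np.sum(residual**2 / sigma**2) / (len(data) - 1)
print(a_hat, chi2_per_dof)
```

A reduced chi-square far from 1 would flag residual systematics, which is the role these residual maps play in the Planck analysis.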

  7. A new data assimilation engine for physics-based thermospheric density models

    NASA Astrophysics Data System (ADS)

    Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.

    2017-12-01

The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How best to estimate and apply these drivers, however, remains an open question and an active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two nearly year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.

  8. The Physics of a Gymnastics Flight Element

    ERIC Educational Resources Information Center

    Contakos, Jonas; Carlton, Les G.; Thompson, Bruce; Suddaby, Rick

    2009-01-01

    From its inception, performance in the sport of gymnastics has relied on the laws of physics to create movement patterns and static postures that appear almost impossible. In general, gymnastics is physics in motion and can provide an ideal framework for studying basic human modeling techniques and physical principles. Using low-end technology and…

  9. Organizational Constraints and Goal Setting

    ERIC Educational Resources Information Center

    Putney, Frederick B.; Wotman, Stephen

    1978-01-01

    Management modeling techniques are applied to setting operational and capital goals using cost analysis techniques in this case study at the Columbia University School of Dental and Oral Surgery. The model was created as a planning tool used in developing a financially feasible operating plan and a 100 percent physical renewal plan. (LBH)

  10. Applicability of three-dimensional imaging techniques in fetal medicine*

    PubMed Central

    Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson

    2016-01-01

    Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540

  11. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
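Model emulation as described above can be illustrated with a deliberately simple sketch (hypothetical functions and a polynomial fit standing in for both the physics scheme and the machine-learning model; the Met Office work uses far richer inputs and learners):

```python
import numpy as np

# Toy model emulation: train a cheap statistical fit on input/output
# pairs of an "expensive" physics routine, then use the fit in its place.

def expensive_physics(x):
    return np.sin(x) + 0.5 * x      # stand-in for a costly sub-grid scheme

x_train = np.linspace(-3.0, 3.0, 50)
y_train = expensive_physics(x_train)

emulator = np.poly1d(np.polyfit(x_train, y_train, deg=9))  # the "emulator"

x_test = np.linspace(-2.5, 2.5, 11)
err = np.max(np.abs(emulator(x_test) - expensive_physics(x_test)))
print(err)    # small on the training interval
```

Evaluating the fitted polynomial is far cheaper than a full parameterization call, which is exactly the trade the paper investigates.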

  12. Mastery Learning in Physical Education.

    ERIC Educational Resources Information Center

    Annarino, Anthony

    This paper discusses the design of a physical education curriculum to be used in advanced secondary physical education programs and in university basic instructional programs; the design is based on the premise of mastery learning and employs programed instructional techniques. The effective implementation of a mastery learning model necessitates…

  13. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  14. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to classify between trained and sedentary rabbit hearts. The use of those classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and a better dimensionality reduction). The most relevant features to describe the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
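Wrapper feature selection of the kind used here can be sketched schematically (synthetic data, a nearest-centroid stand-in classifier, and greedy forward search are all assumptions of this illustration, not the paper's pipeline):

```python
import numpy as np

# Schematic wrapper feature selection: greedily add the feature that most
# improves held-out classification accuracy of a simple classifier.

rng = np.random.default_rng(2)
n = 200
y = rng.integers(0, 2, size=n)        # e.g. trained vs sedentary labels
X = rng.normal(size=(n, 5))
X[:, 1] += 1.5 * y                    # feature 1 is strongly informative
X[:, 3] += 1.0 * y                    # feature 3 is weakly informative

def accuracy(features):
    """Nearest-centroid accuracy on a held-out half, using `features` only."""
    tr, te = slice(0, 100), slice(100, n)
    c0 = X[tr][y[tr] == 0][:, features].mean(axis=0)
    c1 = X[tr][y[tr] == 1][:, features].mean(axis=0)
    d0 = np.linalg.norm(X[te][:, features] - c0, axis=1)
    d1 = np.linalg.norm(X[te][:, features] - c1, axis=1)
    return np.mean((d1 < d0) == y[te])

selected = []
for _ in range(2):                    # pick the best two features
    best = max((f for f in range(5) if f not in selected),
               key=lambda f: accuracy(selected + [f]))
    selected.append(best)
print(sorted(selected))
```

The selected subset is the "knowledge extracted": in the paper's case, the features tied to myocardial heterogeneity, activation rate and complexity.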

15. Data assimilation of ground GPS total electron content into a physics-based ionospheric model by use of the Kalman filter

    NASA Technical Reports Server (NTRS)

    Hajj, G. A.; Wilson, B. D.; Wang, C.; Pi, X.; Rosen, I. G.

    2004-01-01

    A three-dimensional (3-D) Global Assimilative Ionospheric Model (GAIM) is currently being developed by a joint University of Southern California and Jet Propulsion Laboratory (JPL) team. To estimate the electron density on a global grid, GAIM uses a first-principles ionospheric physics model and the Kalman filter as one of its possible estimation techniques.
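The Kalman filter estimation step mentioned above can be illustrated with a minimal scalar-state example (all numbers hypothetical; GAIM's state is a full 3-D electron-density grid with modeled dynamics):

```python
import numpy as np

# Minimal scalar Kalman filter: estimate a constant quantity (standing in
# for electron density at one grid point) from noisy TEC-like measurements.

x, P = 0.0, 1.0          # prior state estimate and its variance
Q, R = 0.01, 0.25        # process and measurement noise variances
truth = 5.0
rng = np.random.default_rng(3)

for _ in range(50):
    P = P + Q                        # predict (identity dynamics here)
    z = truth + rng.normal(scale=np.sqrt(R))   # new noisy measurement
    K = P / (P + R)                  # Kalman gain
    x = x + K * (z - x)              # update the estimate
    P = (1 - K) * P                  # update the variance

print(x)    # converges toward the truth
```

In the real system the predict step runs the ionospheric physics model forward, and the measurement operator maps the gridded state to slant TEC along GPS ray paths.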

  16. Exploring Physics in the Classroom

    ERIC Educational Resources Information Center

    Amann, George

    2005-01-01

    The key to learning is student involvement! This American Association of Physics Teachers/Physics Teaching Resource Agents (AAPT/PTRA) manual presents examples of two techniques that are proven to increase student involvement in your classroom. Based on the "5E" model of learning, exploratories are designed to get your students excited about the…

  17. Multi-physics CFD simulations in engineering

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto

    2013-08-01

Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be regarded as sufficiently mature from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon is typically very complex and difficult to predict by simply adding other physics to the flow phenomenon. Therefore, multi-physics CFD techniques are still under research and development. This is partly because the processing speed of current computers is not fast enough to conduct multi-physics simulations, and partly because physical models other than flow physics have not been suitably established. In the near future, therefore, we have to develop various physical models and efficient CFD techniques in order to make multi-physics simulations successful in engineering. In the present paper, I describe the present state of multi-physics CFD simulations and then show some numerical results, such as ice accretion and the electro-chemical machining process of a three-dimensional compressor blade, obtained in my laboratory. Multi-physics CFD simulation is likely to be a key technology in the near future.

  18. Development of the Instructional Model of Reading English Strategies for Enhancing Sophomore Students' Learning Achievements in the Institute of Physical Education in the Northeastern Region of Thailand

    ERIC Educational Resources Information Center

    Whankhom, Prawit; Phusawisot, Pilanut; Sayankena, Patcharanon

    2016-01-01

The aim of this research is to develop and verify the effectiveness of an instructional model of reading English strategies for students of Mahasarakham Institute of Physical Education in the Northeastern region through a survey. Classroom action research techniques were used with two sample groups of 34 sophomore physical education students each, one serving as a control…

  19. Coupled two-dimensional edge plasma and neutral gas modeling of tokamak scrape-off-layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maingi, Rajesh

    1992-08-01

The objective of this study is to devise a detailed description of the tokamak scrape-off-layer (SOL), which includes the best available models of both the plasma and neutral species and the strong coupling between the two in many SOL regimes. A good estimate of both particle flux and heat flux profiles at the limiter/divertor target plates is desired. Peak heat flux is one of the limiting factors in determining the survival probability of plasma-facing-components at high power levels. Plate particle flux affects the neutral flux to the pump, which determines the particle exhaust rate. A technique which couples a two-dimensional (2-D) plasma and a 2-D neutral transport code has been developed (coupled code technique), but this procedure requires large amounts of computer time. Relevant physics has been added to an existing two-neutral-species model which takes the SOL plasma/neutral coupling into account in a simple manner (molecular physics model), and this model is compared with the coupled code technique mentioned above. The molecular physics model is benchmarked against experimental data from a divertor tokamak (DIII-D), and a similar model (single-species model) is benchmarked against data from a pump-limiter tokamak (Tore Supra). The models are then used to examine two key issues: free-streaming-limits (ion energy conduction and momentum flux) and the effects of the non-orthogonal geometry of magnetic flux surfaces and target plates on edge plasma parameter profiles.

  20. Physical Modeling of Microtubules Network

    NASA Astrophysics Data System (ADS)

    Allain, Pierre; Kervrann, Charles

    2014-10-01

Microtubules (MT) are highly dynamic tubulin polymers that are involved in many cellular processes such as mitosis, intracellular cell organization and vesicular transport. Nevertheless, modeling the cytoskeleton and MT dynamics from physical properties is difficult to achieve. Using the Euler-Bernoulli beam theory, we propose to model the rigidity of microtubules on a physical basis using forces, mass and acceleration. In addition, we link microtubule growth and shrinkage to the presence of molecules (e.g. GTP-tubulin) in the cytosol. The overall model links cytosol to microtubule dynamics in a constant state space, thus allowing the use of data assimilation techniques.
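The Euler-Bernoulli bending force underlying this rigidity model can be sketched at a single node (nondimensional units and a five-point stencil are assumptions of this illustration, not the paper's calibrated MT parameters):

```python
import numpy as np

# Euler-Bernoulli bending force on a discretized filament: the transverse
# force is F = -EI * d4w/ds4, approximated here at the middle node of
# five samples with a finite-difference stencil.

EI, ds = 1.0, 1.0   # flexural rigidity and segment length (nondimensional)

def bending_force(w):
    """Fourth-derivative stencil evaluated at the central node."""
    d4w = (w[0] - 4 * w[1] + 6 * w[2] - 4 * w[3] + w[4]) / ds**4
    return -EI * d4w

straight = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # straight line: no curvature
bowed = np.array([0.0, 1.0, 2.0, 1.0, 0.0])      # bent at the middle node

print(bending_force(straight) == 0.0)   # True: a straight filament feels no force
print(bending_force(bowed) < 0.0)       # True: bending produces a restoring force
```

Assembling this force at every node, together with mass and acceleration, gives the kind of state-space dynamics the authors feed into data assimilation.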

  1. The influence of instructional interactions on students’ mental models about the quantization of physical observables: a modern physics course case

    NASA Astrophysics Data System (ADS)

    Didiş Körhasan, Nilüfer; Eryılmaz, Ali; Erkoç, Şakir

    2016-01-01

Mental models are coherently organized knowledge structures used to explain phenomena. They interact with social environments and evolve with that interaction. When daily experience with the phenomena is lacking, social interaction gains much greater importance. In this part of our multiphase study, we investigate how instructional interactions influenced students' mental models about the quantization of physical observables. Class observations and interviews were analysed to study students' mental models constructed in a modern physics course during an academic semester. The research revealed that students' mental models were influenced by (1) the manner of teaching, including instructional methodologies and content-specific techniques used by the instructor, (2) the order of the topics and familiarity with concepts, and (3) peers.

  2. Modeling of metal thin film growth: Linking angstrom-scale molecular dynamics results to micron-scale film topographies

    NASA Astrophysics Data System (ADS)

    Hansen, U.; Rodgers, S.; Jensen, K. F.

    2000-07-01

A general method for modeling ionized physical vapor deposition is presented. As an example, the method is applied to growth of an aluminum film in the presence of an ionized argon flux. Molecular dynamics techniques are used to examine the surface adsorption, reflection, and sputter reactions taking place during ionized physical vapor deposition. We predict their relative probabilities and discuss their dependence on energy and incident angle. Subsequently, we combine the information obtained from molecular dynamics with a line-of-sight transport model in a two-dimensional feature, incorporating all effects of reemission and resputtering. This provides a complete growth rate model that allows inclusion of energy- and angular-dependent reaction rates. Finally, a level-set approach is used to describe the morphology of the growing film. We thus arrive at a computationally highly efficient and accurate scheme to model the growth of thin films. We demonstrate the capabilities of the model by studying thin film growth under ionized physical vapor deposition conditions with different Ar fluxes, predicting the major differences in Al film topographies between conventional and ionized sputter deposition techniques.
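The level-set step used to track the film surface can be illustrated with a minimal 1-D sketch (hypothetical parameters and a flat front; the paper's code is 2-D and feature-scale): the surface is the zero contour of a function phi, advected with growth speed F via phi_t + F|phi_x| = 0.

```python
import numpy as np

# One-dimensional level-set advection with a Godunov upwind scheme.
# The film surface is the zero contour of phi, moving with speed F.

dx, dt, F = 0.1, 0.01, 1.0
x = np.arange(0.0, 10.0, dx)
phi = x - 5.0                     # signed distance: surface starts at x = 5

for _ in range(100):              # integrate to t = 1
    d_minus = (phi - np.roll(phi, 1)) / dx      # backward difference
    d_plus = (np.roll(phi, -1) - phi) / dx      # forward difference
    grad = np.sqrt(np.maximum(d_minus, 0.0)**2 + np.minimum(d_plus, 0.0)**2)
    phi = phi - dt * F * grad
    phi[0], phi[-1] = phi[1] - dx, phi[-2] + dx  # simple boundary fill

surface = x[np.argmin(np.abs(phi))]
print(surface)    # the zero contour has advanced from x = 5 toward x = 6
```

In the growth model, F would carry the energy- and angle-dependent deposition rates rather than being a constant.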

  3. A Model of the Creative Process Based on Quantum Physics and Vedic Science.

    ERIC Educational Resources Information Center

    Rose, Laura Hall

    1988-01-01

    Using tenets from Vedic science and quantum physics, this model of the creative process suggests that the unified field of creation is pure consciousness, and that the development of the creative process within individuals mirrors the creative process within the universe. Rational and supra-rational creative thinking techniques are also described.…

  4. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

    This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…

  5. A methodology to generate high-resolution digital elevation model (DEM) and surface water profile for a physical model using close range photogrammetric (CRP) technique

    NASA Astrophysics Data System (ADS)

    Mali, V. K.; Kuiry, S. N.

    2015-12-01

Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to error. Remotely sensed satellite imagery can provide the necessary information over large areas if the resolution is high, but such products are very expensive and often not timely; consequently, river bathymetry derived from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Despite this, such data are often used to calibrate river flow models, even though these models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, the data could be supplemented through experimental observations in a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the CRP technique. To accomplish this, a number of DSLR Nikon D5300 cameras (mounted 3.5 m above the river bed) were used to capture images of the physical model and of the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet, and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. The accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination and altitude.
The CRP technique for a large-scale physical model can significantly reduce time and manual labour and avoids the human errors involved in taking data with a point gauge. The highly accurate DEM and WS profile obtained can be used in mathematical models for accurate prediction of river dynamics. This study would be very helpful for sediment transport studies and can also be extended to real case studies.

  7. Implementation of the US EPA (United States Environmental Protection Agency) Regional Oxidant Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, J.H.

    1984-05-01

Model design, implementation and quality assurance procedures can have a significant impact on the effectiveness of long term utility of any modeling approach. The Regional Oxidant Modeling System (ROMS) is exceptionally complex because it treats all chemical and physical processes thought to affect ozone concentration on a regional scale. Thus, to effectively illustrate useful design and implementation techniques, this paper describes the general modeling framework which forms the basis of the ROMS. This framework is flexible enough to allow straightforward update or replacement of the chemical kinetics mechanism and/or any theoretical formulations of the physical processes. Use of the Jackson Structured Programming (JSP) method to implement this modeling framework has not only increased programmer productivity and quality of the resulting programs, but also has provided standardized program design, dynamic documentation, and easily maintainable and transportable code. A summary of the JSP method is presented to encourage modelers to pursue this technique in their own model development efforts. In addition, since data preparation is such an integral part of a successful modeling system, the ROMS processor network is described with emphasis on the internal quality control techniques.

  8. The Politics of Pleasure: An Ethnographic Examination Exploring the Dominance of the Multi-Activity Sport-Based Physical Education Model

    ERIC Educational Resources Information Center

    Gerdin, Göran; Pringle, Richard

    2017-01-01

    Kirk warns that physical education (PE) exists in a precarious situation as the dominance of the multi-activity sport-techniques model, and its associated problems, threatens the long-term educational survival of PE. Yet he also notes that although the model is problematic it is highly resistant to change. In this paper, we draw on the results of…

  9. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
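The gradient-based motion estimation at the heart of the structure tensor technique can be sketched in 1-D plus time (an illustrative reduction; the paper works with full 2-D image sequences): brightness constancy gives Ix*v + It = 0, and the least-squares solution over a region is the structure-tensor velocity estimate.

```python
import numpy as np

# 1-D structure-tensor motion estimate: recover the shift between two
# frames from spatial and temporal image gradients.

v_true = 2.0
x = np.arange(100, dtype=float)
frame0 = np.sin(0.2 * x)
frame1 = np.sin(0.2 * (x - v_true))     # frame0 shifted by v_true pixels

Ix = np.gradient((frame0 + frame1) / 2)  # spatial gradient
It = frame1 - frame0                     # temporal gradient

# Least-squares solution of Ix*v + It = 0 over the whole signal,
# i.e. v = -sum(Ix*It) / sum(Ix*Ix).
v_est = -np.sum(Ix * It) / np.sum(Ix * Ix)
print(v_est)    # close to v_true
```

Replacing the simple shift model with advection-diffusion terms is what lets the same framework estimate diffusion constants or decay rates.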

  10. Passive Optical Technique to Measure Physical Properties of a Vibrating Surface

    DTIC Science & Technology

    2014-01-01

it is not necessary to understand the details of a non-Lambertian BRDF to detect surface vibration phenomena, an accurate model incorporating physics...summarize the discussion of BRDF, while a physics-based BRDF model is not necessary to use scattered light as a surface vibration diagnostic, it may...

  11. Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves

    NASA Astrophysics Data System (ADS)

    Wattanakasiwich, P.; Ananta, S.

    2010-07-01

In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Therefore many multiple-choice instruments have been developed to probe students' conceptual understanding of various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses to the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities that students use such knowledge in a range of equivalent contexts. Model analysis consists of two algorithms: concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state is in the misconception region or not. An item response curve (IRC), derived from item response theory, is a plot of the percentage of students selecting a particular choice versus their total score. Pros and cons of both techniques are compared and discussed.
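An item response curve of the kind described can be computed in a few lines (the data below are simulated for illustration, not FMCE responses): for one item, bin students by total score and plot the fraction choosing each option.

```python
import numpy as np

# Item response curve sketch: fraction of students choosing option "A"
# on one item, as a function of total test score (simulated data).

rng = np.random.default_rng(4)
n = 1000
total = rng.integers(0, 31, size=n)        # total scores on a 30-item test
p_correct = total / 30.0                   # stronger students pick "A" more often
choice = np.where(rng.random(n) < p_correct, "A", "B")

bins = [(0, 10), (10, 20), (20, 31)]
for lo, hi in bins:
    mask = (total >= lo) & (total < hi)
    frac_a = np.mean(choice[mask] == "A")
    print(f"score {lo}-{hi}: P(A) = {frac_a:.2f}")
```

A rising curve for the correct option and falling curves for distractors are the signatures one reads off real IRC plots.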

  12. DAWN (Design Assistant Workstation) for advanced physical-chemical life support systems

    NASA Technical Reports Server (NTRS)

    Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.

    1989-01-01

    This paper reports the results of a project supported by the National Aeronautics and Space Administration, Office of Aeronautics and Space Technology (NASA-OAST) under the Advanced Life Support Development Program. It is an initial attempt to integrate artificial intelligence techniques (via expert systems) with conventional quantitative modeling tools for advanced physical-chemical life support systems. The addition of artificial intelligence techniques will assist the designer in the definition and simulation of loosely/well-defined life support processes/problems as well as assist in the capture of design knowledge, both quantitative and qualitative. Expert system and conventional modeling tools are integrated to provide a design workstation that assists the engineer/scientist in creating, evaluating, documenting and optimizing physical-chemical life support systems for short-term and extended duration missions.

  13. Spin formalism and applications to new physics searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, H.E.

    1994-12-01

    An introduction to spin techniques in particle physics is given. Among the topics covered are: helicity formalism and its applications to the decay and scattering of spin-1/2 and spin-1 particles, techniques for evaluating helicity amplitudes (including projection operator methods and the spinor helicity method), and density matrix techniques. The utility of polarization and spin correlations for untangling new physics beyond the Standard Model at future colliders such as the LHC and a high-energy e+e− linear collider is then considered. A number of detailed examples are explored, including the search for low-energy supersymmetry, a non-minimal Higgs boson sector, and new gauge bosons beyond the W± and Z.

  14. Hardware-in-the-Loop Modeling and Simulation Methods for Daylight Systems in Buildings

    NASA Astrophysics Data System (ADS)

    Mead, Alex Robert

    This dissertation introduces hardware-in-the-loop modeling and simulation techniques to the daylighting community, with specific application to complex fenestration systems. To the author's knowledge, no prior application of this class of techniques, which optimally combines mathematical modeling and physical-model experimentation, appears in the literature. Daylighting systems in buildings have a large impact on both the energy usage of a building and the occupant experience within a space. As such, a renewed interest has recently been placed on designing and constructing buildings with an emphasis on daylighting as part of the "green movement." Within daylighting systems, a specific subclass of building envelope is receiving much attention: complex fenestration systems (CFSs). CFSs are unique compared to regular fenestration systems (e.g., glazing) in that they allow non-specular transmission of daylight into a space. This non-specular nature can be leveraged by designers to "optimize" the times of the day and the days of the year that daylight enters a space. Examples of CFSs include Venetian blinds, woven fabric shades, and prismatic window coatings. In order to leverage the non-specular transmission properties of CFSs, however, engineering analysis techniques capable of faithfully representing the physics of these systems are needed. Traditionally, the analysis techniques available to the daylighting community fall broadly into three classes: simplified techniques, mathematical modeling and simulation, and physical modeling and experimentation. Simplified techniques use "rules-of-thumb" heuristics to provide insights for simple daylighting systems. Mathematical modeling and simulation use complex numerical models to provide more detailed insights into system performance.
Finally, physical models can be instrumented and excited using artificial and natural light sources to provide performance insight into a daylighting system. Broadly speaking, however, each class of techniques has advantages and disadvantages with respect to the cost of execution (e.g., money, time, expertise) and the fidelity of the provided insight into the performance of the daylighting system. This varying tradeoff of cost and insight determines which techniques are employed for which projects. Daylighting systems with CFS components, however, defy high-fidelity analysis with these traditional technique classes. Simplified techniques are clearly not applicable. Mathematical models must be highly complex to capture the non-specular transmission accurately, which greatly limits their applicability. This leaves physical modeling, the most costly class, as the preferred method for CFSs. While mathematical modeling and simulation methods do exist, they are in general costly and still only approximations of the underlying CFS behavior. In practice, measurement is currently the only practical way to capture the behavior of CFSs. Traditional measurements of CFS transmission and reflection properties are conducted using an instrument called a goniophotometer and produce a measurement in the form of a Bidirectional Scatter Distribution Function (BSDF) on the Klems basis. This measurement must be executed for each possible state of the CFS, so only a subset of the possible behaviors can be captured for CFSs with continuously varying configurations. In the current era of rapid prototyping (e.g., 3D printing) and automated control of buildings, including daylighting systems, a new analysis technique is needed that can faithfully represent the CFSs being designed and constructed at an increasing rate.
Hardware-in-the-loop modeling and simulation is a natural fit to the current need for analyzing daylighting systems with CFSs. In the hardware-in-the-loop modeling and simulation approach proposed in this dissertation, physical models of real CFSs are excited using either natural or artificial light. The exiting luminance distribution from these CFSs is measured and used as input to a Radiance mathematical model of the interior of the space that the CFS-containing daylighting system is proposed to light. Hence, the component of the total daylighting and building system that is not mathematically modeled well, the CFS, is physically excited and measured, while the component that is modeled well, namely the interior building space, is mathematically modeled. To excite and measure CFS behavior, a novel parallel goniophotometer, referred to as the CUBE 2.0, is developed in this dissertation. The CUBE 2.0 measures the input illuminance distribution and the output luminance distribution with respect to a CFS under test. Further, the process is fully automated, allowing for deployable experiments on proposed building sites as well as laboratory-based experiments. In this dissertation, three CFSs, two commercially available and one novel (Twitchell's Textilene 80 Black, Twitchell's Shade View Ebony, and Translucent Concrete Panels (TCP)), are simulated on the CUBE 2.0 system for daylong deployments at one-minute time steps. These CFSs are assumed to be placed in the glazing space within the Reference Office Radiance model, for which horizontal illuminance on a work plane at 0.8 m height is calculated for each time step. While Shade View Ebony and TCPs are unmeasured CFSs with respect to BSDF, Textilene 80 Black has been previously measured. As such, a validation of the CUBE 2.0 against the goniophotometer-measured BSDF is presented, with measurement errors of the horizontal illuminance between +3% and -10%. These error levels are considered valid for experimental daylighting investigations. Non-validated results are also presented in full for both Shade View Ebony and TCP. Concluding remarks and future directions for HWiL simulation close the dissertation.

  15. Invited review article: physics and Monte Carlo techniques as relevant to cryogenic, phonon, and ionization readout of Cryogenic Dark Matter Search radiation detectors.

    PubMed

    Leman, Steven W

    2012-09-01

    This review discusses detector physics and Monte Carlo techniques for cryogenic radiation detectors that utilize combined phonon and ionization readout. A general review of cryogenic phonon and charge transport is provided, along with specific details of the Cryogenic Dark Matter Search detector instrumentation. In particular, this review covers quasidiffusive phonon transport, which includes phonon focusing, anharmonic decay, and isotope scattering. The interaction of phonons at the detector surface is discussed, along with the downconversion of phonons in superconducting films. The charge transport physics includes a mass tensor, which results from the crystal band structure and is modeled with a Herring-Vogt transformation. Charge scattering processes involve the creation of Neganov-Luke phonons. Transition-edge-sensor (TES) simulations include a full electric-circuit description and all thermal processes, including Joule heating, cooling to the substrate, and thermal diffusion within the TES, the last of which is necessary to model normal-superconducting phase separation. Relevant numerical constants are provided for these physical processes in germanium, silicon, aluminum, and tungsten. Random-number sampling methods, including inverse cumulative distribution function (CDF) and rejection techniques, are reviewed. To improve the efficiency of charge transport modeling, an additional second-order inverse CDF method is developed here, along with an efficient barycentric-coordinate sampling method for electric fields. Results are provided in a manner that is convenient for use in Monte Carlo codes, and references are provided for validation of these models.
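    The two sampling methods named above can be illustrated with a minimal sketch. These are textbook examples of the inverse-CDF and rejection techniques, not the paper's specific phonon or charge processes.

```python
import numpy as np

def sample_exponential_inverse_cdf(rate, n, rng):
    """Inverse-CDF sampling: for F(x) = 1 - exp(-rate*x),
    solving F(x) = u gives x = -ln(1-u)/rate with u ~ Uniform(0,1)."""
    u = rng.random(n)
    return -np.log1p(-u) / rate

def sample_halfgaussian_rejection(n, rng):
    """Rejection sampling of a half-Gaussian target f using an
    exponential proposal g(x) = exp(-x); accept with prob f/(M*g)."""
    M = np.sqrt(2.0 * np.e / np.pi)   # sup of f/g, attained at x = 1
    out = []
    while len(out) < n:
        x = sample_exponential_inverse_cdf(1.0, n, rng)
        f = np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x**2)
        g = np.exp(-x)
        keep = rng.random(n) < f / (M * g)
        out.extend(x[keep].tolist())
    return np.array(out[:n])

rng = np.random.default_rng(1)
exp_samples = sample_exponential_inverse_cdf(2.0, 100_000, rng)
hg_samples = sample_halfgaussian_rejection(50_000, rng)
```

    The inverse-CDF route needs one uniform draw per sample but requires an invertible CDF; rejection trades that requirement for a tunable acceptance rate (here about 1/M ≈ 0.76).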

  16. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems in which control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using a continuation-passing-style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.

  17. Informatics and physics intersubject communications in the 7th and 8th grades of the basics level by means of computer modeling

    NASA Astrophysics Data System (ADS)

    Vasina, A. V.

    2017-01-01

    The author of this article shares pedagogical experience in realizing intersubject connections among the school courses of informatics, technology, and physics through student research activity, using specialized programs for developing and studying computer models of physical processes. The technique considered is based on the principles of independent scholarly activity by students and on intersubject connections among the educational disciplines of technology, physics, and informatics; it helps to develop students' research activity and a professional, practice-oriented education. As an example, a lesson on modeling flotation using the "1C Physical Simulator" environment is considered.

  18. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    EXPERIMENTAL VALIDATION TECHNIQUES FOR THE HELEEOS OFF-AXIS LASER PROPAGATION MODEL. Thesis, John Haiducek, 1st Lt, USAF, BS, Physics (AFIT/GAP/ENP/10-M07), Air Force Institute of Technology, March 2010. Approved for public release; distribution unlimited. The views expressed are those of the author and do not reflect the official policy of the Department of Defense or the United States Government. Abstract: The High Energy Laser End-to-End

  19. An uncertainty model of acoustic metamaterials with random parameters

    NASA Astrophysics Data System (ADS)

    He, Z. C.; Hu, J. Y.; Li, Eric

    2018-01-01

    Acoustic metamaterials (AMs) are man-made composite materials. Random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change-of-variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of the physical responses of AMs with random parameters. Three types of physical responses, the band structure, mode shapes, and frequency response function of AMs, are studied in the uncertainty model; these are of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters using a first-order Taylor series expansion and perturbation technique. Then, based on the linear relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are successfully validated against the Monte Carlo method.
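    The linearize-then-change-variables procedure can be sketched for a scalar toy response (my own illustrative function, not the paper's band-structure computation): if Y ≈ a + b(X − μ) with X Gaussian, the change-of-variable formula gives f_Y(y) = f_X((y − a)/b + μ)/|b|, i.e. Y is Gaussian with mean a and standard deviation |b|σ.

```python
import numpy as np

# Toy "response" of one random parameter (a stand-in for, say, a band-edge
# frequency as a function of a random material modulus).
def response(x):
    return np.sqrt(x) + 0.1 * x

mu, sigma = 4.0, 0.05            # random parameter: mean and std deviation

# First-order Taylor expansion around the mean: Y ≈ a + b*(X - mu)
h = 1e-6
a = response(mu)
b = (response(mu + h) - response(mu - h)) / (2 * h)   # dg/dx at mu

def pdf_Y(y):
    """Change-of-variable density of the linearized response:
    f_Y(y) = f_X((y - a)/b + mu) / |b|."""
    x = (y - a) / b + mu
    f_X = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return f_X / abs(b)

# Monte Carlo cross-check of the analytic density, as in the paper's validation
rng = np.random.default_rng(2)
samples = response(rng.normal(mu, sigma, 200_000))
```

    For small σ the Monte Carlo histogram of `samples` matches `pdf_Y` closely; the perturbation method gets the density in one pass where Monte Carlo needs many response evaluations.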

  20. Data-driven Applications for the Sun-Earth System

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.

    2016-12-01

    Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first-principles physical models. However, equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate fusion of data with models. This presentation will provide an overview of existing as well as newly developed data-driven techniques adopted from the atmospheric and oceanic sciences that have proved useful for space physics applications, such as a computationally efficient implementation of the Kalman filter in radiation-belt modeling, solar-wind gap-filling by Singular Spectrum Analysis, and a low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will also be demonstrated.
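    As one example of the assimilative tools mentioned, a minimal scalar Kalman filter can be written in a few lines. This is the generic textbook form, not the radiation-belt implementation discussed in the talk, and the data are synthetic.

```python
import numpy as np

def kalman_1d(z, x0, P0, F, Q, H, R):
    """Scalar Kalman filter: state model x_k = F x_{k-1} + noise(Q),
    observation z_k = H x_k + noise(R). Returns filtered estimates."""
    x, P = x0, P0
    out = []
    for zk in z:
        # Predict step
        x = F * x
        P = F * P * F + Q
        # Update step with measurement zk
        K = P * H / (H * P * H + R)        # Kalman gain
        x = x + K * (zk - H * x)
        P = (1 - K * H) * P
        out.append(x)
    return np.array(out)

# Hypothetical example: noisy measurements of a constant signal
rng = np.random.default_rng(3)
truth = 5.0
z = truth + rng.normal(0.0, 1.0, 200)
est = kalman_1d(z, x0=0.0, P0=10.0, F=1.0, Q=0.0, H=1.0, R=1.0)
```

    With Q = 0 the filter reduces to recursive averaging, so the estimate converges toward the sample mean of the measurements; the same predict/update recursion scales to the vector-matrix case used in assimilation.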

  1. What Factors Determine the Uptake of A-Level Physics?

    ERIC Educational Resources Information Center

    Gill, Tim; Bell, John F.

    2013-01-01

    There has been much concern recently in the UK about the decline in the number of students studying physics beyond age 16. To investigate why this might be, we used data from a national database of student qualifications and a multilevel modelling technique to investigate which factors had the greatest impact on the uptake of physics at Advanced…

  2. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    PubMed

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (222Rn). The physical assumption underlying the modelling is that Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model, and apply it to sections where the controls are available but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized-linear-model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines, we show, as our second purpose, that missing data or periods of Rn series data can be reasonably reconstructed and resampled on a regular grid if data on appropriate physical controls are available. The techniques also identify to what degree the assumed controls contribute to imputing missing Rn values. Our third purpose, no less important from the viewpoint of physics, is identifying to what degree physical (in this case environmental) variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity, and day of the year.
The first two are physical predictors, while "day of the year" is a statistical proxy or surrogate for missing or unknown predictors.
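    The reconstruction workflow described above (fit the response on complete sections, then predict over the gaps from the environmental controls) can be sketched as follows, with synthetic data and ordinary least squares as a simple stand-in for the paper's gradient boosting machine.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Synthetic environmental controls: temperature cycle plus pressure noise
temp = 15 + 10 * np.sin(np.linspace(0, 8 * np.pi, n)) + rng.normal(0, 1, n)
press = 1013 + rng.normal(0, 5, n)
# Synthetic radon series driven by the controls plus measurement noise
radon = 50 - 1.5 * temp + 0.2 * (press - 1013) + rng.normal(0, 2, n)

# Knock out a block of Rn values to imitate a gap in the series
mask = np.ones(n, dtype=bool)
mask[400:500] = False

# "Learn" the dependence model on the complete sections only
X = np.column_stack([np.ones(n), temp, press])
coef, *_ = np.linalg.lstsq(X[mask], radon[mask], rcond=None)

# Impute the gap from the available controls
radon_filled = radon.copy()
radon_filled[~mask] = X[~mask] @ coef
```

    Swapping the least-squares fit for a random forest or gradient boosting regressor changes only the model-fitting lines; the learn-on-complete, predict-on-gap structure is the same.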

  3. On the physical basis of a theory of human thermoregulation.

    NASA Technical Reports Server (NTRS)

    Iberall, A. S.; Schindler, A. M.

    1973-01-01

    Theoretical study of the physical factors which are responsible for thermoregulation in nude resting humans in a physical steady state. The behavior of oxidative metabolism, evaporative and convective thermal fluxes, fluid heat transfer, internal and surface temperatures, and evaporative phase transitions is studied by physiological/physical modeling techniques. The modeling is based on the theories that the body has a vital core with autothermoregulation, that the vital core contracts longitudinally, that the temperature of peripheral regions and extremities decreases towards the ambient, and that a significant portion of the evaporative heat may be lost underneath the skin. A theoretical basis is derived for a consistent modeling of steady-state thermoregulation on the basis of these theories.

  4. Improving wave forecasting by integrating ensemble modelling and machine learning

    NASA Astrophysics Data System (ADS)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
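    The learning-aggregation step, weighting each ensemble member by its past skill against observations, can be sketched with inverse-MSE weighting. This is one common choice, not necessarily the authors' exact algorithm, and the data here are synthetic.

```python
import numpy as np

def inverse_mse_weights(past_forecasts, past_obs):
    """Weight each ensemble member by the inverse of its past mean
    squared error against observations, normalized to sum to one."""
    mse = np.mean((past_forecasts - past_obs) ** 2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

rng = np.random.default_rng(5)
obs = 1.0 + 0.5 * np.sin(np.linspace(0, 6, 300))   # "true" wave height (m)
# Three hypothetical ensemble members with different biases and error levels
members = np.stack([
    obs + rng.normal(0.0, 0.10, 300),
    obs + rng.normal(0.2, 0.30, 300),
    obs + rng.normal(-0.3, 0.50, 300),
])

train, test = slice(0, 200), slice(200, 300)
w = inverse_mse_weights(members[:, train], obs[train])
forecast = w @ members[:, test]               # aggregated "best estimate"

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
```

    The weights learned on the training window concentrate on the most skillful member, so the aggregate tracks the observations better than the weaker members do on the held-out window.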

  5. Computer Code for Interpreting 13C NMR Relaxation Measurements with Specific Models of Molecular Motion: The Rigid Isotropic and Symmetric Top Rotor Models and the Flexible Symmetric Top Rotor Model

    DTIC Science & Technology

    2017-01-01

    Carbon-13 nuclear magnetic resonance (13C NMR) spectroscopy is a powerful technique for... One application of NMR spectroscopy concerns the property of molecular motion, which is related to many physical, and even biological, functions of molecules in

  6. Project Super Heart--Year One.

    ERIC Educational Resources Information Center

    Bellardini, Harry; And Others

    1980-01-01

    A model cardiovascular disease prevention program for young children is described. Components include physical examinations, health education (anatomy and physiology of the cardiovascular system), nutrition instruction, first aid techniques, role modeling, and environmental engineering. (JN)

  7. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  8. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  9. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images comparable to those of conventional optical holography. However, it has been difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and place them on exhibition. In this paper, using this technique, we demonstrate full-color CGHs that reconstruct hybrid scenes comprising real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  10. Optimization model for UDWDM-PON deployment based on physical restrictions and asymmetric user's clustering

    NASA Astrophysics Data System (ADS)

    Arévalo, Germán. V.; Hincapié, Roberto C.; Sierra, Javier E.

    2015-09-01

    UDWDM-PON is a leading technology oriented to providing ultra-high bandwidth to end users while exploiting the capacity of the physical channels. One of the main drawbacks of the UDWDM technique is that nonlinear effects, like FWM, become stronger due to the close spectral proximity among channels. This work proposes a model for the optimal deployment of this type of network, taking into account the fiber-length limitations imposed by physical restrictions on data transmission over the fiber, as well as the asymmetric distribution of users in a given region. The proposed model employs the data-transmission-related effects in UDWDM-PON as restrictions in the optimization problem, and also considers the asymmetric clustering of users and the subdivision of the user region through a Voronoi geometric partition technique. The Voronoi dual graph, that is, the Delaunay triangulation, is used as the planar graph for solving the problem of the minimum weight of the fiber links.
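    The Delaunay-triangulation step can be sketched as follows, with hypothetical user-cluster coordinates and SciPy's `Delaunay` in place of whatever solver the authors used. The triangulation's edges give the candidate fiber links between neighboring clusters, over which a minimum-weight selection would then run.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical user-cluster centroids (km); the last point lies inside
# the triangle formed by the first three
points = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
tri = Delaunay(points)

# Collect the unique edges of the triangulation: the candidate fiber
# links between neighboring clusters (dual of the Voronoi partition)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
total_length = sum(lengths.values())
```

    Restricting the optimization to Delaunay edges keeps the candidate-link graph planar and sparse, which is why it is a common substrate for minimum-weight network deployment problems.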

  11. Large Terrain Modeling and Visualization for Planets

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher

    2011-01-01

    Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulation is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage to deal with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail approach that delivers performance of multiple frames per second even for planetary-scale terrain models.

  12. Alignment issues, correlation techniques and their assessment for a visible light imaging-based 3D printer quality control system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2016-05-01

    Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds based on historical quality data. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancy between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.

  13. Relative Motion of the WDS 05110+3203 STF 648 System, With a Protocol for Calculating Relative Motion

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2010-07-01

    Relative motion studies of visual double stars can be carried out using least-squares regression techniques and readily accessible tools such as Microsoft Excel and a calculator. Under most geometries, optical pairs differ from physical pairs in both their simple scatter plots and their regression models. A step-by-step protocol for estimating the rectilinear elements of an optical pair is presented, and the characteristics of physical pairs under these techniques are discussed.
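    The least-squares approach to rectilinear elements can be sketched in a few lines, with synthetic measurements and NumPy in place of the Excel workflow described. The closest-approach epoch and separation below follow the standard rectilinear-motion definition; the data are hypothetical.

```python
import numpy as np

# Hypothetical relative positions (arcsec) of the secondary at epochs t
# (years): an optical pair drifting linearly, plus small measurement noise
rng = np.random.default_rng(6)
t = np.array([1900.0, 1925.0, 1950.0, 1975.0, 2000.0])
x = 2.0 + 0.040 * (t - 1900) + rng.normal(0, 0.02, t.size)
y = -1.0 + 0.015 * (t - 1900) + rng.normal(0, 0.02, t.size)

# Least-squares linear fits x(t) and y(t); the slopes are the relative
# proper-motion rates (np.polyfit returns [slope, intercept])
bx, ax = np.polyfit(t, x, 1)
by, ay = np.polyfit(t, y, 1)

# Epoch and separation of closest approach of the fitted straight line
# to the primary: minimize |r(t)|^2 for r(t) = (ax + bx t, ay + by t)
t0 = -(ax * bx + ay * by) / (bx**2 + by**2)
r0 = np.hypot(ax + bx * t0, ay + by * t0)
```

    A physical (orbiting) pair would show curvature that a straight-line fit cannot absorb, which is one of the scatter-plot diagnostics the protocol relies on.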

  14. Toward a comprehensive hybrid physical-virtual reality simulator of peripheral anesthesia with ultrasound and neurostimulator guidance.

    PubMed

    Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A

    2011-01-01

    We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.

  15. Teaching Biology to Visually Handicapped Students. Resource Manual.

    ERIC Educational Resources Information Center

    Ricker, Kenneth S.

    This resource manual presents numerous techniques for adapting science activities to the visually handicapped student, applicable to introductory biology courses in which microscopes are used extensively in the laboratory. Chapters include information on the following: alternative microscopic viewing techniques, physical models, tactile diagrams,…

  16. The Use of 3D Printing Technology in the Ilizarov Method Treatment: Pilot Study.

    PubMed

    Burzyńska, Karolina; Morasiewicz, Piotr; Filipiak, Jarosław

    2016-01-01

    Significant developments in additive manufacturing technology have occurred in recent years. 3D printing techniques can also be helpful in the Ilizarov method treatment. The aim of this study was to evaluate the usefulness of 3D printing technology in the Ilizarov method treatment. Physical models of bones used to plan the spatial design of Ilizarov external fixator were manufactured by FDM (Fused Deposition Modeling) spatial printing technology. Bone models were made of poly(L-lactide) (PLA). Printed 3D models of both lower leg bones allow doctors to prepare in advance for the Ilizarov method treatment: detailed consideration of the spatial configuration of the external fixation, experimental assembly of the Ilizarov external fixator onto the physical models of bones prior to surgery, planning individual osteotomy level and Kirschner wires introduction sites. Printed 3D bone models allow for accurate preparation of the Ilizarov apparatus spatially matched to the size of the bones and prospective bone distortion. Employment of the printed 3D models of bone will enable a more precise design of the apparatus, which is especially useful in multiplanar distortion and in the treatment of axis distortion and limb length discrepancy in young children. In the course of planning the use of physical models manufactured with additive technology, attention should be paid to certain technical aspects of model printing that have an impact on the accuracy of mapping of the geometry and physical properties of the model. 3D printing technique is very useful in 3D planning of the Ilizarov method treatment.

  17. Imaging plasmas at the Earth and other planets

    NASA Astrophysics Data System (ADS)

    Mitchell, D. G.

    2006-05-01

    The field of space physics, both at Earth and at other planets, was for decades a science based on local observations. By stitching together measurements of plasmas and fields from multiple locations, either simultaneously or for similar conditions over time, and by comparing those measurements against models of the physical systems, great progress was made in understanding the physics of Earth and planetary magnetospheres, ionospheres, and their interactions with the solar wind. However, the pictures of the magnetospheres were typically statistical, and the large-scale global models were poorly constrained by observation. This situation changed dramatically with global auroral imaging, which provided snapshots and movies of the effects of field-aligned currents and particle precipitation over the entire auroral oval during quiet and disturbed times. With the advent of global energetic neutral atom (ENA) and extreme ultraviolet (EUV) imaging, global constraints have similarly been added to ring current and plasmaspheric models, respectively. Such global constraints on global models are very useful for validating the physics represented in those models: the physics of energy and momentum transport, electric and magnetic field distribution, and magnetosphere-ionosphere coupling. These techniques are also proving valuable at other planets. For example, with Hubble Space Telescope imaging of Jupiter and Saturn auroras, and ENA imaging at Jupiter and Saturn, we are gaining new insights into the magnetic fields, gas-plasma interactions, magnetospheric dynamics, and magnetosphere-ionosphere coupling at the giant planets. These techniques, especially ENA and EUV imaging, rely on very recent and evolving technological capabilities. Because ENA and EUV techniques apply to optically thin media, interpretation of their measurements requires sophisticated inversion procedures, which are still under development. 
We will discuss the directions new developments in imaging are taking, what technologies and mission scenarios might best take advantage of them, and how our understanding of the Earth's and other planets' plasma environments may benefit from such advancements.

  18. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  19. Deformation-based augmented reality for hepatic surgery.

    PubMed

    Haouchine, Nazim; Dequidt, Jérémie; Berger, Marie-Odile; Cotin, Stéphane

    2013-01-01

    In this paper we introduce a method for augmenting the laparoscopic view during hepatic tumor resection. Using augmented reality techniques, vessels, tumors and cutting planes computed from pre-operative data can be overlaid onto the laparoscopic video. Compared to current techniques, which are limited to a rigid registration of the pre-operative liver anatomy with the intra-operative image, we propose a real-time, physics-based, non-rigid registration. The main strength of our approach is that the deformable model can also be used to regularize the data extracted from the computer vision algorithms. We show preliminary results on a video sequence which clearly highlight the value of using a physics-based model for elastic registration.

  20. Automatic determination of fault effects on aircraft functionality

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1989-01-01

    The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. 
Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.

  1. Integrated Formulation of Beacon-Based Exception Analysis for Multimissions

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail

    2003-01-01

    Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.

  2. Apps to promote physical activity among adults: a review and content analysis.

    PubMed

    Middelweerd, Anouk; Mollee, Julia S; van der Wal, C Natalie; Brug, Johannes; Te Velde, Saskia J

    2014-07-25

    In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Means and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences were found in the number of behavior change techniques between free and paid apps, or between the app stores. The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number did not differ between paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.
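    To make the nonparametric comparison concrete, here is a minimal Mann-Whitney U computation on per-app technique counts; the counts are invented example data, not the study's, and a real analysis would use a library routine such as scipy.stats.mannwhitneyu.

    ```python
    # Sketch of the nonparametric comparison: a Mann-Whitney U statistic on
    # the number of behaviour change techniques (BCTs) per app.
    # The counts below are invented example data, not the study's data.

    def mann_whitney_u(a, b):
        """U statistic: pairs where a's value beats b's (ties count 0.5)."""
        return sum(1.0 if x > y else 0.5 if x == y else 0.0
                   for x in a for y in b)

    free_apps = [4, 5, 3, 6, 5, 4, 7, 5]   # hypothetical BCT counts, free apps
    paid_apps = [5, 4, 6, 5, 3, 6, 4, 5]   # hypothetical BCT counts, paid apps

    u = mann_whitney_u(free_apps, paid_apps)
    n1, n2 = len(free_apps), len(paid_apps)
    # Under the null hypothesis U is centred at n1*n2/2; a value near the
    # centre is consistent with "no difference", as the study found.
    print(u, n1 * n2 / 2)
    ```

    The two complementary U statistics always sum to n1*n2, which is a quick sanity check on the implementation.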

  3. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004--2008

    USGS Publications Warehouse

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
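    The Bayesian sampling strategy can be sketched in a few lines of Metropolis MCMC; the one-parameter linear forward model, prior bounds, and all numbers below are invented stand-ins for the paper's physics-based eruption model.

    ```python
    import math, random

    # Illustrative Metropolis sampler (not the authors' code): estimate one
    # model parameter from noisy synthetic data, in the spirit of Bayesian
    # inversion with a physics-based forward model.
    random.seed(0)

    def forward(p):
        # stand-in "physics": observable proportional to the parameter
        return 2.0 * p

    true_p, sigma = 3.0, 0.5
    data = [forward(true_p) + random.gauss(0, sigma) for _ in range(20)]

    def log_post(p):
        if not (0.0 < p < 10.0):          # uniform prior bounds
            return -math.inf
        return -sum((d - forward(p)) ** 2 for d in data) / (2 * sigma ** 2)

    p, samples = 1.0, []
    for step in range(5000):
        cand = p + random.gauss(0, 0.2)   # symmetric random-walk proposal
        if math.log(random.random()) < log_post(cand) - log_post(p):
            p = cand
        if step >= 1000:                  # discard burn-in
            samples.append(p)

    estimate = sum(samples) / len(samples)
    print(round(estimate, 2))             # posterior mean, near true_p
    ```

    The same machinery generalizes to the many-parameter case (chamber volume, depth, volatile content) by making `p` a vector and `forward` the physics-based model.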

  4. Techniques for determining physical zones of influence

    DOEpatents

    Hamann, Hendrik F; Lopez-Marrero, Vanessa

    2013-11-26

    Techniques for analyzing flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid-points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution for the boundary value problems at each of the grid-points. The boundary problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.
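    As an illustration of the general recipe (grid, source, boundary value problem, post-processing), here is a toy sketch; the Laplace-type relaxation, grid size, and threshold are assumptions for illustration, not the patented method.

    ```python
    # Toy sketch: solve a Laplace-type boundary value problem on a grid with
    # one fixed "source" cell, then threshold the solution to delineate the
    # region influenced by that source. All choices here are illustrative.

    N = 21
    grid = [[0.0] * N for _ in range(N)]
    src = (10, 10)                        # hypothetical source location

    for _ in range(600):                  # Jacobi relaxation sweeps
        new = [[0.0] * N for _ in range(N)]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        new[src[0]][src[1]] = 1.0         # Dirichlet condition at the source
        grid = new

    # Post-process: cells above a threshold form the zone of influence.
    zone = {(i, j) for i in range(N) for j in range(N) if grid[i][j] > 0.05}
    print(len(zone))
    ```

    A directional vector field, as in the patent, would replace the isotropic Laplacian stencil with an advection-weighted one, but the grid/solve/post-process pipeline is the same.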

  5. Modelling and control issues of dynamically substructured systems: adaptive forward prediction taken as an example

    PubMed Central

    Tu, Jia-Ying; Hsiao, Wei-De; Chen, Chih-Ying

    2014-01-01

    Testing techniques for dynamically substructured systems dissect an entire engineering system into parts. Components can be tested via numerical simulation or physical experiments and run synchronously. Additional actuator systems, which interface the numerical and physical parts, are required within the physical substructure. A high-quality controller, designed to cancel unwanted dynamics introduced by the actuators, is important in order to synchronize the numerical and physical outputs and ensure successful tests. An adaptive forward prediction (AFP) algorithm based on delay compensation concepts has been proposed to deal with substructuring control issues. Although the settling performance and numerical conditions of the AFP controller are improved using new direct-compensation and singular value decomposition methods, the experimental results show that a linear dynamics-based controller still outperforms the AFP controller. Based on experimental observations, the least-squares fitting technique, the effectiveness of AFP compensation and the differences between delay and ordinary differential equations are discussed herein, in order to reflect the fundamental issues of actuator modelling in the relevant literature and, more specifically, to show that the actuator and numerical substructure are heterogeneous dynamic components and should not be collectively modelled as a homogeneous delay differential equation. PMID:25104902
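    Forward prediction as a delay compensator can be illustrated with a least-squares polynomial extrapolator: fit the most recent samples of a signal and evaluate the fit one delay-time ahead. This is a generic sketch of the idea, not the paper's AFP algorithm, and the signal and delay values are invented.

    ```python
    import math

    def predict_ahead(samples, dt, delay, order=2):
        """Least-squares polynomial fit of recent samples, evaluated at t+delay."""
        n = len(samples)
        ts = [i * dt for i in range(n)]
        # normal equations for the polynomial fit (order is small, so OK)
        A = [[sum(t ** (i + j) for t in ts) for j in range(order + 1)]
             for i in range(order + 1)]
        b = [sum(s * t ** i for s, t in zip(samples, ts))
             for i in range(order + 1)]
        # Gaussian elimination with partial pivoting
        for col in range(order + 1):
            piv = max(range(col, order + 1), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, order + 1):
                f = A[r][col] / A[col][col]
                for c in range(col, order + 1):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        coef = [0.0] * (order + 1)
        for r in range(order, -1, -1):
            coef[r] = (b[r] - sum(A[r][c] * coef[c]
                                  for c in range(r + 1, order + 1))) / A[r][r]
        t_pred = ts[-1] + delay
        return sum(c * t_pred ** i for i, c in enumerate(coef))

    # Usage: predict a 1 Hz sine 20 ms ahead from its last 10 samples.
    dt, delay = 0.01, 0.02
    hist = [math.sin(2 * math.pi * i * dt) for i in range(10)]
    pred = predict_ahead(hist, dt, delay)
    true = math.sin(2 * math.pi * (9 * dt + delay))
    print(abs(pred - true))    # small extrapolation error
    ```

    The paper's point survives even in this toy: the compensator treats the actuator as a pure delay, which is only an approximation of its actual (heterogeneous) dynamics.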

  6. Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians

    NASA Astrophysics Data System (ADS)

    Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von

    2008-03-01

    Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. In doing so, IDA addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions, allowing the comparison and integration of different diagnostics' results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are explored for fast analysis applications.
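    The core probabilistic combination step, two diagnostics' results expressed as probability densities and merged into a joint posterior, reduces in the Gaussian case to an inverse-variance weighted product; the example numbers are invented.

    ```python
    import math

    # Illustration of the combination step: two diagnostics measuring the
    # same quantity, each summarised as a Gaussian, multiply into a joint
    # posterior (flat prior assumed). All numbers are invented.

    def combine(mu1, s1, mu2, s2):
        """Posterior mean/std from the product of two Gaussian likelihoods."""
        w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2      # inverse-variance weights
        mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
        return mu, math.sqrt(1.0 / (w1 + w2))

    # e.g. an electron temperature (keV) from two hypothetical diagnostics
    mu, s = combine(2.0, 0.4, 2.3, 0.2)
    print(round(mu, 3), round(s, 3))   # combined estimate, tighter error bar
    ```

    A large discrepancy between the two input means relative to their widths is exactly the signature IDA exploits to flag a faulty measurement or a systematic effect.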

  7. EXPLORATION OF SIMULATION AS A RETIREMENT EDUCATION TECHNIQUE. FINAL REPORT.

    ERIC Educational Resources Information Center

    BOOCOCK, SARANE SPENCE; SPRAGUE, NORMAN

    A PILOT PROJECT EXPLORED THE ADAPTATION OF SIMULATION TECHNIQUES TO FOUR RETIREMENT PROBLEMS--FINANCIAL POSITION, PHYSICAL ENVIRONMENT (HOUSING CHOICES), HEALTH, AND SOCIAL ENVIRONMENT (PLANNING AND GAINING SKILLS BEFORE RETIREMENT). A PRELIMINARY MODEL OF A GAME IN RETIREMENT FINANCE PRESENTS PLAYERS WITH THREE INVESTMENT SITUATIONS--SAVINGS…

  8. Tumour and normal tissue radiobiology in mouse models: how close are mice to mini-humans?

    PubMed

    Koontz, Bridget F; Verhaegen, Frank; De Ruysscher, Dirk

    2017-01-01

    Animal modelling is essential to the study of radiobiology and the advancement of clinical radiation oncology by providing preclinical data. Mouse models in particular have been highly utilized in the study of both tumour and normal tissue radiobiology because of their cost effectiveness and versatility. Technology has significantly advanced in preclinical radiation techniques to allow highly conformal image-guided irradiation of small animals in an effort to mimic human treatment capabilities. However, the biological and physical limitations of animal modelling should be recognized and considered when interpreting preclinical radiotherapy (RT) studies. Murine tumour and normal tissue radioresponse has been shown to vary from human cellular and molecular pathways. Small animal irradiation techniques utilize different anatomical boundaries and may have different physical properties than human RT. This review addresses the difference between the human condition and mouse models and discusses possible strategies for future refinement of murine models of cancer and radiation for the benefit of both basic radiobiology and clinical translation.

  9. Tumour and normal tissue radiobiology in mouse models: how close are mice to mini-humans?

    PubMed Central

    Verhaegen, Frank; De Ruysscher, Dirk

    2017-01-01

    Animal modelling is essential to the study of radiobiology and the advancement of clinical radiation oncology by providing preclinical data. Mouse models in particular have been highly utilized in the study of both tumour and normal tissue radiobiology because of their cost effectiveness and versatility. Technology has significantly advanced in preclinical radiation techniques to allow highly conformal image-guided irradiation of small animals in an effort to mimic human treatment capabilities. However, the biological and physical limitations of animal modelling should be recognized and considered when interpreting preclinical radiotherapy (RT) studies. Murine tumour and normal tissue radioresponse has been shown to vary from human cellular and molecular pathways. Small animal irradiation techniques utilize different anatomical boundaries and may have different physical properties than human RT. This review addresses the difference between the human condition and mouse models and discusses possible strategies for future refinement of murine models of cancer and radiation for the benefit of both basic radiobiology and clinical translation. PMID:27612010

  10. Satellite-enhanced dynamical downscaling for the analysis of extreme events

    NASA Astrophysics Data System (ADS)

    Nunes, Ana M. B.

    2016-09-01

    The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to maintain the large-scale features from the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve the dynamics, downscaling via regional modeling still aims for a better representation of physical processes. With the purpose of improving dynamics and physical processes in the regional downscaling of global reanalysis, the Regional Spectral Model, originally developed at the National Centers for Environmental Prediction, employs a newly reformulated scale-selective bias correction, together with 3-hourly assimilation of satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. This two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin, through comparisons with global reanalyses and satellite products available in ocean areas.

  11. Survey of current situation in radiation belt modeling

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.

    2004-01-01

    The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancement in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.

  12. MERINOVA: Meteorological risks as drivers of environmental innovation in agro-ecosystem management

    NASA Astrophysics Data System (ADS)

    Gobin, Anne; Oger, Robert; Marlier, Catherine; Van De Vijver, Hans; Vandermeulen, Valerie; Van Huylenbroeck, Guido; Zamani, Sepideh; Curnel, Yannick; Mettepenningen, Evi

    2013-04-01

    The BELSPO-funded project 'MERINOVA' deals with risks associated with extreme weather phenomena and with risks of biological origin such as pests and diseases. The major objectives of the project are to characterise extreme meteorological events, assess their impact on Belgian agro-ecosystems, characterise the vulnerability and resilience of those agro-ecosystems to these events, and explore innovative adaptation options for agricultural risk management. The project comprises five major parts that reflect the chain of risks: (i) Hazard: assessing the likely frequency and magnitude of extreme meteorological events by means of probability density functions; (ii) Impact: analysing the potential bio-physical and socio-economic impact of extreme weather events on agro-ecosystems in Belgium using process-based modelling techniques commensurate with the regional scale; (iii) Vulnerability: identifying the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (iv) Risk Management: uncovering innovative risk management and adaptation options using actor-network theory and fuzzy cognitive mapping techniques; and (v) Communication: communicating to research, policy and practitioner communities using web-based techniques. The different tasks of the MERINOVA project require expertise in several scientific disciplines: meteorology, statistics, spatial database management, agronomy, bio-physical impact modelling, socio-economic modelling, actor-network theory and fuzzy cognitive mapping techniques. This expertise is shared by the four scientific partners, who each lead one work package. The MERINOVA project will concentrate on promoting a robust and flexible framework by demonstrating its performance across Belgian agro-ecosystems and by ensuring its relevance to policy makers and practitioners. 
    Impacts derived from physically based models will not only provide information on the state of the damage at any given time, but will also assist in understanding the links between different factors causing damage and in determining bio-physical vulnerability. Socio-economic impacts will enlarge the basis for vulnerability mapping, risk management and adaptation options. A strong expert and end-user network will be established to help disseminate and exploit project results to meet user needs.

  13. Finite element modeling of truss structures with frequency-dependent material damping

    NASA Technical Reports Server (NTRS)

    Lesieutre, George A.

    1991-01-01

    A physically motivated modelling technique for structural dynamic analysis that accommodates frequency-dependent material damping was developed. Key features of the technique are the introduction of augmenting thermodynamic fields (AFT) to interact with the usual mechanical displacement field, and the treatment of the resulting coupled governing equations using finite element analysis methods. The AFT method is fully compatible with current structural finite element analysis techniques. The method is demonstrated in the dynamic analysis of a 10-bay planar truss structure, a structure representative of those contemplated for use in future space systems.

  14. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939
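    The microscale mass-transport component can be caricatured by a single ODE for cell water volume driven by the osmotic imbalance as the extracellular solution concentrates during freezing; the permeability, concentration ramp, and floor value below are invented illustrative numbers, not a validated cryobiology model.

    ```python
    # Toy sketch of membrane water transport during freezing: water leaves
    # the cell in proportion to the osmotic imbalance, integrated with a
    # simple explicit Euler step. All parameters are invented.

    Lp_A = 0.05                  # hypothetical permeability x area (1/min)
    V, V_iso = 1.0, 1.0          # normalised cell water volume
    dt, t_end = 0.01, 30.0       # time step and duration (minutes)

    vols = []
    t = 0.0
    while t < t_end:
        c_ext = 1.0 + 0.1 * t            # external concentration rises as ice forms
        c_int = V_iso / V                # internal concentration by solute conservation
        dVdt = -Lp_A * (c_ext - c_int)   # efflux when outside is more concentrated
        V = max(V + dVdt * dt, 0.05)     # crude floor: osmotically inactive volume
        vols.append(V)
        t += dt

    print(round(vols[-1], 3))            # cell shrinks toward osmotic equilibrium
    ```

    Coupling this microscale ODE to a macroscale heat-transfer solution (which sets the local freezing rate, here the hard-coded concentration ramp) is exactly the multi-scale structure the review describes.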

  15. A comparison of two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund

    NASA Astrophysics Data System (ADS)

    Luks, B.; Osuch, M.; Romanowicz, R. J.

    2012-04-01

    We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically-based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalence and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). Those variables are used for physically-based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped-parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed that follows the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of the uncertainty of both model outputs. In the time series approach, the applied techniques also provide estimates of the modelling errors and the uncertainty of the model parameters. In the first, physically-based approach, the applied UEB model is deterministic. It assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take into account the model and observation errors, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique. This technique also provides estimates of the modelling errors and the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by the National Science Centre of Poland (grant no. 
7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Ed.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/). 64 pp.
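    The GLUE procedure mentioned above can be sketched as Monte Carlo parameter sampling with a behavioural threshold; the degree-day melt model, synthetic observations, and threshold value below are invented for illustration and are not the study's setup.

    ```python
    import random

    # GLUE sketch: sample a parameter from its prior, score each simulation
    # against observations, keep "behavioural" runs, and form likelihood-
    # weighted bounds. Model and data here are invented.
    random.seed(1)

    temps = [2.0, 3.0, 1.0, 4.0, 2.5]              # daily mean air temp (degC)
    true_k = 0.6                                   # hidden degree-day factor
    obs = [true_k * max(t, 0) + random.gauss(0, 0.1) for t in temps]

    def simulate(k):
        return [k * max(t, 0) for t in temps]      # toy degree-day melt model

    behavioural = []
    for _ in range(5000):
        k = random.uniform(0.0, 2.0)               # sample from uniform prior
        sse = sum((s - o) ** 2 for s, o in zip(simulate(k), obs))
        if sse < 0.5:                              # behavioural threshold
            behavioural.append((1.0 / sse, k))     # inverse-error likelihood

    # Likelihood-weighted 95% bounds on the parameter.
    behavioural.sort(key=lambda w_k: w_k[1])
    total = sum(w for w, _ in behavioural)
    cum, lo, hi = 0.0, None, None
    for w, k in behavioural:
        cum += w / total
        if lo is None and cum >= 0.025:
            lo = k
        if hi is None and cum >= 0.975:
            hi = k
    print(len(behavioural), round(lo, 2), round(hi, 2))
    ```

    Applying the same weighting to the simulated output series, rather than to the parameter, yields the 95% confidence bounds on snowpack water equivalent referred to in the abstract.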

  16. Linearized Flux Evolution (LiFE): A technique for rapidly adapting fluxes from full-physics radiative transfer models

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Crisp, David

    2018-05-01

    Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
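    At its core the linearized update is a first-order Taylor correction of precomputed fluxes; in this toy sketch the reference profile, fluxes, and Jacobian are invented numbers standing in for the output of a full-physics model.

    ```python
    # Toy sketch of a linearized flux update: a "full-physics" flux profile
    # F_ref and its sensitivities dF/dT are precomputed at a reference
    # temperature profile; fluxes at a perturbed state are then obtained by
    # a cheap first-order correction. All numbers are invented.

    T_ref = [250.0, 270.0, 290.0]          # reference layer temperatures (K)
    F_ref = [120.0, 180.0, 240.0]          # reference net fluxes (W m^-2)

    # Hypothetical precomputed sensitivities dF_i/dT_j (W m^-2 K^-1)
    J = [[1.5, 0.3, 0.1],
         [0.4, 1.8, 0.5],
         [0.1, 0.6, 2.1]]

    def life_update(T_new):
        """First-order flux update: F(T) ~ F_ref + J * (T - T_ref)."""
        dT = [t - r for t, r in zip(T_new, T_ref)]
        return [f + sum(J[i][j] * dT[j] for j in range(len(dT)))
                for i, f in enumerate(F_ref)]

    F_new = life_update([252.0, 270.0, 289.0])
    print([round(f, 1) for f in F_new])
    ```

    The expensive full-physics model is called only to refresh `F_ref` and `J` when the state drifts too far for the linearization to hold; in between, every flux evaluation is a cheap matrix-vector product.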

  17. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, the finite length of the electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the output characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
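    The sample-surrogate-resample pipeline can be sketched end to end; here an invented closed-form "pull-in voltage" function stands in for the MEMS model, and an ordinary linear least-squares surface stands in for the MARS splines.

    ```python
    import random, math

    # Sketch of the UQ workflow: Latin hypercube sample two uncertain inputs,
    # evaluate a stand-in model, fit a response surface, then run a cheap
    # Monte Carlo on the surface. Model and ranges are invented.
    random.seed(2)

    def pull_in(gap, thickness):              # invented stand-in model
        return 40.0 * gap ** 1.5 * math.sqrt(thickness)

    def lhs(n):
        """Latin hypercube: one sample in each of n equal-probability strata."""
        cells = list(range(n))
        random.shuffle(cells)
        gaps = [(i + random.random()) / n for i in cells]
        random.shuffle(cells)
        thicks = [(i + random.random()) / n for i in cells]
        # map unit samples to (arbitrary) physical ranges
        return [(1.0 + 0.2 * g, 0.5 + 0.1 * t) for g, t in zip(gaps, thicks)]

    train = [(g, t, pull_in(g, t)) for g, t in lhs(50)]

    # Linear response surface v ~ a + b*g + c*t via the 3x3 normal equations.
    n = len(train)
    sg = sum(g for g, _, _ in train)
    st = sum(t for _, t, _ in train)
    sv = sum(v for _, _, v in train)
    sgg = sum(g * g for g, _, _ in train)
    stt = sum(t * t for _, t, _ in train)
    sgt = sum(g * t for g, t, _ in train)
    sgv = sum(g * v for g, _, v in train)
    stv = sum(t * v for _, t, v in train)
    M = [[n, sg, st], [sg, sgg, sgt], [st, sgt, stt]]
    y = [sv, sgv, stv]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)
    coef = []
    for k in range(3):                        # Cramer's rule
        Mk = [row[:] for row in M]
        for r in range(3):
            Mk[r][k] = y[r]
        coef.append(det3(Mk) / D)

    # Monte Carlo on the cheap surface instead of the expensive model.
    outs = [coef[0] + coef[1] * (1.0 + 0.2 * random.random())
            + coef[2] * (0.5 + 0.1 * random.random()) for _ in range(10000)]
    mean = sum(outs) / len(outs)
    print(round(mean, 1))
    ```

    The point of the surrogate is exactly this last step: 10,000 surface evaluations cost almost nothing, whereas 10,000 full multi-physics simulations would be prohibitive.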

  18. Exploring Flavor Physics with Lattice QCD

    NASA Astrophysics Data System (ADS)

    Du, Daping; Fermilab/MILC Collaborations Collaboration

    2016-03-01

    The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as rare B decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.

  19. Physical principles for DNA tile self-assembly.

    PubMed

    Evans, Constantine G; Winfree, Erik

    2017-06-19

    DNA tiles provide a promising technique for assembling structures with nanoscale resolution through self-assembly by basic interactions rather than top-down assembly of individual structures. Tile systems can be programmed to grow based on logical rules, allowing for a small number of tile types to assemble large, complex assemblies that can retain nanoscale resolution. Such algorithmic systems can even assemble different structures using the same tiles, based on inputs that seed the growth. While programming and theoretical analysis of tile self-assembly often make use of abstract logical models of growth, experimentally implemented systems are governed by nanoscale physical processes that can lead to very different behavior, more accurately modeled by taking into account the thermodynamics and kinetics of tile attachment and detachment in solution. This review discusses the relationships between more abstract and more physically realistic tile assembly models. A central concern is how consideration of model differences enables the design of tile systems that robustly exhibit the desired abstract behavior in realistic physical models and in experimental implementations. Conversely, we identify situations where self-assembly in abstract models cannot be well approximated by physically realistic models, putting constraints on the physical relevance of the abstract models. To facilitate the discussion, we introduce a unified model of tile self-assembly that clarifies the relationships between several well-studied models in the literature. Throughout, we highlight open questions regarding the physical principles for DNA tile self-assembly.

  20. Modeling discourse management compared to other classroom management styles in university physics

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management style that focuses on the epistemology of science. Modeling discourse is social constructivist in nature and was designed to encourage students to present classroom material to each other. In modeling discourse management, the instructor's primary role is that of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures. Both regular and honors university physics classes were investigated. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science compared to traditional classroom management styles for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes to be reformed, and it was determined that modeling discourse could be effectively disseminated.

  1. High-speed holocinematographic velocimeter for studying turbulent flow control physics

    NASA Technical Reports Server (NTRS)

    Weinstein, L. M.; Beeler, G. B.; Lindemann, A. M.

    1985-01-01

    Use of a dual-view, high-speed holographic movie technique is examined for studying turbulent flow control physics. This approach, which eliminates some of the limitations of previous holographic techniques, is termed a holocinematographic velocimeter (HCV). The data from this system can be used to check theoretical turbulence modeling and numerical simulations, visualize and measure coherent structures in 'non-simple' turbulent flows, and examine the mechanisms operative in various turbulent control/drag reduction concepts. This system shows promise for giving the most complete experimental characterization of turbulent flows yet available.

  2. Spatial Modeling for Resources Framework (SMRF): A modular framework for developing spatial forcing data in mountainous terrain

    NASA Astrophysics Data System (ADS)

    Havens, S.; Marks, D. G.; Kormos, P.; Hedrick, A. R.; Johnson, M.; Robertson, M.; Sandusky, M.

    2017-12-01

    In the Western US, operational water supply managers rely on statistical techniques to forecast the volume of water left to enter the reservoirs. As the climate changes and the demand increases for stored water utilized for irrigation, flood control, power generation, and ecosystem services, water managers have begun to move from statistical techniques towards physically based models. To assist with the transition, a new open source framework was developed, the Spatial Modeling for Resources Framework (SMRF), to automate and simplify the most common forcing data distribution methods. SMRF is computationally efficient and can be implemented for both research and operational applications. Currently, SMRF is able to generate all of the forcing data required to run physically based snow or hydrologic models at 50-100 m resolution over regions of 500-10,000 km2, and has been successfully applied in real-time and historical applications for the Boise River Basin in Idaho, USA, the Tuolumne and San Joaquin River Basins in California, USA, and Reynolds Creek Experimental Watershed in Idaho, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input data. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of physics-based snow and hydrologic models possible.
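One of the simplest forcing-distribution methods a framework like SMRF automates is inverse-distance weighting of station measurements onto the model grid. The sketch below is a generic illustration of that idea, not SMRF's actual implementation; the station coordinates and temperatures are made up.

```python
import numpy as np

def idw_distribute(stations_xy, values, grid_x, grid_y, power=2.0):
    """Distribute point measurements onto a regular grid by
    inverse-distance weighting: each grid cell gets a weighted
    average of station values, with weights 1/d**power."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx, dtype=float)
    den = np.zeros_like(gx, dtype=float)
    for (sx, sy), v in zip(stations_xy, values):
        d = np.hypot(gx - sx, gy - sy)
        w = 1.0 / np.maximum(d, 1e-9) ** power  # clamp to avoid divide-by-zero
        num += w * v
        den += w
    return num / den

# three hypothetical temperature stations (km coordinates, deg C)
field = idw_distribute([(0, 0), (10, 0), (5, 8)], [2.0, 4.0, -1.0],
                       np.linspace(0, 10, 50), np.linspace(0, 10, 50))
```

Because the result is a convex combination of the station values, the interpolated field never overshoots the observed range; physically based distribution schemes typically add elevation detrending on top of a scheme like this.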

  3. System Simulation Modeling: A Case Study Illustration of the Model Development Life Cycle

    Treesearch

    Janice K. Wiedenbeck; D. Earl Kline

    1994-01-01

    Systems simulation modeling techniques offer a method of representing the individual elements of a manufacturing system and their interactions. By developing and experimenting with simulation models, one can obtain a better understanding of the overall physical system. Forest products industries are beginning to understand the importance of simulation modeling to help...

  4. The Effects of Model Making on Design and Learning in Landscape Architecture Education

    ERIC Educational Resources Information Center

    Duzenli, Tugba; Yilmaz, Serap; Alpak, Elif Merve

    2017-01-01

    Purpose: One of the modeling methods used in the training of all design disciplines is physical model making. This study investigates the model-making technique and emphasizes the positive effects of model-making and its utility in the academic setting in order to understand its effects on design and learning. The "Equipment Design"…

  5. Catchments as non-linear filters: evaluating data-driven approaches for spatio-temporal predictions in ungauged basins

    NASA Astrophysics Data System (ADS)

    Bellugi, D. G.; Tennant, C.; Larsen, L.

    2016-12-01

    Catchment and climate heterogeneity complicate prediction of runoff across time and space, and resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes, remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g. precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing were done using a high-quality dataset of daily runoff and climate forcing data for 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure needs further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and in ones where it performed poorly, thereby enabling inference about governing processes. This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
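As a rough illustration of the regression setup described above, the sketch below fits a support vector regression to synthetic daily climate forcing; the toy runoff rule, parameter choices, and data are assumptions for demonstration, not the study's catchment data (requires scikit-learn).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 1000
# synthetic daily forcing: precipitation, temperature, potential ET
# (stand-ins for the climate inputs named in the abstract)
precip = rng.gamma(0.5, 4.0, n)
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
pet = np.clip(0.3 * temp + rng.normal(0, 0.5, n), 0, None)
# toy nonlinear "catchment filter": runoff is precipitation minus ET losses
runoff = np.maximum(precip - 0.6 * pet, 0) + 0.1 * rng.normal(size=n)

X = np.column_stack([precip, temp, pet])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:800], runoff[:800])        # train on the first 800 days
score = model.score(X[800:], runoff[800:])  # R^2 on held-out days
```

The RBF kernel lets the regression capture the thresholded, nonlinear precipitation-to-runoff mapping without an explicit process model, which is the appeal, and the limitation, of the data-driven approach the abstract evaluates.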

  6. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    NASA Astrophysics Data System (ADS)

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.

  7. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. 
    We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities, and correlations obtained using DRAM, DREAM, and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. 
    To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affects the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
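The core of the Metropolis-type samplers discussed above, before the delayed-rejection and adaptation refinements of DRAM/DREAM, can be illustrated on a toy one-parameter calibration problem; the model, data, and tuning constants here are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy calibration problem: infer theta in y = theta * x + Gaussian noise
x = np.linspace(0, 1, 50)
y = 2.5 * x + rng.normal(0, 0.1, 50)

def log_post(theta):
    # flat prior; Gaussian likelihood with known noise sigma = 0.1
    return -0.5 * np.sum((y - theta * x) ** 2) / 0.1**2

# random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); the chain's histogram approximates the posterior
chain = np.empty(5000)
theta, lp = 1.0, log_post(1.0)
for i in range(5000):
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_mean = chain[2000:].mean()  # discard burn-in before summarizing
```

Verification in the spirit of the dissertation would compare this chain's density against the directly evaluated Bayes' formula, which is tractable here because the problem is one-dimensional.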

  8. The effectiveness of CPI model to improve positive attitude toward science (PATS) for pre-service physics teacher

    NASA Astrophysics Data System (ADS)

    Sunarti, T.; Wasis; Madlazim; Suyidno; Prahani, B. K.

    2018-03-01

    In the previous research, learning material based on the Construction, Production, and Implementation (CPI) model was developed to improve scientific literacy and positive attitude toward science for pre-service physics teachers. The CPI model has 4 phases: 1) Motivation; 2) Construction (Cycle I); 3) Production (Cycle II); and 4) Evaluation. This research aims to analyze the effectiveness of the CPI model for improving Positive Attitude toward Science (PATS) for pre-service physics teachers. This research used a one-group pre-test and post-test design on 160 pre-service physics teachers divided into 4 groups at Lambung Mangkurat University and Surabaya State University (Indonesia), academic year 2016/2017. Data collection was conducted through questionnaires, observation, and interviews. Positive attitude toward science was measured using the Positive Attitude toward Science Evaluation Sheet (PATSES). The data were analyzed using the Wilcoxon test and n-gain. The results showed that there was a significant increase in positive attitude toward science for pre-service physics teachers at α = 5%, with an average n-gain in the high category. Thus, the CPI model is effective for improving positive attitude toward science for pre-service physics teachers.
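The analysis pair used here, normalized gain (n-gain) plus a Wilcoxon signed-rank test on paired pre/post scores, can be sketched as follows; the scores are invented for illustration and are not the study's data (requires scipy).

```python
import numpy as np
from scipy.stats import wilcoxon

# illustrative pre/post attitude scores on a 0-100 scale (made up, not study data)
pre = np.array([55, 60, 48, 62, 57, 50, 65, 58, 53, 61], float)
post = np.array([78, 82, 70, 85, 76, 72, 88, 80, 75, 84], float)

# normalized gain (Hake): actual gain divided by maximum possible gain
n_gain = np.mean((post - pre) / (100 - pre))

# paired nonparametric test for a shift between pre and post
stat, p = wilcoxon(pre, post)
significant = p < 0.05
```

The Wilcoxon test is chosen over a paired t-test when score differences cannot be assumed normal; n-gain is then binned into low (< 0.3), moderate, or high (≥ 0.7) categories to report practical effect size.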

  9. Apps to promote physical activity among adults: a review and content analysis

    PubMed Central

    2014-01-01

    Background In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. Methods The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Means and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. Results On average, the reviewed apps included 5 behavior change techniques (range 2–8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores, were found. Conclusions The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number did not differ between paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions. PMID:25059981
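A nonparametric comparison of technique counts between two groups of apps, in the spirit of the analysis above, might look like the following; the per-app counts are made up for illustration (requires scipy).

```python
import numpy as np
from scipy.stats import mannwhitneyu

# illustrative counts of behavior change techniques per app (not the study data)
free_apps = [4, 5, 6, 3, 5, 7, 4, 6, 5, 5]
paid_apps = [5, 4, 6, 5, 7, 4, 5, 6, 5, 4]

# Mann-Whitney U: rank-based test for a difference between two
# independent groups, appropriate for small counts on an ordinal scale
stat, p = mannwhitneyu(free_apps, paid_apps, alternative="two-sided")
mean_free, mean_paid = np.mean(free_apps), np.mean(paid_apps)
```

A non-significant p-value here would mirror the study's finding that paying for an app does not buy more behavior change techniques.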

  10. Physical Modeling in the Geological Sciences: An Annotated Bibliography. CEGS Programs Publication No. 16.

    ERIC Educational Resources Information Center

    Charlesworth, L. J., Jr.; Passero, Richard Nicholas

    The bibliography identifies, describes, and evaluates devices and techniques discussed in the world's literature to demonstrate or simulate natural physical geologic phenomena in classroom or laboratory teaching or research situations. The apparatus involved ranges from the very simple and elementary to the highly complex, sophisticated, and…

  11. Physical Vapor Deposition of Thin Films

    NASA Astrophysics Data System (ADS)

    Mahan, John E.

    2000-01-01

    A unified treatment of the theories, data, and technologies underlying physical vapor deposition methods. With electronic, optical, and magnetic coating technologies increasingly dominating manufacturing in the high-tech industries, there is a growing need for expertise in physical vapor deposition of thin films. This important new work provides researchers and engineers in this field with the information they need to tackle thin film processes in the real world. Presenting a cohesive, thoroughly developed treatment of both fundamental and applied topics, Physical Vapor Deposition of Thin Films incorporates many critical results from across the literature as it imparts a working knowledge of a variety of present-day techniques. Numerous worked examples, extensive references, and more than 100 illustrations and photographs accompany coverage of: * Thermal evaporation, sputtering, and pulsed laser deposition techniques * Key theories and phenomena, including the kinetic theory of gases, adsorption and condensation, high-vacuum pumping dynamics, and sputtering discharges * Trends in sputter yield data and a new simplified collisional model of sputter yield for pure element targets * Quantitative models for film deposition rate, thickness profiles, and thermalization of the sputtered beam

  12. A skeleton family generator via physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2009-01-01

    This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. Deformation equations are solved using modal analysis, and, depending on the model's physical characteristics, a different skeleton is produced each time, generating a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise and shape variations, cuts, and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches without the need for any skeleton pruning method.

  13. Testing a path-analytic mediation model of how motivational enhancement physiotherapy improves physical functioning in pain patients.

    PubMed

    Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn

    2014-12-01

    Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.

  14. Physics faculty beliefs and values about the teaching and learning of problem solving. II. Procedures for measurement and analysis

    NASA Astrophysics Data System (ADS)

    Henderson, Charles; Yerushalmi, Edit; Kuo, Vince H.; Heller, Kenneth; Heller, Patricia

    2007-12-01

    To identify and describe the basis upon which instructors make curricular and pedagogical decisions, we have developed an artifact-based interview and an analysis technique based on multilayered concept maps. The policy capturing technique used in the interview asks instructors to make judgments about concrete instructional artifacts similar to those they likely encounter in their teaching environment. The analysis procedure alternatively employs both an a priori systems view analysis and an emergent categorization to construct a multilayered concept map, which is a hierarchically arranged set of concept maps where child maps include more details than parent maps. Although our goal was to develop a model of physics faculty beliefs about the teaching and learning of problem solving in the context of an introductory calculus-based physics course, the techniques described here are applicable to a variety of situations in which instructors make decisions that influence teaching and learning.

  15. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  16. Leaving No Stone Unturned in the Pursuit of New Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Timothy

    The major goal of this project was to investigate a variety of topics in theoretical particle physics, with an emphasis on beyond-the-Standard-Model phenomena. Particular attention is given to making a connection to ongoing experimental efforts designed to extend our knowledge of the fundamental physics frontiers. The principal investigator aimed to play a leading role in theoretical research that complements this impressive experimental endeavor. Progress requires a strong synergy between the theoretical and experimental communities to design and interpret the data that is produced. Thus, this project's main goal was to improve our understanding of models, signatures, and techniques as we continue the hunt for new physics.

  17. Polyenergetic known-component reconstruction without prior shape models

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Zbijewski, W.; Zhang, X.; Xu, S.; Stayman, J. W.

    2017-03-01

    Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g. a CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique with the help of morphological structuring elements to build a shape model of metal components in a patient scan based on an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. An investigation of performance as a function of segmentation threshold is performed in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality than conventional reconstruction techniques (e.g. FBP). Moreover, the performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy, with slight under- or over-estimation based on the shape of the implant. In both simulation and physical data studies we find that the proposed approach can remove most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component. In conjunction with the Poly-KCR technique, which does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where information about surgical tools and implants is limited or unavailable.
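The shape-model construction step, single-threshold segmentation cleaned up with morphological operations, can be sketched with scipy.ndimage; the synthetic image, threshold, and attenuation values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

# synthetic FBP-like slice: soft-tissue background plus a bright metal implant
rng = np.random.default_rng(2)
img = rng.normal(0.02, 0.005, (128, 128))            # tissue-like values
img[50:60, 40:90] = rng.normal(0.5, 0.02, (10, 50))  # hypothetical implant block

# single-threshold segmentation followed by morphological clean-up:
# closing fills small holes, opening removes isolated speckle
mask = img > 0.25
mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
labels, n_components = ndimage.label(mask)  # connected components = implants
```

The resulting binary mask plays the role of the a priori CAD model in the Poly-KCR pipeline; segmentation accuracy therefore bounds the achievable artifact reduction, as the simulation study above observes.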

  18. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear when compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  19. Exploring New Pathways in Precipitation Assimilation

    NASA Technical Reports Server (NTRS)

    Hou, Arthur; Zhang, Sara Q.

    2004-01-01

    Precipitation assimilation poses a special challenge in that the forward model for rain in a global forecast system is based on parameterized physics, which can have large systematic errors that must be rectified to use precipitation data effectively within a standard statistical analysis framework. We examine some key issues in precipitation assimilation and describe several exploratory studies in assimilating rainfall and latent heating information in NASA's global data assimilation systems using the forecast model as a weak constraint. We present results from two research activities. The first is the assimilation of surface rainfall data using a time-continuous variational assimilation based on a column model of the full moist physics. The second is the assimilation of convective and stratiform latent heating retrievals from microwave sensors using a variational technique with physical parameters in the moist physics schemes as a control variable. We will show the impact of assimilating these data on analyses and forecasts. Among the lessons learned are (1) that the time-continuous application of moisture/temperature tendency corrections to mitigate model deficiencies offers an effective strategy for assimilating precipitation information, and (2) that the model prognostic variables must be allowed to directly respond to an improved rain and latent heating field within an analysis cycle to reap the full benefit of assimilating precipitation information. Looking to the future, we discuss new research directions, including the assimilation of microwave radiances versus retrieval information in raining areas, and initial efforts in developing ensemble techniques such as the Kalman filter/smoother for precipitation assimilation.

  20. The effectiveness of CCDSR learning model to improve skills of creating lesson plan and worksheet science process skill (SPS) for pre-service physics teacher

    NASA Astrophysics Data System (ADS)

    Limatahu, I.; Sutoyo, S.; Wasis; Prahani, B. K.

    2018-03-01

    In previous research, the CCDSR (Condition, Construction, Development, Simulation, and Reflection) learning model was developed to improve the science process skills of pre-service physics teachers. This research aims to analyze the effectiveness of the CCDSR learning model in improving the skills of pre-service physics teachers in creating lesson plans and Science Process Skill (SPS) worksheets in the 2016/2017 academic year. The research used a one-group pre-test and post-test design with 12 pre-service physics teachers in Physics Education at the University of Khairun. Data were collected through tests and observation. The skills of creating SPS lesson plans and worksheets were measured with the Science Process Skill Evaluation Sheet (SPSES), and the data were analyzed with the Wilcoxon test and n-gain. The CCDSR learning model consists of 5 phases: (1) Condition, (2) Construction, (3) Development, (4) Simulation, and (5) Reflection. The results showed a significant increase in the skills of creating SPS lesson plans and worksheets at α = 5%, with an average n-gain in the moderate category. Thus, the CCDSR learning model is effective for improving the skills of pre-service physics teachers in creating SPS lesson plans and worksheets.
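
    The n-gain statistic used above (Hake's normalized gain) is straightforward to compute. The pre/post scores below are fabricated stand-ins for illustration, not the study's data.

```python
import numpy as np

# Normalized gain (n-gain) sketch for pre/post test scores.
# The scores are invented, not the study's 12 participants.
pre  = np.array([40, 35, 50, 45, 30, 55, 42, 38, 48, 33, 44, 37], float)
post = np.array([70, 68, 75, 72, 60, 80, 71, 66, 77, 64, 73, 65], float)
max_score = 100.0

# <g> = (post - pre) / (max - pre); 0.3 <= g < 0.7 is the "moderate" band.
n_gain = (post - pre) / (max_score - pre)
mean_gain = n_gain.mean()
category = ("high" if mean_gain >= 0.7 else
            "moderate" if mean_gain >= 0.3 else "low")
print(round(float(mean_gain), 2), category)
```
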

  1. Extracting physical chemistry from mechanics: a new approach to investigate DNA interactions with drugs and proteins in single molecule experiments.

    PubMed

    Rocha, M S

    2015-09-01

    In this review we focus on the idea of establishing connections between the mechanical properties of DNA-ligand complexes and the physical chemistry of DNA-ligand interactions. This type of connection is interesting because it opens the possibility of performing a robust characterization of such interactions by using only one experimental technique: single molecule stretching. Furthermore, it also opens new possibilities in comparing results obtained by very different approaches, in particular when comparing single molecule techniques to ensemble-averaging techniques. We start the manuscript reviewing important concepts of DNA mechanics, from the basic mechanical properties to the Worm-Like Chain model. Next we review the basic concepts of the physical chemistry of DNA-ligand interactions, revisiting the most important models used to analyze the binding data and discussing their binding isotherms. Then, we discuss the basic features of the single molecule techniques most used to stretch DNA-ligand complexes and to obtain "force × extension" data, from which the mechanical properties of the complexes can be determined. We also discuss the characteristics of the main types of interactions that can occur between DNA and ligands, from covalent binding to simple electrostatic driven interactions. Finally, we present a historical survey of the attempts to connect mechanics to physical chemistry for DNA-ligand systems, emphasizing a recently developed fitting approach useful to connect the persistence length of DNA-ligand complexes to the physicochemical properties of the interaction. Such an approach in principle can be used for any type of ligand, from drugs to proteins, even if multiple binding modes are present.
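
    As a concrete anchor for the Worm-Like Chain model reviewed above, the widely used Marko-Siggia interpolation for the force-extension relation can be sketched as follows. The contour and persistence lengths are typical textbook values for bare DNA, not results from this review.

```python
import numpy as np

# Marko-Siggia interpolation for the Worm-Like Chain force-extension
# relation (parameter values are typical, illustrative numbers).
kBT = 4.11e-21        # thermal energy at room temperature, J
Lp = 50e-9            # persistence length of bare DNA, ~50 nm
L0 = 16e-6            # contour length, ~16 um for lambda-DNA

def wlc_force(x):
    """Entropic stretching force (N) at end-to-end extension x (m)."""
    r = x / L0
    return (kBT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# The force rises steeply as the extension approaches the contour length:
for frac in (0.5, 0.9, 0.99):
    print(f"x/L0 = {frac:.2f}: F = {wlc_force(frac * L0) * 1e12:.2f} pN")
```

    Fitting measured force-extension curves of DNA-ligand complexes to this relation yields the effective persistence and contour lengths whose changes, as the review discusses, can be connected to the physical chemistry of the binding.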

  2. Molecular-Based Optical Measurement Techniques for Transition and Turbulence in High-Speed Flow

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Cutler, Andrew D.

    2013-01-01

    High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, the efficiency of fuel-air mixing processes in high-speed combustion applications, etc. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify transition location and its impact on parameters like surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. This may also lead to the discovery of new physical pathways. This document is intended to describe molecular based measurement techniques that have been developed, addressing the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high speed transition and turbulence or techniques that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described in the previous paragraph, several other classes of measurement techniques that are, or could be, used to study high speed transition and turbulence are excluded from this manuscript. 
For example, surface measurement techniques such as pressure and temperature paint, phosphor thermography, skin friction measurements and photogrammetry (for model attitude and deformation measurement) are excluded to limit the scope of this report. Other physical probes such as heat flux gauges, total temperature probes are also excluded. We further exclude measurement techniques that require particle seeding though particle based methods may still be useful in many high speed flow applications. This manuscript details some of the more widely used molecular-based measurement techniques for studying transition and turbulence: laser-induced fluorescence (LIF), Rayleigh and Raman Scattering and coherent anti-Stokes Raman scattering (CARS). These techniques are emphasized, in part, because of the prior experience of the authors. Additional molecular based techniques are described, albeit in less detail. Where possible, an effort is made to compare the relative advantages and disadvantages of the various measurement techniques, although these comparisons can be subjective views of the authors. Finally, the manuscript concludes by evaluating the different measurement techniques in view of the precision requirements described in this chapter. Additional requirements and considerations are discussed to assist with choosing an optical measurement technique for a given application.

  3. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
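
    The flavor of propagating atomic-data uncertainties into level populations and emissivities can be sketched with a two-level toy model, here done Monte Carlo style as a stand-in for the paper's analytic linear equations. All rates below are invented.

```python
import numpy as np

# Toy two-level propagation sketch (not the paper's analytic formalism):
# the excited-level population follows from the balance
#   n0 * C01 = n1 * (A10 + C10),
# and the uncertain A-value is drawn Monte Carlo style.
rng = np.random.default_rng(0)
C01, C10 = 1e-8, 2e-9                   # collisional rates (assumed values)
A10 = rng.normal(1e-4, 1e-5, 100_000)   # Einstein A with ~10% uncertainty

n1 = C01 / (A10 + C10)             # relative population n1/n0
emis = n1 * A10                    # line emissivity ~ n1 * A10

pop_rel = n1.std() / n1.mean()
emis_rel = emis.std() / emis.mean()
print(f"population uncertainty: {pop_rel:.3f}")
print(f"emissivity uncertainty: {emis_rel:.2e}")  # nearly cancels here
```

    Note how the emissivity uncertainty nearly cancels when the radiative rate dominates: each excitation yields a photon regardless of the exact A-value. This kind of condition-dependent behavior is exactly why the paper evaluates the propagated uncertainties under different physical conditions.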

  4. Contemporary machine learning: techniques for practitioners in the physical sciences

    NASA Astrophysics Data System (ADS)

    Spears, Brian

    2017-10-01

    Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
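
    The supervised-learning workflow the tutorial describes, fit a model on training data and then verify that it generalizes to as-yet-unseen test data, can be sketched with a simple ridge regression. All data below are synthetic; no real fusion quantities are used.

```python
import numpy as np

# Fit a degree-7 polynomial by ridge regression, then check that the model
# generalizes from training data to held-out test data (synthetic data).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)   # hidden relationship

X = np.vander(x, 8)                  # polynomial features up to degree 7
train, test = slice(0, 150), slice(150, None)

lam = 1e-3                           # ridge penalty discourages overfitting
w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                    X[train].T @ y[train])

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

train_err = rmse(X[train] @ w, y[train])
test_err = rmse(X[test] @ w, y[test])
print(f"train RMSE {train_err:.3f}, test RMSE {test_err:.3f}")
```

    When the two errors are comparable, the model has learned the underlying relationship rather than memorized the training noise, which is the generalization check the tutorial emphasizes.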

  5. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    NASA Astrophysics Data System (ADS)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load, improving upon the accuracy of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for developing and validating the applied techniques. To assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, the k-fold test proved to be a practical but high-cost technique for completely scanning the applied data and avoiding over-fitting.
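
    The k-fold test mentioned above can be sketched as follows, with synthetic records standing in for the Qotur River data and a simple log-linear sediment rating curve standing in for the study's GEP/ANFIS models.

```python
import numpy as np

# k-fold cross-validation sketch: every record serves as test data exactly
# once (synthetic discharge/load records; not the study's data or models).
rng = np.random.default_rng(2)
n, k = 120, 5
Q = rng.uniform(1, 50, n)                                  # discharge-like input
load = 0.8 * Q ** 1.5 * np.exp(0.05 * rng.normal(size=n))  # bed material load

x, y = np.log(Q), np.log(load)          # log-linear rating model
idx = rng.permutation(n)
folds = np.array_split(idx, k)

errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    b, a = np.polyfit(x[train], y[train], 1)    # slope, intercept
    pred = b * x[test] + a
    errors.append(float(np.sqrt(np.mean((pred - y[test]) ** 2))))

print("per-fold RMSE (log space):", np.round(errors, 3))
```

    The "high cost" noted in the abstract comes from refitting the model k times; the benefit is that every record is scanned as held-out data, exposing over-fitting that a single train/test split could miss.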

  6. Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.

    PubMed

    Ahn, Hyung Soo; DiAngelo, Denis J

    2007-05-15

    This article describes a computer model of a cadaver cervical spine specimen and virtual biomechanical testing. The aims were to develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of the cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprised of intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of each motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results were consistent with the findings of in vitro tests and published data. For kinetic evaluation, the forces of the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment, and allowed for analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique provides vivid and intuitive information.

  7. Channelling information flows from observation to decision; or how to increase certainty

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.

    2015-12-01

    To make adequate decisions in an uncertain world, information needs to reach the decision problem, enabling the full consequences of each possible decision to be overseen. On its way from the physical world to a decision problem, information is transferred through the physical processes that influence the sensor, then through processes that happen in the sensor, and through wires or electromagnetic waves. Over the last decade, most information has become digitized at some point. From the moment of digitization, information can in principle be transferred losslessly. Information about the physical world is often also stored, sometimes in compressed form, such as physical laws, concepts, or models of specific hydrological systems. It is important to note, however, that all information about a physical system eventually has to originate from observation (although inevitably coloured by some prior assumptions). This colouring makes the compression lossy, but is effectively the only way to make use of similarities in time and space that enable predictions while measuring only a few macro-states of a complex hydrological system. Adding physical process knowledge to a hydrological model can thus be seen as a convenient way to transfer information from observations at a different time or place, to make predictions about another situation, assuming the same dynamics are at work. The key challenge to achieve more certainty in hydrological prediction can therefore be formulated as a challenge to tap and channel information flows from the environment. For tapping more information flows, new measurement techniques, large-scale campaigns, historical data sets, and large-sample hydrology and regionalization efforts can bring progress. For channelling the information flows with minimum loss, model calibration and model formulation techniques should be critically investigated. Some experience from research in a Swiss high alpine catchment is used as an illustration.

  8. UW Imaging of Seismic-Physical-Models in Air Using Fiber-Optic Fabry-Perot Interferometer.

    PubMed

    Rong, Qiangzhou; Hao, Yongxin; Zhou, Ruixiang; Yin, Xunli; Shao, Zhihua; Liang, Lei; Qiao, Xueguang

    2017-02-17

    A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for ultrasound wave (UW) imaging of seismic physical models. The sensor probe comprises a single mode fiber (SMF) inserted into a ceramic tube terminated by an ultra-thin gold film. The probe exhibits excellent UW sensitivity thanks to the nanolayer gold film, and thus is capable of detecting a weak UW in air. Furthermore, the compact sensor has a symmetrical structure, so it presents good directionality in UW detection. The spectral sideband filtering technique is used for UW interrogation. After scanning the models with the sensing probe in air, two-dimensional (2D) images of four physical models are reconstructed.

  9. The effectiveness of Concept Mapping Content Representation Lesson Study (ComCoReLS) model to improve skills of Creating Physics Lesson Plan (CPLP) for pre-service physics teacher

    NASA Astrophysics Data System (ADS)

    Purwaningsih, E.; Sutoyo, S.; Wasis; Prahani, B. K.

    2018-03-01

    This research aims to analyse the effectiveness of the ComCoReLS (Concept Mapping Content Representation Lesson Study) model in improving the skills of pre-service physics teachers in Creating Physics Lesson Plans (CPLP). The research used a one-group pre-test and post-test design with 12 pre-service physics teachers at the State University of Malang (Indonesia) in the 2016/2017 academic year. Data were collected through tests and interviews. The skills of creating physics lesson plans were measured with the Physics Lesson Plan Evaluation Sheet (PLPES), and the data were analyzed using the paired t-test and n-gain. The ComCoReLS model consists of 5 phases: (1) Preparation, (2) Coaching, (3) Guided Practice, (4) Independent Practice, and (5) Evaluation. The first, second, third, and fifth phases are conducted at the State University of Malang, while the fourth phase (Independent Practice) is conducted in SMAN 1 Singosari, SMAN 2 Malang, SMA Lab UM, and MAN 3 Malang. The results showed a significant increase in the skills of creating physics lesson plans at α = 5%, with an average n-gain in the high category. Thus, the ComCoReLS model is effective for improving the skills of pre-service physics teachers in creating physics lesson plans.

  10. Compact lumped circuit model of discharges in DC accelerator using partial element equivalent circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.

    2014-07-01

    DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of a DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in it. The electrical discharges and their properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated at small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique over a frequency range of 0-200 MHz.
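
    The final step, sweeping the input impedance of the extracted lumped model over the 0-200 MHz band, can be sketched for a single series RLC branch. The element values below are illustrative, not extracted from the accelerator.

```python
import numpy as np

# Input impedance of one series R-L-C branch of the kind a PEEC extraction
# produces, swept over the band mentioned in the paper (invented values).
R, L, C = 10.0, 50e-9, 20e-12          # ohms, henry, farad
f = np.linspace(1e6, 200e6, 400)       # 1-200 MHz sweep
w = 2 * np.pi * f
Z = R + 1j * w * L + 1.0 / (1j * w * C)

Zmin = np.abs(Z).min()
f_res = f[np.argmin(np.abs(Z))]        # series resonance: reactances cancel
print(f"series resonance near {f_res / 1e6:.0f} MHz, |Z| ~ {Zmin:.1f} ohm")
```

    A full PEEC model is a large network of such partial elements, but the principle is the same: the frequency-dependent impedance of the lumped network stands in for the field solution a finite element solver would compute.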

  11. Matrix Solution of Coupled Differential Equations and Looped Car Following Models

    ERIC Educational Resources Information Center

    McCartney, Mark

    2008-01-01

    A simple mathematical model of how vehicles follow each other along a looped stretch of road is described. The resulting coupled first order differential equations are solved using appropriate matrix techniques and the physical significance of the model is discussed. A number of possible classroom exercises are suggested to help…
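
    A plausible minimal version of such a looped car-following model (not necessarily the article's exact formulation) reduces to dx/dt = Ax with a circulant matrix, which the matrix techniques mentioned above solve by eigendecomposition.

```python
import numpy as np

# Looped car-following sketch: each of N cars adjusts toward the car in
# front, giving dx/dt = A x with a circulant A (illustrative model only).
N, lam = 5, 1.0                     # cars on the loop, sensitivity
A = -lam * np.eye(N)
for i in range(N):
    A[i, (i + 1) % N] = lam         # coupling to the leader, wrapping round

# Solve x(t) = exp(A t) x(0) via the eigendecomposition of A.
vals, vecs = np.linalg.eig(A)
x0 = np.zeros(N); x0[0] = 1.0       # perturb one car
t = 5.0
xt = (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs) @ x0).real

print(np.round(xt, 4))              # perturbation spreads evenly round the loop
```

    The eigenvalues of the circulant matrix determine whether a disturbance decays smoothly or grows into a traffic-jam instability, which is the kind of physical significance a classroom discussion can draw out.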

  12. Design of high-fidelity haptic display for one-dimensional force reflection applications

    NASA Astrophysics Data System (ADS)

    Gillespie, Brent; Rosenberg, Louis B.

    1995-12-01

    This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.

  13. A Modal Approach to Compact MIMO Antenna Design

    NASA Astrophysics Data System (ADS)

    Yang, Binbin

    MIMO (Multiple-Input Multiple-Output) technology offers new possibilities for wireless communication through transmission over multiple spatial channels, and enables linear increases in spectral efficiency as the number of transmitting and receiving antennas increases. However, the physical implementation of such systems in compact devices encounters many constraints, arising mainly from the design of the multiple antennas. First, an antenna's bandwidth decreases dramatically as its electrical size is reduced, a fact known as the antenna Q limit; secondly, multiple antennas closely spaced tend to couple with each other, undermining MIMO performance. Though different MIMO antenna designs have been proposed in the literature, there is still a lack of a systematic design methodology and of knowledge of performance limits. In this dissertation, we employ characteristic mode theory (CMT) as a powerful tool for MIMO antenna analysis and design. CMT allows us to examine each physical mode of the antenna aperture, and to access its many physical parameters without even exciting the antenna. For the first time, we propose efficient circuit models for MIMO antennas of arbitrary geometry using this modal decomposition technique. These circuit models demonstrate the powerful physical insight of CMT for MIMO antenna modeling, and simplify the MIMO antenna design problem to the design of specific antenna structural modes and a modal feed network, making possible the separate design of the antenna aperture and feeds. We therefore develop a feed-independent shape synthesis technique for the optimization of broadband multi-mode apertures. Combining the shape synthesis and circuit modeling techniques for MIMO antennas, we propose a shape-first, feed-next design methodology for MIMO antennas, and design and fabricate two planar MIMO antennas, each occupying an aperture much smaller than the conventional λ/2 × λ/2.
    Facilitated by the newly developed source formulation for antenna stored energy and recently reported work on antenna Q factor minimization, we extend the minimum Q limit to antennas of arbitrary geometry, and show that, given an antenna aperture, any antenna design based on its substructure will result in minimum Q factors larger than or equal to that of the complete structure. This limit is much tighter than Chu's limit based on spherical modes, and applies to antennas of arbitrary geometry. Finally, considering the almost inevitable presence of mutual coupling effects within compact multiport antennas, we develop new decoupling networks (DN) and decoupling network synthesis techniques. An information-theoretic metric, the information mismatch loss (Γinfo), is defined for DN characterization. Based on this metric, the optimization of decoupling networks for broadband system performance is conducted, which demonstrates the limitations of single-frequency decoupling techniques and the room for improvement.
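
    The characteristic-mode machinery the dissertation builds on can be sketched as a small generalized eigenvalue problem, X J = λ R J, on an N-port impedance matrix. The 3 × 3 matrices below are invented, not output from a real antenna solver.

```python
import numpy as np

# Characteristic-mode sketch: for an impedance matrix Z = R + jX, the
# characteristic currents J solve X J = lambda R J (toy matrices only).
R = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.4],
              [0.1, 0.4, 1.0]])     # resistive (radiation) part, SPD
X = np.array([[1.0, 0.2, 0.0],
              [0.2, -0.5, 0.3],
              [0.0, 0.3, 2.0]])     # reactive part, symmetric

# Generalized eigenproblem recast as a standard one on R^-1 X.
lam, J = np.linalg.eig(np.linalg.solve(R, X))
ms = 1.0 / np.abs(1 + 1j * lam)     # modal significance of each mode
order = np.argsort(np.abs(lam))
print("modal significance:", np.round(ms[order], 3))
```

    Modes with eigenvalue near zero (modal significance near one) radiate efficiently at that frequency, which is what makes them the natural building blocks for the shape-first, feed-next design flow described above.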

  14. Modification of Gaussian mixture models for data classification in high energy physics

    NASA Astrophysics Data System (ADS)

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance are discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals are also mentioned. Data pre-processing plays a significant role for the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of top quark separation from the Tevatron collider are compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data (9.7 fb-1).
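
    The core training step, EM estimation of Gaussian mixture parameters, can be sketched in a few lines. The 1D "signal" and "background" samples below are synthetic; the real analysis is multivariate and includes the modifications listed above.

```python
import numpy as np

# Bare-bones EM for a two-component 1D Gaussian mixture
# (synthetic "signal" and "background" samples, fixed initialization).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 500),    # "background"
                    rng.normal(3, 1, 500)])    # "signal"

w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
            / (sig * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and widths from the responsibilities
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.round(np.sort(mu), 2))   # recovers means near -2 and 3
```

    Classification then applies Bayes' rule: a new event is assigned the posterior probability of the "signal" component, exactly the testing-phase step the abstract describes.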

  15. MT+, integrating magnetotellurics to determine earth structure, physical state, and processes

    USGS Publications Warehouse

    Bedrosian, P.A.

    2007-01-01

    As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets, is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.

  16. The Serious Use of Play and Metaphor: Legos and Labyrinths

    ERIC Educational Resources Information Center

    James, Alison; Brookfield, Stephen

    2013-01-01

    In this paper the authors wish to examine kinesthetic forms of learning involving the body and the physical realm. The authors look at two particular techniques: using Legos to build metaphorical models, and living the physical experience of metaphors in the shape of labyrinth-walking and its attendant activities. The authors begin by discussing…

  17. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  18. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
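
    The surrogate-modeling idea from the abstract, replace an expensive simulation with a cheap fitted model and verify its accuracy on fresh samples, can be sketched as follows. The "process" function and all parameter ranges are invented stand-ins, not a real melt-pool model.

```python
import numpy as np

# Surrogate-model sketch: sample a (stand-in) expensive process model,
# fit a cheap quadratic surrogate, and check accuracy on unseen samples.
rng = np.random.default_rng(4)

def process(power, speed):
    """Stand-in for an expensive melt-pool simulation (invented formula)."""
    return np.exp(0.5 * power) + 0.3 * speed

# "Run" 40 simulations at randomly sampled parameter settings
P = rng.uniform(0.5, 2.0, 40)
S = rng.uniform(0.5, 2.0, 40)
d = process(P, S)

# Quadratic surrogate in the two parameters
def feats(p, s):
    return np.stack([np.ones_like(p), p, s, p * s, p ** 2, s ** 2], axis=1)

w, *_ = np.linalg.lstsq(feats(P, S), d, rcond=None)

# Evaluate on fresh samples the surrogate has never seen
Pt = rng.uniform(0.5, 2.0, 200)
St = rng.uniform(0.5, 2.0, 200)
err = float(np.sqrt(np.mean((feats(Pt, St) @ w - process(Pt, St)) ** 2)))
print("surrogate RMSE on unseen samples:", round(err, 4))
```

    Once such a surrogate is trusted, the expensive simulations can be reserved for the regions of parameter space the statistical analysis flags as important.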

  19. Underground Mathematics

    ERIC Educational Resources Information Center

    Hadlock, Charles R

    2013-01-01

    The movement of groundwater in underground aquifers is an ideal physical example of many important themes in mathematical modeling, ranging from general principles (like Occam's Razor) to specific techniques (such as geometry, linear equations, and the calculus). This article gives a self-contained introduction to groundwater modeling with…
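
    As a taste of the groundwater modeling the article introduces, steady one-dimensional flow in a confined aquifer reduces to a small linear system. All aquifer parameters below are illustrative values, not from the article.

```python
import numpy as np

# 1D steady groundwater-flow sketch: discretize d2h/dx2 = 0 on a confined
# aquifer with fixed heads at both ends (illustrative parameter values).
n = 51
K = 1e-4                     # hydraulic conductivity, m/s (uniform here)
h_left, h_right = 10.0, 4.0  # boundary heads, m
L = 100.0                    # aquifer length, m
dx = L / (n - 1)

# Central differences give h[i-1] - 2 h[i] + h[i+1] = 0 at interior nodes.
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = h_left, h_right
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

h = np.linalg.solve(A, b)
q = -K * (h[1] - h[0]) / dx          # Darcy flux, m/s
print(round(float(h[n // 2]), 3), q) # head varies linearly; uniform flux
```

    Real aquifer problems add heterogeneity, sources, and time dependence, but this tiny example already exhibits the general-principles-to-linear-algebra pipeline the article highlights.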

  20. Teaching Letter Formation.

    ERIC Educational Resources Information Center

    Graham, Steve; Madan, Avi J.

    1981-01-01

    The authors describe a remedial technique for teaching letter formation to students with handwriting difficulties. The approach blends traditional procedures (modeling, physical prompts, tracing, self correction, etc.) with cognitive behavior modification principles. (CL)

  1. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  2. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
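    The modal analysis described above can be sketched in a few lines: diagonalize the state matrix and project the input matrix onto the left eigenvectors to measure how strongly each input excites each mode. The matrices below are illustrative placeholders, not the paper's airplane model.

```python
import numpy as np

# Hypothetical 4-state longitudinal model (states: u, w, q, theta).
# These A and B values are illustrative, not taken from the paper.
A = np.array([[-0.02,  0.10,  0.0, -9.8],
              [-0.10, -1.00, 80.0,  0.0],
              [ 0.00, -0.02, -2.0,  0.0],
              [ 0.00,  0.00,  1.0,  0.0]])
B = np.array([[0.0], [-10.0], [-5.0], [0.0]])   # single elevator input

# Modal analysis: decompose the dynamics into eigenmodes and measure how
# strongly the input drives each mode (left-eigenvector projection of B).
eigvals, V = np.linalg.eig(A)     # columns of V are right eigenvectors
W = np.linalg.inv(V)              # rows of W are left eigenvectors
mode_input_gain = np.abs(W @ B)   # influence of the input on each mode

for lam, g in zip(eigvals, mode_input_gain.ravel()):
    print(f"mode {lam:.3f}: input influence {g:.3f}")
```

    The same projection applied to the output matrix C would give the distribution of each mode among the system outputs.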

  3. Perceived sports competence mediates the relationship between childhood motor skill proficiency and adolescent physical activity and fitness: a longitudinal assessment.

    PubMed

    Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R

    2008-08-08

    The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated the relationship between childhood object control skill proficiency (a composite score of kick, catch and overhand throw) and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%), with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.
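    A minimal regression-based sketch of the mediation idea (the study itself used full structural equation modelling): estimate the skill-to-competence path a, the competence-to-activity path b controlling for skill, and take a*b as the indirect (mediated) effect. The data below are synthetic, generated only to mimic the study's structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic data: childhood skill -> perceived competence -> activity.
skill = rng.normal(size=n)
competence = 0.5 * skill + rng.normal(scale=0.8, size=n)                  # a-path
activity = 0.4 * competence + 0.1 * skill + rng.normal(scale=0.8, size=n) # b-path + direct effect

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(competence, skill)[1]            # skill -> mediator
b = ols(activity, skill, competence)[2]  # mediator -> outcome, skill held fixed
indirect = a * b                         # product-of-coefficients indirect effect
total = ols(activity, skill)[1]
print(f"indirect effect a*b = {indirect:.3f}, total effect = {total:.3f}")
```

    With the coefficients used to generate the data, the indirect effect should recover roughly 0.5 * 0.4 = 0.2, a substantial share of the total effect.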

  4. String theory--the physics of string-bending and other electric guitar techniques.

    PubMed

    Grimes, David Robert

    2014-01-01

    Electric guitar playing is ubiquitous in practically all modern music genres. In the hands of an experienced player, electric guitars can sound as expressive and distinct as a human voice. Unlike other, more quantised instruments where pitch is a discrete function, guitarists can incorporate micro-tonality and, as a result, vibrato and string-bending are idiosyncratic hallmarks of a player. Similarly, a wide variety of techniques unique to the electric guitar have emerged. While the mechano-acoustics of stringed instruments and vibrating strings are well studied, there has been comparatively little work dedicated to the underlying physics of unique electric guitar techniques and strings, nor the mechanical factors influencing vibrato, string-bending, fretting force and whammy-bar dynamics. In this work, models for these processes are derived and the implications for guitar and string design discussed. The string-bending model is experimentally validated using a variety of strings and vibrato dynamics are simulated. The implications of these findings for the configuration and design of guitars are also discussed.
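    The basic physics can be sketched quickly: bending displaces the string laterally, which stretches it, raises its tension via Hooke's law, and thus raises the fundamental f = sqrt(T/mu)/(2L). The parameters below are plausible values for a plain steel guitar string, not the paper's measured data, and the elongation formula is a rough small-angle estimate.

```python
import math

# Idealized string-bending pitch model (illustrative parameters).
L = 0.648             # scale length, m
d = 0.43e-3           # plain steel string diameter (~0.017 in), m
A = math.pi * (d / 2) ** 2
mu = 7850.0 * A       # linear density from steel's density, kg/m
T0 = 70.0             # nominal tension, N
E = 200e9             # Young's modulus of steel, Pa

def frequency(T):
    """Fundamental of an ideal string under tension T."""
    return math.sqrt(T / mu) / (2 * L)

h = 0.006                         # 6 mm lateral bend at the fretted point
dL = h ** 2 / L                   # rough small-angle elongation estimate
T = T0 + E * A * dL / L           # Hooke's law for the added tension
semitones = 12 * math.log2(frequency(T) / frequency(T0))
print(f"{frequency(T0):.1f} Hz bent up by {semitones:.2f} semitones")
```

    Because the added tension scales with E*A, heavier strings of the same material need a larger displacement for the same pitch rise, which is consistent with everyday playing experience.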

  5. String Theory - The Physics of String-Bending and Other Electric Guitar Techniques

    PubMed Central

    Grimes, David Robert

    2014-01-01

    Electric guitar playing is ubiquitous in practically all modern music genres. In the hands of an experienced player, electric guitars can sound as expressive and distinct as a human voice. Unlike other, more quantised instruments where pitch is a discrete function, guitarists can incorporate micro-tonality and, as a result, vibrato and string-bending are idiosyncratic hallmarks of a player. Similarly, a wide variety of techniques unique to the electric guitar have emerged. While the mechano-acoustics of stringed instruments and vibrating strings are well studied, there has been comparatively little work dedicated to the underlying physics of unique electric guitar techniques and strings, nor the mechanical factors influencing vibrato, string-bending, fretting force and whammy-bar dynamics. In this work, models for these processes are derived and the implications for guitar and string design discussed. The string-bending model is experimentally validated using a variety of strings and vibrato dynamics are simulated. The implications of these findings for the configuration and design of guitars are also discussed. PMID:25054880

  6. Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V; Chambers, D H; Breitfeller, E F

    2010-03-02

    The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding that significantly complicate the transport physics. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well on data obtained from a controlled experiment.
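    A standard building block for this kind of sequential detection is Wald's sequential probability ratio test (SPRT). The sketch below is purely illustrative of that idea, not the authors' physics-based processor: it accumulates the Poisson log-likelihood ratio of observed counts against assumed background and source rates and stops as soon as either threshold is crossed.

```python
import math

# Illustrative SPRT between background-only and source-present count rates.
rate_bg, rate_src = 2.0, 5.0         # assumed mean counts per interval
alpha, beta = 0.01, 0.01             # target false-alarm / miss probabilities
upper = math.log((1 - beta) / alpha)
lower = math.log(beta / (1 - alpha))

def sprt(counts):
    """Return a decision and the number of intervals consumed."""
    llr = 0.0
    for n, k in enumerate(counts, 1):
        # Poisson log-likelihood ratio contribution of one observed count k
        llr += k * math.log(rate_src / rate_bg) - (rate_src - rate_bg)
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)

decision, n_used = sprt([4, 6, 5, 3, 7])   # example counts from a hot source
print(decision, "after", n_used, "intervals")
```

    The appeal of the sequential formulation, as in the abstract above, is that a strong source is declared after only a few measurement intervals rather than a fixed observation window.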

  7. “In vitro” Implantation Technique Based on 3D Printed Prosthetic Prototypes

    NASA Astrophysics Data System (ADS)

    Tarnita, D.; Boborelu, C.; Geonea, I.; Malciu, R.; Grigorie, L.; Tarnita, D. N.

    2018-06-01

    In this paper, a ZCorp 310 rapid prototyping system, based on a high-performance composite powder and a high-strength resin infiltration system, and three-dimensional printing as a manufacturing method are used to obtain physical prototypes of orthopaedic implants and of complex functional prosthetic systems directly from 3D CAD data. These prototypes are useful for in vitro experimental tests and measurements that optimize and lead to the final physical prototypes. Using a new elbow prosthesis model prototype obtained by 3D printing, the surgical technique of implantation is established. Surgical implantation was performed on the elbow joint of a male cadaver.

  8. A model for teaching and learning spinal thrust manipulation and its effect on participant confidence in technique performance.

    PubMed

    Wise, Christopher H; Schenk, Ronald J; Lattanzi, Jill Black

    2016-07-01

    Despite emerging evidence to support the use of high-velocity thrust manipulation in the management of lumbar spinal conditions, utilization of thrust manipulation among clinicians remains relatively low. One reason for the underutilization of these procedures may be related to disparity in training in the performance of these techniques at the professional and post-professional levels. The purpose of this study was to assess the effect of a new model of active learning on participant confidence in the performance of spinal thrust manipulation and the implications for its use in the professional and post-professional training of physical therapists. A cohort of 15 DPT students in their final semester of entry-level professional training participated in an active training session emphasizing a sequential partial task practice (SPTP) strategy, in which participants engaged in partial task practice over several repetitions with different partners. Participants' level of confidence in the performance of these techniques was determined through comparison of pre- and post-training-session surveys and a post-session open-ended interview. The increase in scores across all items of the individual pre- and post-session surveys suggests that this model was effective in changing overall participant perception regarding the effectiveness and safety of these techniques and in increasing student confidence in their performance. Interviews revealed that participants greatly preferred the SPTP strategy, which enhanced their confidence in technique performance. Results indicate that this new model of psychomotor training may be effective at improving confidence in the performance of spinal thrust manipulation and, subsequently, may be useful for encouraging the future use of these techniques in the care of individuals with impairments of the spine. As such, this method of instruction may be useful for training physical therapists at both the professional and post-professional levels.

  9. Use of Advanced Spectroscopic Techniques for Predicting the Mechanical Properties of Wood Composites

    Treesearch

    Timothy G. Rials; Stephen S. Kelley; Chi-Leung So

    2002-01-01

    Near infrared (NIR) spectroscopy was used to characterize a set of medium-density fiberboard (MDF) samples. This spectroscopic technique, in combination with projection to latent structures (PLS) modeling, effectively predicted the mechanical strength of MDF samples with a wide range of physical properties. The stiffness, strength, and internal bond properties of the...

  10. Flipping the Physical Examination: Web-Based Instruction and Live Assessment of Bedside Technique.

    PubMed

    Williams, Dustyn E; Thornton, John W

    2016-01-01

    Physicians' skill in teaching the physical examination has decreased, with newer faculty underperforming relative to their seniors. Improved methods of instruction with an emphasis on the physical examination are necessary both to improve the quality of medical education and to alleviate the teaching burden of faculty physicians. We developed a curriculum that combines web-based instruction with real-life practice and features individualized feedback. This innovative medical education model should allow the physical examination to be taught and assessed in an effective manner. The model is under study at Baton Rouge General Medical Center. Our goals are to limit faculty burden, maximize student involvement as learners and evaluators, and effectively develop students' critical skills in performing bedside assessments.

  11. Investigation of model-based physical design restrictions (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl

    2005-05-01

    As lithography and other patterning processes become more complex and more non-linear with each generation, the task of defining physical design rules necessarily increases in complexity as well. The goal of the physical design rules is to define the boundary between the physical layout structures that will yield well and those that will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need for and applications of model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-k1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally, we summarize with a proposed flow and key considerations for MBPDR implementation.

  12. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for the time tendency terms, an upstream scheme for the advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating either the shallow-water equations in one dimension or the primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computer and programming resources.
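    The mixed scheme described above can be sketched on a simple 1D advection-diffusion problem: forward difference in time, upstream (upwind) differencing for the advective term, and central differencing for the remaining (diffusive) term, with the usual conditional-stability limits. The problem setup is illustrative, not the paper's mesoscale model.

```python
import numpy as np

# 1D advection-diffusion, periodic domain:
#   du/dt + c du/dx = nu d2u/dx2
nx, dx, dt = 100, 1.0, 0.2
c, nu = 1.0, 0.5                      # advection speed (> 0), diffusivity
assert c * dt / dx <= 1.0 and nu * dt / dx**2 <= 0.5   # conditional stability

u = np.exp(-0.01 * (np.arange(nx) - 30) ** 2)   # Gaussian initial condition
for _ in range(100):
    adv = c * (u - np.roll(u, 1)) / dx                         # upstream, c > 0
    dif = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central
    u = u + dt * (-adv + dif)                                  # forward in time

print(f"peak moved to x = {u.argmax()}, mass = {u.sum() * dx:.3f}")
```

    After 100 steps (t = 20, c = 1) the pulse should be centered near x = 50, and on a periodic grid both difference operators telescope, so total mass is conserved exactly; first-order upwinding does, however, add numerical diffusion on top of nu.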

  13. Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Lee, Sangyup

    2015-06-01

    The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and uses a nonholonomic modeling approach to systematically couple the models developed at the different scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.

  14. Modeling and simulation of dust behaviors behind a moving vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jingfang

    Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models, including tanks, cars, and jeeps, to test and simulate different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle motion controlled by users from the GUI. I have also tested the factors that influence the physical behavior and graphical appearance of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane workstation. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive-over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques.
Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation; for example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both the time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the convergence calculations of the numerical integration for the fluid dynamics, which usually take about 4-5 minutes to reach steady state.
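    The SOR step mentioned above for the Poisson pressure equation can be illustrated on a small 2D grid. This is a generic sketch with a constant right-hand side and Dirichlet boundaries, not the dissertation's flow solver: sweep the grid Gauss-Seidel style and over-relax each update by a factor omega.

```python
import numpy as np

# Solve  laplacian(p) = rhs  on the unit square, p = 0 on the boundary.
n, h, omega = 32, 1.0 / 32, 1.8
p = np.zeros((n + 1, n + 1))
rhs = np.ones((n + 1, n + 1))        # constant source term (illustrative)

for sweep in range(500):
    max_change = 0.0
    for i in range(1, n):
        for j in range(1, n):
            # Five-point Jacobi value at (i, j) ...
            new = (p[i-1, j] + p[i+1, j] + p[i, j-1] + p[i, j+1]
                   - h * h * rhs[i, j]) / 4.0
            change = omega * (new - p[i, j])   # ... over-relaxed update
            p[i, j] += change
            max_change = max(max_change, abs(change))
    if max_change < 1e-8:
        break

print(f"converged after {sweep + 1} sweeps, p(center) = {p[n//2, n//2]:.5f}")
```

    With omega near its optimal value (about 1.82 for this grid) convergence takes tens of sweeps instead of the thousands required by plain Gauss-Seidel, which is why SOR was a practical choice for an interactive system.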

  15. Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, J.R.; et al.

    This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.

  16. Development and comparison of projection and image space 3D nodule insertion techniques

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. Twenty-four physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that both techniques can confidently be used as a means of inserting virtual nodules in CT datasets, and can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
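    The statistical comparison in the abstract reduces to two quantities: a paired t statistic on the physical-minus-virtual volume differences and an R2 between the two volume sets. The sketch below computes both for made-up volumes, not the study's measurements.

```python
import math

# Illustrative paired volumes in mm^3 (not the study's data).
physical = [52.1, 115.0, 267.3, 49.8, 120.4, 260.1, 54.0, 118.2]
virtual  = [51.4, 116.2, 265.0, 50.6, 119.1, 262.3, 53.1, 117.5]

n = len(physical)
diffs = [v - p for p, v in zip(physical, virtual)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))     # paired t statistic, df = n - 1

mean_p = sum(physical) / n
ss_res = sum((v - p) ** 2 for p, v in zip(physical, virtual))
ss_tot = sum((p - mean_p) ** 2 for p in physical)
r2 = 1 - ss_res / ss_tot                    # goodness of fit

print(f"t = {t_stat:.2f}, R^2 = {r2:.4f}")
```

    With df = 7, |t| below the 2.365 critical value means the volume difference is not statistically significant at the 0.05 level, matching the pattern the study reports.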

  17. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  18. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
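    The proper orthogonal decomposition mentioned above is, at its core, an SVD of a snapshot matrix: the leading left singular vectors form the reduced basis, and the squared singular values measure the "energy" each mode captures. The snapshots below are synthetic, built from two traveling structures plus noise, not the paper's aeroelastic data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)

# Synthetic snapshot matrix: columns are the field at successive times.
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(6 * np.pi * x), np.sin(4 * np.pi * t))
             + 0.01 * rng.normal(size=(200, 50)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% of the energy
basis = U[:, :r]                             # reduced-order basis

print(f"{r} POD modes capture {energy[r - 1] * 100:.2f}% of the snapshot energy")
```

    Projecting the governing equations onto `basis` yields a system with r unknowns instead of 200, which is the source of the efficiency gains the paper reports.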

  19. A New Predictive Model of Centerline Segregation in Continuous Cast Steel Slabs by Using Multivariate Adaptive Regression Splines Approach

    PubMed Central

    García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María

    2015-01-01

    The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied with success using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold: first, the model reveals the significance of each physical-chemical variable for segregation; second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and when the MARS technique was applied to the experimental dataset, coefficients of determination equal to 0.93 for continuity factor estimation and 0.95 for average width were obtained. The agreement between the experimental data and the model confirmed the good performance of the latter.
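    MARS builds its fits from hinge ("hockey-stick") basis functions max(0, x - knot). The sketch below only illustrates that basis, fit by ordinary least squares at fixed, hand-chosen knots on synthetic data; real MARS additionally selects knots, interactions, and prunes terms adaptively.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 400)
# Piecewise-linear truth with a kink at x = 4, plus noise.
y = np.where(x < 4, 1.0 * x, 4.0 + 2.5 * (x - 4)) + rng.normal(scale=0.3, size=400)

def hinge(x, knot):
    """MARS-style basis function max(0, x - knot)."""
    return np.maximum(0.0, x - knot)

knots = [2.0, 4.0, 6.0, 8.0]                  # fixed for illustration
X = np.column_stack([np.ones_like(x), x] + [hinge(x, k) for k in knots])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

    Because the true kink at x = 4 coincides with one of the knots, the hinge basis recovers the piecewise-linear shape and the coefficient of determination lands in the same high range the study reports.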

  20. Calibrating White Dwarf Asteroseismic Fitting Techniques

    NASA Astrophysics Data System (ADS)

    Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.

    2017-03-01

    The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we discuss results of numerical experiments in which we used different independently calculated model grids (white dwarf cooling models from WDEC and fully evolutionary models from LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure as well as the stellar parameters.
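    The grid-fitting step the abstract alludes to can be caricatured in a few lines: compare a star's observed pulsation periods against each model in a grid and keep the model with the smallest rms period misfit. The periods and grid labels below are invented for illustration, not WDEC or LPCODE-PUL output.

```python
# Toy period-matching against a (synthetic) model grid.
observed = [215.2, 271.0, 303.6]   # observed pulsation periods, seconds

model_grid = {
    ("Teff=11600", "M=0.60"): [214.8, 270.1, 304.9],
    ("Teff=11800", "M=0.60"): [209.3, 265.5, 299.0],
    ("Teff=11600", "M=0.65"): [220.7, 276.2, 310.4],
}

def rms_misfit(model_periods):
    """Root-mean-square difference between observed and model periods."""
    return (sum((o - m) ** 2 for o, m in zip(observed, model_periods))
            / len(observed)) ** 0.5

best = min(model_grid, key=lambda k: rms_misfit(model_grid[k]))
print("best-fit model:", best, f"(rms = {rms_misfit(model_grid[best]):.2f} s)")
```

    The paper's point is precisely that this minimum can be misleading: grid spacing, constitutive physics, and the misfit metric itself all bias which model "wins", which is why fitting synthetic stars with known interiors is a useful calibration.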

  1. Redundancy management of electrohydraulic servoactuators by mathematical model referencing

    NASA Technical Reports Server (NTRS)

    Campbell, R. A.

    1971-01-01

    A description of a mathematical model reference system is presented which provides redundancy management for an electrohydraulic servoactuator. The mathematical model includes a compensation network that calculates reference parameter perturbations induced by external disturbance forces. This is accomplished by using the measured pressure differential data taken from the physical system. This technique was experimentally verified by tests performed using the H-1 engine thrust vector control system for Saturn IB. The results of these tests are included in this report. It was concluded that this technique improves the tracking accuracy of the model reference system to the extent that redundancy management of electrohydraulic servosystems may be performed using this method.
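    The model-reference idea above can be sketched as a parallel simulation: run a simple actuator model alongside the (here, simulated) physical servoactuator and declare a fault when the tracking error exceeds a threshold. The first-order dynamics, degradation scenario, and threshold below are illustrative, not the H-1 system's parameters, and the disturbance-compensation network is omitted.

```python
# Model-reference fault monitoring of a first-order actuator.
dt, tau = 0.01, 0.05          # time step and actuator time constant, s
threshold = 0.05              # tracking-error limit for declaring a fault

def step_model(y, cmd):
    """Reference model: first-order lag toward the command."""
    return y + dt * (cmd - y) / tau

y_model = y_plant = 0.0
fault_time = None
for k in range(200):
    t = k * dt
    cmd = 1.0                                 # step command
    gain_loss = 0.4 if t >= 1.0 else 1.0      # actuator degrades at t = 1 s
    y_plant += dt * (gain_loss * cmd - y_plant) / tau   # "physical" actuator
    y_model = step_model(y_model, cmd)
    if fault_time is None and abs(y_model - y_plant) > threshold:
        fault_time = t

print(f"fault declared at t = {fault_time:.2f} s")
```

    Before the degradation the model and plant track exactly, so the error channel stays quiet; the fault is flagged within one time step of the gain loss, which is the behavior a redundancy-management scheme needs.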

  2. An integrated model to measure service management and physical constraints' effect on food consumption in assisted-living facilities.

    PubMed

    Huang, Hui-Chun; Shanklin, Carol W

    2008-05-01

    The United States is experiencing remarkable growth in the elderly population, which provides both opportunities and challenges for assisted-living facilities. The objective of this study was to explore how service management influences residents' actual food consumption in assisted-living facilities. Physical factors influencing residents' service evaluation and food consumption also were investigated. A total of 394 questionnaires were distributed to assisted-living residents in seven randomly selected facilities. The questionnaire was developed based on an in-depth literature review and pilot study. Residents' perceived quality evaluations, satisfaction, and physical constraints were measured. Residents' actual food consumption was measured using a plate waste technique. A total of 118 residents in five facilities completed both questionnaires and food consumption assessments. Descriptive, multivariate analyses and structural equation modeling techniques were employed. Service management, including food and service quality and customer satisfaction, was found to significantly influence residents' food consumption. Physical constraints associated with aging, including a decline in health status, chewing problems, sensory loss, and functional disability, also significantly influenced residents' food consumption. A significant relationship was found between physical constraints and customer satisfaction. Foodservice that provides good food and service quality increases customer satisfaction and affects residents' actual food consumption. Physical constraints also influence residents' food consumption directly, or indirectly through satisfaction. The findings suggest that food and nutrition professionals in assisted-living should consider the physical profiles of their residents to enhance residents' satisfaction and nutrient intake. Recommendations for exploring residents' perspectives are discussed.

  3. Fast Inference of Deep Neural Networks in FPGAs for Particle Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Javier; Han, Song; Harris, Philip

    Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through improvements in real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and in particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs, with a latency on the scale of 100 ns.
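    A key ingredient of FPGA inference is reduced-precision (fixed-point) arithmetic. The sketch below only demonstrates that quantization idea in plain numpy on a tiny random network; hls4ml itself generates HLS C++ firmware, and none of the shapes or weights here come from the paper's jet-substructure model.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, frac_bits=8):
    """Round to a fixed-point grid with 2**-frac_bits resolution."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

# A tiny random 16-8-3 multilayer perceptron (illustrative weights).
W1, b1 = rng.normal(scale=0.5, size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 3)), np.zeros(3)

def forward(x, q=False):
    """Forward pass, optionally with fixed-point inputs and weights."""
    f = quantize if q else (lambda v: v)
    h = np.maximum(0.0, f(x) @ f(W1) + b1)    # ReLU hidden layer
    return f(h) @ f(W2) + b2                  # linear output scores

x = rng.normal(size=16)
full, fixed = forward(x), forward(x, q=True)
print("max deviation from full precision:", np.abs(full - fixed).max())
```

    Mapping out accuracy against the number of fraction bits, as hls4ml does against FPGA resources, is what lets a design trade a tolerable score deviation for large savings in latency and area.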

  4. Rural Infant Stimulation Environment (RISE). Progress Report, July 1, 1974 to June 30, 1977. Handicapped Children's Early Education Program.

    ERIC Educational Resources Information Center

    Holder, Loreta

    This final report describes a federally-funded project that was designed to provide a model for service delivery to severely physically involved infants and their families living in the highly rural area of West Alabama. The project developed and refined an eclectic treatment approach known as Developmental Physical Management Techniques (DPMT).…

  5. Variable thickness transient ground-water flow model. Volume 1. Formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented.

  6. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 1 EXPOSURE MODELING

    EPA Science Inventory

    Exposure to contaminants originating in the domestic water supply is influenced by a number of factors, including human activities, water use behavior, and physical and chemical processes. The key role of human activities is very apparent in exposure related to volatile water-...

  7. Predictive Modeling of Polymer Mechanical Behavior Coupled to Chemical Change/ Technique Development for Measuring Polymer Physical Aging.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kropka, Jamie Michael; Stavig, Mark E.; Arechederra, Gabe Kenneth

    Develop an understanding of the evolution of glassy polymer mechanical response during aging and the mechanisms associated with that evolution. That understanding will be used to develop constitutive models to assess the impact of stress evolution in encapsulants on NW designs.

  8. Systems Engineering of Education I: The Evolution of Systems Thinking in Education, 2nd Edition.

    ERIC Educational Resources Information Center

    Silvern, Leonard C.

    This document methodically traces the development of the fundamental concepts of systems thinking in education from Herbart to contemporary innovators. The discussion explains narrative models, concentrating on educational flowcharting techniques and mathematical models related to developments in engineering and physical science. The presentation…

  9. A Mode-Shape-Based Fault Detection Methodology for Cantilever Beams

    NASA Technical Reports Server (NTRS)

    Tejada, Arturo

    2009-01-01

    An important goal of NASA's Internal Vehicle Health Management program (IVHM) is to develop and verify methods and technologies for fault detection in critical airframe structures. A particularly promising new technology under development at NASA Langley Research Center is distributed Bragg fiber optic strain sensors. These sensors can be embedded in, for instance, aircraft wings to continuously monitor surface strain during flight. Strain information can then be used in conjunction with well-known vibrational techniques to detect faults due to changes in the wing's physical parameters or to the presence of incipient cracks. To verify the benefits of this technology, the Formal Methods Group at NASA LaRC has proposed the use of formal verification tools such as PVS. The verification process, however, requires knowledge of the physics and mathematics of the vibrational techniques and a clear understanding of the particular fault detection methodology. This report presents a succinct review of the physical principles behind the modeling of vibrating structures such as cantilever beams (the natural model of a wing). It also reviews two different classes of fault detection techniques and proposes a particular detection method for cracks in wings, which is amenable to formal verification. A prototype implementation of these methods using Matlab scripts is also described and is related to the fundamental theoretical concepts.
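    The vibrational techniques the report reviews rest on the Euler-Bernoulli cantilever frequencies, f_n = (λ_n²/2π)·√(EI/(ρAL⁴)), where λ_n solves cos(λ)cosh(λ) = -1. A stiffness loss (e.g. an incipient crack) lowers EI and shifts these frequencies downward, which is the signature fault-detection methods look for. The beam below is an assumed aluminium example, not NASA's test article.

```python
import numpy as np

LAM = np.array([1.8751, 4.6941, 7.8548])   # first three roots of cos(l)cosh(l) = -1

def cantilever_freqs(E, I, rho, A, L):
    """Natural frequencies [Hz] of an Euler-Bernoulli cantilever."""
    return (LAM ** 2 / (2.0 * np.pi)) * np.sqrt(E * I / (rho * A * L ** 4))

E, rho = 70e9, 2700.0                      # Pa, kg/m^3 (aluminium, assumed)
b, t, L = 0.03, 0.003, 0.5                 # beam cross-section and length [m]
A, I = b * t, b * t ** 3 / 12.0
f_healthy = cantilever_freqs(E, I, rho, A, L)
f_damaged = cantilever_freqs(0.9 * E, I, rho, A, L)   # uniform 10% stiffness loss
print(np.round(f_healthy, 1), np.round(f_damaged / f_healthy, 3))
```

    A uniform stiffness loss scales every frequency by the same √0.9 factor; a localized crack perturbs the modes unevenly, which is what mode-shape-based methods exploit.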

  10. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic model optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model eliminates the need for the synthetic design hyetographs and hydrographs that constitute the main source of subjective analysis and uncertainty in standard flood-mapping methods. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.

  11. Stochastic modelling of temperatures affecting the in situ performance of a solar-assisted heat pump: The multivariate approach and physical interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loveday, D.L.; Craggs, C.

    Box-Jenkins-based multivariate stochastic modeling is carried out using data recorded from a domestic heating system. The system comprises an air-source heat pump sited in the roof space of a house, solar assistance being provided by the conventional tile roof acting as a radiation absorber. Multivariate models are presented which illustrate the time-dependent relationships between three air temperatures - at external ambient, at entry to, and at exit from, the heat pump evaporator. Using a deterministic modeling approach, physical interpretations are placed on the results of the multivariate technique. It is concluded that the multivariate Box-Jenkins approach is a suitable technique for building thermal analysis. Application to multivariate model-based control is discussed, with particular reference to building energy management systems. It is further concluded that stochastic modeling of data drawn from a short monitoring period offers a means of retrofitting an advanced model-based control system in existing buildings, which could be used to optimize energy savings. An approach to system simulation is suggested.
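    A univariate sketch of the Box-Jenkins mechanics: fit an AR(2) model to a synthetic temperature-like series via the Yule-Walker equations. The paper builds multivariate models of three coupled air temperatures; this shows only the single-channel version, with made-up coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = (0.6, 0.3)                       # assumed AR(2) coefficients
n = 5000
x = np.zeros(n)
for t in range(2, n):                       # simulate the AR(2) process
    x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + rng.normal()

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances."""
    x = x - x.mean()
    m = len(x)
    r = np.array([x[: m - k] @ x[k:] / m for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1 : p + 1])

phi_hat = yule_walker(x, 2)
print(np.round(phi_hat, 2))                 # close to the generating (0.6, 0.3)
```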

  12. Physical interpretation and application of principles of ultrasonic nondestructive evaluation of high-performance materials

    NASA Technical Reports Server (NTRS)

    Miller, James G.

    1990-01-01

    An ultrasonic measurement system employed in the experimental interrogation of the anisotropic properties (through the measurement of the elastic stiffness constants) of the uniaxial graphite-epoxy composites is presented. The continuing effort for the development of improved visualization techniques for physical parameters is discussed. The background is set for the understanding and visualization of the relationship between the phase and energy/group velocity for propagation in high-performance anisotropic materials by investigating the general requirements imposed by the classical wave equation. The consequences are considered when the physical parameters of the anisotropic material are inserted into the classical wave equation by a linear elastic model. The relationship is described between the phase velocity and the energy/group velocity three dimensional surfaces through graphical techniques.

  13. A Study of the dimensional accuracy obtained by low cost 3D printing for possible application in medicine

    NASA Astrophysics Data System (ADS)

    Kitsakis, K.; Alabey, P.; Kechagias, J.; Vaxevanidis, N.

    2016-11-01

    'Low cost 3D printing' is a term that refers to the fused filament fabrication (FFF) technique, which constructs physical prototypes by depositing material layer by layer using a thermal nozzle head. Nowadays, 3D printing is widely used in medical applications such as tissue engineering, as well as a supporting tool in diagnosis and treatment in neurosurgery, orthopedic and dental-cranio-maxillo-facial surgery. 3D CAD medical models are usually obtained from MRI or CT scans and then sent to a 3D printer for physical model creation. The present paper is focused on a brief overview of the benefits and limitations of 3D printing applications in the field of medicine, as well as on a dimensional accuracy study of the low-cost 3D printing technique.

  14. Low temperature recombination and trapping analysis in high purity gallium arsenide by microwave photodielectric techniques

    NASA Technical Reports Server (NTRS)

    Khambaty, M. B.; Hartwig, W. H.

    1972-01-01

    Some physical theories pertinent to the measurement properties of gallium arsenide are presented and experimental data are analyzed. A model explaining recombination and trapping in high purity gallium arsenide, valid below 77 K, is assembled from points made at various places, and an appraisal is given of photodielectric techniques for material property studies.

  15. High-Performance Computational Modeling of ICRF Physics and Plasma-Surface Interactions in Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Smithe, David

    2016-10-01

    Inefficiencies and detrimental physical effects may arise in conjunction with ICRF heating of tokamak plasmas. Large wall potential drops, associated with sheath formation near plasma-facing antenna hardware, give rise to high-Z impurity sputtering from plasma-facing components and subsequent radiative cooling. Linear and nonlinear wave excitations in the plasma edge/SOL also dissipate injected RF power and reduce overall antenna efficiency. Recent advances in finite-difference time-domain (FDTD) modeling techniques allow the physics of localized sheath potentials, and associated sputtering events, to be modeled concurrently with the physics of antenna near- and far-field behavior and RF power flow. The new methods enable time-domain modeling of plasma-surface interactions and ICRF physics in realistic experimental configurations at unprecedented spatial resolution. We present results/animations from high-performance (10k-100k core) FDTD/PIC simulations spanning half of Alcator C-Mod at mm-scale resolution, exploring impurity production due to localized sputtering (in response to self-consistent sheath potentials at antenna surfaces) and the physics of parasitic slow wave excitation near the antenna hardware and SOL. Supported by US DoE (Award DE-SC0009501) and the ALCC program.

  16. Neuroimaging Techniques: a Conceptual Overview of Physical Principles, Contribution and History

    NASA Astrophysics Data System (ADS)

    Minati, Ludovico

    2006-06-01

    This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.

  17. Physics of the inner heliosphere 1-10 R☉: plasma diagnostics and models

    NASA Technical Reports Server (NTRS)

    Withbroe, G. L.

    1984-01-01

    The physics of solar wind flow in the acceleration region and impulsive phenomena in the solar corona is studied. The study of magnetohydrodynamic wave propagation in the corona and the solutions of the steady state and time dependent solar wind equations give insights concerning the physics of the solar wind acceleration region, plasma heating and plasma acceleration processes, and the formation of shocks. Also studied is the development of techniques for placing constraints on the mechanisms responsible for coronal heating.

  18. Program listing for the REEDM (Rocket Exhaust Effluent Diffusion Model) computer program

    NASA Technical Reports Server (NTRS)

    Bjorklund, J. R.; Dumbauld, R. K.; Cheney, C. S.; Geary, H. V.

    1982-01-01

    The program listing for the REEDM Computer Program is provided. A mathematical description of the atmospheric dispersion models, cloud-rise models, and other formulas used in the REEDM model; vehicle and source parameters, other pertinent physical properties of the rocket exhaust cloud and meteorological layering techniques; user's instructions for the REEDM computer program; and worked example problems are contained in NASA CR-3646.

  19. Applying neutron transmission physics and 3D statistical full-field model to understand 2D Bragg-edge imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Qingge; Song, Gian; Gorti, Sarma B.

    Bragg-edge imaging, which is also known as neutron radiography, has recently emerged as a novel crystalline characterization technique. Modelling of this novel technique by incorporating various features of the underlying microstructure (including the crystallographic texture, the morphological texture, and the grain size) of the material remains a subject of considerable research and development. In this paper, Inconel 718 samples made by additive manufacturing were investigated by neutron diffraction and neutron radiography techniques. The specimen features strong morphological and crystallographic textures and a highly heterogeneous microstructure. A 3D statistical full-field model is introduced by taking details of the microstructure into account to understand the experimental neutron radiography results. The Bragg-edge imaging and the total cross section were calculated based on the neutron transmission physics. A good match was obtained between the model predictions and experimental results at different incident beam angles with respect to the sample build direction. The current theoretical approach has the ability to incorporate 3D spatially resolved microstructural heterogeneity information and shows promise in understanding the 2D neutron radiography of bulk samples. With further development to incorporate the heterogeneity in lattice strain in the model, it can be used as a powerful tool in the future to better understand the neutron radiography data.

  20. Applying neutron transmission physics and 3D statistical full-field model to understand 2D Bragg-edge imaging

    DOE PAGES

    Xie, Qingge; Song, Gian; Gorti, Sarma B.; ...

    2018-02-21

    Bragg-edge imaging, which is also known as neutron radiography, has recently emerged as a novel crystalline characterization technique. Modelling of this novel technique by incorporating various features of the underlying microstructure (including the crystallographic texture, the morphological texture, and the grain size) of the material remains a subject of considerable research and development. In this paper, Inconel 718 samples made by additive manufacturing were investigated by neutron diffraction and neutron radiography techniques. The specimen features strong morphological and crystallographic textures and a highly heterogeneous microstructure. A 3D statistical full-field model is introduced by taking details of the microstructure into account to understand the experimental neutron radiography results. The Bragg-edge imaging and the total cross section were calculated based on the neutron transmission physics. A good match was obtained between the model predictions and experimental results at different incident beam angles with respect to the sample build direction. The current theoretical approach has the ability to incorporate 3D spatially resolved microstructural heterogeneity information and shows promise in understanding the 2D neutron radiography of bulk samples. With further development to incorporate the heterogeneity in lattice strain in the model, it can be used as a powerful tool in the future to better understand the neutron radiography data.
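    The physics behind the Bragg edges is simple to state: the transmission spectrum shows a sharp edge at λ = 2·d_hkl, beyond which the {hkl} planes can no longer satisfy Bragg's law. The sketch below computes edge positions for a cubic lattice; the lattice parameter is an assumed fcc value roughly appropriate for a Ni-based alloy such as Inconel 718, for illustration only.

```python
import numpy as np

a = 3.60  # lattice parameter in Angstrom (assumed)
# fcc selection rule: h, k, l all odd or all even
fcc_reflections = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]

def edge_wavelength(hkl, a):
    """Bragg-edge wavelength lambda = 2*d_hkl for a cubic lattice."""
    d = a / np.sqrt(sum(i * i for i in hkl))
    return 2.0 * d

edges = {hkl: round(edge_wavelength(hkl, a), 3) for hkl in fcc_reflections}
print(edges)  # edge positions in Angstrom, longest for the lowest-index planes
```

    Texture and grain-size effects, which the paper's 3D full-field model adds, modulate the height and shape of these edges rather than their positions.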

  1. Assessing physical activity using wearable monitors: measures of physical activity.

    PubMed

    Butte, Nancy F; Ekelund, Ulf; Westerterp, Klaas R

    2012-01-01

    Physical activity may be defined broadly as "all bodily actions produced by the contraction of skeletal muscle that increase energy expenditure above basal level." Physical activity is a complex construct that can be classified into major categories qualitatively, quantitatively, or contextually. The quantitative assessment of physical activity using wearable monitors is grounded in the measurement of energy expenditure. Six main categories of wearable monitors are currently available to investigators: pedometers, load transducers/foot-contact monitors, accelerometers, HR monitors, combined accelerometer and HR monitors, and multiple sensor systems. Currently available monitors are capable of measuring total physical activity as well as components of physical activity that play important roles in human health. The selection of wearable monitors for measuring physical activity will depend on the physical activity component of interest, study objectives, characteristics of the target population, and study feasibility in terms of cost and logistics. Future development of sensors and analytical techniques for assessing physical activity should focus on the dynamic ranges of sensors, comparability for sensor output across manufacturers, and the application of advanced modeling techniques to predict energy expenditure and classify physical activities. New approaches for qualitatively classifying physical activity should be validated using direct observation or recording. New sensors and methods for quantitatively assessing physical activity should be validated in laboratory and free-living populations using criterion methods of calorimetry or doubly labeled water.
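    The "advanced modeling techniques to predict energy expenditure" mentioned above often start from a simple regression of energy expenditure on accelerometer counts. The sketch below fits such a model by ordinary least squares; the data and coefficients are synthetic, not from any validated cut-point study.

```python
import numpy as np

rng = np.random.default_rng(2)
counts = rng.uniform(0, 8000, size=200)          # accelerometer counts/min
ee_true = 1.0 + 0.0005 * counts                  # MET-like scale (synthetic)
ee_obs = ee_true + rng.normal(0.0, 0.2, size=200)  # measurement noise

# design matrix with intercept; solve min ||X b - y|| by least squares
X = np.column_stack([np.ones_like(counts), counts])
beta, *_ = np.linalg.lstsq(X, ee_obs, rcond=None)
print(np.round(beta, 4))  # recovered intercept and slope
```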

  2. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  3. Materials used to simulate physical properties of human skin.

    PubMed

    Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M

    2016-02-01

    For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. DAMIT: a database of asteroid models

    NASA Astrophysics Data System (ADS)

    Durech, J.; Sidorin, V.; Kaasalainen, M.

    2010-04-01

    Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were derived also using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).

  5. Automated Systematic Generation and Exploration of Flat Direction Phenomenology in Free Fermionic Heterotic String Theory

    NASA Astrophysics Data System (ADS)

    Greenwald, Jared

    Any good physical theory must resolve current experimental data as well as offer predictions for potential searches in the future. The Standard Model of particle physics, Grand Unified Theories, Minimal Supersymmetric Models, and Supergravity are all attempts to provide such a framework. However, they all lack the ability to predict many of the parameters that each of the theories utilize. String theory may yield a solution to this naturalness (or self-predictiveness) problem as well as offer a unified theory of gravity. Studies in particle physics phenomenology based on perturbative low energy analysis of various string theories can help determine the candidacy of such models. After a review of principles and problems leading up to our current understanding of the universe, we will discuss some of the best particle physics model building techniques that have been developed using string theory. This will culminate in the introduction of a novel approach to a computational, systematic analysis of the various physical phenomena that arise from these string models. We focus on the necessary assumptions, complexity, and open questions that arise while making a fully automated flat direction analysis program.

  6. Geophysical technique for mineral exploration and discrimination based on electromagnetic methods and associated systems

    DOEpatents

    Zhdanov, Michael S. [Salt Lake City, UT]

    2008-01-29

    Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.
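    Conductivity relaxation models for induced polarization are commonly written in Cole-Cole form, ρ(ω) = ρ₀·[1 − η·(1 − 1/(1 + (iωτ)^c))], with chargeability η, time constant τ, and frequency exponent c. The sketch below evaluates this standard form as a generic illustration; the patent's exact parameterization and composite multi-phase model may differ.

```python
import numpy as np

def cole_cole(w, rho0=100.0, eta=0.2, tau=0.01, c=0.5):
    """Cole-Cole complex resistivity (parameter values assumed)."""
    return rho0 * (1.0 - eta * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

w = np.logspace(-2, 5, 8)   # angular frequency [rad/s]
rho = cole_cole(w)
# Low frequency -> full DC resistivity; high frequency -> reduced by ~eta,
# which is the dispersion signature used to discriminate mineralization types.
print(round(abs(rho[0]), 1), round(abs(rho[-1]), 1))
```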

  7. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
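    The core idea can be sketched compactly: store single-model posterior samples, propose a jump near a randomly chosen stored sample, and estimate the proposal density locally so the RJMCMC acceptance ratio can be corrected. The paper does the neighbour lookup with a kD-tree; the sketch below uses brute-force k-nearest-neighbour search instead, purely for brevity (same maths, O(N) lookup instead of O(log N)).

```python
import numpy as np

rng = np.random.default_rng(3)
# pretend these are stored draws from a single-model MCMC posterior
samples = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(2000, 2))

def propose_and_density(samples, rng, k=10):
    """Propose near a random stored sample; return proposal and a crude
    local density estimate (needed to keep the acceptance ratio correct)."""
    center = samples[rng.integers(len(samples))]
    d_k = np.sort(np.linalg.norm(samples - center, axis=1))[k]   # k-NN radius
    prop = center + rng.uniform(-d_k, d_k, size=2)               # jump inside the box
    d_p = np.sort(np.linalg.norm(samples - prop, axis=1))[k]     # k-NN radius at prop
    density = k / (len(samples) * (2.0 * d_p) ** 2)              # samples per unit area
    return prop, density

prop, density = propose_and_density(samples, rng)
print(prop.shape, density > 0.0)
```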

  8. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
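    A classical baseline for the same quantity the paper estimates is the principal axis from an object's second-order central moments (the deformable-model machinery itself is not reproduced here; this is a hypothetical illustration).

```python
import numpy as np

def principal_angle(mask):
    """Principal-axis angle of a binary object from central image moments."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # radians

# synthetic elongated ellipse tilted at 30 degrees (image coordinates)
size = 200
yy, xx = np.mgrid[0:size, 0:size]
theta = np.deg2rad(30.0)
u = (xx - size / 2) * np.cos(theta) + (yy - size / 2) * np.sin(theta)
v = -(xx - size / 2) * np.sin(theta) + (yy - size / 2) * np.cos(theta)
mask = (u / 80.0) ** 2 + (v / 20.0) ** 2 <= 1.0
angle_deg = np.rad2deg(principal_angle(mask))
print(round(float(angle_deg), 1))   # recovers the 30-degree tilt
```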

  9. Advanced graphical user interface for multi-physics simulations using AMST

    NASA Astrophysics Data System (ADS)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be made directly in a HDF5-file. This contribution presents the first graphical pre-processor developed for AMST.

  10. A Standard for RF Modulation Factor,

    DTIC Science & Technology

    1979-09-01

    Mathematics of Physics and Chemistry, pp. 474-477 (D. Van Nostrand Co., Inc., New York, N.Y., 1943). [23] Graybill, F. A., An Introduction to Linear ... circuit model. The primary limitation on the quadratic technique is the linearity and bandwidth of the analog multiplier. A high speed (5 MHz ...) ... 7.2.1. Nonlinearity Model ... 7.2.2. Model Parameters

  11. Prospecting for new physics in the Higgs and flavor sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishara, Fady

    We explore two directions in beyond the standard model physics: dark matter model building and probing new sources of CP violation. In dark matter model building, we consider two scenarios where the stability of dark matter derives from the flavor symmetries of the standard model. The first model contains a flavor singlet dark matter candidate whose couplings to the visible sector are proportional to the flavor breaking parameters. This leads to a metastable dark matter with TeV scale mediators. In the second model, we consider a fully gauged SU(3)^3 flavor model with a flavor triplet dark matter. Consequently, the dark matter multiplet is charged while the standard model fields are neutral under a remnant Z_3 which ensures dark matter stability. We show that a Dirac fermion dark matter with radiative splitting in the multiplet must have a mass in the range [0.5, 5] TeV in order to satisfy all experimental constraints. We then turn our attention to Higgs portal dark matter and investigate the possibility of obtaining bounds on the up, down, and strange quark Yukawa couplings. If Higgs portal dark matter is discovered, we find that direct detection rates are insensitive to vanishing light quark Yukawa couplings. We then review flavor models and give the expected enhancement or suppression of the Yukawa couplings in those models. Finally, in the last two chapters, we develop techniques for probing CP violation in the Higgs coupling to photons and in rare radiative decays of B mesons. While theoretically clean, we find that these methods are not practical with current and planned detectors. However, these techniques can be useful with a dedicated detector (e.g., a gaseous TPC). In the case of the radiative B meson decay B^0 → (K* → Kππ)γ, the techniques we develop also allow the extraction of the photon polarization fraction, which is sensitive to new physics contributions since, in the standard model, the right(left)-handed polarization fraction is of O(Λ_QCD/m_b) for B̄^0 (B^0) meson decays.

  12. Physical models of collective cell motility: from cell to tissue

    NASA Astrophysics Data System (ADS)

    Camley, B. A.; Rappel, W.-J.

    2017-03-01

    In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.
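    One of the simplest model classes such reviews cover treats cells as self-propelled particles that align with their neighbours. A minimal Vicsek-style sketch (all parameter values illustrative, not taken from the review):

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, v0=0.3, r=1.0, eta=0.1, rng=None):
    """One synchronous update of a Vicsek self-propelled-particle model:
    each particle moves at constant speed v0 and adopts the mean heading
    of all neighbours within radius r, plus uniform angular noise.
    Periodic box of side L; parameter values are illustrative."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)              # minimum-image distances
        mask = (d ** 2).sum(axis=1) < r ** 2  # neighbours (self included)
        new_theta[i] = np.arctan2(np.sin(theta[mask]).mean(),
                                  np.cos(theta[mask]).mean())
    new_theta += eta * np.pi * (2.0 * rng.random(n) - 1.0)  # noise in [-eta*pi, eta*pi]
    vel = v0 * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % L, new_theta

rng = np.random.default_rng(1)
pos = rng.random((50, 2)) * 10.0
theta = rng.random(50) * 2.0 * np.pi
for _ in range(100):
    pos, theta = vicsek_step(pos, theta, rng=rng)
# polar order parameter: 1 for a fully aligned flock, ~0 when disordered
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

    Phase-field models sit at the opposite end of the scale the review discusses, resolving each cell's shape rather than reducing it to a point.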

  13. Advection modes by optimal mass transfer

    NASA Astrophysics Data System (ADS)

    Iollo, Angelo; Lombardi, Damiano

    2014-02-01

    Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes. These modes are usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection. This method is based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, Korteweg-de Vries equation, and hurricane Dean advection are discussed.
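    In one dimension the Monge-Kantorovich problem has a closed-form solution: the optimal map composes one cumulative distribution function with the inverse of the other. A minimal sketch (grid and densities illustrative) recovering the map between two advected density profiles:

```python
import numpy as np

def transport_map_1d(x, f, g):
    """Optimal (Monge-Kantorovich) map between two 1-D densities f and g
    sampled on the grid x: T = G^{-1} o F, where F and G are the CDFs.
    Densities are renormalised; minimal sketch, no regularisation."""
    F = np.cumsum(f); F /= F[-1]
    G = np.cumsum(g); G /= G[-1]
    return np.interp(F, G, x)     # invert G by interpolation: G(T) = F(x)

x = np.linspace(-5.0, 5.0, 1001)
f = np.exp(-(x + 2.0) ** 2)       # density centred at -2
g = np.exp(-(x - 2.0) ** 2)       # the same density advected to +2
T = transport_map_1d(x, f, g)
# for a pure translation the optimal map is T(x) = x + 4 wherever f has mass
bulk = slice(250, 351)            # grid points with x in [-2.5, -1.5]
max_dev = np.abs(T[bulk] - (x[bulk] + 4.0)).max()
```

    A map of this kind captures a translation with a single "mode", whereas a global (Fourier-like) basis would need many terms to represent the same advection.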

  14. 3D Modelling and Rapid Prototyping for Cardiovascular Surgical Planning - Two Case Studies

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Remondino, F.; Uccheddu, F.; Gallo, M.; Gerosa, G.

    2016-06-01

    In recent years, cardiovascular diagnosis, surgical planning and intervention have benefited from 3D modelling and rapid prototyping techniques. The starting data for the whole process is medical imagery, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). On the medical imagery, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be turned into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semiautomatic segmentation of medical imagery for 3D surface model generation is provided. The issue of accuracy checking of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed. The procedure was applied to two case studies, for which MCT scans of the chest are available. In the article, a detailed description of the implemented patient-specific modelling procedure is provided, along with a general discussion on the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.

  15. Modal Damping Ratio and Optimal Elastic Moduli of Human Body Segments for Anthropometric Vibratory Model of Standing Subjects.

    PubMed

    Gupta, Manoj; Gupta, T C

    2017-10-01

    The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic moduli of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing a vibratory model of a standing person. The present work thus develops an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
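    The core of such a lumped-parameter calculation is solving the frequency-domain equations of motion under base excitation. A minimal 2-DOF stand-in for the paper's 13-DOF chain (all mass, stiffness and damping values hypothetical) shows how a platform-to-head transmissibility curve is obtained:

```python
import numpy as np

def head_transmissibility(freq_hz, m=(60.0, 5.0), k=(9.0e4, 3.0e4), c=(1.2e3, 3.0e2)):
    """Platform-to-head transmissibility |X_head / Y_base| of a base-excited
    2-DOF mass-spring-damper chain (hypothetical m, k, c values; the paper's
    model has 13 DOF). The base motion enters through the lowest spring-damper."""
    m1, m2 = m; k1, k2 = k; c1, c2 = c
    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = np.array([[c1 + c2, -c2], [-c2, c2]])
    tr = []
    for f in np.atleast_1d(freq_hz):
        w = 2.0 * np.pi * f
        A = -w ** 2 * M + 1j * w * C + K       # dynamic stiffness matrix
        b = np.array([k1 + 1j * w * c1, 0.0])  # forcing per unit base motion
        x = np.linalg.solve(A, b)              # complex displacement amplitudes
        tr.append(abs(x[1]))
    return np.array(tr)

freqs = np.linspace(0.1, 25.0, 500)            # the 0-25 Hz band used above
tr = head_transmissibility(freqs)
tr_static = head_transmissibility(0.01)[0]     # quasi-static limit, ~1
```

    Fitting the peaks of such a computed TR curve to measured transmissibility is, in outline, how the segment moduli and modal damping ratios are identified.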

  16. Quantum physics: Interactions propel a magnetic dance

    NASA Astrophysics Data System (ADS)

    Leblanc, Lindsay J.

    2017-06-01

    A combination of leading-edge techniques has enabled interaction-induced magnetic motion to be observed for pairs of ultracold atoms -- a breakthrough in the development of models of complex quantum behaviour. See Letter p.519

  17. A New Computational Framework for Atmospheric and Surface Remote Sensing

    NASA Technical Reports Server (NTRS)

    Timucin, Dogan A.

    2004-01-01

    A Bayesian data-analysis framework is described for atmospheric and surface retrievals from remotely-sensed hyper-spectral data. Some computational techniques are highlighted for improved accuracy in the forward physics model.
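    The essence of a Bayesian retrieval is combining a prior with a likelihood built from a forward physics model. A minimal grid-based sketch with a two-channel stand-in forward model (not the framework's actual radiative transfer code):

```python
import numpy as np

def bayesian_retrieval(y_obs, forward, grid, sigma, prior=None):
    """Grid-based Bayesian retrieval of a scalar state: posterior over the
    grid given a forward model and Gaussian measurement noise. Minimal
    sketch; the forward model here is an illustrative stand-in."""
    prior = np.ones_like(grid) if prior is None else prior
    resid = y_obs[None, :] - np.array([forward(s) for s in grid])
    loglike = -0.5 * (resid ** 2).sum(axis=1) / sigma ** 2
    post = prior * np.exp(loglike - loglike.max())   # stabilised exponent
    return post / post.sum()

# toy two-channel "spectral" forward model depending on one state s
forward = lambda s: np.array([np.exp(-0.5 * s), np.exp(-1.5 * s)])
truth = 0.8
rng = np.random.default_rng(0)
y = forward(truth) + 0.01 * rng.standard_normal(2)   # noisy observation
grid = np.linspace(0.0, 2.0, 401)
post = bayesian_retrieval(y, forward, grid, sigma=0.01)
estimate = grid[np.argmax(post)]                     # MAP retrieval
```

    The same structure scales to many channels and state variables; the cost then lives almost entirely in the forward-model evaluations, which is why the framework emphasises forward-model accuracy.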

  18. High-Accurate, Physics-Based Wake Simulation Techniques

    DTIC Science & Technology

    2015-01-27

    to accepting the use of computational fluid dynamics models to supplement some of the research. The scientists Lewellen and Lewellen [13] in 1996...resolved in today’s climate especially concerning CFD and experimental. Multiple programs have been established such as the Aircraft Vortex Spacing ...step the entire matrix is solved at once creating inconsistencies when applied to the physics of a fluid mechanics problem where information changes

  19. International comparison of observation-specific spatial buffers: maximizing the ability to estimate physical activity.

    PubMed

    Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper

    2017-01-23

    Advancements in geographic information systems over the past two decades have increased the specificity by which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally-developed set of objective BEMs using three different spatial buffering techniques were used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage', 'detailed-trimmed', and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar across each buffering technique. 
The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.

  20. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance the detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
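    A basic building block of such hierarchical uncertainty quantification is pushing calibrated input uncertainty through the model chain. A minimal Monte Carlo sketch, with stand-in micro-scale and system models (all functions and values illustrative, not the program's):

```python
import numpy as np

def propagate(system_model, samples):
    """Monte Carlo uncertainty propagation: push samples of an uncertain
    input through the model chain and inspect the output distribution.
    Minimal sketch; both models below are illustrative stand-ins."""
    return np.array([system_model(s) for s in samples])

# stand-in micro-scale model: surface reflectance from a roughness parameter
micro = lambda roughness: np.exp(-roughness)
# stand-in system (sensor-level) model: recorded signal from reflectance
system = lambda roughness: 100.0 * micro(roughness) + 5.0

rng = np.random.default_rng(0)
roughness_samples = rng.normal(0.5, 0.05, 10_000)  # calibrated input uncertainty
signal = propagate(system, roughness_samples)
mean, std = signal.mean(), signal.std()            # sensor-level uncertainty
```

    Comparing the propagated output distribution against physical measurements at each scale is the validation step the hierarchical approach layers on top of this.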

  1. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.

    2014-09-01

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach follows the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective velocity of seismic waves travelling through it. In the numerical experiment, thin section images act as the medium on which wave propagation is simulated. For the modeling, an advanced technique based on artificial neural networks was employed for building the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated propagating in the thin section image by using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified by using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. 
Utilizing this approach we found a good agreement between numerical experiment and theoretically derived rock physics model for estimating the effective seismic wave velocity of rock.
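    The Wyllie time average model referred to above is simple enough to state in a few lines: total travel time is the porosity-weighted sum of fluid and matrix travel times, which is exactly why it carries no pore-structure information. A sketch with typical water and calcite velocities (values illustrative):

```python
import numpy as np

def wyllie_velocity(phi, v_fluid=1500.0, v_matrix=6400.0):
    """Wyllie time-average velocity (m/s): 1/V = phi/V_fluid + (1-phi)/V_matrix.
    Defaults are typical water and calcite velocities (illustrative); note
    that porosity is the only rock parameter - pore shape never enters."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

v20 = wyllie_velocity(0.20)   # moderate-porosity carbonate
v45 = wyllie_velocity(0.45)   # beyond ~40% porosity the model deviates most
```

    The Kuster-Toksoz model, by contrast, takes pore aspect ratios as explicit inputs, which is why it tracks the image-based numerical experiments more closely.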

  2. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

    Near real-time high resolution and high precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to capture the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure, the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations, and we outline challenges of the multi-dimensional mathematical-physical modelling approach. 
We show first results, discuss problems in modelling and possible solution strategies and finally, we address open questions.
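    The localizing B-spline functions at the heart of the series expansion can be evaluated with the Cox-de Boor recursion. A one-dimensional sketch (uniform knots chosen for illustration; the project uses tensor products over longitude, latitude and time):

```python
import numpy as np

def bspline_basis(j, d, knots, t):
    """Cox-de Boor recursion: value of the j-th B-spline of degree d on the
    given knot vector at points t. Each basis function is non-zero only on
    d+1 knot spans, which is what makes the representation localizing."""
    t = np.asarray(t, dtype=float)
    if d == 0:
        return np.where((knots[j] <= t) & (t < knots[j + 1]), 1.0, 0.0)
    out = np.zeros_like(t)
    den1 = knots[j + d] - knots[j]
    den2 = knots[j + d + 1] - knots[j + 1]
    if den1 > 0:
        out += (t - knots[j]) / den1 * bspline_basis(j, d - 1, knots, t)
    if den2 > 0:
        out += (knots[j + d + 1] - t) / den2 * bspline_basis(j + 1, d - 1, knots, t)
    return out

knots = np.arange(8.0)                 # uniform knot vector 0..7
t = np.linspace(0.0, 6.999, 500)
# quadratic B-splines form a partition of unity where fully supported ([2, 5))
total = sum(bspline_basis(j, 2, knots, t) for j in range(5))
```

    In the model, the expansion coefficients of such tensor-product basis functions become the unknowns estimated from the combined satellite observations.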

  3. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET modelling contains all of the physics required to simulate the time dependent meteorology of a region with description of both the planetary boundary layer and upper level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  5. Physical-mathematical model of optical radiation interaction with biological tissues

    NASA Astrophysics Data System (ADS)

    Kozlovska, Tetyana I.; Kolisnik, Peter F.; Zlepko, Sergey M.; Titova, Natalia V.; Pavlov, Volodymyr S.; Wójcik, Waldemar; Omiotek, Zbigniew; Kozhambardiyeva, Miergul; Zhanpeisova, Aizhan

    2017-08-01

    Remote photoplethysmography (PPG) imaging is an optical technique to remotely assess the local cutaneous microcirculation. In this paper, we present a model and supporting experiments confirming the contribution of skin inhomogeneity to the morphology of PPG waveforms. A physical-mathematical model of the distribution of optical radiation in biological tissues was developed. It allows the change in optical radiation intensity to be determined as a function of parameters such as the installation angle of the sensor, the biological tissue thickness and the wavelength. We obtained plots of the optical radiation intensity registered by the photodetector as a function of the installation angle of the sensor, the biological tissue thickness and the extinction coefficient.
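    The qualitative dependencies described above can be illustrated with a Beer-Lambert-type attenuation sketch, in which a tilted sensor sees a longer optical path through the tissue (a simplified stand-in, not the authors' full physical-mathematical model):

```python
import numpy as np

def detected_intensity(i0, mu, thickness, angle_deg):
    """Beer-Lambert sketch of the registered intensity: radiation crossing
    tissue of the given thickness (mm) at a sensor installation angle sees
    a path length thickness / cos(angle); mu is the extinction coefficient
    (1/mm). Illustrative of the model's dependencies only."""
    path = thickness / np.cos(np.radians(angle_deg))
    return i0 * np.exp(-mu * path)

i_normal = detected_intensity(1.0, 0.2, 3.0, 0.0)   # sensor normal to the skin
i_tilted = detected_intensity(1.0, 0.2, 3.0, 45.0)  # tilted sensor, longer path
```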

  6. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.

  7. Recent advances in hypersonic technology

    NASA Technical Reports Server (NTRS)

    Dwoyer, Douglas L.

    1990-01-01

    This paper will focus on recent advances in hypersonic aerodynamic prediction techniques. Current capabilities of existing numerical methods for predicting high Mach number flows will be discussed and shortcomings will be identified. Physical models available for inclusion into modern codes for predicting the effects of transition and turbulence will also be outlined and their limitations identified. Chemical reaction models appropriate to high-speed flows will be addressed, and the impact of their inclusion in computational fluid dynamics codes will be discussed. Finally, the problem of validating predictive techniques for high Mach number flows will be addressed.

  8. Model-Based Control using Model and Mechanization Fusion Techniques for Image-Aided Navigation

    DTIC Science & Technology

    2009-03-01

    Magnet Motors. Magna Physics Publishing, Hillsboro, OH, 1994. 7. Houwu Bai, Xubo Song, Eric Wan and Andriy Myronenko. “Vision-only Navigation and...filter”. Proceedings of the Recent Advances in Space Technologies (RAST). Nov 2003. 6. Hendershot, J.R. and T.J.E. Miller. Design of Brushless Permanent

  9. Modelling Question Difficulty in an A Level Physics Examination

    ERIC Educational Resources Information Center

    Crisp, Victoria; Grayson, Rebecca

    2013-01-01

    "Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…

  10. Experience in using a numerical scheme with artificial viscosity at solving the Riemann problem for a multi-fluid model of multiphase flow

    NASA Astrophysics Data System (ADS)

    Bulovich, S. V.; Smirnov, E. M.

    2018-05-01

    The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the base of the multi-fluid approach. The system of the governing equations is written under assumption of the pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound has been suggested. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approaching to an asymptotic time-dependent solution, without non-physical oscillations.
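    The role of the artificial viscosity term can be illustrated on a single-phase analogue: an explicit, first-order scheme for the inviscid Burgers equation, with a viscosity coefficient built from a scale speed, captures a moving shock without non-physical oscillations (parameters illustrative; the paper's scheme acts on a multi-fluid system with a Wood-equation sound speed):

```python
import numpy as np

def burgers_artificial_viscosity(nx=400, nt=300, L=1.0, cfl=0.4, eps=1.0):
    """Explicit first-order scheme for the inviscid Burgers equation with an
    artificial-viscosity term. The coefficient nu is built from a scale
    speed (here max |u|) and smears the shock over a few cells."""
    dx = L / nx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.5, 1.0, 0.0)          # Riemann-type initial data
    for _ in range(nt):
        a = np.abs(u).max()                  # scale speed
        dt = cfl * dx / max(a, 1e-12)
        nu = eps * dx * a                    # artificial viscosity coefficient
        f = 0.5 * u ** 2
        dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)
        d2udx2 = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (nu * d2udx2 - dfdx)
        u[0], u[-1] = 1.0, 0.0               # hold the far-field states
    return x, u

x, u = burgers_artificial_viscosity()
# Rankine-Hugoniot shock speed is (uL + uR)/2 = 0.5, so after t ≈ 0.3 the
# smeared shock should sit near x ≈ 0.65, without spurious oscillations
shock_pos = x[np.argmin(np.abs(u - 0.5))]
```

    Refining the grid sharpens the smeared front while keeping it monotone, mirroring the grid-convergence behaviour reported for the multi-fluid scheme.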

  11. Using Technology to Facilitate and Enhance Project-based Learning in Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Duda, Gintaras

    2011-04-01

    Problem-based and project-based learning are two pedagogical techniques that have several clear advantages over traditional instructional methods: 1) both techniques are active and student centered, 2) students confront real-world and/or highly complex problems, and 3) such exercises model the way science and engineering are done professionally. This talk will present an experiment in project/problem-based learning in a mathematical physics course. The group project in the course involved modeling a zombie outbreak of the type seen in AMC's 'The Walking Dead'. Students researched, devised, and solved their mathematical models for the spread of zombie-like infection. Students used technology in all stages; in fact, since analytical solutions to the models were often impossible, technology was a necessary and critical component of the challenge. This talk will explore the use of technology in general in problem and project-based learning and will detail some specific examples of how technology was used to enhance student learning in this course. A larger issue of how students use the Internet to learn will also be explored.
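    A zombie-outbreak model of the kind the students built can be as small as three coupled ODEs; the SZR system of Munz et al. is a standard example (parameter values follow that published toy model, not the course projects):

```python
import numpy as np

def zombie_outbreak(days=30.0, dt=0.01, beta=0.0095, zeta=0.0001, alpha=0.005):
    """Forward-Euler integration of the Munz et al. SZR outbreak model:
    S' = -beta*S*Z,  Z' = beta*S*Z + zeta*R - alpha*S*Z,  R' = alpha*S*Z - zeta*R,
    with S susceptibles, Z zombies and R removed. Since analytical solutions
    are unavailable, a numerical integrator is essential - the point of the
    course project."""
    S, Z, R = 500.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        sz = S * Z
        dS = -beta * sz
        dZ = beta * sz + zeta * R - alpha * sz
        dR = alpha * sz - zeta * R
        S += dt * dS; Z += dt * dZ; R += dt * dR
    return S, Z, R

S, Z, R = zombie_outbreak()
total = S + Z + R          # the model conserves the total population
```

    With these rates the susceptible population collapses within days, the "doomsday" behaviour that makes the model a vivid teaching example.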

  12. Spatial modeling of environmental vulnerability of marine finfish aquaculture using GIS-based neuro-fuzzy techniques.

    PubMed

    Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G

    2011-08-01

    Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge in coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Real-time simulation of biological soft tissues: a PGD approach.

    PubMed

    Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F

    2013-05-01

    We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition techniques can be considered a means of a priori model order reduction and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with some benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
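    The snapshot-based reduction that PGD generalises, proper orthogonal decomposition, is a few lines of linear algebra: an SVD of a snapshot matrix followed by truncation. A minimal sketch on a synthetic two-mode field (PGD itself builds its separated representation a priori, with no snapshot stage):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition via SVD of a snapshot matrix
    (columns are solution states); returns the modes capturing the
    requested fraction of the total energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], r

# snapshots of a synthetic two-mode field u(x, t)
x = np.linspace(0.0, np.pi, 200)
t = np.linspace(0.0, 2.0 * np.pi, 50)
snaps = (np.outer(np.sin(x), np.cos(t))
         + 0.1 * np.outer(np.sin(2.0 * x), np.cos(3.0 * t)))
modes, r = pod_basis(snaps)
recon = modes @ (modes.T @ snaps)     # projection on the reduced basis
err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
```

    Evaluating a solution in such a reduced basis is cheap enough for the kilohertz online phase; the offline cost is paid once, in computing the meta-model.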

  14. NeuroPhysics: Studying how neurons create the perception of space-time using Physics' tools and techniques

    NASA Astrophysics Data System (ADS)

    Dhingra, Shonali; Sandler, Roman; Rios, Rodrigo; Vuong, Cliff; Mehta, Mayank

    All animals naturally perceive the abstract concept of space-time. A brain region called the hippocampus is known to be important in creating these perceptions, but the underlying mechanisms are unknown. In our lab we employ several experimental and computational techniques from physics to tackle this fundamental puzzle. Experimentally, we use ideas from nanoscience and materials science to develop techniques to measure the activity of hippocampal neurons in freely-behaving animals. Computationally, we develop models to study neuronal activity patterns, which are point processes that are highly stochastic and multidimensional. We then apply these techniques to collect and analyze neuronal signals from rodents while they are exploring space in the real world or in virtual reality with various stimuli. Our findings show that under these conditions neuronal activity depends on various parameters, such as sensory cues, including visual and auditory, and behavioral cues, including linear and angular position and velocity. Further, neuronal networks create internally-generated rhythms, which influence the perception of space and time. Taken together, these results further our understanding of how the brain develops a cognitive map of our surrounding space and keeps track of time.

  15. Application of solar energy to air conditioning systems

    NASA Technical Reports Server (NTRS)

    Nash, J. M.; Harstad, A. J.

    1976-01-01

    The results of a survey of solar energy system applications of air conditioning are summarized. Techniques discussed are both solar powered (absorption cycle and the heat engine/Rankine cycle) and solar related (heat pump). Brief descriptions of the physical implications of various air conditioning techniques, discussions of status, proposed technological improvements, methods of utilization and simulation models are presented, along with an extensive bibliography of related literature.

  16. Basic Remote Sensing Investigations for Beach Reconnaissance.

    DTIC Science & Technology

    Progress is reported on three tasks designed to develop remote sensing beach reconnaissance techniques applicable to the benthic, beach intertidal...and beach upland zones. Task 1 is designed to develop remote sensing indicators of important beach composition and physical parameters which will...ultimately prove useful in models to predict beach conditions. Task 2 is designed to develop remote sensing techniques for survey of bottom features in

  17. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    NASA Astrophysics Data System (ADS)

    DeVisscher, A.; Dewulf, J.; Van Durme, J.; Leys, C.; Morent, R.; Van Langenhove, H.

    2008-02-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results of allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data with a standard deviation of less than 2 ppm.

  18. Perceived sports competence mediates the relationship between childhood motor skill proficiency and adolescent physical activity and fitness: a longitudinal assessment

    PubMed Central

    Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R

    2008-01-01

    Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148
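    The mediation logic in such models can be illustrated with a product-of-coefficients sketch: regress the mediator on the predictor, then the outcome on both, and multiply the two path coefficients (simulated data with hypothetical effect sizes, not the study's structural equation models):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation estimate: a is the effect of the
    predictor x on the mediator m; b is the effect of m on the outcome y
    controlling for x; the indirect (mediated) effect is a*b. A minimal
    OLS stand-in for full structural equation modelling."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b, a, b

# simulated data with a known mediated pathway x -> m -> y
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                    # e.g. childhood skill score
m = 0.6 * x + 0.5 * rng.standard_normal(1000)    # mediator, a = 0.6
y = 0.7 * m + 0.5 * rng.standard_normal(1000)    # outcome, b = 0.7, no direct path
ab, a, b = indirect_effect(x, m, y)              # indirect effect near 0.42
```

    Full structural equation modelling adds measurement models and fit indices on top of this path logic, but the estimated indirect pathway is the same a*b quantity.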

  19. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

  20. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability of an underwater network. Robust point-to-point communication has become increasingly critical in several military and civilian applications of underwater networks. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time-scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to the more general multi-scale model). We derive a novel technique, called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time-scale distortion. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing, and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of cooperation among rational sensor nodes whose objective is to improve their individual data rates.
We first consider the problem of transmitter cooperation in a multiple access channel and investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, showing that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  1. Workshop: Economic Research and Policy Concerning Water Use and Watershed Management (1999)

    EPA Pesticide Factsheets

    Workshop proceedings: Integrating Economic and Physical Models in Water and Watershed Research, Methods for Measuring Stakeholder Values of Water Quality and Watershed Protection, and Applications of Stakeholder Valuation Techniques for Watersheds and WQ

  2. Psychophysically based model of surface gloss perception

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.; Pellacini, Fabio; Greenberg, Donald P.

    2001-06-01

    In this paper we introduce a new model of surface appearance that is based on quantitative studies of gloss perception. We use image synthesis techniques to conduct experiments that explore the relationships between the physical dimensions of glossy reflectance and the perceptual dimensions of glossy appearance. The product of these experiments is a psychophysically-based model of surface gloss, with dimensions that are both physically and perceptually meaningful and scales that reflect our sensitivity to gloss variations. We demonstrate that the model can be used to describe and control the appearance of glossy surfaces in synthetic images, allowing prediction of gloss matches and quantification of gloss differences. This work represents some initial steps toward developing psychophysical models of the goniometric aspects of surface appearance to complement widely-used colorimetric models.

  3. A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1988-01-01

    Research has been undertaken to achieve an improved understanding of physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary to successfully develop advanced propulsion systems now planned for use late in this century and beyond. In order to explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion, and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs were constructed to consider the multicomponent diffusion and convection of important chemical species, the finite rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.

  4. (Bayesian) Inference for X-ray Timing

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela

    2016-07-01

    Fourier techniques have been incredibly successful in describing variability of X-ray binaries (XRBs) and Active Galactic Nuclei (AGN). The detection and characterization of both broadband noise components and quasi-periodic oscillations, as well as their behavior in the context of spectral changes during XRB outbursts, has become an important tool for studying the physical processes of accretion and ejection in these systems. In this talk, I will review state-of-the-art techniques for characterizing variability in compact objects and show how these methods help us understand the causes of the observed variability and how we may use it to probe fundamental physics. Despite numerous successes, however, it has also become clear that many scientific questions cannot be answered with traditional timing methods alone. I will therefore also present recent advances in modeling variability with generative models, some in the time domain such as CARMA, and discuss where these methods might lead us in the future.

  5. Near Earth Asteroid Characteristics for Asteroid Threat Assessment

    NASA Technical Reports Server (NTRS)

    Dotson, Jessie

    2015-01-01

    Information about the physical characteristics of Near Earth Asteroids (NEAs) is needed to model behavior during atmospheric entry, to assess the risk of an impact, and to model possible mitigation techniques. The intrinsic properties of interest to entry and mitigation modelers, however, are rarely directly measurable. Instead we measure other properties and infer the intrinsic physical properties, so determining the complete set of characteristics of interest is far from straightforward. In addition, for the majority of NEAs only basic measurements exist, so properties must often be inferred from statistics of the population of more completely characterized objects. We will provide an assessment of the current state of knowledge about the physical characteristics of importance to asteroid threat assessment. In addition, an ongoing effort to collate NEA characteristics into a readily accessible database for use by the planetary defense community will be discussed.

  6. Novel dark matter phenomenology at colliders

    NASA Astrophysics Data System (ADS)

    Wardlow, Kyle Patrick

    While a suitable candidate particle for dark matter (DM) has yet to be discovered, it is possible one will be found by experiments currently investigating physics on the weak scale. If discovered on that energy scale, the dark matter will likely be producible in significant quantities at colliders like the LHC, allowing the properties of, and the underlying physical model characterizing, the dark matter to be precisely determined. I assume that the dark matter will be produced as one of the decay products of a new massive resonance related to physics beyond the Standard Model (SM). Using the energy distributions of the associated visible decay products, I develop techniques for determining the symmetry that protects these potential dark matter candidates from decaying into lighter SM particles, and for simultaneously measuring the masses of both the dark matter candidate and the particle from which it decays.

  7. Behaviour change techniques in physical activity interventions for men with prostate cancer: A systematic review.

    PubMed

    Hallward, Laura; Patel, Nisha; Duncan, Lindsay R

    2018-02-01

    Physical activity interventions can improve prostate cancer survivors' health. Determining the behaviour change techniques used in physical activity interventions can help elucidate the mechanisms by which an intervention successfully changes behaviour. The purpose of this systematic review was to identify and evaluate behaviour change techniques in physical activity interventions for prostate cancer survivors. A total of 7 databases were searched and 15 studies were retained. The studies included a mean of 6.87 behaviour change techniques (range = 3-10), and similar behaviour change techniques were implemented in all studies. Consideration of how behaviour change techniques are implemented may help identify how behaviour change techniques enhance physical activity interventions for prostate cancer survivors.

  8. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  9. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of real phenomena, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for tackling hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to review in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation will be carried out to complete the review of these techniques.
We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks that ease the implementation of these algorithms.
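
    As a concrete instance of a physics-inspired meta-heuristic, simulated annealing (modeled on thermal annealing via the Metropolis acceptance rule) can be sketched in a few lines. The objective function, cooling schedule and move size below are illustrative choices, not taken from the review.

```python
# Minimal simulated annealing: the archetypal physics-based meta-heuristic.
import math
import random

random.seed(1)

def f(x):
    # Multimodal 1-D objective with global minimum f(0) = 0.
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

x = random.uniform(-5, 5)        # current state
best, best_x = f(x), x
T = 10.0                         # initial "temperature"
for step in range(20000):
    cand = x + random.gauss(0, 0.5)          # random move proposal
    dE = f(cand) - f(x)
    # Metropolis rule: always accept downhill, sometimes accept uphill.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand
        if f(x) < best:
            best, best_x = f(x), x
    T *= 0.9995                  # geometric cooling schedule
print(f"best f = {best:.4f} at x = {best_x:.4f}")
```

    The uphill-acceptance probability exp(-dE/T) is exactly the Boltzmann factor; cooling T slowly is what lets the search escape local minima early and settle late.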

  10. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  11. Using Rasch Analysis to Examine the Dimensionality Structure and Differential Item Functioning of the Arabic Version of the Perceived Physical Ability Scale for Children

    ERIC Educational Resources Information Center

    Abd-El-Fattah, Sabry M.; AL-Sinani, Yousra; El Shourbagi, Sahar; Fakhroo, Hessa A.

    2014-01-01

    This study uses the Rasch model technique to examine the dimensionality structure and differential item functioning of the Arabic version of the Perceived Physical Ability Scale for Children (PPASC). A sample of 220 Omani fourth graders (120 males and 100 females) responded to an Arabic translated version of the PPASC. Data on students'…

  12. Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques

    NASA Astrophysics Data System (ADS)

    Spencer, E.; Rao, A.; Horton, W.; Mays, L.

    2008-12-01

    ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid Physics/Black-Box model of the nightside magnetosphere. The core physics is contained in a low order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet based mappings of the model output field aligned currents onto the ground based magnetometer measurements of the AL index and Dst index. The black box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm, and trained using the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.

  13. Optimization of the ANFIS using a genetic algorithm for physical work rate classification.

    PubMed

    Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali

    2018-03-13

    Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.
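
    The GA side of such a hybrid can be sketched generically. The toy below tunes two parameters of a linear model by minimizing RMSE; in the study the same selection/crossover/mutation loop tunes ANFIS membership-function parameters instead. Population sizes, rates and the synthetic data are illustrative assumptions.

```python
# Minimal genetic-algorithm loop for tuning model parameters by RMSE.
import math
import random

random.seed(2)
xs = [i / 10 for i in range(50)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]   # synthetic data

def rmse(ind):
    a, b = ind
    return math.sqrt(sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=rmse)
    elite = pop[:10]                          # selection: keep the fittest
    children = []
    while len(children) < 30:
        p1, p2 = random.sample(elite, 2)
        w = random.random()                   # arithmetic crossover
        child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        if random.random() < 0.3:             # mutation
            child[random.randrange(2)] += random.gauss(0, 0.5)
        children.append(child)
    pop = elite + children                    # elitist replacement

best = min(pop, key=rmse)
print(f"a = {best[0]:.2f}, b = {best[1]:.2f}, rmse = {rmse(best):.3f}")
```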

  14. Using integral dispersion relations to extend the LHC reach for new physics

    NASA Astrophysics Data System (ADS)

    Denton, Peter B.; Weiler, Thomas J.

    2014-02-01

    Many models of electroweak symmetry breaking predict new particles with masses at or just beyond LHC energies. Even if these particles are too massive to be produced on-shell at the LHC, it may be possible to see evidence of their existence through the use of integral dispersion relations (IDRs). Making use of Cauchy's integral formula and the analyticity of the scattering amplitude, IDRs are sensitive in principle to changes in the cross section at arbitrarily large energies. We investigate some models of new physics. We find that a sudden, order-one increase in the cross section above new particle mass thresholds can be inferred well below the threshold energy. On the other hand, for two more physical models of particle production, we show that the reach in energy and the signal strength of the IDR technique is greatly reduced. The peak sensitivity for the IDR technique is shown to occur when the new particle masses are near the machine energy, an energy where direct production of new particles is kinematically disallowed, phase-space suppressed, or, if applicable, suppressed by the soft parton distribution functions. Thus, IDRs do extend the reach of the LHC, but only to a window around Mχ ~ √(s_LHC).

  15. Bayesian analysis of anisotropic cosmologies: Bianchi VIIh and WMAP

    NASA Astrophysics Data System (ADS)

    McEwen, J. D.; Josset, T.; Feeney, S. M.; Peiris, H. V.; Lasenby, A. N.

    2013-12-01

    We perform a definitive analysis of Bianchi VIIh cosmologies with Wilkinson Microwave Anisotropy Probe (WMAP) observations of the cosmic microwave background (CMB) temperature anisotropies. Bayesian analysis techniques are developed to study anisotropic cosmologies using full-sky and partial-sky masked CMB temperature data. We apply these techniques to analyse the full-sky internal linear combination (ILC) map and a partial-sky masked W-band map of WMAP 9 yr observations. In addition to the physically motivated Bianchi VIIh model, we examine phenomenological models considered in previous studies, in which the Bianchi VIIh parameters are decoupled from the standard cosmological parameters. In the two phenomenological models considered, Bayes factors of 1.7 and 1.1 units of log-evidence favouring a Bianchi component are found in full-sky ILC data. The corresponding best-fitting Bianchi maps recovered are similar for both phenomenological models and are very close to those found in previous studies using earlier WMAP data releases. However, no evidence for a phenomenological Bianchi component is found in the partial-sky W-band data. In the physical Bianchi VIIh model, we find no evidence for a Bianchi component: WMAP data thus do not favour Bianchi VIIh cosmologies over the standard Λ cold dark matter (ΛCDM) cosmology. It is not possible to discount Bianchi VIIh cosmologies in favour of ΛCDM completely, but we are able to constrain the vorticity of physical Bianchi VIIh cosmologies at (ω/H)₀ < 8.6 × 10⁻¹⁰ with 95 per cent confidence.
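
    The "units of log-evidence" language can be made concrete with a toy Bayes factor that has analytic evidences: a hypothetical coin-bias comparison, unrelated to the WMAP data. A log-evidence difference of order 1-2, as found here for the phenomenological Bianchi models, is conventionally only weak support.

```python
# Toy Bayes factor with closed-form evidences (illustrative, not the WMAP analysis).
import math

n, k = 100, 65                        # hypothetical data: 65 heads in 100 flips
log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
logZ0 = log_binom + n * math.log(0.5)          # evidence under M0: fair coin
# Under M1 (bias ~ Uniform[0,1]) the evidence integral is a Beta function:
# Z1 = C(n,k) * B(k+1, n-k+1) = 1/(n+1).
logZ1 = -math.log(n + 1)
logBF = logZ1 - logZ0                 # log-evidence difference favouring M1
print(f"log Bayes factor (M1 vs M0): {logBF:.2f}")
```

    On the Jeffreys-type scale used in such analyses, a log Bayes factor below about 1 is "not worth more than a bare mention", which is why the 1.1- and 1.7-unit results above are treated cautiously.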

  16. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    PubMed

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  17. Toward a virtual building laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klems, J.H.; Finlayson, E.U.; Olsen, T.H.

    1999-03-01

    In order to achieve in a timely manner the large energy and dollar savings technically possible through improvements in building energy efficiency, it will be necessary to solve the problem of design failure risk. The most economical method of doing this would be to learn to calculate building performance with sufficient detail, accuracy and reliability to avoid design failure. Existing building simulation models (BSMs) are a large step in this direction, but are still not capable of this level of modeling. Developments in computational fluid dynamics (CFD) techniques now allow one to construct a road map from present BSMs to a complete building physical model. The most useful first step is a building interior model (BIM) that would allow prediction of local conditions affecting occupant health and comfort. To provide reliable prediction a BIM must incorporate the correct physical boundary conditions on a building interior. Doing so raises a number of specific technical problems and research questions. The solution of these within a context useful for building research and design is not likely to result from other research on CFD, which is directed toward the solution of different types of problems. A six-step plan for incorporating the correct boundary conditions within the context of the model problem of a large atrium has been outlined. A promising strategy for constructing a BIM is the overset grid technique for representing a building space in a CFD calculation. This technique promises to adapt well to building design and allows a step-by-step approach. A state-of-the-art CFD computer code using this technique has been adapted to the problem and can form the departure point for this research.

  18. Physics Mining of Multi-Source Data Sets

    NASA Technical Reports Server (NTRS)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

    Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as Artificial Neural Nets, which yield a black-box solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, to handle multi-type data sets, and to run in parallel.

  19. Statistical physics of vehicular traffic and some related systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debashish; Santen, Ludger; Schadschneider, Andreas

    2000-05-01

    In the so-called “microscopic” models of vehicular traffic, attention is paid explicitly to each individual vehicle each of which is represented by a “particle”; the nature of the “interactions” among these particles is determined by the way the vehicles influence each others’ movement. Therefore, vehicular traffic, modeled as a system of interacting “particles” driven far from equilibrium, offers the possibility to study various fundamental aspects of truly nonequilibrium systems which are of current interest in statistical physics. Analytical as well as numerical techniques of statistical physics are being used to study these models to understand rich variety of physical phenomena exhibited by vehicular traffic. Some of these phenomena, observed in vehicular traffic under different circumstances, include transitions from one dynamical phase to another, criticality and self-organized criticality, metastability and hysteresis, phase-segregation, etc. In this critical review, written from the perspective of statistical physics, we explain the guiding principles behind all the main theoretical approaches. But we present detailed discussions on the results obtained mainly from the so-called “particle-hopping” models, particularly emphasizing those which have been formulated in recent years using the language of cellular automata.
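
    The best-known "particle-hopping" model in this literature is the Nagel-Schreckenberg cellular automaton. A minimal sketch of its four update rules (acceleration, braking, random slowdown, movement) on a circular road follows; the parameter values are illustrative.

```python
# Nagel-Schreckenberg traffic cellular automaton on a ring (parallel update).
import random

random.seed(3)
L, ncars, vmax, p_slow = 100, 20, 5, 0.3
pos = sorted(random.sample(range(L), ncars))  # distinct cells, in cyclic order
vel = [0] * ncars

for step in range(200):
    new_pos = []
    for i in range(ncars):
        gap = (pos[(i + 1) % ncars] - pos[i] - 1) % L   # empty cells ahead
        v = min(vel[i] + 1, vmax)             # 1. acceleration
        v = min(v, gap)                       # 2. braking (no collisions)
        if v > 0 and random.random() < p_slow:
            v -= 1                            # 3. random slowdown ("noise")
        vel[i] = v
        new_pos.append((pos[i] + v) % L)      # 4. movement
    pos = new_pos

flow = sum(vel) / L                           # flux = density * mean velocity
print(f"mean velocity {sum(vel) / ncars:.2f}, flow {flow:.3f}")
```

    Scanning the car density (ncars/L) and measuring the flow reproduces the fundamental diagram, including the free-flow to jammed transition discussed in the review.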

  20. The applications of statistical quantification techniques in nanomechanics and nanoelectronics.

    PubMed

    Mai, Wenjie; Deng, Xinwei

    2010-10-08

    Although nanoscience and nanotechnology have been developing for approximately two decades and have achieved numerous breakthroughs, experimental results from nanomaterials, with a higher noise level and poorer repeatability than those from bulk materials, remain a practical issue and challenge many techniques for the quantification of nanomaterials. This work proposes a physical-statistical modeling approach and a global fitting statistical method to use all the available discrete data or quasi-continuous curves to quantify a few targeted physical parameters, which can provide more accurate, efficient and reliable parameter estimates, and give reasonable physical explanations. In the resonance method for measuring the elastic modulus of ZnO nanowires (Zhou et al 2006 Solid State Commun. 139 222-6), our statistical technique gives E = 128.33 GPa instead of the original E = 108 GPa, and unveils a negative bias adjustment f(0). The causes are suggested by the systematic bias in measuring the length of the nanowires. In the electronic measurement of the resistivity of a Mo nanowire (Zach et al 2000 Science 290 2120-3), the proposed new method automatically identified the importance of accounting for the Ohmic contact resistance in the model of the Ohmic behavior in nanoelectronics experiments. The 95% confidence interval of resistivity in the proposed one-step procedure is determined to be (3.57 ± 0.0274) × 10⁻⁵ Ω cm, which should be a more reliable and precise estimate. The statistical quantification technique should find wide applications in obtaining better estimations from various systematic errors and biased effects that become more significant at the nanoscale.
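
    The nanowire example can be illustrated with a toy version of the underlying physical-statistical model: two-probe resistance R = R_c + ρL/A, fitted jointly for the resistivity ρ (slope against L/A) and the contact resistance R_c (intercept), so that the contact term does not bias ρ. All numbers below are synthetic stand-ins, not the paper's data.

```python
# Joint fit of resistivity and contact resistance from synthetic R(L) data.
import random
from statistics import mean

random.seed(4)
rho_true, Rc_true = 3.57e-5, 150.0      # ohm*cm and ohm (illustrative values)
A = 7.85e-11                            # wire cross-section in cm^2 (assumed)
lengths = [L * 1e-4 for L in range(2, 12)]            # 2-11 um, in cm
R = [Rc_true + rho_true * L / A + random.gauss(0, 20) for L in lengths]

x = [L / A for L in lengths]            # geometric factor L/A
mx, mR = mean(x), mean(R)
slope = sum((a - mx) * (b - mR) for a, b in zip(x, R)) / sum((a - mx) ** 2 for a in x)
intercept = mR - slope * mx             # estimated contact resistance
print(f"rho = {slope:.3e} ohm*cm, R_contact = {intercept:.1f} ohm")
```

    Fitting R/(L/A) directly, i.e. ignoring the intercept, would fold the contact resistance into ρ; the joint linear fit is the simplest form of the "one-step procedure" idea.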

  1. Maternal factors predicting cognitive and behavioral characteristics of children with fetal alcohol spectrum disorders.

    PubMed

    May, Philip A; Tabachnick, Barbara G; Gossage, J Phillip; Kalberg, Wendy O; Marais, Anna-Susan; Robinson, Luther K; Manning, Melanie A; Blankenship, Jason; Buckley, David; Hoyme, H Eugene; Adnams, Colleen M

    2013-06-01

    To provide an analysis of multiple predictors of cognitive and behavioral traits for children with fetal alcohol spectrum disorders (FASDs). Multivariate correlation techniques were used with maternal and child data from epidemiologic studies in a community in South Africa. Data on 561 first-grade children with fetal alcohol syndrome (FAS), partial FAS (PFAS), or no FASD, and their mothers, were analyzed by grouping 19 maternal variables into categories (physical, demographic, childbearing, and drinking) and used in structural equation models (SEMs) to assess correlates of child intelligence (verbal and nonverbal) and behavior. A first SEM using only 7 maternal alcohol use variables to predict cognitive/behavioral traits was statistically significant (B = 3.10, p < .05) but explained only 17.3% of the variance. The second model incorporated multiple maternal variables and was statistically significant, explaining 55.3% of the variance. Significantly correlated with low intelligence and problem behavior were demographics (B = 3.83, p < .05) (low maternal education, low socioeconomic status [SES], and rural residence) and maternal physical characteristics (B = 2.70, p < .05) (short stature, small head circumference, and low weight). Childbearing history and alcohol use composites were not statistically significant in the final complex model and were overpowered by SES and maternal physical traits. Although other analytic techniques have amply demonstrated the negative effects of maternal drinking on intelligence and behavior, this highly controlled analysis of multiple maternal influences reveals that maternal demographics and physical traits make a significant enabling or disabling contribution to child functioning in FASD.

  2. Aerosol physical properties from satellite horizon inversion

    NASA Technical Reports Server (NTRS)

    Gray, C. R.; Malchow, H. L.; Merritt, D. C.; Var, R. E.; Whitney, C. K.

    1973-01-01

    The feasibility of determining the physical properties of aerosols globally in the altitude region of 10 to 100 km from a satellite horizon scanning experiment is investigated. The investigation utilizes a horizon inversion technique previously developed and extended. Aerosol physical properties such as number density, size distribution, and the real and imaginary components of the index of refraction are demonstrated to be invertible in the aerosol size ranges (0.01-0.1 microns), (0.1-1.0 microns), and (1.0-10 microns). Extensions of previously developed radiative transfer models and recursive inversion algorithms are displayed.

  3. Overview of the Meso-NH model version 5.4 and its applications

    NASA Astrophysics Data System (ADS)

    Lac, Christine; Chaboureau, Jean-Pierre; Masson, Valéry; Pinty, Jean-Pierre; Tulet, Pierre; Escobar, Juan; Leriche, Maud; Barthe, Christelle; Aouizerats, Benjamin; Augros, Clotilde; Aumond, Pierre; Auguste, Franck; Bechtold, Peter; Berthet, Sarah; Bielli, Soline; Bosseur, Frédéric; Caumont, Olivier; Cohard, Jean-Martial; Colin, Jeanne; Couvreux, Fleur; Cuxart, Joan; Delautier, Gaëlle; Dauhut, Thibaut; Ducrocq, Véronique; Filippi, Jean-Baptiste; Gazen, Didier; Geoffroy, Olivier; Gheusi, François; Honnert, Rachel; Lafore, Jean-Philippe; Lebeaupin Brossier, Cindy; Libois, Quentin; Lunet, Thibaut; Mari, Céline; Maric, Tomislav; Mascart, Patrick; Mogé, Maxime; Molinié, Gilles; Nuissier, Olivier; Pantillon, Florian; Peyrillé, Philippe; Pergaud, Julien; Perraud, Emilie; Pianezze, Joris; Redelsperger, Jean-Luc; Ricard, Didier; Richard, Evelyne; Riette, Sébastien; Rodier, Quentin; Schoetter, Robert; Seyfried, Léo; Stein, Joël; Suhre, Karsten; Taufour, Marie; Thouron, Odile; Turner, Sandra; Verrelle, Antoine; Vié, Benoît; Visentin, Florian; Vionnet, Vincent; Wautelet, Philippe

    2018-05-01

    This paper presents the Meso-NH model version 5.4. Meso-NH is an atmospheric non-hydrostatic research model that is applied to a broad range of resolutions, from synoptic to turbulent scales, and is designed for studies of physics and chemistry. It is a limited-area model employing advanced numerical techniques, including monotonic advection schemes for scalar transport and fourth-order centered or odd-order WENO advection schemes for momentum. The model includes state-of-the-art physics parameterization schemes that are important to represent convective-scale phenomena and turbulent eddies, as well as flows at larger scales. In addition, Meso-NH has been expanded to provide capabilities for a range of Earth system prediction applications such as chemistry and aerosols, electricity and lightning, hydrology, wildland fires, volcanic eruptions, and cyclones with ocean coupling. Here, we present the main innovations to the dynamics and physics of the code since the pioneering paper of Lafore et al. (1998) and provide an overview of recent applications and couplings.

  4. Welding arc plasma physics

    NASA Technical Reports Server (NTRS)

    Cain, Bruce L.

    1990-01-01

    The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems, and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight polarity, diffusion dominated free burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.

  5. Perspective: Reaches of chemical physics in biology.

    PubMed

    Gruebele, Martin; Thirumalai, D

    2013-09-28

    Chemical physics as a discipline contributes many experimental tools, algorithms, and fundamental theoretical models that can be applied to biological problems. This is especially true now as the molecular level and the systems level descriptions begin to connect, and multi-scale approaches are being developed to solve cutting edge problems in biology. In some cases, the concepts and tools got their start in non-biological fields, and migrated over, such as the idea of glassy landscapes, fluorescence spectroscopy, or master equation approaches. In other cases, the tools were specifically developed with biological physics applications in mind, such as modeling of single molecule trajectories or super-resolution laser techniques. In this introduction to the special topic section on chemical physics of biological systems, we consider a wide range of contributions, all the way from the molecular level, to molecular assemblies, chemical physics of the cell, and finally systems-level approaches, based on the contributions to this special issue. Chemical physicists can look forward to an exciting future where computational tools, analytical models, and new instrumentation will push the boundaries of biological inquiry.

  6. Perspective: Reaches of chemical physics in biology

    PubMed Central

    Gruebele, Martin; Thirumalai, D.

    2013-01-01

    Chemical physics as a discipline contributes many experimental tools, algorithms, and fundamental theoretical models that can be applied to biological problems. This is especially true now as the molecular level and the systems level descriptions begin to connect, and multi-scale approaches are being developed to solve cutting edge problems in biology. In some cases, the concepts and tools got their start in non-biological fields, and migrated over, such as the idea of glassy landscapes, fluorescence spectroscopy, or master equation approaches. In other cases, the tools were specifically developed with biological physics applications in mind, such as modeling of single molecule trajectories or super-resolution laser techniques. In this introduction to the special topic section on chemical physics of biological systems, we consider a wide range of contributions, all the way from the molecular level, to molecular assemblies, chemical physics of the cell, and finally systems-level approaches, based on the contributions to this special issue. Chemical physicists can look forward to an exciting future where computational tools, analytical models, and new instrumentation will push the boundaries of biological inquiry. PMID:24089712

  7. A model for teaching and learning spinal thrust manipulation and its effect on participant confidence in technique performance

    PubMed Central

    Wise, Christopher H.; Schenk, Ronald J.; Lattanzi, Jill Black

    2016-01-01

    Background Despite emerging evidence to support the use of high velocity thrust manipulation in the management of lumbar spinal conditions, utilization of thrust manipulation among clinicians remains relatively low. One reason for the underutilization of these procedures may be related to disparity in training in the performance of these techniques at the professional and post-professional levels. Purpose To assess the effect of using a new model of active learning on participant confidence in the performance of spinal thrust manipulation and the implications for its use in the professional and post-professional training of physical therapists. Methods A cohort of 15 DPT students in their final semester of entry-level professional training participated in an active training session emphasizing a sequential partial task practice (SPTP) strategy in which participants engaged in partial task practice over several repetitions with different partners. Participants’ level of confidence in the performance of these techniques was determined through comparison of pre- and post-training session surveys and a post-session open-ended interview. Results The increase in scores across all items of the individual pre- and post-session surveys suggests that this model was effective in changing overall participant perception regarding the effectiveness and safety of these techniques and in increasing student confidence in their performance. Interviews revealed that participants greatly preferred the SPTP strategy, which enhanced their confidence in technique performance. Conclusion Results indicate that this new model of psychomotor training may be effective at improving confidence in the performance of spinal thrust manipulation and, subsequently, may be useful for encouraging the future use of these techniques in the care of individuals with impairments of the spine.
    As such, this method of instruction may be useful for the training of physical therapists at both the professional and post-professional levels. PMID:27559284

  8. Ultracold Neutron Sources

    NASA Astrophysics Data System (ADS)

    Martin, Jeffery

    2016-09-01

    The free neutron is an excellent laboratory for searches for physics beyond the standard model. Ultracold neutrons (UCN) are free neutrons that can be confined to material, magnetic, and gravitational traps. UCN are compelling for experiments requiring long observation times, high polarization, or low energies. The challenge for these experiments has been to create enough UCN to reach the statistical precision required. Production techniques involving neutron interactions with condensed matter systems have resulted in some successes, and new UCN sources are being pursued worldwide to exploit higher UCN densities offered by these techniques. I will review the physics of how the UCN sources work, along with the present status of the world's efforts. Research supported by NSERC, CFI, and CRC.

  9. The Physics of a Gymnastics Flight Element

    NASA Astrophysics Data System (ADS)

    Contakos, Jonas; Carlton, Les G.; Thompson, Bruce; Suddaby, Rick

    2009-09-01

    From its inception, performance in the sport of gymnastics has relied on the laws of physics to create movement patterns and static postures that appear almost impossible. In general, gymnastics is physics in motion and can provide an ideal framework for studying basic human modeling techniques and physical principles. Using low-end technology and basic principles of physics, we analyzed a high-end gymnastics skill performed by both men and women. The comprehensive goal of the examination is to scientifically understand how a skill of this magnitude is actually physically possible and what a gymnast must do to successfully complete it. The examination is divided into three sections, each of which is comprehensive enough to be a separate assignment or small group project.

  10. Bulk Growth of Wide Band Gap II-VI Compound Semiconductors by Physical Vapor Transport

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua

    1997-01-01

    The mechanism of physical vapor transport of II-VI semiconducting compounds was studied both theoretically, using a one-dimensional diffusion model, and experimentally. It was found that the vapor phase stoichiometry is critical in determining the vapor transport rate. Experimental heat treatment methods to control the vapor composition over the starting materials were investigated, and the effectiveness of the heat treatments was confirmed by partial pressure measurements using an optical absorption technique. The effect of residual (foreign) gas on the transport rate was also studied theoretically with the diffusion model and confirmed experimentally by measurements of the total pressure and composition of the residual gas. An in-situ dynamic technique for transport rate measurements, and a further extension of the technique that simultaneously measured partial pressures and transport rates, were performed and, for the first time, the experimentally determined mass fluxes were compared with those calculated, without any adjustable parameters, from the diffusion model. Using the information obtained from the experimental transport rate measurements as a guideline, high-quality bulk crystals of wide band gap II-VI semiconductors were grown from source materials that had undergone the same heat treatment methods. The grown crystals were then extensively characterized, with emphasis on the analysis of crystalline structural defects.

  11. R symmetries and a heterotic MSSM

    NASA Astrophysics Data System (ADS)

    Kappl, Rolf; Nilles, Hans Peter; Schmitz, Matthias

    2015-02-01

    We employ powerful techniques based on Hilbert and Gröbner bases to analyze particle physics models derived from string theory. Individual models are shown to have a huge landscape of vacua that differ in their phenomenological properties. We explore the (discrete) symmetries of these vacua, the new R symmetry selection rules and their consequences for moduli stabilization.

  12. Particle-mesh techniques

    NASA Technical Reports Server (NTRS)

    Macneice, Peter

    1995-01-01

    This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
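    The standard particle-mesh cycle the lecture notes cover has three steps: deposit particle charge onto a grid, solve the field equation on the grid, and interpolate the field back to the particles. The following is a minimal 1D periodic sketch of that cycle (cloud-in-cell weights plus an FFT Poisson solve); all sizes and the unit system are illustrative assumptions, not taken from the lecture course.

```python
import numpy as np

ng, L = 64, 1.0                    # grid cells, periodic domain length
dx = L / ng
rng = np.random.default_rng(1)
x = rng.uniform(0, L, size=1000)   # particle positions
q = np.full(x.size, 1.0 / x.size)  # equal charges, unit total charge

# 1) Cloud-in-cell deposition: each particle splits its charge between the
#    two nearest grid points, weighted by distance.
rho = np.zeros(ng)
j = np.floor(x / dx).astype(int) % ng
f = x / dx - np.floor(x / dx)            # fractional position within the cell
np.add.at(rho, j, q * (1 - f) / dx)
np.add.at(rho, (j + 1) % ng, q * f / dx)

# 2) Poisson solve d2(phi)/dx2 = -rho via FFT on the zero-mean source
#    (in Fourier space: -k^2 phi_k = -rho_k, so phi_k = rho_k / k^2).
rho_k = np.fft.rfft(rho - rho.mean())
k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = rho_k[1:] / k[1:]**2
phi = np.fft.irfft(phi_k, n=ng)

# 3) Field on the grid (centered difference), gathered back to the particles
#    with the same CIC weights used for deposition (avoids self-forces).
E = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)
E_part = E[j] * (1 - f) + E[(j + 1) % ng] * f
```

In a full code this kernel sits inside a leapfrog time loop; using identical weights for deposit and gather is the standard choice because it keeps momentum conservation exact on the mesh.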

  13. Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN

    NASA Astrophysics Data System (ADS)

    Schneider, Evan; Impey, C.

    2012-01-01

    We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
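    Because the three-component model above is linear in the component intensities once the spectral shapes are fixed, the chi-square minimization over amplitudes reduces to weighted linear least squares. The sketch below shows that reduction on invented template shapes and noiseless synthetic photometry; the templates, bands, and error model are placeholders, not those of the paper.

```python
import numpy as np

# Hypothetical observed bands (microns) and three crude component shapes:
# a rising IR power law, a blue accretion-disk continuum, and a host-galaxy
# template with a stellar bump near 1.6 um.
wave = np.array([0.36, 0.44, 0.55, 0.65, 0.80, 1.2, 2.2, 3.6, 4.5, 8.0])
power_law = wave**1.2
disk = wave**(-1.0 / 3.0)
galaxy = np.exp(-0.5 * ((wave - 1.6) / 0.8)**2)

A = np.column_stack([power_law, disk, galaxy])   # design matrix of templates
truth = np.array([0.5, 1.0, 2.0])                # component intensities
flux = A @ truth                                 # synthetic SED (noiseless)
sigma = 0.05 * flux                              # assumed photometric errors

# Chi-square minimization for a model linear in its amplitudes is exactly
# weighted least squares: scale rows by 1/sigma and solve.
Aw = A / sigma[:, None]
fw = flux / sigma
amps, *_ = np.linalg.lstsq(Aw, fw, rcond=None)
chi2_red = ((flux - A @ amps)**2 / sigma**2).sum() / (len(wave) - 3)
```

In the real selection problem each object gets its own fit, and the recovered amplitude triple is the point in parameter space used to separate broad-line AGN from quiescent and star-bursting galaxies.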

  14. Evolution and enabling capabilities of spatially resolved techniques for the characterization of heterogeneously catalyzed reactions

    DOE PAGES

    Morgan, Kevin; Touitou, Jamal; Choi, Jae -Soon; ...

    2016-01-15

    The development and optimization of catalysts and catalytic processes require knowledge of reaction kinetics and mechanisms. In traditional catalyst kinetic characterization, the gas composition is known at the inlet, and the exit flow is measured to determine changes in concentration. As such, the progression of the chemistry within the catalyst is not known. Technological advances in electromagnetic and physical probes have made visualizing the evolution of the chemistry within catalyst samples a reality, as part of a methodology commonly known as spatial resolution. Herein, we discuss and evaluate the development of spatially resolved techniques, including the evolution and achievements of this growing area of catalytic research. The impact of such techniques is discussed in terms of the invasiveness of physical probes on catalytic systems, as well as how experimentally obtained spatial profiles can be used in conjunction with kinetic modeling. Moreover, some aims and aspirations for further evolution of spatially resolved techniques are considered.

  15. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
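    The core computational move described above is to encode smoothness in a sparse precision (inverse covariance) matrix and let sparse observations update it, so that the posterior delivers both estimates and credible intervals. The toy below does this for a 1D latent field with a random-walk precision; the field, noise levels, and smoothness weight are all illustrative assumptions, far simpler than the paper's SPDE-based spatial model.

```python
import numpy as np

n = 50
# First-order random-walk precision: tridiagonal, penalizing differences
# between neighbouring sites (the Markov structure that makes GMRFs cheap).
Q = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Q[0, 0] = Q[-1, -1] = 1.0
tau = 10.0                                  # prior smoothness weight

rng = np.random.default_rng(2)
x_true = np.sin(np.linspace(0, np.pi, n))   # latent field to recover
obs_idx = rng.choice(n, size=15, replace=False)  # sparse, irregular coverage
sigma = 0.1
y = x_true[obs_idx] + rng.normal(scale=sigma, size=obs_idx.size)

# Observation matrix picking out the observed sites
H = np.zeros((obs_idx.size, n))
H[np.arange(obs_idx.size), obs_idx] = 1.0

# Gaussian conjugacy: posterior precision = prior precision + data precision
Q_post = tau * Q + H.T @ H / sigma**2
mu_post = np.linalg.solve(Q_post, H.T @ y / sigma**2)
sd_post = np.sqrt(np.diag(np.linalg.inv(Q_post)))
lower, upper = mu_post - 1.96 * sd_post, mu_post + 1.96 * sd_post
```

At this small size a dense solve is fine; the point of the GMRF formulation is that `Q_post` stays sparse in realistic problems, so the same solves scale to large spatial fields via sparse Cholesky factorizations.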

  16. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  17. 4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model

    NASA Astrophysics Data System (ADS)

    Tuna, Hakan; Arikan, Feza; Arikan, Orhan

    2016-07-01

    Ionospheric imaging is an important subject in ionospheric studies. GPS based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model based 3D electron density estimations provide physically feasible distributions. However, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite - receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is considered as an optimization problem where the optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated. Results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere.
    Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
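    The Kalman tracking step invoked above exploits exactly the temporal correlation mentioned: per-epoch estimates are treated as noisy measurements of a slowly drifting state. A minimal scalar random-walk Kalman filter illustrates the idea; the simulated TEC-like signal and the process/measurement variances are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(3)
steps = 200
truth = np.cumsum(rng.normal(scale=0.1, size=steps)) + 20.0  # slow drift
z = truth + rng.normal(scale=1.0, size=steps)                # noisy per-epoch estimates

q, r = 0.1**2, 1.0**2     # process and measurement noise variances
x, p = z[0], r            # initialize state from the first measurement
filtered = np.empty(steps)
for t in range(steps):
    p = p + q                    # predict: random-walk state, variance grows
    k = p / (p + r)              # Kalman gain balances prior vs. measurement
    x = x + k * (z[t] - x)       # update with the new measurement
    p = (1 - k) * p              # posterior variance shrinks after the update
    filtered[t] = x

rmse_raw = np.sqrt(np.mean((z - truth)**2))
rmse_kf = np.sqrt(np.mean((filtered - truth)**2))
```

Because the gain settles near sqrt(q/r) for this model, the filter averages over roughly 1/k recent epochs, which is how temporal correlation reduces both error and, in the tomography setting, per-epoch computational load.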

  18. Service Learning In Physics: The Consultant Model

    NASA Astrophysics Data System (ADS)

    Guerra, David

    2005-04-01

    Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper level physics courses apply their problem solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girls Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League Robotics competition. In their role as consultants the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service-learning provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M.; Yusoff, W. I. B. W.

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which rests on numerical experiments. Thus, instead of using a core sample, we use a thin section image of carbonate rock to measure the effective velocity of a seismic wave travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated propagating in the thin section image using the finite difference time domain method, under the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared with velocities modeled using the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate a pore structure parameter, deviates significantly for samples having more than 40% porosity.
    Utilizing this approach, we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave velocity of rock.
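    The Wyllie time-average relation used as one of the benchmarks above simply adds the slownesses of matrix and fluid in proportion to their volume fractions: 1/V = phi/V_fluid + (1 - phi)/V_matrix. The snippet below evaluates it with typical textbook values for calcite and brine (not the paper's measured values), which also makes plain why it fails on complex pores: porosity is its only rock parameter.

```python
import numpy as np

def wyllie_velocity(phi, v_matrix=6640.0, v_fluid=1500.0):
    """Effective P-wave velocity (m/s) from the Wyllie time average:
    1/V = phi/V_fluid + (1 - phi)/V_matrix."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

porosity = np.linspace(0.0, 0.5, 6)
v = wyllie_velocity(porosity)
# Velocity falls monotonically with porosity; no pore-shape term appears,
# which is why the relation degrades for high-porosity carbonate textures.
assert np.all(np.diff(v) < 0)
```

Kuster-Toksoz, by contrast, takes pore aspect ratio as an input, which is exactly the quantity the paper extracts from thin section images.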

  20. From Particle Physics to Medical Applications

    NASA Astrophysics Data System (ADS)

    Dosanjh, Manjit

    2017-06-01

    CERN is the world's largest particle physics research laboratory. Since it was established in 1954, it has made an outstanding contribution to our understanding of the fundamental particles and their interactions, and also to the technologies needed to analyse their properties and behaviour. The experimental challenges have pushed the performance of particle accelerators and detectors to the limits of our technical capabilities, and these groundbreaking technologies can also have a significant impact in applications beyond particle physics. In particular, the detectors developed for particle physics have led to improved techniques for medical imaging, while accelerator technologies lie at the heart of the irradiation methods that are widely used for treating cancer. Indeed, many important diagnostic and therapeutic techniques used by healthcare professionals are based either on basic physics principles or the technologies developed to carry out physics research. Ever since the discovery of x-rays by Roentgen in 1895, physics has been instrumental in the development of technologies in the biomedical domain, including the use of ionizing radiation for medical imaging and therapy. Some key examples that are explored in detail in this book include scanners based on positron emission tomography, as well as radiation therapy for cancer treatment. Even the collaborative model of particle physics is proving to be effective in catalysing multidisciplinary research for medical applications, ensuring that pioneering physics research is exploited for the benefit of all.

  1. Multispectral system analysis through modeling and simulation

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Gleason, J. M.; Cicone, R. C.

    1977-01-01

    The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.

  2. Multispectral system analysis through modeling and simulation

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Gleason, J. M.; Cicone, R. C.

    1977-01-01

    The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in LANDSAT data, examining system design and operational configuration, and development of information extraction techniques.

  3. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of the paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be easily adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.
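    The usual scheme behind parallel state space construction of the kind cited is hash partitioning: every state has an "owner" worker determined by a hash, successors generated anywhere are routed to their owner, and each worker stores only its own partition of the visited set. The sketch below simulates that routing sequentially on a hypothetical toy transition system (a bounded two-counter machine); it is not FOCOVE's algorithm or an MLTS, just the partitioning idea.

```python
from collections import deque

N_WORKERS = 4
owner = lambda state: hash(state) % N_WORKERS   # static hash partitioning

def successors(state):
    """Hypothetical transition relation: two counters, each bounded by 3."""
    a, b = state
    return [s for s in [(a + 1, b), (a, b + 1)] if s[0] <= 3 and s[1] <= 3]

# Per-worker visited sets and work queues stand in for distributed memory
visited = [set() for _ in range(N_WORKERS)]
queues = [deque() for _ in range(N_WORKERS)]
init = (0, 0)
queues[owner(init)].append(init)

while any(queues):
    for w in range(N_WORKERS):
        while queues[w]:
            s = queues[w].popleft()
            if s in visited[w]:
                continue                          # duplicate detected locally
            visited[w].add(s)
            for t in successors(s):
                queues[owner(t)].append(t)        # route to the owning worker

total_states = sum(len(v) for v in visited)       # 16 states: (0..3) x (0..3)
```

Because duplicate detection happens only at a state's owner, the global visited set never needs to fit in one worker's memory, which is precisely the limiting factor the abstract identifies.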

  4. Some case studies of ocean wave physical processes utilizing the GSFC airborne radar ocean wave spectrometer

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.

    1984-01-01

    The NASA Ku-band Radar Ocean Wave Spectrometer (ROWS) is an experimental prototype of a possible future satellite instrument for low-data-rate global wave measurements. The ROWS technique, which utilizes a short-pulse radar altimeter in a conical scan mode near vertical incidence to map the directional slope spectrum in wave number and azimuth, is briefly described. The potential of the technique is illustrated by some specific case studies of wave physical processes utilizing the aircraft ROWS data. These include: (1) an evaluation of numerical hindcast model performance in storm sea conditions, (2) a study of fetch-limited wave growth, and (3) a study of the fully developed sea state. Results of these studies, which are briefly summarized, show how directional wave spectral observations from a mobile platform can contribute enormously to our understanding of wave physical processes.

  5. Crustal modeling of the central part of the Northern Western Desert, Egypt using gravity data

    NASA Astrophysics Data System (ADS)

    Alrefaee, H. A.

    2017-05-01

    The Bouguer anomaly map of the central part of the Northern Western Desert, Egypt, was used to construct six 2D gravity models to investigate the nature, physical properties, and structure of the crust and upper mantle. The crustal models were constrained and constructed by integrating results from different geophysical techniques with available geological information. The depth to the basement surface from eight wells across the study area, the depths to the Conrad and Moho interfaces, and the physical properties of the sediments, basement, crust, and upper mantle from previous petrophysical and crustal studies were used to establish the gravity models. The Euler deconvolution technique was applied to the Bouguer anomaly map to detect subsurface fault trends. Edge detection techniques were used to outline the boundaries of subsurface structural features. A basement structural map was interpreted to reveal the subsurface structural setting of the area. The crustal models reveal an increase in the gravity field from south to north due to northward thinning of the crust. The models also reveal a deformed and rugged basement surface whose depth increases northward from 1.6 km to 6 km. In contrast to the basement, the Conrad and Moho interfaces are nearly flat and become shallower northward: the depth to the Conrad interface (the thickness of the upper crust) ranges from 18 km to 21 km, while the depth to the Moho (the crustal thickness) ranges from 31.5 km to 34 km. The crust beneath the study area is normal continental crust, with obvious thinning toward the continental margin at the Mediterranean coast.
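    As background for the Euler deconvolution step above, the standard form of the technique (not specific to this study) solves Euler's homogeneity equation in sliding windows over the gravity grid:

$$(x - x_0)\,\frac{\partial f}{\partial x} + (y - y_0)\,\frac{\partial f}{\partial y} + (z - z_0)\,\frac{\partial f}{\partial z} = N\,(B - f)$$

    where $f$ is the observed field at $(x, y, z)$, $(x_0, y_0, z_0)$ is the unknown source location, $N$ is the structural index (a measure of the source geometry), and $B$ is the regional background. Solving this overdetermined system window by window yields clusters of source solutions that align with subsurface fault trends.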

  6. Graphing Reality

    ERIC Educational Resources Information Center

    Beeken, Paul

    2014-01-01

    Graphing is an essential skill that forms the foundation of any physical science. Understanding the relationships between measurements ultimately determines which modeling equations are successful in predicting observations. Over the years, science and math teachers have approached teaching this skill with a variety of techniques. For secondary…

  7. The dynamic radiation environment assimilation model (DREAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeves, Geoffrey D; Koller, Josef; Tokar, Robert L

    2010-01-01

    The Dynamic Radiation Environment Assimilation Model (DREAM) is a 3-year effort sponsored by the US Department of Energy to provide global, retrospective, or real-time specification of the natural and potential nuclear radiation environments. The DREAM model uses Kalman filtering techniques that combine the strengths of new physical models of the radiation belts with electron observations from long-term satellite systems such as GPS and geosynchronous systems. DREAM includes a physics model for the production and long-term evolution of artificial radiation belts from high altitude nuclear explosions. DREAM has been validated against satellites in arbitrary orbits and consistently produces more accurate results than existing models. Tools for user-specific applications and graphical displays are in beta testing, and a real-time version of DREAM has been in continuous operation since November 2009.
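    The core idea of the Kalman filtering step mentioned above can be sketched in one dimension: a physics-model forecast of electron flux is blended with a satellite observation, weighted by their respective uncertainties. This is a minimal illustrative sketch, not DREAM's actual implementation; all numbers are hypothetical.

```python
# Minimal 1-D Kalman update: blend a model forecast with an observation.
def kalman_update(x_fcst, p_fcst, z_obs, r_obs):
    """Combine forecast x_fcst (variance p_fcst) with observation z_obs (variance r_obs)."""
    k = p_fcst / (p_fcst + r_obs)           # Kalman gain: weight given to the observation
    x_post = x_fcst + k * (z_obs - x_fcst)  # analysis (posterior) state
    p_post = (1.0 - k) * p_fcst             # analysis variance, always <= forecast variance
    return x_post, p_post

# Hypothetical example: model forecasts flux 100 (variance 25); satellite sees 120 (variance 25).
x, p = kalman_update(100.0, 25.0, 120.0, 25.0)
# With equal variances the analysis splits the difference: x == 110.0, p == 12.5
```

    The same update, applied state-vector-wide with full covariance matrices, is what lets an assimilation scheme correct a physics model toward sparse satellite data while reducing uncertainty.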

  8. The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems

    PubMed Central

    Munro, Peter R.T.; Ignatyev, Konstantin; Speller, Robert D.; Olivo, Alessandro

    2013-01-01

    X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. PMID:20389424

  9. The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems.

    PubMed

    Munro, Peter R T; Ignatyev, Konstantin; Speller, Robert D; Olivo, Alessandro

    2010-03-01

    X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation.

  10. Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.

    2003-01-01

    Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.

  11. COMPUTATIONAL CHALLENGES IN BUILDING MULTI-SCALE AND MULTI-PHYSICS MODELS OF CARDIAC ELECTRO-MECHANICS

    PubMed Central

    Plank, G; Prassl, AJ; Augustin, C

    2014-01-01

    Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate, and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between the electrophysiological and mechanical models, particularly when using different spatio-temporal grids for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational load. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050

  12. A Goddard Multi-Scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.; hide

    2008-01-01

    Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that Numerical Weather Prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol, and other data at very fine spatial and temporal scales. Using these satellite data to improve understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art weather research and forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator, together with examples, is presented in this article.

  13. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF, and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
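    The evaluation pipeline described above, fit a predictor of subjective ratings from display parameters, then score it with RMSE and a correlation coefficient, can be sketched with plain least squares. This is a hypothetical toy example, not the study's actual data or models; the ratings and luminance values below are invented for illustration.

```python
import math

def fit_line(x, y):
    """Ordinary least squares for y ≈ a*x + b (a single display parameter as predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def rmse(y, yhat):
    """Root-mean-square error between ratings y and predictions yhat."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(y, yhat)) / len(y))

def pearson(y, yhat):
    """Pearson correlation coefficient between ratings and predictions."""
    my, mh = sum(y) / len(y), sum(yhat) / len(yhat)
    cov = sum((u - my) * (v - mh) for u, v in zip(y, yhat))
    return cov / math.sqrt(sum((u - my) ** 2 for u in y) *
                           sum((v - mh) ** 2 for v in yhat))

# Hypothetical data: quality ratings vs. log10 maximum luminance (exactly linear here).
lum = [2.0, 2.5, 3.0, 3.5]
rating = [40.0, 55.0, 70.0, 85.0]
a, b = fit_line(lum, rating)               # a == 30.0, b == -20.0 for this toy data
pred = [a * x + b for x in lum]
err, corr = rmse(rating, pred), pearson(rating, pred)
```

    In the paper's setting the same scoring (RMSE, Pearson/Spearman) is applied to more flexible learners such as SVMs, which is how the linear baseline is shown to underperform.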

  14. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    We present that analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and buildingmore » behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.« less

  15. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.

  16. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy-state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is also formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free-piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
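    The derivation described in the two records above follows the standard Lagrangian recipe with dissipation. In generic textbook notation (the symbols here are the conventional ones, not necessarily the report's): with generalized coordinates $q_i$, kinetic energy $T$, potential energy $V$, Lagrangian $L = T - V$, and Rayleigh dissipation function $F$, the equations of motion read

$$\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q_i}\right) - \frac{\partial L}{\partial q_i} + \frac{\partial F}{\partial \dot q_i} = Q_i ,$$

    where the $Q_i$ are the remaining non-conservative generalized forces. The dissipation term $\partial F / \partial \dot q_i$ is what accounts for the losses in the lumped volumes, while $Q_i$ carries external forcing such as pressure on the piston.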

  17. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of physical processes, i.e., time series data. These data lend themselves to a common set of statistical techniques and models designed to determine the trends and variability (e.g., seasonality) of the repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
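    The two pre-processing steps the abstract names, interpolation and aggregation, can be sketched in a few lines. This is a generic illustration of the operations, not the Timeseries Toolbox's code; the river-stage numbers are hypothetical.

```python
def interpolate_gaps(values):
    """Linearly fill None gaps in a regularly sampled series (endpoints must be observed)."""
    out = list(values)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:          # find the next observed value
                j += 1
            step = (out[j] - out[i - 1]) / (j - i + 1)
            for k in range(i, j):          # walk linearly across the gap
                out[k] = out[k - 1] + step
            i = j
        i += 1
    return out

def aggregate(values, period):
    """Mean over consecutive blocks of `period` samples (e.g. monthly -> annual)."""
    return [sum(values[i:i + period]) / period
            for i in range(0, len(values), period)]

stage = [1.0, None, 3.0, 4.0]        # hypothetical monthly river stage with one gap
filled = interpolate_gaps(stage)     # -> [1.0, 2.0, 3.0, 4.0]
annual = aggregate(filled, 2)        # -> [1.5, 3.5]
```

    Standardizing exactly this kind of gap-filling and resampling is what lets a web tool accept any template-conforming upload and hand clean series to the downstream trend and seasonality models.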

  18. Control-based continuation: Bifurcation and stability analysis for physical experiments

    NASA Astrophysics Data System (ADS)

    Barton, David A. W.

    2017-02-01

    Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. The method is demonstrated using a physical nonlinear tuned mass damper.
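    The ARX idea above, fit a linear input-output model to measured data and read stability off its autoregressive coefficients, can be sketched in the simplest first-order case. This is a hypothetical illustration (synthetic data, one lag, scalar state), not the paper's periodic ARX formulation.

```python
# Fit x[t+1] = a*x[t] + b*u[t] by least squares; |a| < 1 indicates a stable fixed point.
def fit_arx(x, u):
    """Solve the 2x2 normal equations for (a, b) given state samples x and inputs u."""
    sxx = sum(xi * xi for xi in x[:-1])
    suu = sum(ui * ui for ui in u[:-1])
    sxu = sum(xi * ui for xi, ui in zip(x[:-1], u[:-1]))
    sxy = sum(xi * yi for xi, yi in zip(x[:-1], x[1:]))
    suy = sum(ui * yi for ui, yi in zip(u[:-1], x[1:]))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

# Synthetic "experiment": true dynamics x[t+1] = 0.5*x[t] + 1.0*u[t], noiseless.
u = [1.0, -0.3, 0.8, -0.5, 0.2, 0.9, -0.7, 0.4]
x = [0.0]
for ut in u[:-1]:
    x.append(0.5 * x[-1] + 1.0 * ut)
a, b = fit_arx(x, u)        # recovers a ≈ 0.5, b ≈ 1.0
stable = abs(a) < 1.0       # the fitted pole is inside the unit circle -> stable
```

    The paper's version does the same kind of identification, but with periodic time-varying coefficients around a periodic orbit, so the stability verdict comes from Floquet-type multipliers rather than a single pole.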

  19. Mutual information, neural networks and the renormalization group

    NASA Astrophysics Data System (ADS)

    Koch-Janusz, Maciej; Ringel, Zohar

    2018-06-01

    Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.

  20. Photogrammetric techniques for aerospace applications

    NASA Astrophysics Data System (ADS)

    Liu, Tianshu; Burner, Alpheus W.; Jones, Thomas W.; Barrows, Danny A.

    2012-10-01

    Photogrammetric techniques have been used for measuring the important physical quantities in both ground and flight testing including aeroelastic deformation, attitude, position, shape and dynamics of objects such as wind tunnel models, flight vehicles, rotating blades and large space structures. The distinct advantage of photogrammetric measurement is that it is a non-contact, global measurement technique. Although the general principles of photogrammetry are well known particularly in topographic and aerial survey, photogrammetric techniques require special adaptation for aerospace applications. This review provides a comprehensive and systematic summary of photogrammetric techniques for aerospace applications based on diverse sources. It is useful mainly for aerospace engineers who want to use photogrammetric techniques, but it also gives a general introduction for photogrammetrists and computer vision scientists to new applications.

  1. User's manual for the REEDM (Rocket Exhaust Effluent Diffusion Model) computer program

    NASA Technical Reports Server (NTRS)

    Bjorklund, J. R.; Dumbauld, R. K.; Cheney, C. S.; Geary, H. V.

    1982-01-01

    The REEDM computer program predicts concentrations, dosages, and depositions downwind from normal and abnormal launches of rocket vehicles at NASA's Kennedy Space Center. The atmospheric dispersion models, cloud-rise models, and other formulas used in the REEDM model are described mathematically. Vehicle and source parameters, other pertinent physical properties of the rocket exhaust cloud, and meteorological layering techniques are presented, as well as user's instructions for REEDM. Worked example problems are included.

  2. Representing functions/procedures and processes/structures for analysis of effects of failures on functions and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Leifker, Daniel B.

    1991-01-01

    Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be easily represented using current modeling techniques. An extension of qualitative modeling, known as functional modeling, that captures goal-oriented activities explicitly is proposed, and it is shown how such models may be used to support intelligent automation and fault management.

  3. Theoretical Calculations of Atomic Data for Spectroscopy

    NASA Technical Reports Server (NTRS)

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics better understand the qualities and limitations of various data sources and assess the reliability of spectral models based on those data.

  4. Physical Modeling Techniques for Missile and Other Protective Structures

    DTIC Science & Technology

    1983-06-29

    uniaxial load only. In general, axial thrust was applied with an initial eccentricity of zero on the specimen end. Sixteen different combinations of Pa...conditioning electronics and cabling schemes is included. The techniques described generally represent current approaches at the Civil Engineering Research...at T-zero and stopping when a pulse is generated by the piezoelectric disc on arrival of the detonation wave front. All elapsed time data is stored

  5. NASA Hybrid Reflectometer Project

    NASA Technical Reports Server (NTRS)

    Lynch, Dana; Mancini, Ron (Technical Monitor)

    2002-01-01

    Time-domain and frequency-domain reflectometry have been used for about forty years to locate opens and shorts in cables. Interpretation of reflectometry data is as much art as science. Is there information in the data that is being missed? Can the reflectometers be improved to allow us to detect and locate defects in cables that are not outright shorts or opens? The Hybrid Reflectometer Project was begun this year at NASA Ames Research Center, initially to model wire physics, simulating time-domain reflectometry (TDR) signals in those models and validating the models against actual TDR data taken on testbed cables. Theoretical models of reflectometry in wires will give us an understanding of the merits and limits of these techniques and will guide the application of a proposed hybrid reflectometer with the aim of enhancing reflectometer sensitivity to the point that wire defects can be detected. We will point out efforts by some other researchers to apply wire physics models to the problem of defect detection in wires and we will describe our own initial efforts to create wire physics models and report on testbed validation of the TDR simulations.

  6. Starspot detection and properties

    NASA Astrophysics Data System (ADS)

    Savanov, I. S.

    2013-07-01

    I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.

  7. How to Determine the Centre of Mass of Bodies from Image Modelling

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Rodrigues, Marcelo

    2016-01-01

    Image modelling is a recent technique in physics education that includes digital tools for image treatment and analysis, such as digital stroboscopic photography (DSP) and video analysis software. It is commonly used to analyse the motion of objects. In this work we show how to determine the position of the centre of mass (CM) of objects with…

  8. Wildfire potential evaluation during a drought event with a regional climate model and NDVI

    Treesearch

    Y. Liu; J. Stanturf; S. Goodrick

    2010-01-01

    Regional climate modeling is a technique for simulating high-resolution physical processes in the atmosphere, soil and vegetation. It can be used to evaluate wildfire potential by either providing meteorological conditions for computation of fire indices or predicting soil moisture as a direct measure of fire potential. This study examines these roles using a regional...

  9. Testing Model with "Check Technique" for Physics Education

    ERIC Educational Resources Information Center

    Demir, Cihat

    2016-01-01

    Because the number, date, and form of written tests are structured and teacher-oriented, they are considered to create fear and anxiety among students. It has therefore been found necessary and important to form a testing model that will keep students away from test anxiety and allow them to focus only on the lesson. For this study,…

  10. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  11. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  12. Cross-Section Measurements via the Activation Technique at the Cologne Clover Counting Setup

    NASA Astrophysics Data System (ADS)

    Heim, Felix; Mayer, Jan; Netterdon, Lars; Scholz, Philipp; Zilges, Andreas

    The activation technique is a widely used method for the determination of cross-section values for charged-particle induced reactions at astrophysically relevant energies. Since network calculations of nucleosynthesis processes often depend on reaction rates calculated in the scope of the Hauser-Feshbach statistical model, these cross sections can be used to improve nuclear-physics input parameters such as optical-model potentials (OMPs), γ-ray strength functions, and nuclear level densities. In order to extend the available experimental database, the 108Cd(α, n)111Sn reaction cross section was investigated at ten energies between 10.2 and 13.5 MeV. As this reaction at these energies is sensitive almost exclusively to the α-decay width, the results were compared to statistical model calculations using different models for the α-OMP. The irradiation as well as the subsequent γ-ray counting were performed at the Institute for Nuclear Physics of the University of Cologne using the 10 MV FN-Tandem accelerator and the Cologne Clover Counting Setup. This setup consists of two clover-type high-purity germanium (HPGe) detectors in a close face-to-face geometry, covering a solid angle of almost 4π.

  13. Male-initiated partner abuse during marital separation prior to divorce.

    PubMed

    Toews, Michelle L; McKenry, Patrick C; Catlett, Beth S

    2003-08-01

    The purpose of this study was to assess predictors of male-initiated psychological and physical partner abuse during the separation process prior to divorce among a sample of 80 divorced fathers who reported no physical violence during their marriages. The predictor variables examined were male gender-role identity, female-initiated divorce, dependence on one's former wife, depression, anxiety, and coparental conflict. Through ordinary least squares (OLS) regression techniques, it was found that male gender-role identity was positively related to male-initiated psychological abuse during separation. Logistic regression analyses revealed that male-initiated psychological abuse, anxiety level, coparental conflict, and dependence on one's former spouse increased the odds of a man engaging in physical abuse, while depression decreased the odds of physical abuse during separation. The models predicting male-initiated psychological abuse (F = 2.20, p < .05, R² = .15) and physical violence during the separation process (model χ² = 35.00, df = 7, p < .001) were both significant.

  14. Modelling fungal growth in heterogeneous soil: analyses of the effect of soil physical structure on fungal community dynamics

    NASA Astrophysics Data System (ADS)

    Falconer, R.; Radoslow, P.; Grinev, D.; Otten, W.

    2009-04-01

Fungi play a pivotal role in soil ecosystems, contributing to plant productivity. The underlying soil physical and biological processes responsible for community dynamics are interrelated and, at present, poorly understood. If these complex processes can be understood, this knowledge can be managed with the aim of providing more sustainable agriculture. Our understanding of microbial dynamics in soil has long been hampered by the lack of a theoretical framework and by difficulties in observation and quantification. We will demonstrate how the spatial and temporal dynamics of fungi in soil can be understood by linking mathematical modelling with novel techniques that visualise the complex structure of the soil. The combination of these techniques and mathematical models opens up new possibilities to understand how the physical structure of soil affects fungal colony dynamics and also how fungal dynamics affect soil structure. We quantify, using X-ray tomography, the soil structure of a range of artificially prepared microcosms. We characterise the soil structures using metrics such as porosity, fractal dimension, and the connectivity of the pore volume. Furthermore, we use the individual-based fungal colony growth model of Falconer et al. (2005), which is based on the physiological processes of fungi, to assess the effect of soil structure on microbial dynamics by quantifying biomass abundances and distributions. We demonstrate how soil structure can critically affect fungal species interactions, with consequences for biological control and fungal biodiversity.
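
    The pore-space metrics named above can be computed directly from a binarized tomography volume. The following is a minimal sketch of our own (not the authors' code) for porosity and pore connectivity, assuming a boolean voxel array; "connectivity" is taken here as the fraction of pore voxels in the largest connected cluster, one common definition:

```python
import numpy as np
from scipy import ndimage

def soil_metrics(pore_volume):
    """Porosity and pore connectivity of a binary 3D array.

    pore_volume: boolean array, True where a voxel is pore space.
    Connectivity = fraction of pore voxels in the largest connected
    cluster (6-connectivity, the scipy.ndimage.label default).
    """
    porosity = pore_volume.mean()
    labels, n_clusters = ndimage.label(pore_volume)
    if n_clusters == 0:
        return porosity, 0.0
    sizes = ndimage.sum(pore_volume, labels, range(1, n_clusters + 1))
    connectivity = sizes.max() / pore_volume.sum()
    return porosity, connectivity

# Toy 4x4x4 volume: one 2x2x2 pore cluster plus an isolated pore voxel.
vol = np.zeros((4, 4, 4), dtype=bool)
vol[:2, :2, :2] = True      # 8-voxel connected cluster
vol[3, 3, 3] = True         # isolated pore voxel
phi, conn = soil_metrics(vol)
print(phi, conn)            # porosity = 9/64, connectivity = 8/9
```

    On real tomography data the same function would be applied to the thresholded reconstruction; fractal dimension would need a separate (e.g. box-counting) estimate.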

  15. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select those CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power, and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
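
    For a two-subsystem case, the SEA power balance reduces to a small linear system relating injected powers to subsystem energies through the ILFs and CLFs. A minimal sketch with illustrative values (all numbers are assumptions, not the paper's data):

```python
import numpy as np

# Two-subsystem SEA power balance (standard form):
#   P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
#   P2 = omega * (-eta12 * E1 + (eta2 + eta21) * E2)
omega = 2 * np.pi * 1000.0   # band centre frequency [rad/s]
eta1, eta2 = 0.01, 0.02      # internal loss factors (ILFs)
eta12, eta21 = 0.005, 0.003  # coupling loss factors (CLFs)
P = np.array([1.0, 0.0])     # injected power: only subsystem 1 driven [W]

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12,       eta2 + eta21]])
E = np.linalg.solve(A, P)    # subsystem energies [J]
print(E)
```

    An optimization loop such as the one described would wrap this solve, perturbing the physical parameters (and hence the loss factors and injected power) to steer the energies toward a target.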

  16. Physical and property victimization behind bars: a multilevel examination.

    PubMed

    Lahm, Karen F

    2009-06-01

    The majority of the extant literature on inmate victimization considers only one level of analysis, thus ignoring the interaction effects between inmate- and prison-level variables. To extend this literature, multilevel modeling techniques were used to analyze self-report data from more than 1,000 inmates and 30 prisons in Kentucky, Tennessee, and Ohio. Results revealed that demographic variables were strong predictors of physical victimization (i.e., race and assaultive behavior). Also, security level had a contextual direct effect on physical victimization. Property victimization was best explained with an integrated model including inmate (i.e., race, assaultive behavior, prior education, prior employment, and time served), contextual (i.e., security level and proportion non-White), and micro-macro interaction variables (i.e., Race x Security Level). Policy implications and suggestions for future research are discussed.

  17. Maxwell Prize Talk: Scaling Laws for the Dynamical Plasma Phenomena

    NASA Astrophysics Data System (ADS)

    Ryutov, D. D. (Livermore, CA 94550, USA)

    2017-10-01

The scaling and similarity technique is a powerful tool for developing and testing reduced models of complex phenomena, including plasma phenomena. The technique has been successfully used in identifying appropriate simplified models of transport in quasistationary plasmas. In this talk, the similarity and scaling arguments will be applied to highly dynamical systems, in which temporal evolution of the plasma leads to a significant change of plasma dimensions, shapes, densities, and other parameters with respect to the initial state. The scaling and similarity techniques for dynamical plasma systems will be presented as a set of case studies of problems from various domains of plasma physics, beginning with collisionless plasmas, through intermediate collisionalities, to highly collisional plasmas describable by single-fluid MHD. Basic concepts of the similarity theory will be introduced along the way. Among the results discussed are: self-similarity of Langmuir turbulence driven by a hot electron cloud expanding into a cold background plasma; generation of particle beams in disrupting pinches; interference between collisionless and collisional phenomena in shock physics; similarity for liner-imploded plasmas; and MHD similarities with an emphasis on the effect of small-scale (turbulent) structures on global dynamics. Relations between astrophysical phenomena and scaled laboratory experiments will be discussed.

  18. Electrodynamic balance-mass spectrometry of single particles as a new platform for atmospheric chemistry research

    NASA Astrophysics Data System (ADS)

    Birdsall, Adam W.; Krieger, Ulrich K.; Keutsch, Frank N.

    2018-01-01

New analytical techniques are needed to improve our understanding of the intertwined physical and chemical processes that affect the composition of aerosol particles in the Earth's atmosphere, such as gas-particle partitioning and homogeneous or heterogeneous chemistry, and their ultimate relation to air quality and climate. We describe a new laboratory setup that couples an electrodynamic balance (EDB) to a mass spectrometer (MS). The EDB stores a single laboratory-generated particle in an electric field under atmospheric conditions for an arbitrarily long time. The particle is then transferred via gas flow to an ionization region that vaporizes and ionizes the analyte molecules before MS measurement. We demonstrate the feasibility of the technique by tracking the evaporation of polyethylene glycol molecules and finding agreement with a kinetic model. Fitting data to the kinetic model also allows determination of vapor pressures to within a factor of 2. This EDB-MS system can be used to study fundamental chemical and physical processes involving particles that are difficult to isolate and study with other techniques. The results of such measurements can be used to improve our understanding of atmospheric particles.

  19. Estuarine research; an annotated bibliography of selected literature, with emphasis on the Hudson River estuary, New York and New Jersey

    USGS Publications Warehouse

    Embree, William N.; Wiltshire, Denise A.

    1978-01-01

    Abstracts of 177 selected publications on water movement in estuaries, particularly the Hudson River estuary, are compiled for reference in Hudson River studies. Subjects represented are the hydraulic, chemical, and physical characteristics of estuarine waters, estuarine modeling techniques, and methods of water-data collection and analysis. Summaries are presented in five categories: Hudson River estuary studies; hydrodynamic-model studies; water-quality-model studies; reports on data-collection equipment and methods; and bibliographies, literature reviews, conference proceedings, and textbooks. An author index is included. Omitted are most works published before 1965, environmental-impact statements, theses and dissertations, policy or planning reports, regional or economic reports, ocean studies, studies based on physical models, and foreign studies. (Woodard-USGS)

  20. Using entropy to cut complex time series

    NASA Astrophysics Data System (ADS)

    Mertens, David; Poncela Casasnovas, Julia; Spring, Bonnie; Amaral, L. A. N.

    2013-03-01

Using techniques from statistical physics, physicists have modeled and analyzed human phenomena ranging from academic citation rates to disease spreading to vehicular traffic jams. The last decade's explosion of digital information and the growing ubiquity of smartphones have led to a wealth of human self-reported data. This wealth of data comes at a cost, including non-uniform sampling and statistically significant but physically insignificant correlations. In this talk I present our work using entropy to identify stationary sub-sequences of self-reported human weight from a weight management web site. Our entropic approach, inspired by the infomap network community detection algorithm, is far less biased by rare fluctuations than more traditional time series segmentation techniques. Supported by the Howard Hughes Medical Institute.
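
    The abstract does not give the exact entropy criterion, but the idea of cutting a series where the pieces become maximally homogeneous can be sketched as follows (a hypothetical illustration in the same spirit, not the authors' algorithm):

```python
import numpy as np

def entropy(x, bins=10):
    """Shannon entropy (nats) of the empirical bin distribution of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def best_cut(series, min_len=20, bins=10):
    """Cut point minimizing the length-weighted entropy of the two pieces."""
    n = len(series)
    def cost(k):
        return (k * entropy(series[:k], bins)
                + (n - k) * entropy(series[k:], bins)) / n
    return min(range(min_len, n - min_len), key=cost)

rng = np.random.default_rng(0)
# Two stationary regimes: mean 0, then mean 5; true change point at 100.
series = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
k = best_cut(series)
print(k)   # lands near the true change point at 100
```

    Splitting where each piece is internally homogeneous minimizes the weighted entropy, because mixing the two regimes spreads the histogram over more bins.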

  1. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  2. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
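
    Importance sampling, mentioned above as a way to accelerate Monte Carlo simulation, is easiest to see on a toy rare-event estimate (this example is our own illustration, unrelated to the FOCUS/DEPEND tools): draw from a proposal centered on the rare region and reweight by the density ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 100_000, 4.0

# Naive Monte Carlo: P(Z > 4) is ~3.2e-5, so 1e5 samples see ~3 hits.
naive = (rng.standard_normal(n) > a).mean()

# Importance sampling: draw from N(a, 1) and reweight by the ratio of
# densities w(x) = phi(x) / phi(x - a) = exp(a**2 / 2 - a * x).
x = rng.normal(a, 1.0, n)
w = np.exp(a * a / 2 - a * x)
is_est = np.mean((x > a) * w)
print(naive, is_est)   # the IS estimate is far closer to the true value
```

    Half of the proposal samples land in the rare region, so the estimator's variance drops by orders of magnitude for the same sample budget.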

  3. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.

  4. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and the residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resulting simulations clearly demonstrate the superiority and potential of the proposed technique in terms of quality, performance, and the accuracy of substructure preservation in constructing and predicting solitary pattern solutions for time-fractional dispersive partial differential equations.
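
    The generalized Taylor series that underlies this kind of residual power series construction is commonly written, for a Caputo fractional derivative of order α, as (a sketch of the standard formulation; notation may differ from the paper's):

```latex
u(x,t) \;=\; \sum_{n=0}^{\infty} f_n(x)\,\frac{t^{n\alpha}}{\Gamma(n\alpha+1)},
\qquad 0 < \alpha \le 1,\quad 0 \le t < R^{1/\alpha},
```

    where the coefficient functions f_n(x) are determined sequentially by requiring the (nα)-th Caputo derivative of the residual function to vanish at t = 0; truncating the sum yields the rapidly convergent series solutions described above.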

  5. Preface: Special Topic on Single-Molecule Biophysics

    NASA Astrophysics Data System (ADS)

    Makarov, Dmitrii E.; Schuler, Benjamin

    2018-03-01

    Single-molecule measurements are now almost routinely used to study biological systems and processes. The scope of this special topic emphasizes the physics side of single-molecule observations, with the goal of highlighting new developments in physical techniques as well as conceptual insights that single-molecule measurements bring to biophysics. This issue also comprises recent advances in theoretical physical models of single-molecule phenomena, interpretation of single-molecule signals, and fundamental areas of statistical mechanics that are related to single-molecule observations. A particular goal is to illustrate the increasing synergy between theory, simulation, and experiment in single-molecule biophysics.

  6. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is a powerful way to estimate the important input model parameters (determined by an a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model as many times as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
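
    The Bayesian/MCMC step can be sketched with a random-walk Metropolis sampler on a synthetic scalar parameter; a trivial stand-in replaces the Delft3D model, and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observations": a scalar parameter with Gaussian noise
# (a stand-in for a model-data misfit with known error level).
theta_true, sigma = 1.5, 0.5
y = theta_true + rng.normal(0, sigma, 20)

def log_post(theta):
    # Flat prior on (-10, 10); Gaussian likelihood.
    if not -10 < theta < 10:
        return -np.inf
    return -0.5 * np.sum((y - theta) ** 2) / sigma ** 2

# Random-walk Metropolis sampler.
samples, theta = [], 0.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])   # discard burn-in
print(post.mean(), post.std())    # posterior mean and spread
```

    With a flat prior the posterior mean should track the sample mean of the observations, and the posterior spread quantifies the parameter uncertainty that would then be propagated through the forward model.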

  7. Meeting on the Physical Oceanography of Sea Straits (2nd). Held in Villefanche-sur-Mer, France on 15-19 April 2002

    DTIC Science & Technology

    2002-04-19

    (The indexed abstract for this record is fragmentary. Recoverable content: remarks on hydraulic control in the presence of mixing and dissipation, control viewed as information transmission by wave propagation, and progress from new observational techniques and increasingly sophisticated models; cited works include Officer, C. B., Physical Oceanography of Estuaries, John Wiley and Sons, 1976; Pawlak, G. & Armi, L., on vortex dynamics; and Gerdes, F., C. Garrett, and D. Farmer.)

  8. Techniques for SMM/THz Chemical Analysis: Investigations and Exploitation of the Large Molecule Limit

    DTIC Science & Technology

    2014-03-03

    (The indexed abstract for this record is fragmentary, consisting of citation snippets. Recoverable references involve C. F. Neese, I. R. Medvedev, G. M. Plummer, A. J. Frank, C. D. Ball, F. C. De Lucia, and B. D. Guenther, including a paper on NCNCS in view of quantum monodromy (Physical Chemistry Chemical Physics, 2010) and work on terahertz signature modeling for kill assessment and warhead materials.)

  9. Maternal Factors Predicting Cognitive and Behavioral Characteristics of Children with Fetal Alcohol Spectrum Disorders

    PubMed Central

    May, Philip A.; Tabachnick, Barbara G.; Gossage, J. Phillip; Kalberg, Wendy O.; Marais, Anna-Susan; Robinson, Luther K.; Manning, Melanie A.; Blankenship, Jason; Buckley, David; Hoyme, H. Eugene; Adnams, Colleen M.

    2013-01-01

Objective To provide an analysis of multiple predictors of cognitive and behavioral traits for children with fetal alcohol spectrum disorders (FASD). Method Multivariate correlation techniques were employed with maternal and child data from epidemiologic studies in a community in South Africa. Data on 561 first-grade children with fetal alcohol syndrome (FAS), partial FAS (PFAS), or without FASD, and their mothers, were analyzed by grouping 19 maternal variables into categories (physical, demographic, childbearing, and drinking) and employed in structural equation models (SEM) to assess correlates of child intelligence (verbal and non-verbal) and behavior. Results A first SEM utilizing only seven maternal alcohol use variables to predict cognitive/behavioral traits was statistically significant (B = 3.10, p < .05), but explained only 17.3% of the variance. The second model incorporated multiple maternal variables and was statistically significant, explaining 55.3% of the variance. Significantly correlated with low intelligence and problem behavior were demographic variables (B = 3.83, p < .05) (low maternal education, low socioeconomic status (SES), and rural residence) and maternal physical characteristics (B = 2.70, p < .05) (short stature, small head circumference, and low weight). Childbearing history and alcohol use composites were not statistically significant in the final complex model, and were overpowered by SES and maternal physical traits. Conclusions While other analytic techniques have amply demonstrated the negative effects of maternal drinking on intelligence and behavior, this highly controlled analysis of multiple maternal influences reveals that maternal demographics and physical traits make a significant enabling or disabling contribution to child functioning in FASD. PMID:23751886

  10. WE-D-303-00: Computational Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA

    2015-06-15

Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research Discuss the developments and applications of computational phantoms Know the promises and limitations of computational phantoms in solving complex problems.

  11. What Factors Determine the Uptake of A-level Physics?

    NASA Astrophysics Data System (ADS)

    Gill, Tim; Bell, John F.

    2013-03-01

There has been much concern recently in the UK about the decline in the number of students studying physics beyond age 16. To investigate why this might be, we used data from a national database of student qualifications and a multilevel modelling technique to investigate which factors had the greatest impact on the uptake of physics at Advanced Level (A-level) in a particular year. Each factor of interest was entered into a separate model, while accounting for prior attainment and gender (both well-known predictors of A-level uptake). We found that factors associated with greater probability of uptake included better attainment in physics (or combined science) and maths qualifications at age 16 in comparison to other subjects, and (for girls only) attending an independent or grammar school. While it is difficult to address these factors directly, the results imply that more needs to be done to improve relative performance at General Certificate of Secondary Education, perhaps by increasing the supply of specialist physics teachers at this level, and to overcome the perception (especially among girls) that physics is a particularly difficult subject.

  12. Research in Lattice Gauge Theory and in the Phenomenology of Neutrinos and Dark Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meurice, Yannick L; Reno, Mary Hall

Research in theoretical elementary particle physics was performed by the PI Yannick Meurice and co-PI Mary Hall Reno. New techniques designed for precision calculations of strong interaction physics were developed using the tensor renormalization group method. Large-scale Monte Carlo simulations with dynamical quarks were performed for candidate models for Higgs compositeness. Ab-initio lattice gauge theory calculations of semileptonic decays of B-mesons observed in collider experiments and relevant to test the validity of the standard model were performed with the Fermilab/MILC collaboration. The phenomenology of strong interaction physics was applied to new predictions for physics processes in accelerator physics experiments and to cosmic ray production and interactions. A research focus has been on heavy quark production and their decays to neutrinos. The heavy quark contributions to atmospheric neutrino and muon fluxes have been evaluated, as have the neutrino fluxes from accelerator beams incident on heavy targets. Results are applicable to current and future particle physics experiments and to astrophysical neutrino detectors such as the IceCube Neutrino Observatory.

  13. Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net

    NASA Technical Reports Server (NTRS)

    Lerner, Uri; Moses, Brooks; Scott, Maricia; McIlraith, Sheila; Keller, Daphne

    2005-01-01

The Reverse Water Gas Shift system (RWGS) is a complex physical system designed to produce oxygen from the carbon dioxide atmosphere on Mars. If sent to Mars, it would operate without human supervision, thus requiring a reliable automated system for monitoring and control. The RWGS presents many challenges typical of real-world systems, including: noisy and biased sensors, nonlinear behavior, effects that are manifested over different time granularities, and unobservability of many important quantities. In this paper we model the RWGS using a hybrid (discrete/continuous) Dynamic Bayesian Network (DBN), where the state at each time slice contains 33 discrete and 184 continuous variables. We show how the system state can be tracked using probabilistic inference over the model. We discuss how to deal with the various challenges presented by the RWGS, providing a suite of techniques that are likely to be useful in a wide range of applications. In particular, we describe a general framework for dealing with nonlinear behavior using numerical integration techniques, extending the successful Unscented Filter. We also show how to use a fixed-point computation to deal with effects that develop at different time scales, specifically rapid changes occurring during slowly changing processes. We test our model using real data collected from the RWGS, demonstrating the feasibility of hybrid DBNs for monitoring complex real-world physical systems.
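
    The Unscented Filter mentioned above rests on the unscented transform: propagate a small set of sigma points through the nonlinearity and recompute mean and covariance. A minimal sketch of our own (a simple symmetric weighting with parameter κ, not the authors' extended implementation):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through f via sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    sigma_pts = [mean] + [mean + L[:, i] for i in range(n)] \
                       + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                  # weights sum to 1
    ys = np.array([f(p) for p in sigma_pts])
    y_mean = w @ ys
    diffs = ys - y_mean
    y_cov = (w[:, None] * diffs).T @ diffs
    return y_mean, y_cov

# Sanity check: for a linear map the transform is exact.
m = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
ym, yc = unscented_transform(m, P, lambda x: A @ x)
print(ym)   # equals A @ m, and yc equals A @ P @ A.T
```

    In a filter, the same transform is applied at the prediction and update steps in place of the explicit Jacobian linearization used by the extended Kalman filter.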

  14. A response surface methodology based damage identification technique

    NASA Astrophysics Data System (ADS)

    Fang, S. E.; Perera, R.

    2009-06-01

Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation. But so far the literature related to its application in structural damage identification (SDI) is scarce. Therefore this study attempts to present a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization, using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute for the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
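
    A hypothetical sketch of the CCD-based second-order RS modeling step: fit a full quadratic polynomial by least squares to responses collected at the nine points of a two-factor central composite design. Here the "responses" come from a known quadratic rather than FE runs, so the fit recovers the coefficients exactly:

```python
import numpy as np

# Two-factor central composite design: factorial, axial, and centre points.
alpha = np.sqrt(2.0)
pts = [(-1, -1), (-1, 1), (1, -1), (1, 1),                # factorial
       (-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha),  # axial
       (0, 0)]                                            # centre
X = np.array(pts)

# Stand-in "responses" from a known quadratic (would come from FE runs).
def true_response(x1, x2):
    return 3.0 + 1.5 * x1 - 2.0 * x2 + 0.5 * x1 * x2 \
           + 0.8 * x1 ** 2 + 0.2 * x2 ** 2

y = np.array([true_response(x1, x2) for x1, x2 in X])

# Second-order polynomial basis: 1, x1, x2, x1*x2, x1^2, x2^2.
B = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(coef, 6))   # recovers [3, 1.5, -2, 0.5, 0.8, 0.2]
```

    In the SDI procedure this cheap polynomial surrogate then replaces the FE model inside the model-updating inverse problem.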

  15. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, the super-resolution is considered as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have a very high quality in terms of PSNR and are visually more pleasing.
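
    The inverse-problem formulation can be sketched in 1D: the observation model is "blur with a PSF, then downsample", and the resulting cost ||y − DHx||² plus a simple regularizer is minimized here by plain gradient descent, standing in for the Hopfield network's energy minimization (PSF, sizes, regularizer, and step size are all assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n, factor = 64, 2                    # high-res length, downsampling factor
psf = np.array([0.25, 0.5, 0.25])    # assumed symmetric point spread function

def forward(x):
    """Observation model: blur with the PSF, then downsample."""
    return np.convolve(x, psf, mode="same")[::factor]

x_true = np.sin(2 * np.pi * np.arange(n) / 16)          # ground truth
y = forward(x_true) + rng.normal(0, 0.01, n // factor)  # noisy low-res data

# Minimize ||y - forward(x)||^2 + lam * ||x||^2 by gradient descent.
lam, step = 1e-3, 0.5

def cost(x):
    return np.sum((y - forward(x)) ** 2) + lam * np.sum(x ** 2)

x = np.zeros(n)
for _ in range(300):
    residual = forward(x) - y
    upsampled = np.zeros(n)
    upsampled[::factor] = residual               # adjoint of downsampling
    grad = 2 * np.convolve(upsampled, psf[::-1], mode="same") + 2 * lam * x
    x -= step * grad
print(cost(np.zeros(n)), cost(x))   # the cost drops substantially
```

    The gradient uses the adjoint of the forward operator (upsample, then correlate with the PSF), which is the same structure a Hopfield-style network encodes in its connection weights.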

  16. Physically based modeling in catchment hydrology at 50: Survey and outlook

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Putti, Mario

    2015-09-01

    Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.
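
    Of the conservation principles mentioned, the Richards equation for variably saturated subsurface flow is commonly written in mixed form as (standard notation, not necessarily the paper's):

```latex
\frac{\partial \theta(\psi)}{\partial t}
\;=\; \nabla \cdot \bigl[\, K(\psi)\, \nabla (\psi + z) \,\bigr] + s,
```

    where θ is the volumetric water content, ψ the pressure head, K(ψ) the hydraulic conductivity, z the vertical coordinate, and s a source/sink term; the strong nonlinearity of K(ψ) and θ(ψ) is one reason the robust numerical techniques discussed above are required.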

  17. Study on the physical and non-physical drag coefficients for spherical satellites

    NASA Astrophysics Data System (ADS)

    Man, Haijun; Li, Huijun; Tang, Geshi

    In this study, the physical and non-physical drag coefficients (C_D) for spherical satellites in ANDERR are retrieved from the number density of atomic oxygen and the orbit decay data, respectively. We focus on how the retrieved physical and non-physical C_D change as the accuracy of the atmospheric density model improves. First, Lomb-Scargle periodograms of these C_D series as well as the environmental parameters indicate that: (1) there are obvious 5-, 7-, and 9-day periodic variations in the daily Ap indices and the solar wind speed at 1 AU as well as the model density, which have been reported to result from the interaction between the corotating solar wind and the magnetosphere; (2) the same short periods also exist in the retrieved C_D, although the significance level differs for each C_D series; (3) the physical and non-physical C_D behave almost homogeneously with the model densities along the satellite trajectory. Second, corrections to each type of C_D are defined as the differences between the values derived from the NRLMSISE-00 density model and those from JB2008. The results show that: (1) the larger the density corrections, the larger the corrections to C_D of both types; in addition, corrections to the physical C_D fall within a range of 0.05, about an order of magnitude narrower than the range of the non-physical C_D corrections (0.5); (2) corrections to the non-physical C_D vary reciprocally with the density corrections, and a similar relationship exists between corrections to the physical C_D and those of the model density; (3) at orbital altitudes below 200 km, corrections to both C_D and the model density decrease asymptotically to zero.
Results in this study highlight that the physical C_D for spherical satellites should play an important role in refining techniques for accurate density corrections from orbital decay data, and in the search for a way to decouple the product of density and C_D embedded in the orbital decay data.
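The period detection described above relies on the Lomb-Scargle periodogram, which handles the unevenly sampled series typical of orbit-derived quantities; a minimal sketch with a synthetic 7-day cycle (sampling density, amplitude, and noise level are illustrative):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 120, 300))       # irregular sampling times (days)
y = 1.5 * np.sin(2 * np.pi * t / 7.0) + 0.3 * rng.standard_normal(t.size)

periods = np.linspace(3.0, 12.0, 400)       # candidate periods (days)
omega = 2 * np.pi / periods                 # lombscargle takes angular frequencies
pgram = lombscargle(t, y - y.mean(), omega) # mean-subtract before the periodogram
best_period = periods[np.argmax(pgram)]     # should recover the 7-day cycle
```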

  18. A cognitive approach to game usability and design: mental model development in novice real-time strategy gamers.

    PubMed

    Graham, John; Zheng, Liya; Gonzalez, Cleotilde

    2006-06-01

    We developed a technique to observe and characterize a novice real-time strategy (RTS) player's mental model as it shifts with experience. We then tested this technique using an off-the-shelf RTS game, EA Games Generals. Norman defined mental models as "an internal representation of a target system that provides predictive and explanatory power to the operator." In the case of RTS games, the operator is the player and the target system is expressed by the relationships within the game. We studied five novice participants under laboratory-controlled conditions playing an RTS game. They played Command and Conquer Generals for 2 h per day over the course of 5 days. A mental model analysis was generated from players' dissimilarity ratings of the game's artificial intelligence (AI) agents, analyzed using multidimensional scaling (MDS) statistical methods. We hypothesized that novices would begin with an impoverished model based on the visible physical characteristics of the game system. As they gained experience and insight, their mental models would shift to accommodate the functional characteristics of the AI agents. We found that all five of the novice participants began with the predicted physical-based mental model. However, while their models did qualitatively shift with experience, they did not necessarily change to the predicted functional-based model. This research presents an opportunity for the design of games that are guided by shifts in a player's mental model as opposed to the typical progression through successive performance levels.
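The MDS analysis of dissimilarity ratings can be sketched as follows; the agent names and rating matrix are invented for illustration, not taken from the study:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical averaged dissimilarity ratings (0 = identical, 1 = maximally
# different) among five AI agents; names and values are invented.
agents = ["infantry", "armor", "air", "stealth", "superweapon"]
D = np.array([
    [0.0, 0.3, 0.7, 0.6, 0.9],
    [0.3, 0.0, 0.6, 0.7, 0.8],
    [0.7, 0.6, 0.0, 0.4, 0.6],
    [0.6, 0.7, 0.4, 0.0, 0.5],
    [0.9, 0.8, 0.6, 0.5, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # 2-D "mental map" of the agents
```

Comparing such maps across sessions is what reveals a shift from physical toward functional groupings.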

  19. Recent Advances in Ionospheric Modeling Using the USU GAIM Data Assimilation Models

    NASA Astrophysics Data System (ADS)

    Scherliess, L.; Thompson, D. C.; Schunk, R. W.

    2009-12-01

    The ionospheric plasma distribution at low and mid latitudes has been shown to display both a background state (climatology) and a disturbed state (weather). Ionospheric climatology has been successfully modeled, but ionospheric weather has been much more difficult to model because the ionosphere can vary significantly on an hour-by-hour basis. Unfortunately, ionospheric weather can have detrimental effects on several human activities and systems, including high-frequency communications, over-the-horizon radars, and survey and navigation systems using Global Positioning System (GPS) satellites. As shown by meteorologists and oceanographers, the most reliable weather models are physics-based, data-driven models that use Kalman filtering or other data assimilation techniques. Since the state of a medium (ocean, lower atmosphere, ionosphere) is driven by complex and frequently nonlinear internal and external processes, it is not possible to accurately specify all of the drivers and initial conditions of the medium. Therefore, physics-based models alone cannot provide reliable specifications and forecasts. In an effort to better understand the ionosphere and to mitigate its adverse effects on military and civilian operations, specification and forecast models are being developed that use state-of-the-art data assimilation techniques. Over the past decade, Utah State University (USU) has developed two data assimilation models for the ionosphere as part of the USU Global Assimilation of Ionospheric Measurements (GAIM) program and one of these models has been implemented at the Air Force Weather Agency for operational use. The USU-GAIM models are also being used for scientific studies, and this should lead to a dramatic advance in our understanding of ionospheric physics, similar to what occurred in meteorology and oceanography after the introduction of data assimilation models in those fields. 
Both USU-GAIM models are capable of assimilating data from a variety of data sources, including in situ electron densities from satellites, bottomside electron density profiles from ionosondes, total electron content (TEC) measurements between ground receivers and the GPS satellites, occultation data from satellite constellations, and ultraviolet emissions from the ionosphere measured by satellites. We will present the current status of the model development and discuss the employed data assimilation technique. Recent examples of the ionosphere specifications obtained from our model runs will be presented with an emphasis on the ionospheric plasma distribution during the current low solar activity conditions. Various comparisons with independent data will also be shown in an effort to validate the models.
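The Kalman-filter blending of a physics forecast with observations that underlies such assimilation models can be illustrated with a scalar toy problem; the variances and the persistence forecast below are assumptions of the sketch, not the USU-GAIM formulation:

```python
import numpy as np

def kalman_step(x_f, P_f, z, H=1.0, R=0.04):
    """One scalar assimilation cycle: blend a model forecast x_f (error
    variance P_f) with an observation z (error variance R) seen through H."""
    K = P_f * H / (H * P_f * H + R)              # Kalman gain
    return x_f + K * (z - H * x_f), (1.0 - K * H) * P_f

rng = np.random.default_rng(3)
steps = np.arange(500)
truth = 10.0 + np.sin(0.1 * steps)               # slowly varying state
obs = truth + 0.2 * rng.standard_normal(steps.size)

x_a, P_a, Q = 10.0, 1.0, 0.05                    # analysis state/variance, model noise
analyses = []
for z in obs:
    x_a, P_a = kalman_step(x_a, P_a + Q, z)      # persistence forecast, then update
    analyses.append(x_a)

rmse_assim = np.sqrt(np.mean((np.array(analyses) - truth) ** 2))
rmse_free = np.sqrt(np.mean((10.0 - truth) ** 2))  # free-running persistence model
```

The assimilated state tracks the varying truth far better than the free-running model, which is the essence of the "physics-based, data-driven" approach the abstract describes.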

  20. Ultrasonic imaging of seismic physical models using a fringe visibility enhanced fiber-optic Fabry-Perot interferometric sensor.

    PubMed

    Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui

    2018-04-16

    A fringe visibility enhanced fiber-optic Fabry-Perot interferometer based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm to form a Fabry-Perot interferometer. Owing to the increase of the sensor's spectral sideband slope and the smaller Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound with a high SNR of 42.92 dB in 300 kHz is achieved when the spectral sideband filter technique is used to interrogate the sensor. The ultrasonic reconstructed images can clearly differentiate the shape of models with a high resolution.

  1. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominant, strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations among elements of the entire matrix of sensitivity coefficients. These tools are generic in nature; herein, their application to problems arising in selected areas of physics and chemistry is presented.

  2. Upscaling soil saturated hydraulic conductivity from pore throat characteristics

    NASA Astrophysics Data System (ADS)

    Ghanbarian, Behzad; Hunt, Allen G.; Skaggs, Todd H.; Jarvis, Nicholas

    2017-06-01

    Upscaling and/or estimating saturated hydraulic conductivity Ksat at the core scale from microscopic/macroscopic soil characteristics has been actively under investigation in the hydrology and soil physics communities for several decades. Numerous models have been developed based on different approaches, such as the bundle of capillary tubes model, pedotransfer functions, etc. In this study, we apply concepts from critical path analysis, an upscaling technique first developed in the physics literature, to estimate saturated hydraulic conductivity at the core scale from microscopic pore throat characteristics reflected in capillary pressure data. With this new model, we find Ksat estimations to be within a factor of 3 of the average measured saturated hydraulic conductivities reported by Rawls et al. (1982) for the eleven USDA soil texture classes.
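The critical-path idea, that conductivity is controlled by the smallest (critical) pore radius on a spanning path, can be sketched from a drainage capillary-pressure curve; the critical volume fraction and prefactor below are hypothetical placeholders that the actual model calibrates:

```python
import numpy as np

GAMMA = 0.072   # surface tension of water (N/m)

def critical_radius(capillary_pressure, saturation, theta_c=0.15):
    """Critical pore radius from a drainage capillary-pressure curve.
    Pores are invaded largest-first as pressure rises; r_c is the radius
    reached when the invaded fraction first attains a critical volume
    fraction theta_c (a placeholder value here)."""
    order = np.argsort(capillary_pressure)
    P = np.asarray(capillary_pressure, dtype=float)[order]
    S = np.asarray(saturation, dtype=float)[order]
    drained = 1.0 - S                      # fraction already invaded
    P_c = np.interp(theta_c, drained, P)   # pressure at the critical fraction
    return 2.0 * GAMMA / P_c               # Young-Laplace, cos(theta) = 1 assumed

def ksat_estimate(r_c, prefactor=1.0):
    """Critical-path scaling: Ksat proportional to r_c**2; the prefactor
    must be calibrated (unity here, purely for illustration)."""
    return prefactor * r_c ** 2
```

A smaller critical fraction is reached at lower pressure and hence yields a larger critical radius, which is the sense in which the largest connected pores control Ksat.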

  3. Physics and Process Modeling (PPM) and Other Propulsion R and T. Volume 1; Materials Processing, Characterization, and Modeling; Lifting Models

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This CP contains the extended abstracts and presentation figures of 36 papers presented at the PPM and Other Propulsion R&T Conference. The focus of the research described in these presentations is on materials and structures technologies that are parts of the various projects within the NASA Aeronautics Propulsion Systems Research and Technology Base Program. These projects include Physics and Process Modeling; Smart, Green Engine; Fast, Quiet Engine; High Temperature Engine Materials Program; and Hybrid Hyperspeed Propulsion. Also presented were research results from the Rotorcraft Systems Program and work supported by the NASA Lewis Director's Discretionary Fund. Authors from NASA Lewis Research Center, industry, and universities conducted research in the following areas: material processing, material characterization, modeling, life, applied life models, design techniques, vibration control, mechanical components, and tribology. Key issues, research accomplishments, and future directions are summarized in this publication.

  4. A Self-Paced Physical Geology Laboratory.

    ERIC Educational Resources Information Center

    Watson, Donald W.

    1983-01-01

    Describes a self-paced geology course utilizing a diversity of instructional techniques, including maps, models, samples, audio-visual materials, and a locally developed laboratory manual. Mechanical features are laboratory exercises, followed by unit quizzes; quizzes are repeated until the desired level of competence is attained. (Author/JN)

  5. Synthesis of Speaker Facial Movement to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.

    1994-01-01

    A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.

  6. Developing a Motivational Strategy.

    ERIC Educational Resources Information Center

    Janson, Robert

    1979-01-01

    Describes the use of job enrichment techniques as tools for increased productivity and organizational change. The author's motivational work design model changes not only the job design but also structural elements such as physical layout, workflow, and organizational relationships. Behavior change is more important than job enrichment. (MF)

  7. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The various physical processes that occur in the gas turbine combustor and the development of analytical models that accurately describe these processes are discussed. Aspects covered include fuel sprays, fluid mixing, combustion dynamics, radiation, chemistry, and numerical techniques that can be applied to highly turbulent, recirculating, reacting flow fields.

  8. Investigation of Oil Fluorescence as a Technique for the Remote Sensing of Oil Spills

    DOT National Transportation Integrated Search

    1971-06-01

    The flexibility of remote sensing of oil spills by laser-excited oil fluorescence is investigated. The required parameters are fed into a physical model to predict signal and background levels; and the predictions are verified by field experiments. A...

  9. Transportation Systems Evaluation

    NASA Technical Reports Server (NTRS)

    Fanning, M. L.; Michelson, R. A.

    1972-01-01

    A methodology for the analysis of transportation systems consisting of five major interacting elements is reported. The analysis begins with the causes of travel demand: geographic, economic, and demographic characteristics as well as attitudes toward travel. Through the analysis, the interaction of these factors with the physical and economic characteristics of the transportation system is determined. The result is an evaluation of the system from the point of view of both passenger and operator. The methodology is applicable to the intraurban transit systems as well as major airlines. Applications of the technique to analysis of a PRT system and a study of intraurban air travel are given. In the discussion several unique models or techniques are mentioned: i.e., passenger preference modeling, an integrated intraurban transit model, and a series of models to perform airline analysis.

  10. Shock waves and shock tubes; Proceedings of the Fifteenth International Symposium, Berkeley, CA, July 28-August 2, 1985

    NASA Technical Reports Server (NTRS)

    Bershader, D. (Editor); Hanson, R. (Editor)

    1986-01-01

    A detailed survey is presented of shock tube experiments, theoretical developments, and applications being carried out worldwide. The discussions explore shock tube physics and the related chemical, physical and biological science and technology. Extensive attention is devoted to shock wave phenomena in dusty gases and other multiphase and heterogeneous systems, including chemically reactive mixtures. Consideration is given to techniques for measuring, visualizing and theoretically modeling flowfield, shock wave and rarefaction wave characteristics. Numerical modeling is explored in terms of the application of computational fluid dynamics techniques to describing flowfields in shock tubes. Shock interactions and propagation in solids, fluids, gases, and mixed media are investigated, along with the behavior of shocks in condensed matter. Finally, chemical reactions that are initiated as the result of passage of a shock wave are discussed, together with methods of controlling the evolution of laminar separated flows at concave corners on advanced reentry vehicles.

  11. Shock waves and shock tubes; Proceedings of the Fifteenth International Symposium, Berkeley, CA, July 28-August 2, 1985

    NASA Astrophysics Data System (ADS)

    Bershader, D.; Hanson, R.

    A detailed survey is presented of shock tube experiments, theoretical developments, and applications being carried out worldwide. The discussions explore shock tube physics and the related chemical, physical and biological science and technology. Extensive attention is devoted to shock wave phenomena in dusty gases and other multiphase and heterogeneous systems, including chemically reactive mixtures. Consideration is given to techniques for measuring, visualizing and theoretically modeling flowfield, shock wave and rarefaction wave characteristics. Numerical modeling is explored in terms of the application of computational fluid dynamics techniques to describing flowfields in shock tubes. Shock interactions and propagation in solids, fluids, gases, and mixed media are investigated, along with the behavior of shocks in condensed matter. Finally, chemical reactions that are initiated as the result of passage of a shock wave are discussed, together with methods of controlling the evolution of laminar separated flows at concave corners on advanced reentry vehicles.

  12. Physics, Techniques and Review of Neuroradiological Applications of Diffusion Kurtosis Imaging (DKI).

    PubMed

    Marrale, M; Collura, G; Brai, M; Toschi, N; Midiri, F; La Tona, G; Lo Casto, A; Gagliardo, C

    2016-12-01

    In recent years many papers about diagnostic applications of diffusion tensor imaging (DTI) have been published, because DTI allows in vivo, non-invasive evaluation of the diffusion of water molecules in biological tissues. However, the simplified description of the diffusion process assumed in DTI does not permit complete mapping of the complex underlying cellular components and structures that hinder and restrict the diffusion of water molecules. These limitations can be partially overcome by means of diffusion kurtosis imaging (DKI). The aim of this paper is to describe the theory of DKI, a topic of growing interest in radiology. DKI is a higher-order diffusion model that is a straightforward extension of the DTI model. Here, we analyze the physics underlying this method; report our MRI acquisition protocol, the preprocessing pipeline used, and the DKI parametric maps obtained on a 1.5 T scanner; and review the most relevant clinical applications of this technique in various neurological diseases.
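The DKI signal model the abstract refers to, ln S(b) = ln S0 - b*D + (1/6)*b^2*D^2*K, can be fitted along one gradient direction with a simple quadratic regression; the b-values and tissue parameters below are synthetic:

```python
import numpy as np

def fit_dki(bvals, signal):
    """Fit ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K along one direction."""
    c2, c1, _ = np.polyfit(bvals, np.log(signal), 2)  # quadratic in b
    D = -c1                     # apparent diffusivity
    K = 6.0 * c2 / D ** 2       # apparent kurtosis
    return D, K

# Synthetic signal with known parameters (b in ms/um^2, D in um^2/ms)
b = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
D_true, K_true = 1.0, 0.8
S = np.exp(-b * D_true + (b ** 2) * (D_true ** 2) * K_true / 6.0)
D_hat, K_hat = fit_dki(b, S)
```

Repeating the fit over many directions yields the diffusion and kurtosis tensors from which the DKI parametric maps are derived.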

  13. Comparison of Two Conceptually Different Physically-based Hydrological Models - Looking Beyond Streamflows

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.

    2015-12-01

    Most physically-based hydrological models simulate to various extents the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method) as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check beyond streamflows whether different modeling approaches could simulate in a similar fashion the other processes at the watershed scale. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while equally representing other processes as well such as evapotranspiration, snow accumulation/melt or infiltration, etc. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model. 
Potentially, these differences should be the object of further analyses to definitively confirm or reject the modeling hypotheses.

  14. The support-control continuum: An investigation of staff perspectives on factors influencing the success or failure of de-escalation techniques for the management of violence and aggression in mental health settings.

    PubMed

    Price, Owen; Baker, John; Bee, Penny; Lovell, Karina

    2018-01-01

    De-escalation techniques are recommended to manage violence and aggression in mental health settings yet restrictive practices continue to be frequently used. Barriers and enablers to the implementation and effectiveness of de-escalation techniques in practice are not well understood. To obtain staff descriptions of de-escalation techniques currently used in mental health settings and explore factors perceived to influence their implementation and effectiveness. Qualitative, semi-structured interviews and Framework Analysis. Five in-patient wards including three male psychiatric intensive care units, one female acute ward and one male acute ward in three UK Mental Health NHS Trusts. 20 ward-based clinical staff. Individual semi-structured interviews were digitally recorded, transcribed verbatim and analysed using a qualitative data analysis software package. Participants described 14 techniques used in response to escalated aggression applied on a continuum between support and control. Techniques along the support-control continuum could be classified in three groups: 'support' (e.g. problem-solving, distraction, reassurance) 'non-physical control' (e.g. reprimands, deterrents, instruction) and 'physical control' (e.g. physical restraint and seclusion). Charting the reasoning staff provided for technique selection against the described behavioural outcome enabled a preliminary understanding of staff, patient and environmental influences on de-escalation success or failure. Importantly, the more coercive 'non-physical control' techniques are currently conceptualised by staff as a feature of de-escalation techniques, yet, there was evidence of a link between these and increased aggression/use of restrictive practices. Risk was not a consistent factor in decisions to adopt more controlling techniques. 
Moral judgements regarding the function of the aggression; trial-and-error; ingrained local custom (especially around instruction to low stimulus areas); knowledge of the patient; time-efficiency and staff anxiety had a key role in escalating intervention. This paper provides a new model for understanding staff intervention in response to escalated aggression, a continuum between support and control. It further provides a preliminary explanatory framework for understanding the relationship between patient behaviour, staff response and environmental influences on de-escalation success and failure. This framework reveals potentially important behaviour change targets for interventions seeking to reduce violence and use of restrictive practices through enhanced de-escalation techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. What are the most effective intervention techniques for changing physical activity self-efficacy and physical activity behaviour--and are they the same?

    PubMed

    Williams, S L; French, D P

    2011-04-01

    There is convincing evidence that targeting self-efficacy is an effective means of increasing physical activity. However, evidence concerning which are the most effective techniques for changing self-efficacy and thereby physical activity is lacking. The present review aims to estimate the association between specific intervention techniques used in physical activity interventions and change obtained in both self-efficacy and physical activity behaviour. A systematic search yielded 27 physical activity intervention studies for 'healthy' adults that reported self-efficacy and physical activity data. A small, yet significant (P < 0.01) effect of the interventions was found on change in self-efficacy and physical activity (d = 0.16 and 0.21, respectively). When a technique was associated with a change in effect sizes for self-efficacy, it also tended to be associated with a change (r(s) = 0.690, P < 0.001) in effect size for physical activity. Moderator analyses found that 'action planning', 'provide instruction' and 'reinforcing effort towards behaviour' were associated with significantly higher levels of both self-efficacy and physical activity. 'Relapse prevention' and 'setting graded tasks' were associated with significantly lower self-efficacy and physical activity levels. This meta-analysis provides evidence for which psychological techniques are most effective for changing self-efficacy and physical activity.
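The effect-size machinery behind such a meta-analysis, Cohen's d per study and a Spearman rank correlation between technique-level effect sizes, can be sketched as follows; the per-technique values are invented for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

def cohens_d(treatment, control):
    """Standardized mean difference with pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s_pooled = np.sqrt(((n1 - 1) * np.var(treatment, ddof=1) +
                        (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(treatment) - np.mean(control)) / s_pooled

# Hypothetical per-technique effect sizes (values invented for illustration)
d_self_efficacy = [0.35, 0.12, 0.28, -0.10, 0.22, 0.05]
d_activity      = [0.40, 0.15, 0.25, -0.05, 0.30, 0.02]
r_s, p_value = spearmanr(d_self_efficacy, d_activity)
```

A high rank correlation, as reported in the review, indicates that techniques effective for self-efficacy tend also to be effective for behaviour.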

  16. Determining the metallicity of the solar envelope using seismic inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.

    2017-11-01

    The solar metallicity issue is a long-lasting problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy elements abundance, helioseismologists attempted providing a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim at providing an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate amongst seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy. We then apply our technique to solar data using calibrated solar models and determine an interval of seismic measurements for the solar metallicity. We show that our inversion can indeed be used to estimate the solar metallicity thanks to our hare-and-hounds exercises. However, we also show that further dependencies in the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.

  17. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error, even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
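The complementary data-driven idea, training a regressor on the physical model's residual error and adding its prediction back, can be sketched with support vector regression on a synthetic problem; the "physical model" and its error structure here are invented:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
truth = np.sin(X[:, 0]) + 0.5 * X[:, 0]   # "observed" heads
physical = 0.5 * X[:, 0]                  # physical model missing a process
residual = truth - physical               # structured model error

ddm = SVR(kernel="rbf", C=10.0).fit(X, residual)   # data-driven error model
corrected = physical + ddm.predict(X)              # complemented prediction

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))
```

The correction helps exactly because the error is structured, echoing the paper's finding that DDM effectiveness depends on structure in the physical model's error.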

  18. Chinese research on shock physics. Studies in Chinese Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, N.H.

    1992-07-01

    Shock wave research encompasses many different disciplines. This monograph limits the scope to Chinese research on solids and is based on available open literature sources. For the purposes of this monograph, the papers are divided into seven groups: review and tutorial; equations of state; phase transitions; geological materials; modeling and simulations; experimental techniques; and mechanical properties. The largest group, experimental techniques, numbers 22 papers, or about 40% of the total sources.

  19. Head-mounted active noise control system with virtual sensing technique

    NASA Astrophysics Data System (ADS)

    Miyazaki, Nobuhiro; Kajikawa, Yoshinobu

    2015-03-01

    In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
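The adaptive-feedback core of such an ANC system is the filtered-x LMS (FxLMS) update; a minimal tonal-noise simulation is sketched below, with the secondary path, tone, and step size as illustrative assumptions (virtual sensing would additionally map the physical error to the eardrum location via the measured system models, which is omitted here):

```python
import numpy as np

# Tonal noise cancellation with the filtered-x LMS (FxLMS) algorithm.
fs, f0, N = 8000.0, 2000.0, 4000
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)                # reference tone
d = 0.8 * np.sin(2 * np.pi * f0 * n / fs + 0.7)    # noise at the error point

def sec_path(sig):
    """Assumed secondary path (source to error point): 2-sample delay, gain 0.5."""
    out = np.zeros_like(sig)
    out[2:] = 0.5 * sig[:-2]
    return out

xf = sec_path(x)          # filtered reference (perfect path model assumed)
w = np.zeros(2)           # two taps suffice for a pure tone
y = np.zeros(N)
e = np.zeros(N)
mu = 0.05                 # adaptation step size
for i in range(2, N):
    y[i] = w[0] * x[i] + w[1] * x[i - 1]           # anti-noise before the path
    e[i] = d[i] - 0.5 * y[i - 2]                   # residual at the error sensor
    w += mu * e[i] * np.array([xf[i], xf[i - 1]])  # FxLMS weight update
```

The residual decays toward zero as the two taps converge to the amplitude and phase that cancel the tone through the secondary path.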

  20. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
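An automated consistency check of the kind described can be sketched with a two-sample Kolmogorov-Smirnov test; the exponential spectra below are stand-ins for the vectorized and reference samplers, not GeantV output:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for the two implementations under test: both should sample the
# same (here exponential) spectrum; these are not GeantV outputs.
reference_samples  = rng.exponential(scale=1.0, size=20000)
vectorized_samples = rng.exponential(scale=1.0, size=20000)
stat, p_value = ks_2samp(reference_samples, vectorized_samples)

# A deliberately distorted sampler should fail the same criterion.
distorted_samples = rng.exponential(scale=1.1, size=20000)
stat_bad, p_bad = ks_2samp(reference_samples, distorted_samples)
```

Thresholding the KS statistic or p-value gives an automated pass/fail criterion that flags even a 10% distortion of the sampled spectrum at this sample size.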

  1. Using the EZ-Diffusion Model to Score a Single-Category Implicit Association Test of Physical Activity

    PubMed Central

    Rebar, Amanda L.; Ram, Nilam; Conroy, David E.

    2014-01-01

Objective: The Single-Category Implicit Association Test (SC-IAT) has been used as a method for assessing automatic evaluations of physical activity, but measurement artifact or consciously held attitudes could be confounding the outcome scores of these measures. The objective of these two studies was to address these measurement concerns by testing the validity of a novel SC-IAT scoring technique. Design: Study 1 was a cross-sectional study, and study 2 was a prospective study. Method: In study 1, undergraduate students (N = 104) completed SC-IATs for physical activity, flowers, and sedentary behavior. In study 2, undergraduate students (N = 91) completed a SC-IAT for physical activity, self-reported affective and instrumental attitudes toward physical activity and physical activity intentions, and wore an accelerometer for two weeks. The EZ-diffusion model was used to decompose the SC-IAT into three process component scores, including the information processing efficiency score. Results: In study 1, a series of structural equation model comparisons revealed that the information processing score did not share variability across distinct SC-IATs, suggesting it does not represent systematic measurement artifact. In study 2, the information processing efficiency score was shown to be unrelated to self-reported affective and instrumental attitudes toward physical activity, and positively related to physical activity behavior, above and beyond the traditional D-score of the SC-IAT. Conclusions: The information processing efficiency score is a valid measure of automatic evaluations of physical activity. PMID:25484621
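The EZ-diffusion transformation itself is closed-form: accuracy, RT variance, and mean RT map directly to drift rate, boundary separation, and non-decision time. A sketch of the standard equations (as published by Wagenmakers and colleagues, not this paper's scoring code) is:

```python
import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """EZ-diffusion point estimates from accuracy (pc), variance of correct
    RTs (vrt, in s^2), and mean correct RT (mrt, in s); s is the scaling constant."""
    L = np.log(pc / (1 - pc))                       # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25             # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    ter = mrt - mdt                                 # non-decision time
    return v, a, ter

# Worked example from the EZ-diffusion literature:
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
```

With these inputs the method recovers approximately v = 0.1, a = 0.14, and Ter = 0.3. Note the equations assume pc is not exactly 0.5 or 1.0 (edge corrections are needed in practice).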

  2. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    NASA Astrophysics Data System (ADS)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ 1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ 1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
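The TSV regularizer can be written down directly as the sum of squared differences between neighboring pixels. The sketch below shows the regularizer and a toy ℓ1+TSV objective; the real method minimizes a visibility-domain chi-squared rather than the simple image-domain fidelity term used here for illustration:

```python
import numpy as np

def tsv(img):
    """Total squared variation: sum of squared differences between neighboring pixels."""
    return np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)

def objective(img, dirty, lam_l1, lam_tsv):
    """Toy sparse-modeling objective: data fidelity + l1 sparsity + TSV smoothness."""
    return np.sum((img - dirty) ** 2) + lam_l1 * np.sum(np.abs(img)) + lam_tsv * tsv(img)

rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # smooth test image
noisy = smooth + 0.1 * rng.standard_normal((32, 32))             # same image + noise
```

Because TSV penalizes squared (rather than absolute) gradients, it favors smooth edges over the piecewise-flat structures promoted by ordinary total variation, which is the property matched to expected astronomical images.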

  3. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    PubMed Central

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can simultaneously represent the geometric, material, and other properties of the object with accuracy. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  4. A Hybrid Model for Multiscale Laser Plasma Simulations with Detailed Collisional Physics

    DTIC Science & Technology

    2017-06-15

Validation against experimental data • Nonequilibrium radiation transport: coupling with a collisional-radiative model • Inelastic collisions in a MF... [Approved for Public Release; Distribution is Unlimited. PA# 17383] Collisional Radiative (CR) Overview Updates • Investigated Quasi-Steady-State • Investigated... Techniques: Quasi-Steady-State (QSS) • Assumes fast kinetics between states within an ion distribution • Assumes longer diffusion/decay times than...

  5. The "Function-to-Flow" Model: An Interdisciplinary Approach to Assessing Movement within and beyond the Context of Climbing

    ERIC Educational Resources Information Center

    Lloyd, Rebecca

    2015-01-01

    Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…

  6. Workshop on Planning and Learning in Multi- Agent Environments

    DTIC Science & Technology

    2014-12-31

needed for translating the physical aspects of an interaction (see Section 3.1) into the numeric utility values needed for game-theoretic... calculations. Furthermore, the game-theoretic techniques themselves will require significant enhancements. Game-theoretic solution concepts (e.g., Nash)... robotics. Real-time strategy games may provide useful data for research on predictive models of adversaries, modeling long-term and short-term plans...

  7. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  8. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
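The CWoLa idea can be demonstrated in a few lines: train an ordinary classifier to separate two mixed samples with different (unknown) signal fractions, then evaluate it on pure samples. The one-dimensional Gaussian "signal" and "background" below are illustrative stand-ins, not the quark/gluon benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_sample(n, f_sig):
    """Mixture draw: 'signal' ~ N(+1,1), 'background' ~ N(-1,1), signal fraction f_sig."""
    is_sig = rng.random(n) < f_sig
    return np.where(is_sig, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# Two mixed samples with different (and never revealed) signal fractions.
xA = mixed_sample(5000, 0.7)
xB = mixed_sample(5000, 0.3)
x = np.concatenate([xA, xB])
y = np.concatenate([np.ones(5000), np.zeros(5000)])   # mixture labels only

# Logistic regression trained to distinguish mixture A from mixture B.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

# Evaluate on PURE samples: the mixture-trained classifier separates signal from background.
sig = rng.normal(1.0, 1.0, 2000)
bkg = rng.normal(-1.0, 1.0, 2000)
acc = 0.5 * np.mean(w * sig + b > 0) + 0.5 * np.mean(w * bkg + b < 0)
```

Even though the classifier only ever sees mixture labels, its decision function is monotone in the signal/background likelihood ratio, so it performs well on pure events, which is exactly the theorem the paper proves.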

  11. An introduction to chaotic and random time series analysis

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1989-01-01

    The origin of chaotic behavior and the relation of chaos to randomness are explained. Two mathematical results are described: (1) a representation theorem guarantees the existence of a specific time-domain model for chaos and addresses the relation between chaotic, random, and strictly deterministic processes; (2) a theorem assures that information on the behavior of a physical system in its complete state space can be extracted from time-series data on a single observable. Focus is placed on an important connection between the dynamical state space and an observable time series. These two results lead to a practical deconvolution technique combining standard random process modeling methods with new embedded techniques.
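The second result (reconstructing state-space behavior from a single observable) is realized in practice by time-delay embedding. A minimal sketch of the standard construction:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct state-space vectors from a scalar series via time-delay embedding:
    row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

emb = delay_embed(np.arange(10.0), dim=3, tau=2)
```

Each row of `emb` is a delay vector; for a suitable choice of `dim` and `tau`, the embedded trajectory is (by Takens-type theorems) diffeomorphic to the original attractor, which is what licenses modeling dynamics from a single observed time series.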

  12. Numerical study of the magnetized friction force

    NASA Astrophysics Data System (ADS)

    Fedotov, A. V.; Bruhwiler, D. L.; Sidorin, A. O.; Abell, D. T.; Ben-Zvi, I.; Busby, R.; Cary, J. R.; Litvinenko, V. N.

    2006-07-01

    Fundamental advances in experimental nuclear physics will require ion beams with orders of magnitude luminosity increase and temperature reduction. One of the most promising particle accelerator techniques for achieving these goals is electron cooling, where the ion beam repeatedly transfers thermal energy to a copropagating electron beam. The dynamical friction force on a fully ionized gold ion moving through magnetized and unmagnetized electron distributions has been simulated, using molecular dynamics techniques that resolve close binary collisions. We present a comprehensive examination of theoretical models in use by the electron cooling community. Differences in these models are clarified, enabling the accurate design of future electron cooling systems for relativistic ion accelerators.

  13. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
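The second- versus fourth-order tradeoff can be seen with the standard central-difference stencils for a second derivative (a generic illustration, not the paper's field solver): halving the grid spacing cuts the second-order error by about 4x but the fourth-order error by about 16x.

```python
import numpy as np

def d2_second(f, x, h):
    """Second-order central approximation to f''(x)."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_fourth(f, x, h):
    """Fourth-order central approximation to f''(x)."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x) + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

f, x0 = np.sin, 1.0
exact = -np.sin(x0)                  # d^2/dx^2 sin(x) = -sin(x)
e2 = [abs(d2_second(f, x0, h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d2_fourth(f, x0, h) - exact) for h in (0.1, 0.05)]
r2 = e2[0] / e2[1]                   # ~4  (error ∝ h^2)
r4 = e4[0] / e4[1]                   # ~16 (error ∝ h^4)
```

The flip side, relevant to the abstract's conclusion, is that wide high-order stencils are harder to apply near boundaries and on strongly nonuniform meshes, where a second-order scheme can win.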

  14. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures by incorporating a single layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of networks by defining an error-based cost function in mean square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
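The error-based cost at the heart of such schemes is the mean-square residual of the governing equation. The sketch below evaluates that residual for Painlevé II, u'' = 2u³ + xu + α, using a finite-difference second derivative in place of the paper's neural-network trial solution, and checks it against the known rational solution u = -1/x for α = 1:

```python
import numpy as np

def painleve2_residual(u, x, alpha):
    """Mean-squared residual of Painlevé II, u'' = 2u^3 + x u + alpha,
    for a candidate solution u sampled on a uniform grid x."""
    h = x[1] - x[0]
    upp = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2      # central second difference
    r = upp - (2 * u[1:-1]**3 + x[1:-1] * u[1:-1] + alpha)
    return np.mean(r**2)

# u = -1/x solves the alpha = 1 case exactly, so its residual is near zero;
# a constant trial function is far from a solution.
x = np.linspace(1.0, 3.0, 201)
res_exact = painleve2_residual(-1.0 / x, x, alpha=1.0)
res_wrong = painleve2_residual(np.ones_like(x), x, alpha=1.0)
```

In the paper this residual (in mean-square sense) is the fitness that the genetic algorithm, SQP, and active-set optimizers drive toward zero over the network weights.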

  15. Optimal pacing for running 400- and 800-m track races

    NASA Astrophysics Data System (ADS)

    Reardon, James

    2013-06-01

We present a toy model of anaerobic glycolysis that utilizes appropriate physiological and mathematical considerations while remaining useful to the athlete. The toy model produces an optimal pacing strategy for 400-m and 800-m races that is analytically calculated via the Euler-Lagrange equation. The calculation of the optimum v(t) is presented in detail, with an emphasis on intuitive arguments, in order to serve as a bridge between the basic techniques presented in undergraduate physics textbooks and the more advanced techniques of control theory. Observed pacing strategies in 400-m and 800-m world-record races are found to be well fit by the toy model, which allows us to draw a new physiological interpretation for the advantages of common weight-training practices.

  16. Future mission studies: Forecasting solar flux directly from its chaotic time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    The mathematical structure of the programs written to construct a nonlinear predictive model to forecast solar flux directly from its time series without reference to any underlying solar physics is presented. This method and the programs are written so that one could apply the same technique to forecast other chaotic time series, such as geomagnetic data, attitude and orbit data, and even financial indexes and stock market data. Perhaps the most important application of this technique to flight dynamics is to model Goddard Trajectory Determination System (GTDS) output of residues between observed position of spacecraft and calculated position with no drag (drag flag = off). This would result in a new model of drag working directly from observed data.

  17. Pulsar timing and general relativity

    NASA Technical Reports Server (NTRS)

    Backer, D. C.; Hellings, R. W.

    1986-01-01

    Techniques are described for accounting for relativistic effects in the analysis of pulsar signals. Design features of instrumentation used to achieve millisecond accuracy in the signal measurements are discussed. The accuracy of the data permits modeling the pulsar physical characteristics from the natural glitches in the emissions. Relativistic corrections are defined for adjusting for differences between the pulsar motion in its spacetime coordinate system relative to the terrestrial coordinate system, the earth's motion, and the gravitational potentials of solar system bodies. Modifications of the model to allow for a binary pulsar system are outlined, including treatment of the system as a point mass. Finally, a quadrupole model is presented for gravitational radiation and techniques are defined for using pulsars in the search for gravitational waves.

  18. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1K or less, root-mean-square errors on the order of 5K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.
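The soil-moisture dependence at the heart of such emissivity calculations can be illustrated with the smooth-surface, normal-incidence Fresnel relation e = 1 - |r|² (a textbook special case, not the semi-empirical model of the paper; the permittivity values are illustrative):

```python
import numpy as np

def nadir_emissivity(eps_r):
    """Nadir (normal-incidence) emissivity of a smooth surface with relative
    permittivity eps_r, via Kirchhoff's law: e = 1 - |r|^2, r = (1-n)/(1+n)."""
    n = np.sqrt(eps_r + 0j)          # refractive index (complex-safe)
    r = (1 - n) / (1 + n)            # Fresnel amplitude reflection coefficient
    return 1.0 - abs(r) ** 2

e_dry = nadir_emissivity(4.0)        # illustrative dry-soil permittivity
e_wet = nadir_emissivity(25.0)       # illustrative wet-soil permittivity
```

Wetter soil has a much larger dielectric constant, hence higher reflectivity and lower emissivity, which is why dynamic land-surface information matters for the retrieval.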

  19. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

Active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting such an active appearance model to the original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, but applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy.

  20. Connectivity modeling and graph theory analysis predict recolonization in transient populations

    NASA Astrophysics Data System (ADS)

    Rognstad, Rhiannon L.; Wethey, David S.; Oliver, Hilde; Hilbish, Thomas J.

    2018-07-01

    Population connectivity plays a major role in the ecology and evolution of marine organisms. In these systems, connectivity of many species occurs primarily during a larval stage, when larvae are frequently too small and numerous to track directly. To indirectly estimate larval dispersal, ocean circulation models have emerged as a popular technique. Here we use regional ocean circulation models to estimate dispersal of the intertidal barnacle Semibalanus balanoides at its local distribution limit in Southwest England. We incorporate historical and recent repatriation events to provide support for our modeled dispersal estimates, which predict a recolonization rate similar to that observed in two recolonization events. Using graph theory techniques to describe the dispersal landscape, we identify likely physical barriers to dispersal in the region. Our results demonstrate the use of recolonization data to support dispersal models and how these models can be used to describe population connectivity.
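A minimal sketch of the graph-theoretic step: treat modeled dispersal probabilities as weighted directed edges, then ask which sites remain reachable once weak connections are pruned; a threshold below which no site is reachable marks a dispersal barrier. The 4-site matrix is hypothetical, not the Semibalanus balanoides model output:

```python
import numpy as np

# Hypothetical dispersal-probability matrix among 4 sites (row = source, col = sink).
P = np.array([
    [0.0, 0.4, 0.0, 0.0],
    [0.1, 0.0, 0.3, 0.0],
    [0.0, 0.2, 0.0, 0.2],
    [0.0, 0.0, 0.1, 0.0],
])

def reachable(P, src, threshold=0.05):
    """Sites reachable from src via edges whose dispersal probability exceeds threshold
    (depth-first search on the thresholded connectivity graph)."""
    n = len(P)
    seen, stack = {src}, [src]
    while stack:
        i = stack.pop()
        for j in range(n):
            if P[i, j] > threshold and j not in seen:
                seen.add(j)
                stack.append(j)
    return sorted(seen)
```

With a low threshold all four sites are reachable from site 0; raising it to 0.25 cuts off site 3, identifying the 2-to-3 link as the weakest step in any recolonization path.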

  1. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast, based on split-sample validation. The approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT-simulated values. The common approach is a regression-based technique that uses ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescales is also discussed.
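The core of a CCA-based correction is finding paired directions in the simulated and observed attribute spaces that are maximally correlated. A compact numerical sketch (generic CCA via orthonormal bases, not the study's SWAT post-processor; the two "attributes" stand in for streamflow and TN load):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the column spaces of X and Y: the singular
    values of the cross-product of their orthonormal (whitened) bases."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Ux = np.linalg.svd(Xc, full_matrices=False)[0]
    Uy = np.linalg.svd(Yc, full_matrices=False)[0]
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)

rng = np.random.default_rng(0)
sim = rng.standard_normal((300, 2))              # simulated attributes (illustrative)
obs = sim @ np.array([[0.8, 0.1], [0.2, 0.9]]) \
      + 0.1 * rng.standard_normal((300, 2))      # correlated 'observed' attributes
rho = canonical_correlations(sim, obs)
```

Regressing the observed canonical variates on the simulated ones corrects both variables jointly, which is why the cross-correlation between streamflow and loadings survives the correction, unlike with variable-by-variable OLS adjustment.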

  2. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
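The classical-annealing half of the idea can be sketched directly: encode which weak classifiers enter the strong classifier as binary spins and anneal an error-based energy over spin configurations. Everything below (the data, accuracies, energy, and schedule) is a toy stand-in for the paper's Ising formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy event set: labels y in {-1,+1} and 8 hypothetical weak classifiers whose
# outputs agree with y at the listed per-classifier accuracies.
n, m = 500, 8
y = rng.choice([-1, 1], size=n)
acc = [0.9, 0.85, 0.8, 0.6, 0.55, 0.5, 0.5, 0.5]
h = np.array([y * np.where(rng.random(n) < q, 1, -1) for q in acc])

def energy(s):
    """Squared error of the spin-selected average vote against the labels."""
    vote = s @ h / max(s.sum(), 1)
    return np.mean((vote - y) ** 2)

# Classical simulated annealing over the binary spin configuration.
s = rng.integers(0, 2, size=m)
best_s, best_e = s.copy(), energy(s)
T = 1.0
for _ in range(2000):
    s2 = s.copy()
    s2[rng.integers(m)] ^= 1                     # propose a single spin flip
    dE = energy(s2) - energy(s)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s = s2                                   # Metropolis acceptance
    if energy(s) < best_e:
        best_s, best_e = s.copy(), energy(s)
    T *= 0.998                                   # cool the temperature down
```

The annealer selects a subset of informative classifiers; the resulting strong classifier is a plain sum of interpretable weak votes, mirroring the transparency the paper highlights over black-box methods.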

  3. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  4. What Are the Most Effective Intervention Techniques for Changing Physical Activity Self-Efficacy and Physical Activity Behaviour--and Are They the Same?

    ERIC Educational Resources Information Center

    Williams, S. L.; French, D. P.

    2011-01-01

    There is convincing evidence that targeting self-efficacy is an effective means of increasing physical activity. However, evidence concerning which are the most effective techniques for changing self-efficacy and thereby physical activity is lacking. The present review aims to estimate the association between specific intervention techniques used…

  5. Short‐term time step convergence in a climate model

    PubMed Central

    Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane

    2015-01-01

    Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
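
    The convergence rate quoted above (0.4 versus the expected 1.0) is the slope of the solution error against the time step on log-log axes. A minimal sketch of that estimate, using made-up error values rather than CAM5 output:

```python
import math

def convergence_rate(dts, errors):
    """Least-squares slope of log(error) vs log(dt), i.e. p in error ~ C * dt**p."""
    xs = [math.log(dt) for dt in dts]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    return (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
```

    With errors measured against a reference run at the smallest time step, a slope near 1.0 indicates first-order coupling accuracy; the paper's 0.4 signals the slower convergence attributed to the stratiform cloud schemes.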

  6. Recovering the Physical Properties of Molecular Gas in Galaxies from CO SLED Modeling

    NASA Astrophysics Data System (ADS)

    Kamenetzky, J.; Privon, G. C.; Narayanan, D.

    2018-05-01

    Modeling of the spectral line energy distribution (SLED) of the CO molecule can reveal the physical conditions (temperature and density) of molecular gas in Galactic clouds and other galaxies. Recently, the Herschel Space Observatory and ALMA have offered, for the first time, a comprehensive view of the rotational J = 4‑3 through J = 13‑12 lines, which arise from a complex, diverse range of physical conditions that must be simplified to one, two, or three components when modeled. Here we investigate the recoverability of physical conditions from SLEDs produced by galaxy evolution simulations containing a large dynamical range in physical properties. These simulated SLEDs were generally fit well by one component of gas whose properties largely resemble or slightly underestimate the luminosity-weighted properties of the simulations when clumping due to nonthermal velocity dispersion is taken into account. If only modeling the first three rotational lines, the median values of the marginalized parameter distributions better represent the luminosity-weighted properties of the simulations, but the uncertainties in the fitted parameters are nearly an order of magnitude, compared to approximately 0.2 dex in the “best-case” scenario of a fully sampled SLED through J = 10‑9. This study demonstrates that while common CO SLED modeling techniques cannot reveal the underlying complexities of the molecular gas, they can distinguish bulk luminosity-weighted properties that vary with star formation surface densities and galaxy evolution, if a sufficient number of lines are detected and modeled.

  7. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or `dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax `artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.

  8. Using HEC-HMS: Application to Karkheh river basin

    USDA-ARS?s Scientific Manuscript database

This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consuming...

  9. Polarimetry

    NASA Astrophysics Data System (ADS)

Nagendra, K. N.; Bagnulo, Stefano; Centeno, Rebecca; Jesús Martínez González, María

    2015-08-01

    Preface; 1. Solar and stellar surface magnetic fields; 2. Future directions in astrophysical polarimetry; 3. Physical processes; 4. Instrumentation for astronomical polarimetry; 5. Data analysis techniques for polarization observations; 6. Polarization diagnostics of atmospheres and circumstellar environments; 7. Polarimetry as a tool for discovery science; 8. Numerical modeling of polarized emission; Author index.

  10. Advanced Weapon System (AWS) Sensor Prediction Techniques Study. Volume II

    DTIC Science & Technology

    1981-09-01

models are suggested. Courant Computer Science Report #9, December 1975, Scene Analysis: A Survey, Carl Weiman, Courant Institute of...some crucial differences. In the psychological model of mechanical vision, the aim of scene analysis is to perceive and understand 2-D images of 3-D scenes. The meaning of this analogy can be clarified using a rudimentary informational model; this yields a natural hierarchy from physical

  11. Comparison of Conceptual and Neural Network Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Vidyarthi, V. K.; Jain, A.

    2014-12-01

A rainfall-runoff (RR) model is a key component of any water resource application. Two types of techniques are usually employed for RR modeling: physics-based and data-driven techniques. Although physics-based models have been used for operational purposes for a very long time, they provide only reasonable accuracy in modeling and forecasting. On the other hand, Artificial Neural Networks (ANNs) have been reported to provide superior modeling performance; however, they have not been accepted by practitioners, decision makers and water resources engineers as operational tools. The ANNs, one of the data-driven techniques, became popular for efficient modeling of complex natural systems in the last couple of decades. In this paper, comparative results for conceptual and ANN models in RR modeling are presented. The conceptual models were developed using the rainfall-runoff library (RRL), and a genetic algorithm (GA) was used for their calibration. A feed-forward neural network structure trained by the Levenberg-Marquardt (LM) algorithm was adopted to develop all the ANN models. Daily rainfall, runoff and various climatic data from the Bird Creek basin, Oklahoma, USA were employed to develop all the models included here. Daily potential evapotranspiration (PET), which was used in conceptual model development, was calculated using the Penman equation. The input variables were selected on the basis of correlation analysis. Performance evaluation statistics such as average absolute relative error (AARE), Pearson's correlation coefficient (R) and threshold statistics (TS) were used to assess the performance of all the models developed here. The results obtained in this study show that the ANN models outperform the conventional conceptual models due to their ability to learn the non-linearity and complexity inherent in rainfall-runoff data more efficiently. There is a strong need to carry out such studies to demonstrate the superiority of ANN models over conventional methods and to make them acceptable to the water resources community responsible for operating water resources systems.
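
    The performance statistics named in this abstract have standard definitions; a sketch follows, assuming AARE and TS are expressed in percent (the abstract does not state the scaling explicitly):

```python
def aare(obs, sim):
    """Average absolute relative error, in percent."""
    return 100.0 * sum(abs((s - o) / o) for o, s in zip(obs, sim)) / len(obs)

def pearson_r(obs, sim):
    """Pearson's linear correlation coefficient."""
    n = len(obs)
    ob, sb = sum(obs) / n, sum(sim) / n
    cov = sum((o - ob) * (s - sb) for o, s in zip(obs, sim))
    var_o = sum((o - ob) ** 2 for o in obs)
    var_s = sum((s - sb) ** 2 for s in sim)
    return cov / (var_o * var_s) ** 0.5

def threshold_stat(obs, sim, x_percent):
    """Percentage of predictions whose absolute relative error is below x_percent."""
    hits = sum(1 for o, s in zip(obs, sim)
               if abs((s - o) / o) * 100.0 < x_percent)
    return 100.0 * hits / len(obs)
```

    Together the three statistics probe complementary aspects of fit: average relative error, linear agreement, and the fraction of forecasts within a tolerance band.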

  12. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  13. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect the fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only fits the experimental spectra correctly, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.

  14. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
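
    The two remapping strategies the abstract contrasts can be sketched in a few lines. This is a 1-D illustration with hypothetical coordinates; the actual system works on 3-D positions in an egocentric reference frame:

```python
def static_offset_hand(physical_pos, offset):
    """Static translational remapping: a fixed offset applied before the reach."""
    return physical_pos + offset

def interpolated_hand(physical_pos, start, phys_target, virt_target):
    """Dynamic remapping: the warp grows with reach progress, so the virtual
    hand arrives at the virtual target exactly when the physical hand reaches
    the physical one."""
    total = phys_target - start
    alpha = 0.0 if total == 0 else max(0.0, min(1.0, (physical_pos - start) / total))
    return physical_pos + alpha * (virt_target - phys_target)
```

    The static variant keeps hand motion veridical during the reach at the cost of an up-front discontinuity, while the interpolated variant spreads the mismatch across the whole motion.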

  15. Multicollinearity in associations between multiple environmental features and body weight and abdominal fat: using matching techniques to assess whether the associations are separable.

    PubMed

    Leal, Cinira; Bean, Kathy; Thomas, Frédérique; Chaix, Basile

    2012-06-01

    Because of the strong correlations among neighborhoods' characteristics, it is not clear whether the associations of specific environmental exposures (e.g., densities of physical features and services) with obesity can be disentangled. Using data from the RECORD (Residential Environment and Coronary Heart Disease) Cohort Study (Paris, France, 2007-2008), the authors investigated whether neighborhood characteristics related to the sociodemographic, physical, service-related, and social-interactional environments were associated with body mass index and waist circumference. The authors developed an original neighborhood characteristic-matching technique (analyses within pairs of participants similarly exposed to an environmental variable) to assess whether or not these associations could be disentangled. After adjustment for individual/neighborhood socioeconomic variables, body mass index/waist circumference was negatively associated with characteristics of the physical/service environments reflecting higher densities (e.g., proportion of built surface, densities of shops selling fruits/vegetables, and restaurants). Multiple adjustment models and the neighborhood characteristic-matching technique were unable to identify which of these neighborhood variables were driving the associations because of high correlations between the environmental variables. Overall, beyond the socioeconomic environment, the physical and service environments may be associated with weight status, but it is difficult to disentangle the effects of strongly correlated environmental dimensions, even if they imply different causal mechanisms and interventions.

  16. Realistic natural atmospheric phenomena and weather effects for interactive virtual environments

    NASA Astrophysics Data System (ADS)

    McLoughlin, Leigh

Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features such as precipitation, that are suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.

  17. Magnetohydrodynamics Carreau nanofluid flow over an inclined convective heated stretching cylinder with Joule heating

    NASA Astrophysics Data System (ADS)

    Khan, Imad; Shafquatullah; Malik, M. Y.; Hussain, Arif; Khan, Mair

Current work highlights the computational aspects of MHD Carreau nanofluid flow over an inclined stretching cylinder with convective boundary conditions and Joule heating. The mathematical modeling of the physical problem yields a nonlinear set of partial differential equations. A suitable scaling group of variables is employed on the modeled equations to convert them into non-dimensional form. The Runge-Kutta-Fehlberg integration scheme, combined with the shooting technique, is utilized to solve the attained set of equations. The interesting aspects of the physical problem (linear momentum, energy and nanoparticle concentration) are elaborated under different parametric conditions in graphical and tabular form. Additionally, the quantities (local skin friction coefficient, local Nusselt number and local Sherwood number) which characterize the physical phenomena in the vicinity of the stretched surface are computed and delineated by varying the controlling flow parameters.
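
    The shooting technique mentioned here converts a boundary-value problem into a sequence of initial-value problems solved by a Runge-Kutta integrator, with the unknown initial slope adjusted until the far boundary condition is met. A minimal sketch, using classical fixed-step RK4 rather than the adaptive Runge-Kutta-Fehlberg scheme and a simple linear test equation (y'' = 6x, y(0)=0, y(1)=1, exact solution y = x³) rather than the paper's coupled system:

```python
def rk4_integrate(f, y0, x0, x1, n=100):
    """Classical 4th-order Runge-Kutta for a first-order system y' = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(slope):
    """Integrate y'' = 6x from x=0 with y(0)=0, y'(0)=slope; return y(1)."""
    f = lambda x, y: [y[1], 6 * x]          # state vector y = [y, y']
    return rk4_integrate(f, [0.0, slope], 0.0, 1.0)[0]

def shooting_solve(target=1.0, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisect on the unknown initial slope until the far boundary value matches.

    Assumes y(1) increases monotonically with the initial slope, which holds
    for this linear test problem.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For the test problem the correct initial slope is y'(0) = 0; nonlinear systems like the one in this paper replace the scalar bisection with a multi-dimensional root find over several unknown initial conditions.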

  18. Methodologies in the modeling of combined chemo-radiation treatments

    NASA Astrophysics Data System (ADS)

    Grassberger, C.; Paganetti, H.

    2016-11-01

    The variety of treatment options for cancer patients has increased significantly in recent years. Not only do we combine radiation with surgery and chemotherapy, new therapeutic approaches such as immunotherapy and targeted therapies are starting to play a bigger role. Physics has made significant contributions to radiation therapy treatment planning and delivery. In particular, treatment plan optimization using inverse planning techniques has improved dose conformity considerably. Furthermore, medical physics is often the driving force behind tumor control and normal tissue complication modeling. While treatment optimization and outcome modeling does focus mainly on the effects of radiation, treatment modalities such as chemotherapy are treated independently or are even neglected entirely. This review summarizes the published efforts to model combined modality treatments combining radiation and chemotherapy. These models will play an increasing role in optimizing cancer therapy not only from a radiation and drug dosage standpoint, but also in terms of spatial and temporal optimization of treatment schedules.

  19. Censored rainfall modelling for estimation of fine-scale extremes

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  20. Modeling the directional reflectance from complete homogeneous vegetation canopies with various leaf-orientation distributions

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.

    1984-01-01

The directional-reflectance distributions of radiant flux from homogeneous vegetation canopies with greater than 90 percent ground cover are analyzed with a radiative-transfer model. The model assumes that the leaves consist of small finite planes with Lambertian properties. Four theoretical canopies with different leaf-orientation distributions were studied: erectophile, spherical, planophile, and heliotropic canopies. The directional-reflectance distributions from the model closely resemble reflectance distributions measured in the field. The physical scattering mechanisms operating in the model explain the variations observed in the reflectance distributions as a function of leaf-orientation distribution, solar zenith angle, and leaf transmittance and reflectance. The simulated reflectance distributions show unique characteristics for each canopy. The basic understanding of the physical scattering properties of the different canopy geometries gained in this study provides a basis for developing techniques to infer leaf-orientation distributions of vegetation canopies from directional remote-sensing measurements.

  1. In vitro experimental investigation of voice production

    PubMed Central

Horáček, Jaromír; Brücker, Christoph; Becker, Stefan

    2012-01-01

    The process of human phonation involves a complex interaction between the physical domains of structural dynamics, fluid flow, and acoustic sound production and radiation. Given the high degree of nonlinearity of these processes, even small anatomical or physiological disturbances can significantly affect the voice signal. In the worst cases, patients can lose their voice and hence the normal mode of speech communication. To improve medical therapies and surgical techniques it is very important to understand better the physics of the human phonation process. Due to the limited experimental access to the human larynx, alternative strategies, including artificial vocal folds, have been developed. The following review gives an overview of experimental investigations of artificial vocal folds within the last 30 years. The models are sorted into three groups: static models, externally driven models, and self-oscillating models. The focus is on the different models of the human vocal folds and on the ways in which they have been applied. PMID:23181007

  2. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
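
    The linear noise model described above, sensor noise variance as a linear function of signal level, can be fitted by ordinary least squares. A minimal sketch; the intercept/slope names are illustrative, not the authors' notation:

```python
def fit_linear_noise(signals, variances):
    """Least-squares fit of var ~ a + b * signal.

    A positive slope b captures the Poisson-like, signal-dependent term;
    the intercept a absorbs signal-independent (e.g. read-out) noise.
    """
    n = len(signals)
    s_bar = sum(signals) / n
    v_bar = sum(variances) / n
    b = (sum((s - s_bar) * (v - v_bar) for s, v in zip(signals, variances))
         / sum((s - s_bar) ** 2 for s in signals))
    a = v_bar - b * s_bar
    return a, b
```

    A detector incorporating this model would then weight bright pixels less heavily than a stationary-noise detector does, since their measurement variance is larger.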

  3. Finite element model correlation of a composite UAV wing using modal frequencies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.

    2007-04-01

The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental testpiece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update variable combinations in the current configuration, which lead correlation away from physically probable values.
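
    The meta-model idea described here, a cheap polynomial surrogate trained on FE input-output samples and then inverted by optimization, can be sketched in one dimension. The sample values and the quadratic form below are illustrative assumptions; a real application fits multivariate polynomials over several stiffness parameters against multiple modal frequencies:

```python
def fit_quadratic(xs, ys):
    """Exact quadratic through three (parameter, frequency) samples (Lagrange form)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def poly(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return poly

def correlate_parameter(poly, measured, lo, hi, tol=1e-10):
    """Bisect the surrogate for the parameter whose predicted frequency matches
    the measured one (assumes the surrogate is monotonic increasing on [lo, hi])."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poly(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Because every surrogate evaluation is a polynomial rather than a full FE solve, the optimization loop is essentially free once the training runs are done, which is the appeal of the meta-model approach over direct Newton-Raphson iteration on the FE model.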

  4. Enhancement of Directional Ambiguity Removal Skill in Scatterometer Data Processing Using Planetary Boundary Layer Models

    NASA Technical Reports Server (NTRS)

    Kim, Young-Joon; Pak, Kyung S.; Dunbar, R. Scott; Hsiao, S. Vincent; Callahan, Philip S.

    2000-01-01

    Planetary boundary layer (PBL) models are utilized to enhance directional ambiguity removal skill in scatterometer data processing. The ambiguity in wind direction retrieved from scatterometer measurements is removed with the aid of physical directional information obtained from PBL models. This technique is based on the observation that sea level pressure is scalar and its field is more coherent than the corresponding wind. An initial wind field obtained from the scatterometer measurements is used to derive a pressure field with a PBL model. After filtering small-scale noise in the derived pressure field, a wind field is generated with an inverted PBL model. This derived wind information is then used to remove wind vector ambiguities in the scatterometer data. It is found that the ambiguity removal skill can be improved when the new technique is used properly in conjunction with the median filter being used for scatterometer wind dealiasing at JPL. The new technique is applied to regions of cyclone systems which are important for accurate weather prediction but where the errors of ambiguity removal are often large.

  5. Visiting the Gödel universe.

    PubMed

    Grave, Frank; Buser, Michael

    2008-01-01

    Visualization of general relativity illustrates aspects of Einstein's insights into the curved nature of space and time to the expert as well as the layperson. One of the most interesting models which came up with Einstein's theory was developed by Kurt Gödel in 1949. The Gödel universe is a valid solution of Einstein's field equations, making it a possible physical description of our universe. It offers remarkable features like the existence of an optical horizon beyond which time travel is possible. Although we know that our universe is not a Gödel universe, it is interesting to visualize physical aspects of a world model resulting from a theory which is highly confirmed in scientific history. Standard techniques to adopt an egocentric point of view in a relativistic world model have shortcomings with respect to the time needed to render an image as well as difficulties in applying a direct illumination model. In this paper we want to face both issues to reduce the gap between common visualization standards and relativistic visualization. We will introduce two techniques to speed up recalculation of images by means of preprocessing and lookup tables and to increase image quality through a special optimization applicable to the Gödel universe. The first technique allows the physicist to understand the different effects of general relativity faster and better by generating images from existing datasets interactively. By using the intrinsic symmetries of Gödel's spacetime which are expressed by the Killing vector field, we are able to reduce the necessary calculations to simple cases using the second technique. This even makes it feasible to account for a direct illumination model during the rendering process. Although the presented methods are applied to Gödel's universe, they can also be extended to other manifolds, for example light propagation in moving dielectric media. Therefore, other areas of research can benefit from these generic improvements.

  6. Physiotherapists use a small number of behaviour change techniques when promoting physical activity: A systematic review comparing experimental and observational studies.

    PubMed

    Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E

    2018-06-01

Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered to adults in clinic-based private practice and outpatient settings to individuals with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 behaviour change techniques in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than did inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in observational studies than in experimental studies, suggesting physiotherapists use fewer BCTs clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  7. Optical Probe of the Density of Defect States in Organic Thin-Film Transistors

    NASA Astrophysics Data System (ADS)

    Breban, Mihaela; Romero, Danilo; Ballarotto, Vincent; Williams, Ellen

    2006-03-01

    We investigate the role of defect states associated with different gate dielectric materials on charge transport in organic thin film transistors. Using a modulation technique we measure the magnitude and the phase of the photocurrent^1 in pentacene thin film transistors as a function of the modulation frequency. The photocurrent generation process is modeled as exciton dissociation due to interaction with localized traps. A time domain analysis of this multi-step process allows us to extract the density of defect states. We use this technique to compare the physical mechanisms underlying the performance of pentacene devices fabricated with different dielectric materials. *Supported by the Laboratory for Physical Sciences ^1 M. Breban, et al. ``Photocurrent probe of field-dependent mobility in organic thin-film transistors'' Appl. Phys. Lett. 87, 203503 (2005)

  8. [A new method of fabricating photoelastic model by rapid prototyping].

    PubMed

    Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei

    2011-10-01

    To explore a novel method of fabricating the photoelastic model using the rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction, then the photoelastic model with teeth was fabricated by traditional impression duplicating and mould casting. The photoelastic model of mandible with teeth, which was fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model had high optical sensitivity and met the experimental requirements. The photoelastic model of mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.

  9. Simultaneous computation of jet turbulence and noise

    NASA Technical Reports Server (NTRS)

    Berman, C. H.; Ramos, J. I.

    1989-01-01

    The existing flow computation methods, wave computation techniques, and theories based on noise source models are reviewed in order to assess the capabilities of numerical techniques to compute jet turbulence noise and understand the physical mechanisms governing it over a range of subsonic and supersonic nozzle exit conditions. In particular, attention is given to (1) methods for extrapolating near field information, obtained from flow computations, to the acoustic far field and (2) the numerical solution of the time-dependent Lilley equation.

  10. Study of the structure of turbulent shear flows at supersonic speeds and high Reynolds number

    NASA Technical Reports Server (NTRS)

    Smits, A. J.; Bogdonoff, S. M.

    1984-01-01

    A major effort to improve the accuracies of turbulence measurement techniques is described including the development and testing of constant temperature hot-wire anemometers which automatically compensate for frequency responses. Calibration and data acquisition techniques for normal and inclined wires operated in the constant temperature mode, flow geometries, and physical models to explain the observed behavior of flows are discussed, as well as cooperation with computational groups in the calculation of compression corner flows.

  11. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  12. Microseismic techniques for avoiding induced seismicity during fluid injection

    DOE PAGES

    Matzel, Eric; White, Joshua; Templeton, Dennise; ...

    2014-01-01

    The goal of this research is to develop a fundamentally better approach to geological site characterization and early hazard detection. We combine innovative techniques for analyzing microseismic data with a physics-based inversion model to forecast microseismic cloud evolution. The key challenge is that faults at risk of slipping are often too small to detect during the site characterization phase. Our objective is to devise fast-running methodologies that will allow field operators to respond quickly to changing subsurface conditions.

  13. Nonlinear techniques for forecasting solar activity directly from its time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.; Cooley, J.

    1992-01-01

    Numerical techniques for constructing nonlinear predictive models to forecast solar flux directly from its time series are presented. This approach makes it possible to extract dynamical invariants of our system without reference to any underlying solar physics. We consider the dynamical evolution of solar activity in a reconstructed phase space that captures the (strange) attractor, give a procedure for constructing a predictor of future solar activity, and discuss extraction of dynamical invariants such as Lyapunov exponents and attractor dimension.
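The phase-space reconstruction this abstract refers to is conventionally realized via time-delay embedding. The Python sketch below illustrates the construction on a toy series; the embedding dimension and lag are chosen purely for illustration and are not the authors' values:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a scalar time series x using
    time-delay embedding with dimension `dim` and lag `tau`."""
    n = len(x) - (dim - 1) * tau
    # Row i is the reconstructed state [x_i, x_{i+tau}, ..., x_{i+(dim-1)tau}]
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# Toy example: a sampled sine wave stands in for the solar-flux series
t = np.linspace(0, 20 * np.pi, 2000)
states = delay_embed(np.sin(t), dim=3, tau=25)
print(states.shape)  # (1950, 3)
```

Dynamical invariants such as the attractor dimension or the Lyapunov exponents are then estimated from neighbor statistics in this reconstructed space.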

  14. Nonlinear techniques for forecasting solar activity directly from its time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.; Cooley, J.

    1993-01-01

    This paper presents numerical techniques for constructing nonlinear predictive models to forecast solar flux directly from its time series. This approach makes it possible to extract dynamical invariants of our system without reference to any underlying solar physics. We consider the dynamical evolution of solar activity in a reconstructed phase space that captures the (strange) attractor, give a procedure for constructing a predictor of future solar activity, and discuss extraction of dynamical invariants such as Lyapunov exponents and attractor dimension.

  15. Neutral atom traps of rare isotopes

    NASA Astrophysics Data System (ADS)

    Mueller, Peter

    2016-09-01

    Laser cooling and trapping techniques offer exquisite control of an atom's external and internal degrees of freedom. The species of interest can be selectively captured, cooled to temperatures close to absolute zero, and observed with high signal-to-noise ratio. Moreover, the atom's electronic and magnetic state populations can be precisely manipulated and interrogated. Applied in nuclear physics, these techniques are ideal for precision measurements in the fields of fundamental interactions and symmetries, nuclear structure studies, and isotopic trace analysis. In particular, they offer unique opportunities in the quest for physics beyond the standard model. I will briefly review the basics of this approach and the state of the field and then cover in more detail recent results from two such efforts: the search for a permanent electric dipole moment in 225Ra and the beta-neutrino angular correlation measurement with laser trapped 6He. This work is supported by the U.S. DOE, Office of Science, Office of Nuclear Physics, under Contract DE-AC02-06CH11357.

  16. Exploring the psychological processes underlying touch: lessons from the Alexander Technique.

    PubMed

    Jones, T; Glover, L

    2014-01-01

    The experience of touch is significant; both in its positive implications and in how it attracts caution and controversy. Accordingly, physical contact within psychological therapy has been shown to improve well-being and the therapeutic relationship, yet the majority of therapists never or rarely use touch. This research aimed to explore psychological processes underlying touch through the Alexander Technique, a psycho-physical technique taught one to one using touch. Six individuals who had received the Alexander Technique were interviewed, and 111 completed surveys. Interview data suggested an incompatibility between touch and the spoken word, which was understood through the way touch lacks verbal discourses in our society. The largely simplistic and dichotomous verbal understanding we have (either only very positive or very negative) could help understand some of the societal-level caution surrounding touch. Touch was seen also as a nurturing experience by interviewees, which influenced inter-personal and intra-personal relational processes. Developmental models were used to frame the way touch strengthened the pupil-teacher relationship and the way pupils' intra-personal psychological change seemed linked to this relational experience. The surveys largely supported these findings, and discussion is made around the notable way pupils negatively interpreted the intention of the survey. Implications for the use of touch in psychological therapies are discussed, as are limitations and ideas for future research. Touch is a powerful experience, and physical contact within psychological therapy has been shown to improve well-being and the therapeutic relationship, yet the majority of therapists never or rarely use touch. The AT is an alternative therapeutic approach to psycho-physical well-being that offers an interesting model to study the impact of touch. 
Findings from those who have used the technique reaffirmed that touch can improve well-being and can be a powerful force in the 'therapeutic relationship'. Accounts drew strong parallels with developmental experiences, which may be of particular interest to those working psychodynamically. Findings also highlighted the lack of discourses our culture has for touch and how the ones we share can be super-imposed onto experiences. This should be kept in mind when discussing all types of physical contact with clients. Outcomes from AT pupils cannot be generalized to those seeking psychological support; however, the findings accentuated the power of holistic working. This is important as we begin to understand more about how emotions are held in the body. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
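The core idea of iterative reconstruction can be illustrated, far more simply than the authors' optimization-transfer framework, by a plain Landweber/SIRT-style update on a toy linear system; the matrix sizes, step size, and iteration count below are illustrative only:

```python
import numpy as np

# Toy system: 40 "ray" measurements of a 16-voxel object
rng = np.random.default_rng(2)
A = rng.uniform(size=(40, 16))          # system (projection) matrix
x_true = rng.uniform(size=16)
b = A @ x_true                          # noise-free projections

x = np.zeros(16)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step below 1/sigma_max^2 ensures convergence
for _ in range(50000):
    x += step * A.T @ (b - A @ x)       # gradient step on ||Ax - b||^2 / 2

print(np.max(np.abs(x - x_true)))       # residual shrinks toward zero
```

Real DBT reconstruction adds the cone-beam geometry, a statistical noise model, and a regularizing prior, but the fixed-point structure of the iteration is the same.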

  18. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  19. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  20. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel, simplified model that enables calculation of the voltage-dependent retardance provided by parallel aligned liquid crystal devices (PA-LCoS) for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box for which we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.
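The effect of a sine-like director tilt on retardance can be made concrete with a small numerical sketch: integrating the effective extraordinary index across the cell thickness gives the retardance at a given tilt amplitude. The thickness, wavelength, and refractive indices below are generic illustrative values, not the parameters of the device studied here:

```python
import numpy as np

def retardance_deg(theta_max_deg, d_um=3.0, wl_um=0.633,
                   n_o=1.5, n_e=1.7, nz=2001):
    """Retardance (degrees) of a parallel-aligned LC layer whose director
    tilt follows a sine-like profile theta(z) = theta_max * sin(pi z / d)."""
    z = np.linspace(0.0, d_um, nz)
    theta = np.deg2rad(theta_max_deg) * np.sin(np.pi * z / d_um)
    # Effective extraordinary index seen by light at normal incidence
    n_eff = n_o * n_e / np.sqrt((n_e * np.sin(theta)) ** 2
                                + (n_o * np.cos(theta)) ** 2)
    f = n_eff - n_o                                # local birefringence contribution
    dz = z[1] - z[0]
    opd = (f.sum() - 0.5 * (f[0] + f[-1])) * dz    # trapezoidal integration
    return 360.0 * opd / wl_um

print(retardance_deg(0.0))   # untilted director: maximum retardance (~341 deg here)
print(retardance_deg(80.0))  # strong tilt (high voltage): retardance drops
```

Fitting a simplified two-parameter model to retardance curves generated this way is the reverse-engineering exercise the abstract describes.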

  1. Operational Space Weather Models: Trials, Tribulations and Rewards

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Thompson, D. C.; Zhu, L.

    2009-12-01

    There are many empirical, physics-based, and data assimilation models that can probably be used for space weather applications and the models cover the entire domain from the surface of the Sun to the Earth’s surface. At Utah State University we developed two physics-based data assimilation models of the terrestrial ionosphere as part of a program called Global Assimilation of Ionospheric Measurements (GAIM). One of the data assimilation models is now in operational use at the Air Force Weather Agency (AFWA) in Omaha, Nebraska. This model is a Gauss-Markov Kalman Filter (GAIM-GM) model, and it uses a physics-based model of the ionosphere and a Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) measurements. The physics-based model is the Ionosphere Forecast Model (IFM), which is global and covers the E-region, F-region, and topside ionosphere from 90 to 1400 km. It takes account of five ion species (NO+, O2+, N2+, O+, H+), but the main output of the model is a 3-dimensional electron density distribution at user specified times. The second data assimilation model uses a physics-based Ionosphere-Plasmasphere Model (IPM) and an ensemble Kalman filter technique as a basis for assimilating a diverse set of real-time (or near real-time) measurements. This Full Physics model (GAIM-FP) is global, covers the altitude range from 90 to 30,000 km, includes six ions (NO+, O2+, N2+, O+, H+, He+), and calculates the self-consistent ionospheric drivers (electric fields and neutral winds). The GAIM-FP model is scheduled for delivery in 2012. Both of these GAIM models assimilate bottom-side Ne profiles from a variable number of ionosondes, slant TEC from a variable number of ground GPS/TEC stations, in situ Ne from four DMSP satellites, line-of-sight UV emissions measured by satellites, and occultation data. 
Quality control algorithms for all of the data types are provided as an integral part of the GAIM models and these models take account of latent data (up to 3 hours). The trials, tribulations and rewards of constructing and maintaining operational data assimilation models will be discussed.
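The assimilation step at the heart of both GAIM models can be illustrated with a generic linear Kalman measurement update; the two-state toy example below is purely schematic and is not the GAIM implementation:

```python
import numpy as np

def kalman_update(x_prior, P_prior, z, H, R):
    """One Kalman measurement update: blend a model forecast x_prior
    (covariance P_prior) with observations z, given observation
    operator H and observation-noise covariance R."""
    S = H @ P_prior @ H.T + R              # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Toy assimilation: a 2-state forecast corrected by one direct observation
x, P = np.array([1.0, 2.0]), np.eye(2)
x_new, P_new = kalman_update(x, P, z=np.array([1.5]),
                             H=np.array([[1.0, 0.0]]), R=np.array([[1.0]]))
print(x_new)  # first state nudged halfway toward the observation: [1.25, 2.0]
```

In GAIM-GM the state is the gridded electron density and H maps it to ionosonde profiles, slant TEC, and in situ densities; the ensemble variant in GAIM-FP replaces the prior covariance with a sample covariance drawn from an ensemble of model runs.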

  2. Spindle speed variation technique in turning operations: Modeling and real implementation

    NASA Astrophysics Data System (ADS)

    Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.

    2016-11-01

    Chatter is still one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among them, techniques based on varying the rotational speed of the spindle (SSV, Spindle Speed Variation) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, they can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper conducts a study of the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Secondly, the procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between the improved stability margins and acceptable behavior of the spindle is ensured by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
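A sinusoidal SSV command, the most common variant of the technique, is straightforward to generate; the modulation parameters below (amplitude ratio RVA and frequency ratio RVF) are typical illustrative values, not the tuning chosen in this paper:

```python
import numpy as np

def ssv_profile(t, n0, rva, rvf):
    """Sinusoidal spindle-speed variation around nominal speed n0 (rev/min):
    amplitude ratio rva, modulation frequency rvf * n0 / 60 (Hz)."""
    return n0 * (1.0 + rva * np.sin(2.0 * np.pi * rvf * (n0 / 60.0) * t))

t = np.linspace(0.0, 2.0, 1000)          # two seconds of machining
speed = ssv_profile(t, n0=2000.0, rva=0.2, rvf=0.03)
print(speed.min(), speed.max())          # oscillates between ~1600 and ~2400 rpm
```

The trade-off the paper studies is visible even here: larger rva disrupts chatter regeneration more effectively, but also forces larger spindle accelerations and hence higher energy consumption and heating.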

  3. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insight into the main process dynamics.
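The contrast between dense and sparse bases is easy to reproduce with scikit-learn, whose SparsePCA implements an L1-penalized variant of PCA. The study itself used snapshots from DYRESM-CAEDYM; the synthetic data below merely mimics the situation where only a few state variables carry the dominant signal:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Synthetic "snapshots": 200 samples of 10 state variables, with the
# dominant signal confined to the first three variables
rng = np.random.default_rng(0)
signal = rng.normal(size=(200, 1))
X = 0.01 * rng.normal(size=(200, 10))
X[:, :3] += signal

dense = PCA(n_components=1).fit(X)
sparse = SparsePCA(n_components=1, alpha=1.0, random_state=0).fit(X)

# The L1 penalty zeroes the loadings of the irrelevant variables, so each
# sparse component reads as a small, interpretable subset of the state
print(np.count_nonzero(dense.components_), np.count_nonzero(sparse.components_))
```

The dense PCA basis assigns a (small but non-zero) loading to every variable, whereas the sparse basis names only the variables that matter, which is exactly the interpretability argument the abstract makes.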

  4. Graphene growth process modeling: a physical-statistical approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among the various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires an understanding of growth mechanisms, along with methods for characterization and control of the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
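A confined exponential growth law of the kind described, here assumed in the common saturating form A(t) = A_max(1 - e^(-kt)), can be fitted directly to island-coverage data; the data and parameter values below are synthetic, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def confined_exponential(t, a_max, k):
    """Island-area growth that saturates at a_max with rate constant k."""
    return a_max * (1.0 - np.exp(-k * t))

# Synthetic coverage measurements with true parameters a_max=1.0, k=0.3
t = np.linspace(0.0, 20.0, 50)
rng = np.random.default_rng(1)
area = confined_exponential(t, 1.0, 0.3) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(confined_exponential, t, area, p0=(0.5, 0.1))
print(popt)  # recovered (a_max, k), close to (1.0, 0.3)
```

Separating this area-growth fit from the angular shape model is the decomposition the authors use to make the kinetics identifiable.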

  5. A Stress Management Classroom Tool for Teachers of Children with BD.

    ERIC Educational Resources Information Center

    Jackson, James T.; Owens, James L.

    1999-01-01

    This article discusses how stress may affect the lives of children with behavior disorders, provides educators with a model for introducing stress management techniques, and closes with strategies for managing stress in the classroom, including listening to relaxing music, manipulating the environment, and providing a morning physical education…

  6. Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development

    ERIC Educational Resources Information Center

    Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.

    2010-01-01

    This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…

  7. Differentiation in Outcome-Focused Physical Education: Pedagogical Rhetoric and Reality

    ERIC Educational Resources Information Center

    Whipp, Peter; Taggart, Andrew; Jackson, Ben

    2014-01-01

    Background: This study was grounded in the differentiated instructional model where teachers tailor content, process/support, and product in response to their students' levels of readiness and interest. The value of differentiated teaching is well established; however, the implementation of such a technique is difficult due to differences in…

  8. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  9. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  10. Implementing a Tactical Approach through Action Research

    ERIC Educational Resources Information Center

    Gubacs-Collins, Klara

    2007-01-01

    Background: Influenced by the original observations of Bunker and Thorpe, physical education theorists began to question the effectiveness of a traditional model for teaching games and have increasingly begun to believe that concentrating only on specific motor responses (techniques) fails to take into account the contextual nature of games. Games…

  11. Trends in analytical techniques applied to particulate matter characterization: A critical review of fundaments and applications.

    PubMed

    Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael

    2018-05-01

    Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. There are several works using different analytical techniques in PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies to assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic spectrometry based and X-ray based techniques, organic and carbonaceous techniques and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particles' physical properties, sampling and measuring time, access to available facilities and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Physical and virtual laboratories in science and engineering education.

    PubMed

    de Jong, Ton; Linn, Marcia C; Zacharia, Zacharias C

    2013-04-19

    The world needs young people who are skillful in and enthusiastic about science and who view science as their future career field. Ensuring that we will have such young people requires initiatives that engage students in interesting and motivating science experiences. Today, students can investigate scientific phenomena using the tools, data collection techniques, models, and theories of science in physical laboratories that support interactions with the material world or in virtual laboratories that take advantage of simulations. Here, we review a selection of the literature to contrast the value of physical and virtual investigations and to offer recommendations for combining the two to strengthen science learning.

  13. Anisotropic physical properties of myocardium characterized by ultrasonic measurements of backscatter, attenuation, and velocity

    NASA Astrophysics Data System (ADS)

    Baldwin, Steven L.

    The goal of elucidating the physical mechanisms underlying the propagation of ultrasonic waves in anisotropic soft tissue such as myocardium has posed an interesting and largely unsolved problem in the field of physics for the past 30 years. In part because of the vast complexity of the system being studied, progress towards understanding and modeling the mechanisms that underlie observed acoustic parameters may first require the guidance of careful experiment. Knowledge of the causes of observed ultrasonic properties in soft tissue including attenuation, speed of sound, and backscatter, and how those properties are altered with specific pathophysiologies, may lead to new noninvasive approaches to the diagnosis of disease. The primary aim of this Dissertation is to contribute to an understanding of the physics that underlies the mechanisms responsible for the observed interaction of ultrasound with myocardium. To this end, through-transmission and backscatter measurements were performed by varying acoustic properties as a function of angle of insonification relative to the predominant myofiber direction and by altering the material properties of myocardium by increased protein cross-linking induced by chemical fixation as an extreme form of changes that may occur in certain pathologies such as diabetes. Techniques to estimate acoustic parameters from backscatter were broadened and challenges to implementing these techniques in vivo were addressed. Provided that specific challenges identified in this Dissertation can be overcome, techniques to estimate attenuation from ultrasonic backscatter show promise as a means to investigate the physical interaction of ultrasound with anisotropic biological media in vivo. This Dissertation represents a step towards understanding the physics of the interaction of ultrasonic waves with anisotropic biological media.

  14. An advanced technique for the prediction of decelerator system dynamics.

    NASA Technical Reports Server (NTRS)

    Talay, T. A.; Morris, W. D.; Whitlock, C. H.

    1973-01-01

    An advanced two-body six-degree-of-freedom computer model employing an indeterminate structures approach has been developed for the parachute deployment process. The program determines both vehicular and decelerator responses to aerodynamic and physical property inputs. A better insight into the dynamic processes that occur during parachute deployment has been developed. The model is of value in sensitivity studies to isolate important parameters that affect the vehicular response.

  15. Modelling urban rainfall-runoff responses using an experimental, two-tiered physical modelling environment

    NASA Astrophysics Data System (ADS)

    Green, Daniel; Pattison, Ian; Yu, Dapeng

    2016-04-01

Surface water (pluvial) flooding occurs when rainwater from intense precipitation events is unable to infiltrate into the subsurface or drain via natural or artificial drainage channels. Surface water flooding poses a serious hazard to urban areas across the world, with the perceived risk in the UK appearing to have increased in recent years as surface water flood events seem to have become more severe and more frequent. Surface water flood risk currently accounts for 1/3 of all UK flood risk, with approximately two million people living in urban areas at risk of a 1 in 200-year flood event. Research often focuses upon using numerical modelling techniques to understand the extent, depth and severity of actual or hypothetical flood scenarios. Although much research has been conducted using numerical modelling, field data available for model calibration and validation are limited due to the complexities associated with data collection in surface water flood conditions. Ultimately, the data upon which numerical models are based are often erroneous and inconclusive. Physical models offer a novel and innovative environment in which to collect data, creating a controlled, closed system where independent variables can be altered one at a time to investigate cause-and-effect relationships. A physical modelling environment provides a suitable platform to investigate rainfall-runoff processes occurring within an urban catchment. Despite this, physical modelling approaches are seldom used in surface water flooding research. 
Scaled laboratory experiments using a 9 m², two-tiered 1:100 physical model consisting of: (i) a low-cost rainfall simulator component able to simulate consistent, uniformly distributed (>75% CUC) rainfall events of varying intensity, and (ii) a fully interchangeable, modular plot surface have been conducted to investigate and quantify the influence of a number of terrestrial and meteorological factors on overland flow and rainfall-runoff patterns within a modelled urban setting. Terrestrial factors investigated include altering the physical model's catchment slope (0°-20°), as well as simulating a number of spatially-varied impermeability and building density/configuration scenarios. Additionally, the influence of different storm dynamics and intensities were investigated. Preliminary results demonstrate that rainfall-runoff responses in the physical modelling environment are highly sensitive to slight increases in catchment gradient and rainfall intensity and that more densely distributed building layouts significantly increase peak flows recorded at the physical model outflow when compared to sparsely distributed building layouts under comparable simulated rainfall conditions.

  16. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    The satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
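
The two-phase FP-ILM structure can be sketched as follows; the forward model, sampling, and learner below are simplified stand-ins (a toy nonlinear function and a nearest-neighbour lookup) for the real radiative transfer model, smart sampling, and machine-learned inversion operator:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(x):
    # toy stand-in for a radiative transfer model: three synthetic
    # "radiances" as nonlinear functions of a single state variable
    return np.column_stack([np.exp(-x), x * np.exp(-0.5 * x), np.sin(x)])

# training phase: sample states (e.g. plume heights), run the forward
# model once per sample, and store the (measurement, state) pairs
x_train = rng.uniform(0.1, 3.0, 500)
y_train = forward_model(x_train)

# operational phase: invert a new measurement WITHOUT iterative calls to
# the forward model -- here a nearest-neighbour lookup in radiance space
x_true = 1.7
y_meas = forward_model(np.array([x_true]))
dists = np.linalg.norm(y_train - y_meas, axis=1)
x_retrieved = x_train[np.argmin(dists)]
```

The speedup in the real algorithm comes from the same asymmetry: all expensive forward-model evaluations happen offline, while the operational inversion is a cheap function application.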

  17. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100×-1000×) and even faster on GPU (up to ~10^5×). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.

  18. Surface Tension and Viscosity of SCN and SCN-acetone Alloys at Melting Points and Higher Temperatures Using Surface Light Scattering Spectrometer

    NASA Technical Reports Server (NTRS)

    Tin, Padetha; deGroh, Henry C., III.

    2003-01-01

Succinonitrile (SCN) has been used extensively in NASA's Microgravity Materials Science and Fluid Physics programs as well as in several ground-based and microgravity studies, including the Isothermal Dendritic Growth Experiment (IDGE). SCN is useful as a model for the study of metal solidification: although it is an organic material, it has a BCC crystal structure and solidifies dendritically like a metal. It is also transparent and has a low melting point (58.08 C). Previous measurements of the surface tension of SCN and SCN-acetone alloys are extremely limited. Using the surface light scattering technique, we have determined non-invasively the surface tension and viscosity of SCN and SCN-acetone alloys at different temperatures. This relatively new and unique technique has several advantages over classical methods: it is non-invasive, has good accuracy, and measures surface tension and viscosity simultaneously. The accuracy of the interfacial energy values obtained with this technique is better than 2%, and of the viscosity about 10%. Succinonitrile and succinonitrile-acetone alloys are well-established model materials with several essential physical properties accurately known, except the liquid/vapor surface tension at elevated temperatures. We present the experimentally determined liquid/vapor surface energy and liquid viscosity of succinonitrile and succinonitrile-acetone alloys in the temperature range from their melting points to around 100 C using this non-invasive technique. We also discuss the measurement technique and new developments of the surface light scattering spectrometer.

  19. Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning

    PubMed Central

    Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.

    2011-01-01

    We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788
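
A heavily simplified sketch of the CFD-ABM coupling: a precomputed Gaussian field stands in for the CFD plume, and agents walk greedily down the concentration gradient. All names, parameters, and dynamics here are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 41
xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
source = (20, 20)
# "CFD" output: a static Gaussian concentration field around the release point
plume = np.exp(-((xs - source[0])**2 + (ys - source[1])**2) / 50.0)

agents = rng.integers(5, 36, size=(30, 2))   # initial agent positions
exposure = np.zeros(len(agents))             # cumulative dose per agent

for _ in range(40):                          # ABM loop
    for i, (x, y) in enumerate(agents):
        exposure[i] += plume[x, y]
        # evaluate the 4-neighbourhood and move to the lowest concentration
        best = (x, y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and plume[nx, ny] < plume[best]:
                best = (nx, ny)
        agents[i] = best

final_conc = plume[agents[:, 0], agents[:, 1]]
```

In a real hybrid, the plume field would be time-dependent CFD output and the agents would carry behavioral rules (panic, routing, information spread) rather than pure gradient descent.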

  20. Creating physically-based three-dimensional microstructures: Bridging phase-field and crystal plasticity models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Owen, Steven J.; Abdeljawad, Fadi F.

In order to better incorporate microstructures in continuum scale models, we use a novel finite element (FE) meshing technique to generate three-dimensional polycrystalline aggregates from a phase field grain growth model of grain microstructures. The proposed meshing technique creates hexahedral FE meshes that capture smooth interfaces between adjacent grains. Three-dimensional realizations of grain microstructures from the phase field model are used in crystal plasticity-finite element (CP-FE) simulations of polycrystalline α-iron. We show that the interface-conformal meshes significantly reduce artificial stress localizations in voxelated meshes that exhibit the so-called "wedding cake" interfaces. This framework provides a direct link between two mesoscale models, phase field and crystal plasticity, and for the first time allows mechanics simulations of polycrystalline materials using three-dimensional hexahedral finite element meshes with realistic topological features.

  1. Modeling of endoluminal and interstitial ultrasound hyperthermia and thermal ablation: applications to device design, feedback control, and treatment planning

    PubMed Central

    Prakash, Punit; Salgaonkar, Vasant A.; Diederich, Chris J.

    2014-01-01

Endoluminal and catheter-based ultrasound applicators are currently under development and are in clinical use for minimally invasive hyperthermia and thermal ablation of various tissue targets. Computational models play a critical role in device design and optimization, assessment of therapeutic feasibility and safety, devising treatment monitoring and feedback control strategies, and performing patient-specific treatment planning with this technology. The critical aspects of theoretical modeling, applied specifically to endoluminal and interstitial ultrasound thermotherapy, are reviewed. Principles and practical techniques for modeling acoustic energy deposition, bioheat transfer, thermal tissue damage, and dynamic changes in the physical and physiological state of tissue are reviewed. The integration of these models and applications of simulation techniques in identification of device design parameters, development of real time feedback-control platforms, assessing the quality and safety of treatment delivery strategies, and optimization of inverse treatment plans are presented. PMID:23738697
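
The bioheat-transfer component mentioned above can be illustrated with a minimal 1-D explicit finite-difference solution of the Pennes equation, rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + Q; the parameter values and the heating term below are illustrative assumptions, not device data:

```python
import numpy as np

rho, c = 1050.0, 3600.0     # tissue density (kg/m^3), specific heat (J/kg/K)
k = 0.5                     # thermal conductivity (W/m/K)
w_b, c_b = 2.0, 3600.0      # blood perfusion (kg/m^3/s), blood specific heat
T_a = 37.0                  # arterial/baseline temperature (degC)

nx, dx = 101, 1e-3          # 10 cm domain, 1 mm grid
dt = 0.1                    # s; satisfies dt < rho*c*dx^2/(2k) for stability
x = np.arange(nx) * dx
# assumed ultrasound heat deposition, peaked 2 cm from the applicator
Q = 5e5 * np.exp(-((x - 0.02) / 0.005)**2)

T = np.full(nx, T_a)
for _ in range(600):        # 60 s of heating
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (k*lap + w_b*c_b*(T_a - T) + Q) / (rho*c)
    T[0] = T[-1] = T_a      # fixed-temperature boundaries

peak_rise = T.max() - T_a
```

The perfusion term w_b*c_b*(T_a - T) is what distinguishes Pennes' model from plain heat conduction: it acts as a distributed heat sink that limits the steady-state temperature rise.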

  2. [Beginning of the institutionalization of physical therapy in a Swiss canton: 1928-1945].

    PubMed

    Hasler, Véronique

    2013-01-01

The institutionalization of physical therapy in Switzerland took place in the inter-war period. This article aims to trace the beginning of this process in the Canton of Vaud, as a specific example that is nevertheless compared with the Swiss and international contexts. The story unfolds around three major events between 1928 and 1945: massage became a regulated profession, followed by the emergence of a professional association and of a specialized school. The intention is first to identify the social actors, then the interests, issues, and interactions that contributed to shaping modern physical therapy. Finally, the techniques used by the masseurs, the first professional physical therapists, and their working environment are described.

  3. Rad4 recognition-at-a-distance: Physical basis of conformation-specific anomalous diffusion of DNA repair proteins.

    PubMed

    Kong, Muwen; Van Houten, Bennett

    2017-08-01

    Since Robert Brown's first observations of random walks by pollen particles suspended in solution, the concept of diffusion has been subject to countless theoretical and experimental studies in diverse fields from finance and social sciences, to physics and biology. Diffusive transport of macromolecules in cells is intimately linked to essential cellular functions including nutrient uptake, signal transduction, gene expression, as well as DNA replication and repair. Advancement in experimental techniques has allowed precise measurements of these diffusion processes. Mathematical and physical descriptions and computer simulations have been applied to model complicated biological systems in which anomalous diffusion, in addition to simple Brownian motion, was observed. The purpose of this review is to provide an overview of the major physical models of anomalous diffusion and corresponding experimental evidence on the target search problem faced by DNA-binding proteins, with an emphasis on DNA repair proteins and the role of anomalous diffusion in DNA target recognition. Copyright © 2016 Elsevier Ltd. All rights reserved.
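
The distinction between simple Brownian motion and anomalous (sub)diffusion discussed above can be made concrete by comparing mean-squared-displacement (MSD) scaling exponents in a toy simulation; this is illustrative, not taken from the review. Brownian motion gives MSD ~ t (exponent 1), while a continuous-time random walk (CTRW) with heavy-tailed waiting times gives MSD ~ t^alpha with alpha < 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n_walkers, n_steps = 2000, 1000
t = np.arange(1, n_steps + 1)

# Brownian motion: one unit Gaussian step per tick
brown = np.cumsum(rng.standard_normal((n_walkers, n_steps)), axis=1)
msd_brown = (brown**2).mean(axis=0)

def msd_exponent(msd):
    # fit MSD ~ t^alpha on the later half of the trajectory (log-log slope)
    lo = len(msd) // 2
    slope, _ = np.polyfit(np.log(t[lo:]), np.log(msd[lo:]), 1)
    return slope

alpha_brown = msd_exponent(msd_brown)

# CTRW: jumps separated by Pareto waiting times with tail exponent 0.5,
# whose infinite mean produces subdiffusion with alpha close to 0.5
alpha_w = 0.5
pos = np.zeros(n_walkers)
clock = 1 + rng.pareto(alpha_w, n_walkers)   # time of each walker's next jump
sub = np.zeros((n_walkers, n_steps))
for step in range(1, n_steps + 1):
    due = clock <= step
    n_due = int(due.sum())
    pos[due] += rng.standard_normal(n_due)
    clock[due] = step + 1 + rng.pareto(alpha_w, n_due)
    sub[:, step - 1] = pos
msd_sub = (sub**2).mean(axis=0)
alpha_sub = msd_exponent(msd_sub)
```

The two exponents separate cleanly even in this small ensemble, which is the signature experimentalists look for in single-particle tracking data.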

  4. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. When used for regression fitting, NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. 
A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
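
The role of the NN Jacobian can be illustrated on a toy one-hidden-layer network, where it has a closed form that can be checked against finite differences; the random weights below stand in for a trained retrieval network:

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hid, n_out = 3, 5, 2
W1 = rng.standard_normal((n_hid, n_in))
b1 = rng.standard_normal(n_hid)
W2 = rng.standard_normal((n_out, n_hid))
b2 = rng.standard_normal(n_out)

def net(x):
    # y = W2 @ tanh(W1 @ x + b1) + b2
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    # analytic dy/dx = W2 @ diag(1 - h^2) @ W1, with h = tanh(W1 @ x + b1)
    h = np.tanh(W1 @ x + b1)
    return W2 @ np.diag(1 - h**2) @ W1

x0 = rng.standard_normal(n_in)
J = jacobian(x0)

# central finite-difference check, one input direction at a time
eps = 1e-6
J_fd = np.column_stack([(net(x0 + eps*e) - net(x0 - eps*e)) / (2*eps)
                        for e in np.eye(n_in)])
```

Each column of J is the sensitivity of both outputs to one input at x0; sampling x0 over the input distribution (as in the article's Monte Carlo procedure) reveals how stable those sensitivities are.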

  5. Novel schemes for measurement-based quantum computation.

    PubMed

    Gross, D; Eisert, J

    2007-06-01

We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
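
The elementary measurement-based step that cluster-state schemes build on can be verified numerically; this sketch covers only the standard one-qubit teleportation primitive (entangle with |+> via CZ, measure in the X basis), not the Letter's new resource states:

```python
import numpy as np

rng = np.random.default_rng(5)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)                  # arbitrary input qubit state

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])

# entangle the input (qubit 1) with a fresh |+> (qubit 2)
state = CZ @ np.kron(psi, plus)

# measuring qubit 1 in the X basis with outcome s leaves qubit 2 in
# X^s H |psi>: the gate H is applied, up to a known Pauli correction
overlaps = []
for s, basis in enumerate((plus, minus)):
    out = np.kron(basis.conj(), np.eye(2)) @ state   # project qubit 1
    out /= np.linalg.norm(out)
    expected = np.linalg.matrix_power(X, s) @ (H @ psi)
    overlaps.append(abs(np.vdot(expected, out)))     # 1 up to global phase
```

Chaining such steps with adaptively chosen measurement bases is what makes the cluster state universal; the Letter's contribution is that very different resource states admit analogous (but differently compensated) schemes.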

  6. Direct modeling parameter signature analysis and failure mode prediction of physical systems using hybrid computer optimization

    NASA Technical Reports Server (NTRS)

    Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.

    1971-01-01

High-speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques which are applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed, successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go-no go testing of operational systems were demonstrated. The feasibility of using these ideas in failure mode prediction in operating systems was also investigated. An improved digitally controlled nonlinear function generator was developed, debugged, and completely documented.

  7. Collective cell migration: a physics perspective

    NASA Astrophysics Data System (ADS)

    Hakim, Vincent; Silberzan, Pascal

    2017-07-01

Cells have traditionally been viewed either as independently moving entities or as somewhat static parts of tissues. However, it is now clear that in many cases, multiple cells coordinate their motions and move as collective entities. Well-studied examples include developmental events, as well as physiological and pathological situations. Different ex vivo model systems have also been investigated. Several recent advances have taken place at the interface between biology and physics, and have benefitted from progress in imaging and microscopy, from the use of microfabrication techniques, as well as from the introduction of quantitative tools and models. We review these interesting developments in quantitative cell biology that also provide rich examples of collective out-of-equilibrium motion.

  8. Schrödinger Approach to Mean Field Games

    NASA Astrophysics Data System (ADS)

    Swiecicki, Igor; Gobron, Thierry; Ullmo, Denis

    2016-03-01

    Mean field games (MFG) provide a theoretical frame to model socioeconomic systems. In this Letter, we study a particular class of MFG that shows strong analogies with the nonlinear Schrödinger and Gross-Pitaevskii equations introduced in physics to describe a variety of physical phenomena. Using this bridge, many results and techniques developed along the years in the latter context can be transferred to the former, which provides both a new domain of application for the nonlinear Schrödinger equation and a new and fruitful approach in the study of mean field games. Utilizing this approach, we analyze in detail a population dynamics model in which the "players" are under a strong incentive to coordinate themselves.

  9. WE-D-303-01: Development and Application of Digital Human Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segars, P.

    2015-06-15

Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; Discuss the developments and applications of computational phantoms; Know the promises and limitations of computational phantoms in solving complex problems.

  10. Characterization and Higher-Order Structure Assessment of an Interchain Cysteine-Based ADC: Impact of Drug Loading and Distribution on the Mechanism of Aggregation.

    PubMed

    Guo, Jianxin; Kumar, Sandeep; Chipley, Mark; Marcq, Olivier; Gupta, Devansh; Jin, Zhaowei; Tomar, Dheeraj S; Swabowski, Cecily; Smith, Jacquelynn; Starkey, Jason A; Singh, Satish K

    2016-03-16

The impact of drug loading and distribution on the higher order structure and physical stability of an interchain cysteine-based antibody drug conjugate (ADC) has been studied. An IgG1 mAb was conjugated with a cytotoxic auristatin payload following the reduction of interchain disulfides. The 2-D LC-MS analysis shows that there is a preference for certain isomers within the various drug-to-antibody ratios (DARs). The physical stability of the unconjugated monoclonal antibody, the ADC, and isolated conjugated species with specific DAR were compared using calorimetric, thermal, chemical denaturation and molecular modeling techniques, as well as techniques to assess hydrophobicity. The DAR was determined to have a significant impact on the biophysical properties and stability of the ADC. The CH2 domain was significantly perturbed in the DAR6 species, which was attributable to quaternary structural changes as assessed by molecular modeling. At accelerated storage temperatures, the DAR6 rapidly forms higher molecular mass species, whereas the DAR2 and the unconjugated mAb were largely stable. The chemical denaturation study indicates that DAR6 may form multimers, while DAR2 and DAR4 primarily exist in monomeric forms in solution under ambient conditions. The physical state differences were correlated with a dramatic increase in the hydrophobicity and a reduction in the surface tension of the DAR6 compared to lower DAR species. Molecular modeling of the various DAR species and their conformers demonstrates that the auristatin-based linker payload directly contributes to the hydrophobicity of the ADC molecule. Higher order structural characterization provides insight into the impact of conjugation on the conformational and colloidal factors that determine the physical stability of cysteine-based ADCs, with implications for process and formulation development.

  11. Improving catchment scale water quality modelling with continuous high resolution monitoring of metals in runoff

    NASA Astrophysics Data System (ADS)

    Saari, Markus; Rossi, Pekka; Blomberg von der Geest, Kalle; Mäkinen, Ari; Postila, Heini; Marttila, Hannu

    2017-04-01

High metal concentrations in natural waters are among the key environmental and health problems globally. Continuous in-situ analysis of metals in runoff water is technically challenging but essential for a better understanding of the processes that lead to pollutant transport. Currently, typical analytical methods for monitoring elements in liquids are off-line laboratory methods such as ICP-OES (Inductively Coupled Plasma Optical Emission Spectroscopy) and ICP-MS (ICP combined with a mass spectrometer). A disadvantage of both techniques is the time-consuming sample collection, preparation, and off-line analysis under laboratory conditions; these techniques therefore offer no possibility of real-time monitoring of element transport. We combined novel high-resolution on-line metal concentration monitoring with catchment-scale physical hydrological modelling in the Mustijoki river in Southern Finland in order to study the dynamics of these processes and to form a predictive warning system for the leaching of metals. A novel on-line measurement technique based on micro plasma emission spectroscopy (MPES) is tested for on-line detection of selected elements (e.g. Na, Mg, Al, K, Ca, Fe, Ni, Cu, Cd and Pb) in runoff waters. The preliminary results indicate that MPES can adequately detect and monitor metal concentrations in river water. The Soil and Water Assessment Tool (SWAT) catchment-scale model was further calibrated with the high-resolution metal concentration data. We show that by combining high-resolution monitoring with catchment-scale physically based modelling, further process studies and the creation of early warning systems, for example for optimizing drinking water uptake from rivers, become achievable.
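
The calibration step can be illustrated with the Nash-Sutcliffe efficiency (NSE), a standard goodness-of-fit score for hydrological models; the concentration series below are synthetic stand-ins for the observed and simulated data:

```python
import numpy as np

def nse(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    # 1 = perfect fit; 0 = no better than predicting the observed mean
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

t = np.linspace(0, 10, 200)
observed = 5 + 3*np.exp(-((t - 4)/1.5)**2)        # a runoff "event" peak, mg/L
good_sim = 5 + 2.8*np.exp(-((t - 4.1)/1.5)**2)    # close in amplitude and timing
poor_sim = np.full_like(t, observed.mean())       # mean-only predictor

nse_good = nse(observed, good_sim)
nse_poor = nse(observed, poor_sim)
```

A calibration loop simply adjusts model parameters to maximize this score against the high-resolution observations.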

  12. A computational chemistry perspective on the current status and future direction of hepatitis B antiviral drug discovery.

    PubMed

    Morgnanesi, Dante; Heinrichs, Eric J; Mele, Anthony R; Wilkinson, Sean; Zhou, Suzanne; Kulp, John L

    2015-11-01

    Computational chemical biology, applied to research on hepatitis B virus (HBV), has two major branches: bioinformatics (statistical models) and first-principle methods (molecular physics). While bioinformatics focuses on statistical tools and biological databases, molecular physics uses mathematics and chemical theory to study the interactions of biomolecules. Three computational techniques most commonly used in HBV research are homology modeling, molecular docking, and molecular dynamics. Homology modeling is a computational simulation to predict protein structure and has been used to construct conformers of the viral polymerase (reverse transcriptase domain and RNase H domain) and the HBV X protein. Molecular docking is used to predict the most likely orientation of a ligand when it is bound to a protein, as well as determining an energy score of the docked conformation. Molecular dynamics is a simulation that analyzes biomolecule motions and determines conformation and stability patterns. All of these modeling techniques have aided in the understanding of resistance mutations on HBV non-nucleos(t)ide reverse-transcriptase inhibitor binding. Finally, bioinformatics can be used to study the DNA and RNA protein sequences of viruses to both analyze drug resistance and to genotype the viral genomes. Overall, with these techniques, and others, computational chemical biology is becoming more and more necessary in hepatitis B research. This article forms part of a symposium in Antiviral Research on "An unfinished story: from the discovery of the Australia antigen to the development of new curative therapies for hepatitis B." Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
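
A minimal non-intrusive polynomial chaos construction (the OLS variant) can be sketched on a toy model with one standard normal input; the cheap model below stands in for the expensive ultrasonic simulation:

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x):
    # toy stand-in for the physics-based measurement simulation
    return x**2 + x

x = rng.standard_normal(200)   # "simulation runs" at sampled inputs
y = model(x)

# Probabilists' Hermite basis He0 = 1, He1 = x, He2 = x^2 - 1,
# orthogonal under the standard Gaussian weight
Phi = np.column_stack([np.ones_like(x), x, x**2 - 1])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# PCE statistics follow from orthogonality: mean = c0,
# variance = sum_k c_k^2 * E[He_k^2], with E[He_k^2] = k!
pce_mean = coef[0]
pce_var = coef[1]**2 * 1 + coef[2]**2 * 2
```

For this exactly representable model the expansion recovers mean 1 and variance 3 analytically; for a real simulation the same surrogate is then sampled cheaply in place of the solver to build the POD curve.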

  14. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    PubMed Central

    Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David

    2016-01-01

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698038

  15. Towards a physics-based multiscale modelling of the electro-mechanical coupling in electro-active polymers.

    PubMed

    Cohen, Noy; Menzel, Andreas; deBotton, Gal

    2016-02-01

    Owing to the increasing number of industrial applications of electro-active polymers (EAPs), there is a growing need for electromechanical models which accurately capture their behaviour. To this end, we compare the predicted behaviour of EAPs undergoing homogeneous deformations according to three electromechanical models. The first model is a phenomenological continuum-based model composed of the mechanical Gent model and a linear relationship between the electric field and the polarization. The electrical and the mechanical responses according to the second model are based on the physical structure of the polymer chain network. The third model incorporates a neo-Hookean mechanical response and a physically motivated microstructurally based long-chains model for the electrical behaviour. In the microstructural-motivated models, the integration from the microscopic to the macroscopic levels is accomplished by the micro-sphere technique. Four types of homogeneous boundary conditions are considered and the behaviours determined according to the three models are compared. For the microstructurally motivated models, these analyses are performed and compared with the widely used phenomenological model for the first time. Some of the aspects revealed in this investigation, such as the dependence of the intensity of the polarization field on the deformation, highlight the need for an in-depth investigation of the relationships between the structure and the behaviours of the EAPs at the microscopic level and their overall macroscopic response.

  16. Learning planar Ising models

    DOE PAGES

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
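    The planarity-constrained greedy step can be illustrated with networkx's planarity test. This sketch ranks candidate edges by absolute pairwise correlation, a simplification of the paper's likelihood-based edge scores, and omits the Ising-model fitting and inference entirely.

```python
import numpy as np
import networkx as nx

def greedy_planar_graph(corr):
    """Greedily add edges in decreasing |correlation|, keeping the graph planar.

    Simplified sketch of the selection loop: the full method scores edges by
    the likelihood gain of the fitted planar Ising model, not raw correlation.
    """
    n = corr.shape[0]
    G = nx.empty_graph(n)
    candidates = sorted(
        ((abs(corr[i, j]), i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    for _, i, j in candidates:
        G.add_edge(i, j)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(i, j)          # keeping this edge would break planarity
    return G

rng = np.random.default_rng(0)
samples = rng.integers(0, 2, size=(500, 8)) * 2 - 1    # 500 draws of 8 spins
G = greedy_planar_graph(np.corrcoef(samples.T))
```

    The result is an edge-maximal planar subgraph, so its edge count respects the planar bound of 3n - 6.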

  18. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  19. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Edwards, J.; Haan, S. W.; Robey, H. F.; Milovich, J.; Spears, B. K.; Weber, S. V.; Clark, D. S.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.; Atherton, J.; Amendt, P. A.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Frenje, J. A.; Glenzer, S. H.; Hamza, A.; Hammel, B. A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kilkenny, J. D.; Kirkwood, R. K.; Kline, J. L.; Kyrala, G. A.; Marinak, M. M.; Meezan, N.; Meyerhofer, D. D.; Michel, P.; Munro, D. H.; Olson, R. E.; Nikroo, A.; Regan, S. P.; Suter, L. J.; Thomas, C. A.; Wilson, D. C.

    2011-05-01

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  20. Modelling digital thunder

    NASA Astrophysics Data System (ADS)

    Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco

    2009-01-01

    An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of the mathematical model, together with typical results and suggestions for additional developments are discussed. The paper is intended as a teaching aid for students and teachers in the context of introductory physics courses at university level.
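    A minimal version of such a simulation sums pressure pulses from the segments of a tortuous channel, each delayed by its travel time to the observer and attenuated by spherical spreading. The N-wave pulse shape, channel geometry, and all parameters below are illustrative assumptions, not the algorithms revisited in the paper.

```python
import numpy as np

def n_wave(t, duration):
    """Idealized N-wave pressure pulse radiated by one channel segment."""
    s = t / duration
    return np.where((s >= 0.0) & (s <= 1.0), 1.0 - 2.0 * s, 0.0)

def thunder(observer, segments, fs=4000, c=343.0, span=3.0):
    """Sum the segment pulses, each delayed by travel time r/c and
    attenuated by 1/r spherical spreading."""
    t = np.arange(int(span * fs)) / fs
    p = np.zeros_like(t)
    for seg in segments:
        r = np.linalg.norm(seg - observer)
        p += n_wave(t - r / c, 0.01) / r
    return t, p

rng = np.random.default_rng(1)
# tortuous channel: a random walk rising from the ground to ~2 km altitude
channel = np.cumsum(rng.normal([0.0, 0.0, 20.0], 5.0, size=(100, 3)), axis=0)
observer = np.array([500.0, 0.0, 0.0])
t, p = thunder(observer, channel)
```

    The onset of the signal corresponds to the nearest channel segment, and the drawn-out rumble to the spread of travel times along the tortuous path.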

  1. Physical properties of galaxies: towards a consistent comparison between hydrodynamical simulations and SDSS

    NASA Astrophysics Data System (ADS)

    Guidi, Giovanni; Scannapieco, Cecilia; Walcher, Jakob; Gallazzi, Anna

    2016-10-01

    We study the effects of applying observational techniques to derive the properties of simulated galaxies, with the aim of making an unbiased comparison between observations and simulations. For our study, we used 15 galaxies simulated in a cosmological context using three different feedback and chemical enrichment models, and compared their z = 0 properties with data from the Sloan Digital Sky Survey (SDSS). We show that the physical properties obtained directly from the simulations without post-processing can be very different from those obtained mimicking observational techniques. In order to provide simulators a way to reliably compare their galaxies with SDSS data, for each physical property that we studied - colours, magnitudes, gas and stellar metallicities, mean stellar ages and star formation rates - we give scaling relations that can be easily applied to the values extracted from the simulations; these scalings have in general a high correlation, except for the gas oxygen metallicities. Our simulated galaxies are photometrically similar to galaxies in the blue sequence/green valley, but in general they appear older, passive and with lower metal content compared to most of the spirals in SDSS. As a careful assessment of the agreement/disagreement with observations is the primary test of the baryonic physics implemented in hydrodynamical codes, our study shows that considering the observational biases in the derivation of the galaxies' properties is of fundamental importance to decide on the failure/success of a galaxy formation model.

  2. An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.

    2005-01-01

    Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale properties. Current techniques for identifying the phase boundaries involve manually locating the phase transitions, which is tedious and time-consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes, vertically or horizontally, across a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application in which regions of interest are distinguished from the background by differing patterns of motion over time.
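    The core idea, telling phases apart by their temporal fluctuation rather than their grayscale values, can be sketched as a per-block temporal standard deviation over the image stack. The block size and the use of a plain standard deviation are illustrative simplifications of the paper's pipeline.

```python
import numpy as np

def motion_map(frames, block=4):
    """Per-block temporal standard deviation over an image sequence.
    Ordered (crystalline) regions fluctuate less over time than disordered
    (fluid) ones, even when their grayscale statistics are similar."""
    stack = np.asarray(frames, dtype=float)        # shape (T, H, W)
    sigma = stack.std(axis=0)                      # per-pixel temporal std
    h, w = sigma.shape
    sigma = sigma[:h - h % block, :w - w % block]  # trim to block multiples
    return sigma.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```

    Thresholding or edge-detecting the resulting map then yields candidate phase boundaries without manual inspection.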

  3. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
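    A bare-bones fuzzy c-means loop shows where the paper's modification enters: the Euclidean distance on the marked line is what the authors replace with measures tailored to point, linear, quadratic, or exponential petrophysical trends. Everything else here is the standard FCM update; the spread-out initialization is an illustrative choice.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Standard fuzzy c-means clustering of the rows of X into c clusters."""
    centers = np.linspace(X.min(axis=0), X.max(axis=0), c)   # spread-out init
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # distance measure: the line Sun and Li swap for a relationship-specific one
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)   # memberships
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]               # weighted means
    return centers, u
```

    Each row of `u` sums to one, so a sample can belong partly to several petrophysical trends, which is what lets the clustering handle multimodal crossplots.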

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert P. Lucht

    Laser-induced polarization spectroscopy (LIPS), degenerate four-wave mixing (DFWM), and electronic-resonance-enhanced (ERE) coherent anti-Stokes Raman scattering (CARS) are techniques that shows great promise for sensitive measurements of transient gas-phase species, and diagnostic applications of these techniques are being pursued actively at laboratories throughout the world. However, significant questions remain regarding strategies for quantitative concentration measurements using these techniques. The primary objective of this research program is to develop and test strategies for quantitative concentration measurements in flames and plasmas using these nonlinear optical techniques. Theoretically, we are investigating the physics of these processes by direct numerical integration (DNI) of the time-dependentmore » density matrix equations that describe the wave-mixing interaction. Significantly fewer restrictive assumptions are required when the density matrix equations are solved using this DNI approach compared with the assumptions required to obtain analytical solutions. For example, for LIPS calculations, the Zeeman state structure and hyperfine structure of the resonance and effects such as Doppler broadening can be included. There is no restriction on the intensity of the pump and probe beams in these nonperturbative calculations, and both the pump and probe beam intensities can be high enough to saturate the resonance. As computer processing speeds have increased, we have incorporated more complicated physical models into our DNI codes. During the last project period we developed numerical methods for nonperturbative calculations of the two-photon absorption process. Experimentally, diagnostic techniques are developed and demonstrated in gas cells and/or well-characterized flames for ease of comparison with model results. 
The techniques of two-photon, two-color H-atom LIPS and three-laser ERE CARS for NO and C{sub 2}H{sub 2} were demonstrated during the project period, and nonperturbative numerical models of both of these techniques were developed. In addition, we developed new single-mode, injection-seeded optical parametric laser sources (OPLSs) that will be used to replace multi-mode commercial dye lasers in our experimental measurements. The use of single-mode laser radiation in our experiments will increase significantly the rigor with which theory and experiment are compared.« less

  5. A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Meldi, M.; Poux, A.

    2017-10-01

    A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which amounts to only 10-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when observations are provided only in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further, using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
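    The predict/update cycle at the heart of such an estimator is compact. This sketch uses identity forecast dynamics and a full covariance matrix, whereas the paper's contribution is precisely the reduced-order treatment that avoids storing and updating the full covariance of a CFD state; all numbers below are illustrative.

```python
import numpy as np

def kalman_step(x, P, obs, H, R, Q):
    """One forecast/analysis cycle (identity dynamics for brevity)."""
    P = P + Q                                  # forecast covariance inflation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (obs - H @ x)                  # blend model state with observation
    P = (np.eye(len(x)) - K @ H) @ P           # analysis covariance
    return x, P

rng = np.random.default_rng(0)
truth = np.array([1.0, 2.0])
x, P = np.zeros(2), np.eye(2)                  # poor initial guess, honest covariance
H, R, Q = np.eye(2), 0.1 * np.eye(2), 1e-4 * np.eye(2)
for _ in range(200):
    x, P = kalman_step(x, P, truth + rng.normal(0.0, np.sqrt(0.1), 2), H, R, Q)
```

    After a few hundred assimilation cycles the augmented state tracks the truth far more tightly than any single noisy observation, and the analysis covariance P settles near its steady-state value.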

  6. Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.

    PubMed

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon

    2017-10-01

    The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference-list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as the types and nature of constraints and the sources of data for the constraints, was identified. Thirdly, it was found that including the effects of constraints caused the CEA results to change in these articles. The review found that DES proved to be an effective technique for RM, but only a small number of studies had applied it in HTA. However, these studies showed the important consequences of modelling physical constraints and point to the need for a framework to be developed to guide future applications of this approach.
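    As a flavour of the technique the review examines, the sketch below is a minimal bed-constrained DES (not drawn from any of the ten included studies): patients arrive at random, and a fixed bed pool turns excess demand into waiting time, the kind of constraint effect that shifted CEA results in the reviewed articles.

```python
import heapq
import random

def simulate(n_patients, n_beds, arrival_rate=1.0, mean_stay=2.0, seed=0):
    """Mean wait (in days) for a bed under Poisson arrivals and exponential stays."""
    rng = random.Random(seed)
    bed_free = [0.0] * n_beds                 # min-heap: when each bed next frees up
    heapq.heapify(bed_free)
    t = total_wait = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)            # next arrival
        start = max(t, heapq.heappop(bed_free))       # wait if every bed is occupied
        total_wait += start - t
        heapq.heappush(bed_free, start + rng.expovariate(1.0 / mean_stay))
    return total_wait / n_patients
```

    With a demand of two bed-years per year, shrinking the pool from ten beds to two turns negligible waits into long ones, which an unconstrained cost-effectiveness model would never register.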

  7. Metallic Rotor Sizing and Performance Model for Flywheel Systems

    NASA Technical Reports Server (NTRS)

    Moore, Camille J.; Kraft, Thomas G.

    2012-01-01

    The NASA Glenn Research Center (GRC) is developing flywheel system requirements and designs for terrestrial and spacecraft applications. Several generations of flywheels have been designed and tested at GRC using in-house expertise in motors, magnetic bearings, controls, materials and power electronics. The maturation of a flywheel system from the concept phase to the preliminary design phase is accompanied by maturation of the Integrated Systems Performance model, where estimating relationships are replaced by physics based analytical techniques. The modeling can incorporate results from engineering model testing and emerging detail from the design process.

  8. Effective techniques in healthy eating and physical activity interventions: a meta-regression.

    PubMed

    Michie, Susan; Abraham, Charles; Whittington, Craig; McAteer, John; Gupta, Sunjai

    2009-11-01

    Meta-analyses of behavior change (BC) interventions typically find large heterogeneity in effectiveness and small effects. This study aimed to assess the effectiveness of active BC interventions designed to promote physical activity and healthy eating and investigate whether theoretically specified BC techniques improve outcome. Interventions, evaluated in experimental or quasi-experimental studies, using behavioral and/or cognitive techniques to increase physical activity and healthy eating in adults, were systematically reviewed. Intervention content was reliably classified into 26 BC techniques and the effects of individual techniques, and of a theoretically derived combination of self-regulation techniques, were assessed using meta-regression. Outcomes were validated measures of physical activity and healthy eating. The 122 evaluations (N = 44,747) produced an overall pooled effect size of 0.31 (95% confidence interval = 0.26 to 0.36, I(2) = 69%). The technique, "self-monitoring," explained the greatest amount of among-study heterogeneity (13%). Interventions that combined self-monitoring with at least one other technique derived from control theory were significantly more effective than the other interventions (0.42 vs. 0.26). Classifying interventions according to component techniques and theoretically derived technique combinations and conducting meta-regression enabled identification of effective components of interventions designed to increase physical activity and healthy eating. PsycINFO Database Record (c) 2009 APA, all rights reserved.
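    Meta-regression of effect sizes on a technique indicator is, at its core, inverse-variance weighted least squares. The synthetic data below merely mirror the review's headline numbers (0.26 vs. 0.42 with and without self-monitoring); they are not the actual study-level data, and the moderator flag is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 122                                       # number of evaluations, as in the review
has_sm = rng.random(k) < 0.5                  # hypothetical self-monitoring indicator
var = rng.uniform(0.01, 0.05, k)              # within-study sampling variances
d = 0.26 + 0.16 * has_sm + rng.normal(0.0, np.sqrt(var))   # simulated effect sizes

X = np.column_stack([np.ones(k), has_sm])     # intercept + technique moderator
W = np.diag(1.0 / var)                        # inverse-variance weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
# beta[0]: pooled effect without the technique; beta[1]: its estimated added benefit
```

    In a full analysis one would add a between-study variance component (random-effects meta-regression) rather than weight by within-study variance alone.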

  9. Modeling Photo-Bleaching Kinetics to Create High Resolution Maps of Rod Rhodopsin in the Human Retina

    PubMed Central

    Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.

    2015-01-01

    We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397
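    A toy version of such kinetic modelling fits a single-exponential brightening curve to the autofluorescence time course of one pixel cluster. The paper's actual model and variational machinery are more elaborate, and the parameter values and noise level below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def brightening(t, f_inf, df, k):
    """First-order bleaching: autofluorescence rises toward f_inf as the
    rhodopsin that screens the lipofuscin signal is bleached at rate k."""
    return f_inf - df * np.exp(-k * t)

t = np.linspace(0.0, 60.0, 61)                 # one-minute cSLO sequence, 1 Hz
rng = np.random.default_rng(3)
data = brightening(t, 100.0, 40.0, 0.12) + rng.normal(0.0, 1.0, t.size)
popt, _ = curve_fit(brightening, t, data, p0=(90.0, 30.0, 0.1))
f_inf_hat, df_hat, k_hat = popt
```

    Repeating the fit per pixel cluster and mapping the fitted amplitude `df` over the retina is what yields the rhodopsin distribution map.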

  10. Microwave Remote Sensing Modeling of Ocean Surface Salinity and Winds Using an Empirical Sea Surface Spectrum

    NASA Technical Reports Server (NTRS)

    Yueh, Simon H.

    2004-01-01

    Active and passive microwave remote sensing techniques have been investigated for the remote sensing of ocean surface wind and salinity. We revised an ocean surface spectrum using the CMOD-5 geophysical model function (GMF) for the European Remote Sensing (ERS) C-band scatterometer and the Ku-band GMF for the NASA SeaWinds scatterometer. The predictions of microwave brightness temperatures from this model agree well with satellite, aircraft and tower-based microwave radiometer data. This suggests that the impact of surface roughness on microwave brightness temperatures and radar scattering coefficients of sea surfaces can be consistently characterized by a roughness spectrum, providing physical basis for using combined active and passive remote sensing techniques for ocean surface wind and salinity remote sensing.

  11. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit from this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  12. COSP: Satellite simulation software for model assessment

    DOE PAGES

    Bodas-Salcedo, A.; Webb, M. J.; Bony, S.; ...

    2011-08-01

    Errors in the simulation of clouds in general circulation models (GCMs) remain a long-standing issue in climate projections, as discussed in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report. This highlights the need for developing new analysis techniques to improve our knowledge of the physical processes at the root of these errors. The Cloud Feedback Model Intercomparison Project (CFMIP) pursues this objective, and under that framework the CFMIP Observation Simulator Package (COSP) has been developed. COSP is a flexible software tool that enables the simulation of several satellite-borne active and passive sensor observations from model variables. The flexibility of COSP and a common interface for all sensors facilitates its use in any type of numerical model, from high-resolution cloud-resolving models to the coarser-resolution GCMs assessed by the IPCC, and the scales in between used in weather forecast and regional models. The diversity of model parameterization techniques makes the comparison between model and observations difficult, as some parameterized variables (e.g., cloud fraction) do not have the same meaning in all models. The approach followed in COSP permits models to be evaluated against observations and compared against each other in a more consistent manner. This permits a more detailed diagnosis of the physical processes that govern the behavior of clouds and precipitation in numerical models. The World Climate Research Programme (WCRP) Working Group on Coupled Modelling has recommended the use of COSP in a subset of climate experiments that will be assessed by the next IPCC report. Here we describe COSP, present some results from its application to numerical models, and discuss future work that will expand its capabilities.

  13. Detection and characterization of corrosion of bridge cables by time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Hunsperger, Robert G.; Folliard, Kevin; Chajes, Michael J.; Barot, Jignesh; Jhaveri, Darshan; Kunz, Eric

    1999-02-01

    In this paper, we develop and demonstrate a nondestructive evaluation technique for corrosion detection of embedded or encased steel cables. This technique utilizes time domain reflectometry (TDR), which has been traditionally used to detect electrical discontinuities in transmission lines. By applying a sensor wire along with the bridge cable, we can model the cable as an asymmetric, twin-conductor transmission line. Physical defects of the bridge cable will change the electromagnetic properties of the line and can be detected by TDR. Furthermore, different types of defects can be modeled analytically, and identified using TDR. TDR measurement results from several fabricated bridge cable sections with built-in defects are reported.
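    Two textbook transmission-line relations underpin the method: the reflection coefficient sets the polarity and amplitude of the echo from an impedance change, and the echo's round-trip delay locates it along the line. The 50 Ω reference impedance and 0.7 velocity factor below are illustrative assumptions, not values from the paper.

```python
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (Z_L - Z_0) / (Z_L + Z_0): polarity and size of the TDR echo
    produced by an impedance change along the transmission line."""
    return (z_load - z0) / (z_load + z0)

def defect_distance(delay_s, velocity_factor=0.7, c=3.0e8):
    """Distance to the impedance change from the round-trip echo delay."""
    return 0.5 * velocity_factor * c * delay_s
```

    A corroded region that lowers the local impedance produces a negative echo; for example, a 100 ns echo delay at velocity factor 0.7 places the defect about 10.5 m down the cable.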

  14. An accurate computational method for an order parameter with a Markov state model constructed using a manifold-learning technique

    NASA Astrophysics Data System (ADS)

    Ito, Reika; Yoshidome, Takashi

    2018-01-01

    Markov state models (MSMs) are a powerful approach for analyzing the long-time behavior of protein motion using molecular dynamics simulation data. However, their quantitative performance with respect to physical quantities is poor. We believe that this poor performance is caused by the failure to appropriately classify protein conformations into states when constructing the MSM. Herein, we show that the quantitative performance of an order parameter is improved when a manifold-learning technique is employed for this classification. By contrast, an MSM constructed using the K-center method, which has previously been used for the classification, shows poor quantitative performance.
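    Once conformations have been assigned to states, the MSM itself is a lag-time transition count. The sketch below assumes the discrete trajectory is already given, which is exactly the classification step the authors argue determines quantitative accuracy; the two-state kinetic model is a toy example.

```python
import numpy as np

def msm_transition_matrix(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix estimated by counting transitions
    at a fixed lag time along a discrete state trajectory."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0
    return C / C.sum(axis=1, keepdims=True)

# recover a known two-state kinetic model from a sampled trajectory
rng = np.random.default_rng(4)
T_true = np.array([[0.9, 0.1], [0.2, 0.8]])
dtraj = [0]
for _ in range(20000):
    dtraj.append(rng.choice(2, p=T_true[dtraj[-1]]))
T_est = msm_transition_matrix(np.array(dtraj), 2)
```

    With a well-defined state decomposition the count estimate converges to the true kinetics; with a poor decomposition (the paper's point), no amount of additional sampling fixes the systematic error.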

  15. Selective laser sintering in biomedical engineering.

    PubMed

    Mazzoli, Alida

    2013-03-01

    Selective laser sintering (SLS) is a solid freeform fabrication technique developed by Carl Deckard for his master's thesis at the University of Texas and patented in 1989. SLS produces physical models through the selective solidification of a variety of fine powders, and the technology is attracting a great amount of attention in the clinical field. In this paper the characteristic features of SLS and the materials that have been developed for it are reviewed, together with a discussion of the principles of the manufacturing technique. The applications of SLS in tissue engineering, and at large in the biomedical field, are reviewed and discussed.

  16. External trial deep brain stimulation device for the application of desynchronizing stimulation techniques.

    PubMed

    Hauptmann, C; Roulet, J-C; Niederhauser, J J; Döll, W; Kirlangic, M E; Lysyansky, B; Krachkovskyi, V; Bhatti, M A; Barnikol, U B; Sasse, L; Bührle, C P; Speckmann, E-J; Götz, M; Sturm, V; Freund, H-J; Schnell, U; Tass, P A

    2009-12-01

    In the past decade deep brain stimulation (DBS)-the application of electrical stimulation to specific target structures via implanted depth electrodes-has become the standard treatment for medically refractory Parkinson's disease and essential tremor. These diseases are characterized by pathological synchronized neuronal activity in particular brain areas. We present an external trial DBS device capable of administering effectively desynchronizing stimulation techniques developed with methods from nonlinear dynamics and statistical physics according to a model-based approach. These techniques exploit either stochastic phase resetting principles or complex delayed-feedback mechanisms. We explain how these methods are implemented into a safe and user-friendly device.

  17. Constraining Data Mining with Physical Models: Voltage- and Oxygen Pressure-Dependent Transport in Multiferroic Nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strelcov, Evgheni; Belianinov, Alexei; Hsieh, Ying-Hui

    Development of new generation electronic devices requires understanding and controlling the electronic transport in ferroic, magnetic, and optical materials, which is hampered by two factors. First, the complications of working at the nanoscale, where interfaces, grain boundaries, defects, and so forth, dictate the macroscopic characteristics. Second, the convolution of the response signals stemming from the fact that several physical processes may be activated simultaneously. Here, we present a method of solving these challenges via a combination of atomic force microscopy and data mining analysis techniques. Rational selection of the latter allows application of physical constraints and enables direct interpretation of the statistically significant behaviors in the framework of the chosen physical model, thus distilling physical meaning out of raw data. We demonstrate our approach with an example of deconvolution of complex transport behavior in a bismuth ferrite–cobalt ferrite nanocomposite in ambient and ultrahigh vacuum environments. Measured signal is apportioned into four electronic transport patterns, showing different dependence on partial oxygen and water vapor pressure. These patterns are described in terms of Ohmic conductance and Schottky emission models in the light of surface electrochemistry. Furthermore, deep data analysis allows extraction of local dopant concentrations and barrier heights, empowering our understanding of the underlying dynamic mechanisms of resistive switching.

  18. Implementation multi representation and oral communication skills in Department of Physics Education on Elementary Physics II

    NASA Astrophysics Data System (ADS)

    Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto

    2015-09-01

The ability to use multiple representations has been widely studied, but it has not yet been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of that relationship through the Presentatif Based on Multi-representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). The method was concurrent mixed-methods research with qualitative-quantitative weighting. Data were collected with essay-form pre-tests and post-tests, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model; all instruments fell in the high validity category, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Test reliability, estimated with the Cronbach's alpha technique, gave a coefficient of 0.494. The research subjects were students of the Department of Physics Education at Unnes. The students' tendency to use representations ran, from high to low, M, D, G, V, whereas the order of accuracy was V, D, G, M. Multi-representation ability and oral communication skills were found to be proportional, and implementing this relationship generated grounded theory. This study should be applied to other physics material, or at other universities, for comparison.

  19. Constraining Data Mining with Physical Models: Voltage- and Oxygen Pressure-Dependent Transport in Multiferroic Nanostructures

    DOE PAGES

    Strelcov, Evgheni; Belianinov, Alexei; Hsieh, Ying-Hui; ...

    2015-08-27

Development of new-generation electronic devices requires understanding and controlling electronic transport in ferroic, magnetic, and optical materials, which is hampered by two factors: first, the complications of working at the nanoscale, where interfaces, grain boundaries, defects, and so forth dictate the macroscopic characteristics; and second, the convolution of the response signals, stemming from the fact that several physical processes may be activated simultaneously. Here, we present a method of solving these challenges via a combination of atomic force microscopy and data mining analysis techniques. Rational selection of the latter allows application of physical constraints and enables direct interpretation of the statistically significant behaviors in the framework of the chosen physical model, thus distilling physical meaning out of raw data. We demonstrate our approach with an example of deconvolution of complex transport behavior in a bismuth ferrite–cobalt ferrite nanocomposite in ambient and ultrahigh-vacuum environments. The measured signal is apportioned into four electronic transport patterns, showing different dependences on partial oxygen and water vapor pressure. These patterns are described in terms of Ohmic conductance and Schottky emission models in the light of surface electrochemistry. Furthermore, deep data analysis allows extraction of local dopant concentrations and barrier heights, deepening our understanding of the underlying dynamic mechanisms of resistive switching.

  20. The Air Force Geophysics Laboratory: Aeronomy, aerospace instrumentation, space physics, meteorology, terrestrial sciences and optical physics

    NASA Astrophysics Data System (ADS)

    McGinty, A. B.

    1982-04-01

Contents: The Air Force Geophysics Laboratory; Aeronomy Division--Upper Atmosphere Composition, Middle Atmosphere Effects, Atmospheric UV Radiation, Satellite Accelerometer Density Measurement, Theoretical Density Studies, Chemical Transport Models, Turbulence and Forcing Functions, Atmospheric Ion Chemistry, Energy Budget Campaign, Kwajalein Reference Atmospheres, 1979, Satellite Studies of the Neutral Atmosphere, Satellite Studies of the Ionosphere; Aerospace Instrumentation Division--Sounding Rocket Program, Satellite Support, Rocket and Satellite Instrumentation; Space Physics Division--Solar Research, Solar Radio Research, Environmental Effects on Space Systems, Solar Proton Event Studies, Defense Meteorological Satellite Program, Ionospheric Effects Research, Spacecraft Charging Technology; Meteorology Division--Cloud Physics, Ground-Based Remote-Sensing Techniques, Mesoscale Observing and Forecasting, Design Climatology, Aircraft Icing Program, Atmospheric Dynamics; Terrestrial Sciences Division--Geodesy and Gravity, Geokinetics; Optical Physics Division--Atmospheric Transmission, Remote Sensing, Infrared Background; and Appendices.

  1. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    DOE PAGES

    Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...

    2008-01-01

Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development, which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.

  2. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the use of physical sensors is limited by the operational conditions of spacecraft, due to the severe environment of outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The advantage of the novel technique is further demonstrated in a simply supported beam experiment, in comparison with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
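The transmissibility prior used above can be sketched in a few lines (a minimal illustration with assumed synthetic data and an assumed linear transfer path, not the authors' network): estimate the transmissibility T(w) = X2(w)/X1(w) between two sensor channels from a training record, then use it as a "virtual sensor" to reconstruct one channel from the other.

```python
import numpy as np

# Assumed setup: channel 2 is channel 1 passed through a short FIR filter,
# a simple surrogate for a structural transfer path.
rng = np.random.default_rng(1)
fs, n = 1024, 4096
t = np.arange(n) / fs
x1 = np.sin(2*np.pi*12*t) + 0.5*np.sin(2*np.pi*40*t) + 0.01*rng.standard_normal(n)
h = np.array([0.5, 0.3, 0.15, 0.05])
x2 = np.convolve(x1, h, mode="full")[:n]

# Estimate transmissibility from the training record via cross/auto spectra.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
T = (X2 * np.conj(X1)) / (X1 * np.conj(X1) + 1e-12)

# Virtual sensor: reconstruct channel 2 from channel 1 alone.
x2_hat = np.fft.irfft(T * np.fft.rfft(x1), n)
err = np.linalg.norm(x2_hat - x2) / np.linalg.norm(x2)
print(err < 0.05)  # True: reconstruction is near-exact on the training record
```

The paper's contribution is to replace this fixed linear map with a learned convolutional model while keeping the transmissibility idea as prior knowledge.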

  3. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the use of physical sensors is limited by the operational conditions of spacecraft, due to the severe environment of outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The advantage of the novel technique is further demonstrated in a simply supported beam experiment, in comparison with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  4. A didactic proposal about Rutherford backscattering spectrometry with theoretic, experimental, simulation and application activities

    NASA Astrophysics Data System (ADS)

    Corni, Federico; Michelini, Marisa

    2018-01-01

    Rutherford backscattering spectrometry is a nuclear analysis technique widely used for materials science investigation. Despite the strict technical requirements to perform the data acquisition, the interpretation of a spectrum is within the reach of general physics students. The main phenomena occurring during a collision between helium ions—with energy of a few MeV—and matter are: elastic nuclear collision, elastic scattering, and, in the case of non-surface collision, ion stopping. To interpret these phenomena, we use classical physics models: material point elastic collision, unscreened Coulomb scattering, and inelastic energy loss of ions with electrons, respectively. We present the educational proposal for Rutherford backscattering spectrometry, within the framework of the model of educational reconstruction, following a rationale that links basic physics concepts with quantities for spectra analysis. This contribution offers the opportunity to design didactic specific interventions suitable for undergraduate and secondary school students.
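The elastic-collision model mentioned above reduces to the standard two-body kinematic factor, which is the quantity students use to read target masses off an RBS spectrum. A minimal sketch (masses and detector angle are typical textbook values, not taken from the proposal):

```python
import math

def kinematic_factor(m1, m2, theta_deg):
    """Elastic kinematic factor K = E1/E0 for a projectile of mass m1
    backscattered at lab angle theta from a target atom of mass m2."""
    th = math.radians(theta_deg)
    root = math.sqrt(m2**2 - (m1 * math.sin(th))**2)
    return ((root + m1 * math.cos(th)) / (m1 + m2))**2

# 4He ions (m1 = 4 u) at a typical detector angle of 170 degrees:
k_si = kinematic_factor(4.0, 28.0, 170.0)   # silicon target
k_au = kinematic_factor(4.0, 197.0, 170.0)  # gold target
print(round(k_si, 3), round(k_au, 3))  # heavier targets retain more energy
```

Because K grows with target mass, surface peaks from different elements separate in energy, which is the link between this basic physics and spectrum analysis.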

  5. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons

    NASA Astrophysics Data System (ADS)

    Ulmer, S.; Mooser, A.; Nagahama, H.; Sellner, S.; Smorra, C.

    2018-03-01

    The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge-parity-time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.
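The charge-to-mass comparisons described above ultimately come down to measuring free cyclotron frequencies in the Penning trap. A back-of-the-envelope sketch (the field value is an illustrative assumption, not the actual BASE apparatus parameter):

```python
import math

Q = 1.602176634e-19      # elementary charge, C (CODATA exact)
M_P = 1.67262192369e-27  # proton mass, kg
B = 1.9                  # assumed trap magnetic field, T

def cyclotron_frequency(q, m, b):
    """Free cyclotron frequency nu_c = qB/(2*pi*m)."""
    return q * b / (2.0 * math.pi * m)

nu_p = cyclotron_frequency(Q, M_P, B)
print(round(nu_p / 1e6, 1))  # tens of MHz for a proton at ~2 T
```

Comparing the cyclotron frequencies of a proton and an antiproton in the same field cancels B to leading order, which is why such frequency ratios test CPT invariance so stringently.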

  6. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons.

    PubMed

    Ulmer, S; Mooser, A; Nagahama, H; Sellner, S; Smorra, C

    2018-03-28

The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge-parity-time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'. © 2018 The Authors.

  7. Challenging the standard model by high-precision comparisons of the fundamental properties of protons and antiprotons

    PubMed Central

    Mooser, A.; Nagahama, H.; Sellner, S.; Smorra, C.

    2018-01-01

    The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge–parity–time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue ‘Antiproton physics in the ELENA era’. PMID:29459414

  8. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

With the increased performance of contemporary computers over the last decades, numerical simulations have become a very powerful tool, applicable also in plasma physics research. Plasma is generally an ensemble of mutually interacting particles out of thermodynamic equilibrium, and for this reason fluid plasma models give results of only limited accuracy. On the other hand, much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques, particularly their so-called iterative version, which combine the advantages of both approaches. The study focuses on the mutual relations between fluid and particle models, demonstrated in calculations of the sheath structure of low-temperature argon plasma near a cylindrical Langmuir probe at medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).
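The sheath scale in the probe problem above is set by the electron Debye length, lambda_D = sqrt(eps0 * kB * Te / (n_e * e^2)), which also controls how fine a particle model's grid must be. A quick sketch (the temperature and density below are illustrative for a low-temperature argon discharge, not values from the paper):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
E = 1.602176634e-19      # elementary charge, C

def debye_length(te_ev, n_e):
    """te_ev: electron temperature in eV; n_e: electron density in m^-3.
    Te in joules is te_ev * E, so one factor of E cancels."""
    return math.sqrt(EPS0 * te_ev * E / (n_e * E**2))

lam = debye_length(2.0, 1e16)   # ~2 eV, 1e16 m^-3
print(round(lam * 1e3, 3))      # a fraction of a millimetre
```

Since a particle code must resolve lambda_D while the probe geometry spans centimetres, the scale separation motivates exactly the fluid/particle hybrid strategy described above.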

  9. Evaluating the Theoretical Content of Online Physical Activity Information for People with Multiple Sclerosis

    PubMed Central

    Baillie, Colin P.T.; Galaviz, Karla; Jarvis, Jocelyn W.; Latimer-Cheung, Amy E.

    2015-01-01

Background: Physical activity can aid people with multiple sclerosis (MS) in managing symptoms and maintaining functional abilities. The Internet is a preferred source of physical activity information for people with MS and, therefore, a method for the dissemination of behavior change techniques. The purpose of this study was to examine the coverage and quality of physical activity behavior change techniques delivered on the Internet for adults with MS, using Abraham and Michie's taxonomy of behavior change techniques. Methods: Using the taxonomy, 20 websites were coded for quality (i.e., accuracy of information) and coverage (i.e., completeness of information) of theoretical behavior change techniques. Results: The websites covered a mean of 8.05 techniques (SD 3.86, range 3–16) out of a possible 20. Only one of the techniques, provide information on behavior–health link and consequences, was delivered on all websites. The websites demonstrated low mean coverage and quality across all behavior change techniques, with means of 0.64 (SD 0.67) and 0.62 (SD 0.37), respectively, on a scale of 0 to 2. However, coverage and quality improved when websites were examined solely for the techniques that they covered, as opposed to all 20 techniques. Conclusions: This study, which examined the quality and coverage of physical activity behavior change techniques described online for people with MS, illustrates that the dissemination of these techniques requires improvement. PMID:25892979

  10. Higher-order chromatin structure: bridging physics and biology.

    PubMed

    Fudenberg, Geoffrey; Mirny, Leonid A

    2012-04-01

    Advances in microscopy and genomic techniques have provided new insight into spatial chromatin organization inside of the nucleus. In particular, chromosome conformation capture data has highlighted the relevance of polymer physics for high-order chromatin organization. In this context, we review basic polymer states, discuss how an appropriate polymer model can be determined from experimental data, and examine the success and limitations of various polymer models of higher-order interphase chromatin organization. By taking into account topological constraints acting on the chromatin fiber, recently developed polymer models of interphase chromatin can reproduce the observed scaling of distances between genomic loci, chromosomal territories, and probabilities of contacts between loci measured by chromosome conformation capture methods. Polymer models provide a framework for the interpretation of experimental data as ensembles of conformations rather than collections of loops, and will be crucial for untangling functional implications of chromosomal organization. Copyright © 2012 Elsevier Ltd. All rights reserved.
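The scaling arguments used to discriminate polymer states above can be illustrated with the simplest case: a freely jointed (ideal) chain, whose end-to-end distance scales as R ~ N^0.5, against which more compact interphase-chromatin models are compared. A sketch with assumed synthetic chains (real chromatin models add topological constraints):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_end_to_end(n_monomers, n_chains=2000):
    """Mean end-to-end distance of freely jointed 3D chains."""
    steps = rng.standard_normal((n_chains, n_monomers, 3))
    ends = steps.sum(axis=1)
    return np.sqrt((ends**2).sum(axis=1)).mean()

sizes = np.array([64, 128, 256, 512])
r = np.array([mean_end_to_end(n) for n in sizes])
# Fit R ~ N^nu in log-log space; the ideal-chain value is nu = 0.5.
nu = np.polyfit(np.log(sizes), np.log(r), 1)[0]
print(round(nu, 2))
```

Deviations of the measured exponent from 0.5 (e.g. the more compact scaling seen in Hi-C data) are what motivate the crumpled/fractal-globule-type models discussed in the review.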

  11. Higher order chromatin structure: bridging physics and biology

    PubMed Central

    Fudenberg, Geoffrey; Mirny, Leonid A.

    2012-01-01

    Recent advances in microscopy and genomic techniques have provided new insight into spatial chromatin organization inside of the nucleus. In particular, chromosome conformation capture data has highlighted the relevance of polymer physics for high-order chromatin organization. In this context, we review basic polymer states, discuss how an appropriate polymer model can be determined from experimental data, and examine the success and limitations of various polymer models of high-order interphase chromatin organization. By taking into account topological constraints acting on the chromatin fiber, recently-developed polymer models of interphase chromatin can reproduce the observed scaling of distances between genomic loci, chromosomal territories, and probabilities of contacts between loci measured by chromosome conformation capture methods. Polymer models provide a framework for the interpretation of experimental data as ensembles of conformations rather than collections of loops, and will be crucial for untangling functional implications of chromosomal organization. PMID:22360992

  12. Modelling of physical influences in sea level records for vertical crustal movement detection

    NASA Technical Reports Server (NTRS)

    Anderson, E. G.

    1978-01-01

    Attempts to specify and evaluate such physical influences are reviewed with the intention of identifying problem areas and promising approaches. An example of linear modelling based on air/water temperatures, atmospheric pressure, river discharges, geostrophic and/or local wind velocities, and including forced period terms to allow for the long period tides and Chandlerian polar motion is evaluated and applied to monthly mean sea levels recorded in Atlantic Canada. Refinement of the model to admit phase lag in the response to some of the driving phenomena is demonstrated. Spectral analysis of the residuals is employed to assess the model performance. The results and associated statistical parameters are discussed with emphasis on elucidating the sensitivity of the technique for detection of local episodic and secular vertical crustal movements, the problem areas most critical to the type of approach, and possible further developments.
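The linear-modelling idea above, including the refinement that admits a phase lag in the response, can be sketched with ordinary least squares on synthetic data (the driver, lag structure, and coefficients below are assumptions for illustration, not the Atlantic Canada records):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 241  # about 20 years of monthly means
p = rng.standard_normal(n)  # e.g. an atmospheric-pressure anomaly series
t = np.arange(n, dtype=float)
# Synthetic "sea level": secular trend plus a one-month-lagged pressure
# response plus noise (all coefficients are made up).
sl = 0.002 * t - 0.8 * np.concatenate([[0.0], p[:-1]]) + 0.05 * rng.standard_normal(n)

# Use samples 1..n-1 so a one-month-lagged predictor is available.
y = sl[1:]
X = np.column_stack([np.ones(n - 1), t[1:], p[1:], p[:-1]])  # lag term last
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(round(beta[3], 2), round(resid.std(), 3))
```

Here the lagged column recovers the delayed response, and the residual series is what one would then subject to spectral analysis to look for remaining structure, as described above.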

  13. a Speculative Study on Negative-Dimensional Potential and Wave Problems by Implicit Calculus Modeling Approach

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Wang, Fajie

    Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of the potential and wave operators on negative dimensionality. Unlike the standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the PDE governing equation. Instead the fundamental solution of physical problem is used to implicitly define the differential operator and to implement simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solution of the standard Laplace and Helmholtz equations to negative dimensionality. And then by using the singular boundary method, a recent boundary discretization technique, we investigate the potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physics behaviors on negative dimensionality may differ on positive dimensionality. This speculative study might open an unexplored territory in research.

  14. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2014-05-01

An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
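The metric-driven adaptivity described above can be caricatured in one dimension (a toy sketch, not the paper's code): start from a coarse vertical mesh and insert nodes wherever the jump of a nutrient-like profile across a cell exceeds a tolerance, concentrating resolution at the sharp transition while leaving the rest of the column coarse.

```python
import numpy as np

def profile(z):
    """Assumed nutrient-like profile with a sharp transition near z = 50 m."""
    return 1.0 / (1.0 + np.exp((z - 50.0) / 2.0))

def adapt(z, tol=0.05, rounds=5):
    """Repeatedly bisect cells whose solution jump exceeds tol."""
    for _ in range(rounds):
        jump = np.abs(np.diff(profile(z)))
        new = (0.5 * (z[:-1] + z[1:]))[jump > tol]  # midpoints of bad cells
        if new.size == 0:
            break
        z = np.sort(np.concatenate([z, new]))
    return z

coarse = np.linspace(0.0, 100.0, 11)   # uniform 10 m spacing
fine = adapt(coarse)
# Nodes cluster near the transition; spacing far from it stays coarse.
print(fine.size, float(np.diff(fine).min()), float(np.diff(fine).max()))
```

This is the essence of why the adaptive runs above match a high-resolution fixed mesh with a factor of 2-3 fewer elements: refinement is spent only where the metric demands it.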

  15. Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.

    2017-12-01

One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is important but also challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the indirect information they provide requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. In contrast to conventional data-driven approaches, which can usually be seen as "black box" procedures, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method shows high detection accuracy in the example problems.
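The regression step can be sketched with a kernel method standing in for the paper's SVR (kernel ridge regression here, on fully synthetic data; the well layout, leak location, and decay law are all assumptions used only to generate the toy training set, and the "physics guidance" is represented only by that assumed forward model):

```python
import numpy as np

rng = np.random.default_rng(4)
wells = np.array([10.0, 30.0, 60.0, 90.0])  # monitoring-well positions (arbitrary units)
leak_x = 45.0                                # fixed, assumed leak location

def pressure_anomaly(rate):
    """Assumed forward model: anomaly decays with distance from the leak."""
    return rate / (1.0 + np.abs(wells - leak_x) / 20.0)

# Training set: leak rates and the (noisy) anomalies they would produce.
rates = rng.uniform(0.5, 5.0, 200)
X = np.array([pressure_anomaly(r) for r in rates]) + 0.01 * rng.standard_normal((200, 4))

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge fit: (K + lam*I) alpha = y.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(200), rates)

# Invert a new observation back to a leak rate.
x_test = pressure_anomaly(2.0)[None, :]
pred = (rbf(x_test, X) @ alpha)[0]
print(round(float(pred), 1))
```

In the actual work the training pairs come from physics-based reservoir simulations rather than a closed-form decay law, which is what makes the learning physics-guided.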

  16. Effect of different mixing methods on the physical properties of Portland cement.

    PubMed

    Shahi, Shahriar; Ghasemi, Negin; Rahimi, Saeed; Yavari, Hamidreza; Samiei, Mohammad; Jafari, Farnaz

    2016-12-01

Portland cement is a hydrophilic cement; as a result, the powder-to-liquid ratio affects the properties of the final mix. In addition, the mixing technique affects hydration. The aim of this study was to evaluate the effect of different mixing techniques (conventional, amalgamator and ultrasonic) on selected physical properties of Portland cement. The physical properties to be evaluated were determined using the ISO 6786:2001 specification. One hundred and sixty-two samples of Portland cement were prepared, covering the three mixing techniques for each physical property (six samples each). Data were analyzed using descriptive statistics, one-way ANOVA and post hoc Tukey tests. Statistical significance was set at P < 0.05. The mixing technique had no significant effect on the compressive strength, film thickness and flow of Portland cement (P > 0.05). Dimensional changes (shrinkage), solubility and pH increased significantly with the amalgamator and ultrasonic mixing techniques (P < 0.05). The ultrasonic technique significantly decreased the working time, and the amalgamator and ultrasonic techniques significantly decreased the setting time (P < 0.05). Thus the mixing technique exerted no significant effect on the flow, film thickness and compressive strength of Portland cement samples. Key words: Physical properties, Portland cement, mixing methods.
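The comparisons above rest on one-way ANOVA; its F statistic is easy to sketch. The three groups of six samples mirror the study's design, but the setting-time-like values below are made up for illustration:

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group MS over within-group MS."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = all_vals.size - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

rng = np.random.default_rng(5)
conventional = 70.0 + rng.normal(0, 2.0, 6)  # invented minutes
amalgamator = 62.0 + rng.normal(0, 2.0, 6)
ultrasonic = 55.0 + rng.normal(0, 2.0, 6)
f = one_way_f([conventional, amalgamator, ultrasonic])
print(f > 3.68)  # exceeds the ~0.05 critical F for (2, 15) degrees of freedom
```

A significant F, as here, is what licenses the post hoc Tukey comparisons between individual mixing techniques.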

  17. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.
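The baseline closure tested above is simple gradient diffusion, which models the turbulent scalar flux as <u'c'> = -D_t * dC/dy. A minimal finite-difference sketch on an assumed mean concentration profile (the profile and eddy diffusivity are illustrative values, not DNS data):

```python
import numpy as np

y = np.linspace(0.0, 1.0, 101)   # wall-normal coordinate
C = np.exp(-5.0 * y)             # assumed mean scalar profile (decays from wall)
D_t = 0.01                       # assumed eddy diffusivity

dCdy = np.gradient(C, y)         # finite-difference mean gradient
flux = -D_t * dCdy               # gradient-diffusion model of the scalar flux
# Flux is positive (directed away from the wall) wherever C decreases with y.
print(round(float(flux[0]), 3))
```

The a posteriori test in the paper amounts to evolving the mean scalar equation with closures like this one, while taking the velocity and turbulence fields from DNS, so that any error is attributable to the flux model alone.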

  18. A high-frequency warm shallow water acoustic communications channel model and measurements.

    PubMed

    Chitre, Mandar

    2007-11-01

Underwater acoustic communication is a core enabling technology with applications in ocean monitoring using remote sensors and autonomous underwater vehicles. One of the more challenging underwater acoustic communication channels is the medium-range very shallow warm-water channel, common in tropical coastal regions. This channel exhibits two key features, extensive time-varying multipath and high levels of non-Gaussian ambient noise due to snapping shrimp, both of which limit the performance of traditional communication techniques. A good understanding of the communications channel is key to the design of communication systems: it aids in the development of signal processing techniques as well as in the testing of those techniques via simulation. In this article, a physics-based model of the very shallow warm-water acoustic channel is developed at the high frequencies of interest to medium-range communication system developers. The model is based on ray acoustics and includes time-varying statistical effects as well as the non-Gaussian ambient noise statistics observed during channel studies. The model is calibrated, and its accuracy validated, using measurements made at sea.
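The non-Gaussian character of snapping-shrimp noise can be illustrated with a simple heavy-tailed surrogate (a Bernoulli-Gaussian mixture is an assumption here, used only to show the effect; the paper fits its own statistical model to sea measurements):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
background = rng.standard_normal(n)                       # Gaussian ambient floor
# Rare, large "snaps" on top of the background (illustrative parameters).
snaps = (rng.random(n) < 0.005) * rng.standard_normal(n) * 30.0
noise = background + snaps

def excess_kurtosis(x):
    """0 for a Gaussian; large and positive for impulsive, heavy-tailed noise."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

print(excess_kurtosis(noise) > 10.0)  # mixture is far heavier-tailed than Gaussian
```

Receivers designed for Gaussian noise degrade badly on such statistics, which is why channel models of this kind matter for communication-system design.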

  19. Design and Development of Basic Physical Layer WiMAX Network Simulation Models

    DTIC Science & Technology

    2009-01-01

Wide Web. The third software version was developed during the period of 22 August to 4 November 2008. The software version developed during the... researched on the Web. The mathematics of some fundamental concepts, such as Fourier transforms and convolutional coding techniques, was also reviewed... Mathworks Matlab users' website. A simulation model was found, entitled Estudio y Simulación de la capa física de la norma 802.16 (Sistema WiMAX) (Study and simulation of the physical layer of the 802.16 standard (WiMAX system)), developed

  20. Computational split-field finite-difference time-domain evaluation of simplified tilt-angle models for parallel-aligned liquid-crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Álvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2018-03-01

    Simplified analytical models with predictive capability enable simpler and faster optimization of the performance in applications of complex photonic devices. We recently demonstrated the most simplified analytical model still showing predictive capability for parallel-aligned liquid crystal on silicon (PA-LCoS) devices, which provides the voltage-dependent retardance for a very wide range of incidence angles and any wavelength in the visible. We further show that the proposed model is not only phenomenological but also physically meaningful, since two of its parameters provide the correct values for important internal properties of these devices related to the birefringence, cell gap, and director profile. Therefore, the proposed model can be used as a means to inspect internal physical properties of the cell. As an innovation, we also show the applicability of the split-field finite-difference time-domain (SF-FDTD) technique for phase-shift and retardance evaluation of PA-LCoS devices under oblique incidence. As a simplified model for PA-LCoS devices, we also consider the exact description of homogeneous birefringent slabs. However, we show that, despite its higher degree of simplification, the proposed model is more robust, providing unambiguous and physically meaningful solutions when fitting its parameters.
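The simplest comparison model mentioned above, a homogeneous birefringent slab, gives a retardance of 2*pi*d*dn/lambda at normal incidence. A quick numeric sketch (the cell gap and birefringence below are illustrative values, not fitted PA-LCoS parameters):

```python
import math

def retardance_deg(d_nm, dn, wavelength_nm):
    """Retardance (in degrees) of a homogeneous birefringent slab of
    thickness d and birefringence dn at normal incidence."""
    return math.degrees(2.0 * math.pi * d_nm * dn / wavelength_nm)

# e.g. an assumed 2.5 um cell with dn = 0.2, probed at 633 nm:
g = retardance_deg(2500.0, 0.2, 633.0)
print(round(g))  # degrees of retardance
```

The paper's point is that, despite being barely more complex than this slab picture, its proposed model remains voltage- and incidence-angle-dependent and yields physically meaningful fitted parameters, which the plain slab does not.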

  1. Understanding Femtosecond-Pulse Laser Damage through Fundamental Physics Simulations

    NASA Astrophysics Data System (ADS)

    Mitchell, Robert A., III

    It did not take long after the invention of the laser for the field of laser damage to appear. For several decades researchers have been studying how lasers damage materials, both for the basic scientific understanding of highly nonequilibrium processes and for industrial applications. Femtosecond-pulse lasers create little collateral damage and a readily reproducible damage pattern. They are easily tailored to desired specifications and are particularly powerful and versatile tools, drawing even more industrial interest to the field. As with most long-standing fields of research, many theoretical tools have been developed to model the laser damage process, covering a wide range of complexities and regimes of applicability. However, most of the modeling methods developed are either too limited in spatial extent to model the full morphology of the damage crater, or incorporate only a small subset of the important physics and require numerous fitting parameters and assumptions in order to match values interpolated from experimental data. Demonstrated in this work is the first simulation method capable of fundamentally modeling the full laser damage process, from the laser interaction all the way through to the resolidification of the target, on a scale large enough to capture the full morphology of the laser damage crater, so that results can be compared directly to experimental measurements rather than extrapolated values, and all without any fitting parameters. The design, implementation, and testing of this simulation technique, based on a modified version of the particle-in-cell (PIC) method, is presented. 
For a 60 fs, 1 μm wavelength laser pulse with fluences of 0.5 J/cm2, 1.0 J/cm2, and 2.0 J/cm2, the resulting laser damage craters in copper are shown and, using the same technique applied to experimental crater morphologies, a laser damage fluence threshold of 0.15 J/cm2 is calculated, consistent with current experiments performed under conditions similar to those in the simulation. Lastly, this method is applied to the phenomenon known as LIPSS, or Laser-Induced Periodic Surface Structures: a problem of fundamental importance that is also of great interest for industrial applications. While LIPSS have been observed for decades in laser damage experiments, the exact physical mechanisms leading to the periodic corrugation on the surface of a target have been highly debated, with no general consensus. Applying this technique to a situation known to create LIPSS in a single shot, the generation of this periodicity is observed, the wavelength of the damage is consistent with experimental measurements and, due to the fundamental nature of the simulation method, the physical mechanisms behind LIPSS are examined. The mechanism behind LIPSS formation in the studied regime is shown to be the formation of, and interference with, an evanescent surface electromagnetic wave known as a surface plasmon-polariton. This shows that not only can this simulation technique model a basic laser damage situation, but it is also flexible and powerful enough to be applied to complex areas of research, allowing for new physical insight in regimes that are difficult to probe experimentally.

  2. Predictors for Physical Activity in Adolescent Girls Using Statistical Shrinkage Techniques for Hierarchical Longitudinal Mixed Effects Models

    PubMed Central

    Grant, Edward M.; Young, Deborah Rohm; Wu, Tong Tong

    2015-01-01

    We examined associations among longitudinal, multilevel variables and girls’ physical activity to determine the important predictors for physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate-to-vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed effects and 19 random effects variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school’s physical activity environment, affect adolescent girls’ moderate-to-vigorous physical activity longitudinally. PMID:25928064
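The variable selection described above rests on shrinkage: an L1-type (lasso) penalty drives small coefficients exactly to zero, which is how 3 fixed effects survive out of 47. The building block of that selection is the soft-thresholding operator, sketched here on made-up coefficient values (not the study's fitted effects).

```python
def soft_threshold(beta, lam):
    """Shrink a coefficient toward zero; set it exactly to zero when
    |beta| <= lam. This is the proximal operator of the L1 penalty."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

coefs = [0.9, -0.05, 0.4, 0.02]                      # hypothetical fitted effects
selected = [soft_threshold(b, 0.1) for b in coefs]   # small ones drop to 0.0
```

In the doubly regularized model the same idea is applied twice, once to the fixed effects and once to the random-effect variance components.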

  3. Reactive solute transport in streams: 1. Development of an equilibrium- based model

    USGS Publications Warehouse

    Runkel, Robert L.; Bencala, Kenneth E.; Broshears, Robert E.; Chapra, Steven C.

    1996-01-01

    An equilibrium-based solute transport model is developed for the simulation of trace metal fate and transport in streams. The model is formed by coupling a solute transport model with a chemical equilibrium submodel based on MINTEQ. The solute transport model considers the physical processes of advection, dispersion, lateral inflow, and transient storage, while the equilibrium submodel considers the speciation and complexation of aqueous species, precipitation/dissolution and sorption. Within the model, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (water-borne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach.
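The sequential iteration approach described above alternates a physical transport step with an algebraic chemical-equilibrium step. The toy sketch below uses explicit upwind advection and a linear sorption isotherm in place of the full MINTEQ speciation; all names and parameter values are illustrative, not the model's.

```python
def transport_step(c, u, dx, dt):
    """Explicit upwind advection of a dissolved concentration c (u > 0);
    stands in for the model's advection-dispersion-storage operator."""
    new = c[:]
    for i in range(1, len(c)):
        new[i] = c[i] - u * dt / dx * (c[i] - c[i - 1])
    return new

def equilibrium_step(c_aq, c_sorb, kd):
    """Re-partition total mass between water column and streambed assuming
    instantaneous linear sorption, c_sorb = kd * c_aq (a stand-in for the
    MINTEQ-based speciation/sorption submodel)."""
    total = [a + s for a, s in zip(c_aq, c_sorb)]
    c_aq_new = [t / (1.0 + kd) for t in total]
    c_sorb_new = [kd * c for c in c_aq_new]
    return c_aq_new, c_sorb_new

# one coupled step: a trace-metal pulse enters at the upstream boundary
c_aq = [1.0] + [0.0] * 9          # dissolved metal in the water column
c_sorb = [0.0] * 10               # sorbed phase on the streambed
c_aq = transport_step(c_aq, u=0.5, dx=1.0, dt=1.0)
c_aq, c_sorb = equilibrium_step(c_aq, c_sorb, kd=1.0)
```

The real model iterates these two steps to convergence within each time step, which is what couples the partial differential transport equations to the algebraic equilibrium equations.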

  4. Advanced Ground Systems Maintenance Physics Models For Diagnostics Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.

    2015-01-01

    The project will use high-fidelity physics models and simulations to simulate real-time operations of cryogenic and other fluid systems and calculate the status/health of the systems. The project enables the delivery of system health advisories to ground system operators. The capability will also be used to conduct planning and analysis of cryogenic system operations. This project will develop and implement high-fidelity physics-based modeling techniques to simulate the real-time operation of cryogenics and other fluids systems and, when compared to the real-time operation of the actual systems, provide assessment of their state. Physics-model-calculated measurements (called “pseudo-sensors”) will be compared to the system real-time data. Comparison results will be utilized to provide systems operators with enhanced monitoring of systems' health and status, identify off-nominal trends, and diagnose system/component failures. This capability can also be used to conduct planning and analysis of cryogenics and other fluid systems designs. This capability will be interfaced with the ground operations command and control system as a part of the Advanced Ground Systems Maintenance (AGSM) project to help assure system availability and mission success. The initial capability will be developed for the Liquid Oxygen (LO2) ground loading systems.
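The core of the pseudo-sensor idea above is a residual check: a physics-model prediction is compared against real-time telemetry, and an advisory is raised when the two disagree by more than a tolerance. The function name, readings, and threshold below are hypothetical placeholders, not part of the AGSM design.

```python
def health_status(model_value, measured_value, threshold):
    """Return a status advisory from the pseudo-sensor residual: the
    difference between the physics-model prediction and the sensor."""
    residual = abs(model_value - measured_value)
    if residual <= threshold:
        return "nominal"
    return "off-nominal"

# e.g. a model predicts 34.5 psia in an LO2 line; the sensor reads 36.2 psia
status = health_status(34.5, 36.2, threshold=1.0)
```

Trending the residual over time, rather than checking it once, is what allows off-nominal drift to be flagged before a hard failure.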

  5. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors, to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical, but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
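One strategy the abstract names, restricting the search to dimensionally consistent equations, can be sketched with exponent vectors over base dimensions: multiplying quantities adds their exponents, and a candidate predictor is admissible only if its exponents match the target's. The quantities below are illustrative placeholders, not the paper's variables.

```python
def dims_of_product(*dims):
    """Dimensions of a product of quantities, each given as an exponent
    tuple over the base dimensions (length, time)."""
    return tuple(sum(d[i] for d in dims) for i in range(2))

VELOCITY = (1, -1)   # L T^-1, the target dimension for a mean-flow predictor
LENGTH = (1, 0)      # e.g. water depth or stem diameter
INV_TIME = (0, -1)   # e.g. a shear rate

# (shear rate) * (depth) has dimensions L T^-1: an admissible velocity candidate
assert dims_of_product(INV_TIME, LENGTH) == VELOCITY
# (depth) * (diameter) has dimensions L^2: rejected before fitness evaluation
assert dims_of_product(LENGTH, LENGTH) != VELOCITY
```

Filtering candidates this way inside a genetic-programming loop is one simple route to equations that need no dimensional fudge coefficients.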

  6. The Use of Behavior Change Techniques and Theory in Technologies for Cardiovascular Disease Prevention and Treatment in Adults: A Comprehensive Review

    PubMed Central

    Winter, Sandra J; Sheats, Jylana L; King, Abby C

    2016-01-01

    This review examined the use of health behavior change techniques and theory in technology-enabled interventions targeting risk factors and indicators for cardiovascular disease (CVD) prevention and treatment. Articles targeting physical activity, weight loss, smoking cessation and management of hypertension, lipids and blood glucose were sourced from PubMed (November 2010-2015) and coded for use of 1) technology, 2) health behavior change techniques (using the CALO-RE taxonomy), and 3) health behavior theories. Of the 984 articles reviewed, 304 were relevant (240 intervention, 64 review). Twenty-two different technologies were used (M=1.45, SD=0.719). The most frequently used behavior change techniques were self-monitoring and feedback on performance (M=5.4, SD=2.9). Half (52%) of the intervention studies named a theory/model - most frequently Social Cognitive Theory, the Trans-theoretical Model, and the Theory of Planned Behavior/Reasoned Action. To optimize technology-enabled interventions targeting CVD risk factors, integrated behavior change theories that incorporate a variety of evidence-based health behavior change techniques are needed. PMID:26902519

  7. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
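The pattern described above, replacing an expensive high-fidelity model with a cheap surrogate and then sampling it to get a probabilistic remaining-useful-life (RUL) estimate, can be sketched with a simple crack-growth stand-in. The Paris-law form and every constant below are illustrative, not the paper's finite element model or material data.

```python
import math, random

def cycles_to_failure(a0, a_crit, C, m, d_sigma):
    """Cheap surrogate for crack growth: integrate the Paris law
    da/dN = C * (d_sigma * sqrt(pi * a))**m cycle by cycle until the
    crack length a reaches the critical size a_crit."""
    a, n = a0, 0
    while a < a_crit:
        a += C * (d_sigma * math.sqrt(math.pi * a)) ** m
        n += 1
    return n

random.seed(0)
# diagnosis uncertainty in the initial crack size a0 (meters) propagates
# through the surrogate into a distribution of remaining life
samples = [cycles_to_failure(random.uniform(1e-3, 2e-3), 0.02,
                             C=1e-9, m=3.0, d_sigma=100.0)
           for _ in range(100)]
rul_median = sorted(samples)[len(samples) // 2]
```

The point of the surrogate is exactly this loop: hundreds of evaluations are affordable here, whereas hundreds of 3D finite element runs would not be.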

  8. Integration of Dynamic Models in Range Operations

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge; Thirumalainambi, Rajkumar

    2004-01-01

    This work addresses the various model interactions in real time to make an efficient internet-based decision-making tool for Shuttle launch. The decision-making tool depends on the launch commit criteria coupled with physical models. Dynamic interaction between a wide variety of simulation applications and techniques, embedded algorithms, and data visualizations is needed to exploit the full potential of modeling and simulation. This paper also discusses in depth the details of web-based 3-D graphics and their applications to range safety. The advantages of this dynamic model integration are secure accessibility and distribution of real-time information to other NASA centers.

  9. Use of Hypnosis in Self-Esteem Building: Review and Model for Rehabilitation.

    ERIC Educational Resources Information Center

    Klich, Beatriz de M.; Miller, Mary Ball

    Hypnotherapeutic approaches in helping physically disabled patients cope with stress and plan further goals during the rehabilitation period are discussed. Several techniques possible in a rehabilitation setting are presented, including integration of ego strengthening and self-esteem building. It is noted that, in rehabilitation, a major goal of…

  10. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.
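The Bayes discriminant procedure named above assigns an observation to the class with the largest prior-weighted density; with unequal covariances the decision boundary becomes quadratic rather than linear. A one-dimensional sketch with hypothetical class parameters keeps the unequal-variance case readable:

```python
import math

def normal_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(x, classes):
    """classes maps name -> (prior, mean, variance). Assign x to the class
    maximizing prior * density; unequal variances give a quadratic rule."""
    return max(classes,
               key=lambda k: classes[k][0] * normal_pdf(x, classes[k][1], classes[k][2]))

# two hypothetical land-cover classes with different variances
classes = {"water": (0.5, 0.0, 1.0), "vegetation": (0.5, 3.0, 4.0)}
label = bayes_classify(1.0, classes)
```

In the multivariate remote-sensing case the densities involve full covariance matrices, which is where the exact misclassification probabilities reported in the paper become nontrivial.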

  11. A Brief History of Publishing Papers on Astronomy Education Research

    ERIC Educational Resources Information Center

    Fraknoi, Andrew

    2014-01-01

    While some research had been done on K-12 and planetarium astronomy teaching from the 1930's to the 1980's, the growth of research on college physics education offered astronomy education researchers a model for examining techniques for teaching introductory college astronomy survey "Astronomy 101" courses as well. This early research…

  12. A Multi-Level Model of Moral Thinking Based on Neuroscience and Moral Psychology

    ERIC Educational Resources Information Center

    Jeong, Changwoo; Han, Hye Min

    2011-01-01

    Developments in neurobiology are providing new insights into the biological and physical features of human thinking, and brain-activation imaging methods such as functional magnetic resonance imaging have become the most dominant research techniques to approach the biological part of thinking. With the aid of neurobiology, there also have been…

  13. Nanolayered microlenses in theory and practice

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; Andrews, James; Oder, Tom; Zhou, Chuanhong; Merlo, Cory; Hetzel, Connor; Bagheri, Cameron; Petrus, Joshua; Mazzocco, Anthony

    2014-05-01

    Co-extruded layered polymer films with structurally designed optical dispersion are used as ``blanks'' from which micro lenses have been fabricated using grey-scale photo-lithography followed by plasma etching. We describe the materials and processing as well as techniques used to characterize the micro lenses and the physical optics theory used to model their measured behavior.

  14. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1988-01-01

    Research focused on two major areas. The first effort addressed the design and implementation of a technique that allows for the visualization of the real time variation of physical properties. The second effort focused on the design and implementation of an on-line help system with components designed for both authors and users of help information.

  15. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
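One of the numerical techniques the abstract names, implicit time integration, can be illustrated on the stiff test equation y' = -k*y: backward Euler remains stable at step sizes where forward Euler blows up, which is why it helps with fast physical time scales. The values below are a generic textbook example, not taken from the paper.

```python
def forward_euler(y, k, dt):
    """Explicit step: unstable for this equation when dt > 2/k."""
    return y + dt * (-k * y)

def backward_euler(y, k, dt):
    """Implicit step y_new = y + dt*(-k*y_new), solvable in closed form here."""
    return y / (1.0 + k * dt)

y_exp = y_imp = 1.0
k, dt = 50.0, 0.1          # dt is far above forward Euler's stability limit 2/k
for _ in range(20):
    y_exp = forward_euler(y_exp, k, dt)
    y_imp = backward_euler(y_imp, k, dt)
# y_exp oscillates and grows without bound; y_imp decays smoothly toward 0
```

In a climate code the implicit solve is a large nonlinear system rather than a scalar division, but the stability payoff is the same.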

  16. Polymer physics experiments with single DNA molecules

    NASA Astrophysics Data System (ADS)

    Smith, Douglas E.

    1999-11-01

    Bacteriophage DNA molecules were taken as a model flexible polymer chain for the experimental study of polymer dynamics at the single molecule level. Video fluorescence microscopy was used to directly observe the conformational dynamics of fluorescently labeled molecules, optical tweezers were used to manipulate individual molecules, and micro-fabricated flow cells were used to apply controlled hydrodynamic strain to molecules. These techniques constitute a powerful new experimental approach in the study of basic polymer physics questions. I have used these techniques to study the diffusion and relaxation of isolated and entangled polymer molecules and the hydrodynamic deformation of polymers in elongational and shear flows. These studies revealed a rich, and previously unobserved, ``molecular individualism'' in the dynamical behavior of single molecules. Individual measurements on ensembles of identical molecules allowed the average conformation to be determined as well as the underlying probability distributions for molecular conformation. Scaling laws, that predict the dependence of properties on chain length and concentration, were also tested. The basic assumptions of the reptation model were directly confirmed by visualizing the dynamics of entangled chains.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  18. Physics Beyond the Standard Model: Exotic Leptons and Black Holes at Future Colliders

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.

    2005-02-01

    The Standard Model of particle physics has been remarkably successful in describing present experimental results. However, it is assumed to be only a low-energy effective theory which will break down at higher energy scales, theoretically motivated to be around 1 TeV. There are a variety of proposed models of new physics beyond the Standard Model, most notably supersymmetric and extra dimension models. New charged and neutral heavy leptons are a feature of a number of theories of new physics, including the `intermediate scale' class of supersymmetric models. Using a time-of-flight technique to detect the charged leptons at the Large Hadron Collider, the discovery range (in the particular scenario studied in the first part of this thesis) is found to extend up to masses of 950 GeV. Extra dimension models, particularly those with large extra dimensions, allow the possible experimental production of black holes. The remainder of the thesis describes some theoretical results and computational tools necessary to model the production and decay of these miniature black holes at future particle colliders. The grey-body factors which describe the Hawking radiation emitted by higher-dimensional black holes are calculated numerically for the first time and then incorporated in a Monte Carlo black hole event generator; this can be used to model black hole production and decay at next-generation colliders. It is hoped that this generator will allow more detailed examination of black hole signatures and help to devise a method for extracting the number of extra dimensions present in nature.
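The time-of-flight technique mentioned above comes down to one line of relativistic kinematics: a heavy charged lepton arrives late, its velocity beta is inferred from the delay over a known flight path, and the mass follows from m = p * sqrt(1/beta^2 - 1). The numbers below are illustrative, not the thesis's benchmark points.

```python
import math

def mass_from_tof(p_gev, path_m, tof_ns):
    """Mass (GeV) of a track with momentum p_gev (GeV) from its
    time of flight over path_m meters: beta = L/(t*c), m = p*sqrt(1/beta^2 - 1)."""
    c = 0.299792458            # speed of light in m/ns
    beta = path_m / (tof_ns * c)
    return p_gev * math.sqrt(1.0 / beta ** 2 - 1.0)

# a hypothetical 1000 GeV-momentum track arriving 1.2x slower than light over 10 m
m = mass_from_tof(p_gev=1000.0, path_m=10.0, tof_ns=10.0 / 0.299792458 * 1.2)
```

In practice the timing resolution of the muon system sets how close to beta = 1 (i.e. how light) a new lepton can be and still be separated from ordinary muons.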

  19. Numerical Solution of the Electron Heat Transport Equation and Physics-Constrained Modeling of the Thermal Conductivity via Sequential Quadratic Programming Optimization in Nuclear Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Paloma, Cynthia S.

    The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor, since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE, since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. 
A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
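A minimal finite-difference sketch in the spirit of the EHTE solver described above: 1-D heat diffusion T_t = chi * T_xx on a uniform grid with fixed cold edges. Here chi is held constant for readability, whereas the thesis's diffusivity model depends on Te, ne, and q; the grid and coefficients are illustrative.

```python
def diffusion_step(T, chi, dx, dt):
    """One explicit finite-difference step of T_t = chi * T_xx with the
    edge temperatures held fixed (Dirichlet boundaries)."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + chi * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new

T = [0.0] * 21
T[10] = 1.0                      # hot core, cold edges
for _ in range(100):
    # explicit scheme is stable only for dt <= dx^2 / (2 * chi)
    T = diffusion_step(T, chi=1.0, dx=1.0, dt=0.25)
```

The stability restriction in the comment is one practical reason transport codes often prefer implicit time stepping for this equation.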

  20. Searching for new physics with three-particle correlations in pp collisions at the LHC

    NASA Astrophysics Data System (ADS)

    Sanchis-Lozano, Miguel-Angel; Sarkisyan-Grinbaum, Edward K.

    2018-06-01

    New phenomena involving pseudorapidity and azimuthal correlations among final-state particles in pp collisions at the LHC can hint at the existence of hidden sectors beyond the Standard Model. In this paper we rely on a correlated-cluster picture of multiparticle production, which was shown to account for the ridge effect, to assess the effect of a hidden sector on three-particle correlations, concluding that there is a potential signature of new physics that can be directly tested by experiments using well-known techniques.

Top