NASA Astrophysics Data System (ADS)
Lei, Li
1999-07-01
In this study the researcher develops and presents a new model, founded on the laws of physics, for analyzing dance technique. Based on a pilot study of four advanced dance techniques, she creates a new model for diagnosing, analyzing and describing basic, intermediate and advanced dance techniques. The name for this model is ``PED,'' which stands for Physics of Expressive Dance. The research design consists of five phases: (1) Conduct a pilot study to analyze several advanced dance techniques chosen from Chinese dance, modern dance, and ballet; (2) Based on learning obtained from the pilot study, create the PED Model for analyzing dance technique; (3) Apply this model to eight categories of dance technique; (4) Select two advanced dance techniques from each category and analyze these sample techniques to demonstrate how the model works; (5) Develop an evaluation framework and use it to evaluate the effectiveness of the model, taking into account both scientific and artistic aspects of dance training. In this study the researcher presents new solutions to three problems highly relevant to dance education: (1) Dancers attempting to learn difficult movements often fail because they are unaware of the laws of physics; (2) Even those who do master difficult movements can suffer injury due to incorrect training methods; (3) Even the best dancers can waste time learning by trial and error, without scientific instruction. In addition, the researcher discusses how the application of the PED model can benefit dancers, allowing them to avoid inefficient and ineffective movements and freeing them to focus on the artistic expression of dance performance. This study is unique, presenting the first comprehensive system for analyzing dance techniques in terms of the laws of physics. The results of this study are useful, allowing a new level of awareness about dance techniques that dance professionals can utilize for more effective and efficient teaching and learning. The approach utilized in this study is universal, and can be applied to any dance movement and to any dance style.
Nonlinear ultrasonics for material state awareness
NASA Astrophysics Data System (ADS)
Jacobs, L. J.
2014-02-01
Predictive health monitoring of structural components will require the development of advanced sensing techniques capable of providing quantitative information on the damage state of structural materials. By focusing on nonlinear acoustic techniques, it is possible to measure absolute, strength-based material parameters that can then be coupled with uncertainty models to enable accurate and quantitative life prediction. Starting at the material level, this review will present current research that involves a combination of sensing techniques and physics-based models to characterize damage in metallic materials. In metals, these nonlinear ultrasonic measurements can sense material state before the formation of micro- and macro-cracks. Typically, cracks of a measurable size appear quite late in a component's total life, while the material's integrity in terms of toughness and strength gradually decreases due to microplasticity (dislocations) and the associated change in the material's microstructure. This review focuses on second harmonic generation techniques. Since these nonlinear techniques are acoustic-wave based, component interrogation can be performed with bulk, surface and guided waves using the same underlying material physics; these nonlinear ultrasonic techniques provide results which are independent of the wave type used. Recent physics-based models consider the evolution of damage due to dislocations, slip bands, interstitials, and precipitates in the lattice structure, which can lead to localized damage.
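In second harmonic generation measurements, the acoustic nonlinearity parameter β is commonly estimated from the fundamental and second-harmonic displacement amplitudes A1 and A2 after propagation over a distance x, via β = 8A2/(k²xA1²). The sketch below illustrates this relation in Python; the amplitude values are hypothetical, and real measurements require calibrated transduction and diffraction/attenuation corrections not shown here.

```python
import numpy as np

def beta_parameter(a1, a2, freq, distance, wave_speed):
    """Acoustic nonlinearity parameter from second harmonic generation.

    a1, a2     : displacement amplitudes of fundamental and second harmonic [m]
    freq       : fundamental frequency [Hz]
    distance   : propagation distance [m]
    wave_speed : wave speed in the material [m/s]
    """
    k = 2.0 * np.pi * freq / wave_speed      # fundamental wavenumber
    return 8.0 * a2 / (k**2 * distance * a1**2)

# Relative tracking: beta is proportional to A2/A1^2, so the slope of A2
# versus A1^2 over increasing drive levels gives a relative measure.
a1 = np.array([1.0e-9, 2.0e-9, 3.0e-9])     # hypothetical amplitudes
a2 = np.array([0.8e-12, 3.1e-12, 7.2e-12])
rel_beta = np.polyfit(a1**2, a2, 1)[0]
print(rel_beta)
```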
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
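As an illustration of the digital waveguide idea discussed above, the following minimal sketch implements the Karplus-Strong plucked string, the simplest special case of a digital waveguide string model (a single delay line with a lossy averaging filter in the feedback loop). It is not the specific model from the article; the loss factor and random pluck excitation are illustrative choices.

```python
import numpy as np

def plucked_string(f0, fs=44100, dur=1.0, loss=0.996):
    """Minimal digital waveguide (Karplus-Strong) plucked string."""
    n = int(fs / f0)                     # delay-line length sets the pitch
    delay = np.random.uniform(-1, 1, n)  # pluck: random initial displacement
    out = np.empty(int(fs * dur))
    for i in range(out.size):
        out[i] = delay[0]
        # two-point averaging filter models frequency-dependent loss
        new = loss * 0.5 * (delay[0] + delay[1])
        delay = np.roll(delay, -1)       # advance the delay line
        delay[-1] = new
    return out

tone = plucked_string(440.0)  # A4
```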
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, O.
The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This has led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics with solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated against models that capture more of the detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated and risk being misused.
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
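A minimal sketch of the statistics-based feature pipeline described above, using one representative machine learning technique (a random forest) on synthetic data standing in for windowed hip-accelerometer signals. The window length, feature set, and classifier are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window):
    """Statistical features for one accelerometer window (samples x 3 axes)."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(),
                  np.percentile(x, 25), np.percentile(x, 75)]
    return feats

# hypothetical data: 1000 windows of 10 s at 30 Hz, 3 axes, 8 activity labels
rng = np.random.default_rng(0)
windows = rng.normal(size=(1000, 300, 3))
labels = rng.integers(0, 8, size=1000)

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```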
A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals
NASA Technical Reports Server (NTRS)
Skelton, R. T.; Mahoney, W. A.
1993-01-01
We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.
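A minimal sketch of the residual-search idea: subtract the physically based background model from the observed count series and flag time windows whose summed residual is a significant Poisson excess. The window length and significance threshold are illustrative assumptions.

```python
import numpy as np

def burst_candidates(counts, model, window=16, threshold=5.0):
    """Scan background-model residuals for significant excesses.

    counts : observed counts per time bin
    model  : physically based background-model prediction per bin
    Returns start indices of windows whose summed residual exceeds
    `threshold` standard deviations (Poisson error taken from the model).
    """
    resid = counts - model
    hits = []
    for i in range(len(resid) - window):
        excess = resid[i:i + window].sum()
        sigma = np.sqrt(model[i:i + window].sum())  # Poisson std of the sum
        if sigma > 0 and excess / sigma > threshold:
            hits.append(i)
    return hits
```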
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2016-05-01
Quality control is critical to manufacturing. Frequently, techniques based on historical quality data are used to define object conformity bounds. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets, to prevent errors attributable to misalignment.
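One standard way to characterize and correct such alignment discrepancies, assuming point correspondences between the generated model and the scanned part, is least-squares rigid registration (the Kabsch algorithm). The sketch below is illustrative; the paper's own correction techniques may differ.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch) of two matched point sets.

    source, target : (n, 3) arrays of corresponding points.
    Returns rotation R and translation t such that R @ p + t maps
    source points onto target points in the least-squares sense.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```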
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aiming to improve on the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (Northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques against traditional models, stream-power-based and shear-stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
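For reference, a minimal sketch of the k-fold testing procedure mentioned above, applied to any scikit-learn-style regression model; the fold count and error metric are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import KFold

def k_fold_rmse(model, X, y, k=10):
    """RMSE of a regression model under k-fold cross validation,
    so every record is used for both training and testing."""
    errs = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=1).split(X):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))
```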
Deformation-based augmented reality for hepatic surgery.
Haouchine, Nazim; Dequidt, Jérémie; Berger, Marie-Odile; Cotin, Stéphane
2013-01-01
In this paper we introduce a method for augmenting the laparoscopic view during hepatic tumor resection. Using augmented reality techniques, vessels, tumors and cutting planes computed from pre-operative data can be overlaid onto the laparoscopic video. Compared to current techniques, which are limited to a rigid registration of the pre-operative liver anatomy with the intra-operative image, we propose a real-time, physics-based, non-rigid registration. The main strength of our approach is that the deformable model can also be used to regularize the data extracted from the computer vision algorithms. We show preliminary results on a video sequence which clearly highlight the interest of using a physics-based model for elastic registration.
Employment of adaptive learning techniques for the discrimination of acoustic emissions
NASA Astrophysics Data System (ADS)
Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.
1983-11-01
The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.
Mastery Learning in Physical Education.
ERIC Educational Resources Information Center
Annarino, Anthony
This paper discusses the design of a physical education curriculum to be used in advanced secondary physical education programs and in university basic instructional programs; the design is based on the premise of mastery learning and employs programed instructional techniques. The effective implementation of a mastery learning model necessitates…
Integrated Formulation of Beacon-Based Exception Analysis for Multimissions
NASA Technical Reports Server (NTRS)
Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail
2003-01-01
Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability of application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM builds upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary separating the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need for and applications of model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J V; Chambers, D H; Breitfeller, E F
2010-03-02
The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding, which complicates the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.
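A greatly simplified sketch of sequential detection on photon interarrival times: a Wald sequential probability ratio test contrasting a background-only Poisson rate with a background-plus-source rate. The full processor described above additionally models photon energies and scattering physics, which this sketch omits.

```python
import numpy as np

def sprt_source_detect(taus, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on photon interarrival times.

    taus : sequence of interarrival times [s]
    lam0 : background-only count rate [1/s]
    lam1 : background-plus-source count rate [1/s]
    Returns 'source', 'background', or 'continue' (more data needed).
    """
    upper = np.log((1 - beta) / alpha)   # accept H1: source present
    lower = np.log(beta / (1 - alpha))   # accept H0: background only
    llr = 0.0
    for tau in taus:
        # log-likelihood ratio of one exponential interarrival time
        llr += np.log(lam1 / lam0) - (lam1 - lam0) * tau
        if llr >= upper:
            return "source"
        if llr <= lower:
            return "background"
    return "continue"
```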
Nurmi, Johanna; Hagger, Martin S; Haukkala, Ari; Araújo-Soares, Vera; Hankonen, Nelli
2016-04-01
This study tested the predictive validity of a multitheory process model in which the effect of autonomous motivation from self-determination theory on physical activity participation is mediated by the adoption of self-regulatory techniques based on control theory. Finnish adolescents (N = 411, aged 17-19) completed a prospective survey including validated measures of the predictors and physical activity, at baseline and after one month (N = 177). A subsample used an accelerometer to objectively measure physical activity and further validate the physical activity self-report assessment tool (n = 44). Autonomous motivation statistically significantly predicted action planning, coping planning, and self-monitoring. Coping planning and self-monitoring mediated the effect of autonomous motivation on physical activity, although self-monitoring was the most prominent. Controlled motivation had no effect on self-regulation techniques or physical activity. Developing interventions that support autonomous motivation for physical activity may foster increased engagement in self-regulation techniques and positively affect physical activity behavior.
Exploring Physics in the Classroom
ERIC Educational Resources Information Center
Amann, George
2005-01-01
The key to learning is student involvement! This American Association of Physics Teachers/Physics Teaching Resource Agents (AAPT/PTRA) manual presents examples of two techniques that are proven to increase student involvement in your classroom. Based on the "5E" model of learning, exploratories are designed to get your students excited about the…
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail method that delivers performance of multiple frames per second even for planetary-scale terrain model sizes.
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
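A minimal sketch of the learning-aggregation step: each ensemble member is weighted by its historical forecast error, and the weighted combination forms the single best-estimate forecast. The exponential weighting rule and its parameter are illustrative assumptions, not necessarily the study's exact aggregation algorithm.

```python
import numpy as np

def aggregate_forecasts(forecasts, past_forecasts, past_obs, eta=2.0):
    """Exponentially weighted aggregation of ensemble wave forecasts.

    forecasts      : (m,) current wave-height forecast of each member
    past_forecasts : (m, t) past forecasts of each member
    past_obs       : (t,) matching observations
    Members with lower historical squared error receive larger weights.
    """
    losses = np.mean((past_forecasts - past_obs) ** 2, axis=1)
    w = np.exp(-eta * losses / losses.mean())
    w /= w.sum()
    return float(w @ forecasts)
```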
NASA Astrophysics Data System (ADS)
Luks, B.; Osuch, M.; Romanowicz, R. J.
2012-04-01
We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalence and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). Those variables are used for physically based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped-parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed that follows the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of the uncertainty of both model outputs. In the time series approach, the applied techniques provide estimates of the modelling errors and of the uncertainty of the model parameters. In the first, physically based approach, the applied UEB model is deterministic: it assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take the model and observation errors into account, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique, which likewise provides estimates of the modelling errors and of the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by the National Science Centre of Poland (grant no. 7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Eds.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/), 64 pp.
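A minimal sketch of the GLUE procedure applied above, under simplifying assumptions (uniform priors, a Nash-Sutcliffe informal likelihood, a fixed behavioural threshold, and unweighted percentile bounds); the `simulate` function stands in for a model run such as UEB.

```python
import numpy as np

def glue_bounds(simulate, priors, obs, n=5000, nse_threshold=0.5):
    """Minimal GLUE sketch: Monte Carlo parameter sampling, an informal
    Nash-Sutcliffe likelihood, and 95% bounds over behavioural runs.

    simulate : function(theta) -> simulated series, same length as obs
    priors   : list of (low, high) uniform prior bounds, one per parameter
    """
    rng = np.random.default_rng(0)
    thetas = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in priors])
    sims = np.array([simulate(th) for th in thetas])
    # Nash-Sutcliffe efficiency of each run as an informal likelihood
    nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
    behavioural = sims[nse > nse_threshold]    # reject non-behavioural runs
    lower = np.percentile(behavioural, 2.5, axis=0)
    upper = np.percentile(behavioural, 97.5, axis=0)
    return lower, upper
```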
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.
NASA Technical Reports Server (NTRS)
Hajj, G. A.; Wilson, B. D.; Wang, C.; Pi, X.; Rosen, I. G.
2004-01-01
A three-dimensional (3-D) Global Assimilative Ionospheric Model (GAIM) is currently being developed by a joint University of Southern California and Jet Propulsion Laboratory (JPL) team. To estimate the electron density on a global grid, GAIM uses a first-principles ionospheric physics model and the Kalman filter as one of its possible estimation techniques.
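For reference, the textbook linear Kalman filter cycle that underlies such an estimation scheme; operational GAIM works on a large global grid with a physics-model propagator and further approximations not shown here.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and its covariance (e.g., gridded electron density)
    z    : new measurement vector (e.g., slant TEC along GPS raypaths)
    F    : state transition matrix supplied by the physics model
    H    : observation operator mapping state to measurements
    Q, R : process- and measurement-noise covariances
    """
    # predict: propagate the state with the physics model
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend in the measurements
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```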
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of an algorithm for physical replication of patient-specific human bone, and with the construction of corresponding RP models of implants/inserts, using a Reverse Engineering approach applied to non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for the design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for the physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques, developed from 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for the construction of a 3D CAD model by fitting B-spline curves through these points and then fitting surfaces between these curve networks by using swept blend techniques. This can also be achieved by generating the triangular mesh directly from the 3D point cloud data, without developing any surface model, using any commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.
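A minimal sketch of the mesh-generation step: Delaunay tetrahedralization of the 3D point cloud followed by extraction of boundary facets and export to ASCII STL for the RP machine. Note that the raw Delaunay boundary of a point cloud is its convex hull, so real anatomical surfaces need additional surface-reconstruction steps not shown here; the solid name and file handling are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def point_cloud_to_stl(points, path):
    """Triangulate a 3-D point cloud and write boundary facets as ASCII STL."""
    tet = Delaunay(points)                      # Delaunay tetrahedralization
    # boundary triangles = tetrahedron faces that appear exactly once
    faces = {}
    for simplex in tet.simplices:
        for f in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
            key = tuple(sorted(simplex[list(f)]))
            faces[key] = faces.get(key, 0) + 1
    boundary = [k for k, count in faces.items() if count == 1]
    with open(path, "w") as out:
        out.write("solid bone\n")
        for i, j, k in boundary:
            a, b, c = points[i], points[j], points[k]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) or 1.0)  # facet normal (unnormalized ok)
            out.write(f" facet normal {n[0]} {n[1]} {n[2]}\n  outer loop\n")
            for v in (a, b, c):
                out.write(f"   vertex {v[0]} {v[1]} {v[2]}\n")
            out.write("  endloop\n endfacet\n")
        out.write("endsolid bone\n")
```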
A Model of the Creative Process Based on Quantum Physics and Vedic Science.
ERIC Educational Resources Information Center
Rose, Laura Hall
1988-01-01
Using tenets from Vedic science and quantum physics, this model of the creative process suggests that the unified field of creation is pure consciousness, and that the development of the creative process within individuals mirrors the creative process within the universe. Rational and supra-rational creative thinking techniques are also described.…
Physics-based interactive volume manipulation for sharing surgical process.
Nakao, Megumi; Minato, Kotaro
2010-05-01
This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.
Tu, Jia-Ying; Hsiao, Wei-De; Chen, Chih-Ying
2014-01-01
Testing techniques for dynamically substructured systems dissect an entire engineering system into parts. Components can be tested via numerical simulation or physical experiment and run synchronously. Additional actuator systems, which interface the numerical and physical parts, are required within the physical substructure. A high-quality controller, designed to cancel unwanted dynamics introduced by the actuators, is important in order to synchronize the numerical and physical outputs and ensure successful tests. An adaptive forward prediction (AFP) algorithm based on delay-compensation concepts has been proposed to deal with substructuring control issues. Although the settling performance and numerical conditioning of the AFP controller are improved using new direct-compensation and singular value decomposition methods, the experimental results show that a linear dynamics-based controller still outperforms the AFP controller. Based on experimental observations, the least-squares fitting technique, the effectiveness of the AFP compensation and the differences between delay and ordinary differential equations are discussed herein, in order to reflect the fundamental issues of actuator modelling in the relevant literature and, more specifically, to show that the actuator and numerical substructure are heterogeneous dynamic components and should not be collectively modelled as a homogeneous delay differential equation. PMID:25104902
NASA Astrophysics Data System (ADS)
Henderson, Charles; Yerushalmi, Edit; Kuo, Vince H.; Heller, Kenneth; Heller, Patricia
2007-12-01
To identify and describe the basis upon which instructors make curricular and pedagogical decisions, we have developed an artifact-based interview and an analysis technique based on multilayered concept maps. The policy capturing technique used in the interview asks instructors to make judgments about concrete instructional artifacts similar to those they likely encounter in their teaching environment. The analysis procedure alternatively employs both an a priori systems view analysis and an emergent categorization to construct a multilayered concept map, which is a hierarchically arranged set of concept maps where child maps include more details than parent maps. Although our goal was to develop a model of physics faculty beliefs about the teaching and learning of problem solving in the context of an introductory calculus-based physics course, the techniques described here are applicable to a variety of situations in which instructors make decisions that influence teaching and learning.
Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments
NASA Technical Reports Server (NTRS)
Manning, Ted A.; Lawrence, Scott L.
2014-01-01
As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic model optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series, which are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model eliminates the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard flood mapping methods. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.
An uncertainty model of acoustic metamaterials with random parameters
NASA Astrophysics Data System (ADS)
He, Z. C.; Hu, J. Y.; Li, Eric
2018-01-01
Acoustic metamaterials (AMs) are man-made composite materials. However, random uncertainties are unavoidable in the application of AMs due to manufacturing and material errors, which lead to variance in the physical responses of AMs. In this paper, an uncertainty model based on the change-of-variable perturbation stochastic finite element method (CVPS-FEM) is formulated to predict the probability density functions of the physical responses of AMs with random parameters. Three types of physical responses, including the band structure, mode shapes and frequency response function of AMs, are studied in the uncertainty model, which is of great interest in the design of AMs. In this computation, the physical responses of stochastic AMs are expressed as linear functions of the pre-defined random parameters by using a first-order Taylor series expansion and perturbation technique. Then, based on the linear function relationships between parameters and responses, the probability density functions of the responses can be calculated by the change-of-variable technique. Three numerical examples are employed to demonstrate the effectiveness of the CVPS-FEM for stochastic AMs, and the results are validated successfully against the Monte Carlo method.
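The two steps named above admit a compact statement. A sketch, with y a scalar response, x a single random parameter with density f_X and mean μ, and g the response function:

```latex
y \;\approx\; g(\mu) + g'(\mu)\,(x-\mu) \;\equiv\; a + b\,x,
\qquad
f_Y(y) \;=\; f_X\!\left(\frac{y-a}{b}\right)\frac{1}{\lvert b\rvert}.
```

In particular, a Gaussian parameter with mean μ and standard deviation σ yields a Gaussian response with mean a + bμ and standard deviation |b|σ; the multi-parameter generalization replaces b by the gradient of g.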
ERIC Educational Resources Information Center
Gerdin, Göran; Pringle, Richard
2017-01-01
Kirk warns that physical education (PE) exists in a precarious situation as the dominance of the multi-activity sport-techniques model, and its associated problems, threatens the long-term educational survival of PE. Yet he also notes that although the model is problematic it is highly resistant to change. In this paper, we draw on the results of…
Model based Computerized Ionospheric Tomography in space and time
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2018-04-01
Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions in both space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.
Molecular-Based Optical Measurement Techniques for Transition and Turbulence in High-Speed Flow
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Cutler, Andrew D.
2013-01-01
High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, the efficiency of fuel-air mixing processes in high-speed combustion applications, etc. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify transition location and its impact on parameters like surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. This may also lead to the discovery of new physical pathways. This document is intended to describe molecular-based measurement techniques that have been developed, addressing the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high-speed transition and turbulence or techniques that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described in the previous paragraph, several other classes of measurement techniques that are, or could be, used to study high-speed transition and turbulence are excluded from this manuscript. For example, surface measurement techniques such as pressure and temperature paint, phosphor thermography, skin friction measurements and photogrammetry (for model attitude and deformation measurement) are excluded to limit the scope of this report. Other physical probes, such as heat flux gauges and total temperature probes, are also excluded. We further exclude measurement techniques that require particle seeding, though particle-based methods may still be useful in many high-speed flow applications. This manuscript details some of the more widely used molecular-based measurement techniques for studying transition and turbulence: laser-induced fluorescence (LIF), Rayleigh and Raman scattering, and coherent anti-Stokes Raman scattering (CARS). These techniques are emphasized, in part, because of the prior experience of the authors. Additional molecular-based techniques are described, albeit in less detail. Where possible, an effort is made to compare the relative advantages and disadvantages of the various measurement techniques, although these comparisons can be subjective views of the authors. Finally, the manuscript concludes by evaluating the different measurement techniques in view of the precision requirements described in this chapter. Additional requirements and considerations are discussed to assist with choosing an optical measurement technique for a given application.
Using Technology to Facilitate and Enhance Project-based Learning in Mathematical Physics
NASA Astrophysics Data System (ADS)
Duda, Gintaras
2011-04-01
Problem-based and project-based learning are two pedagogical techniques that have several clear advantages over traditional instructional methods: 1) both techniques are active and student centered, 2) students confront real-world and/or highly complex problems, and 3) such exercises model the way science and engineering are done professionally. This talk will present an experiment in project/problem-based learning in a mathematical physics course. The group project in the course involved modeling a zombie outbreak of the type seen in AMC's ``The Walking Dead.'' Students researched, devised, and solved their mathematical models for the spread of zombie-like infection. Students used technology in all stages; in fact, since analytical solutions to the models were often impossible, technology was a necessary and critical component of the challenge. This talk will explore the use of technology in general in problem and project-based learning and will detail some specific examples of how technology was used to enhance student learning in this course. A larger issue of how students use the Internet to learn will also be explored.
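For context, a minimal sketch of the kind of ODE model the students built, based on the classic SZR zombie-outbreak formulation (after Munz et al., 2009); the parameter values and 30-day horizon are illustrative, and the students' own models differed in their details.

```python
import numpy as np
from scipy.integrate import solve_ivp

def szr(t, y, beta=0.0095, alpha=0.005, zeta=0.0001):
    """Simplified SZR zombie-outbreak model:
    S' = -beta*S*Z
    Z' =  beta*S*Z + zeta*R - alpha*S*Z
    R' =  alpha*S*Z - zeta*R
    beta: infection; alpha: zombie destruction by the living;
    zeta: resurrection of the removed."""
    s, z, r = y
    return [-beta * s * z,
            beta * s * z + zeta * r - alpha * s * z,
            alpha * s * z - zeta * r]

# 500 susceptibles, 1 zombie, integrate over 30 days
sol = solve_ivp(szr, (0, 30), [500.0, 1.0, 0.0])
print(sol.y[:, -1])   # final S, Z, R populations
```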
Anderson, Kyle; Segall, Paul
2013-01-01
Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
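A minimal sketch of the sampling step, assuming a user-supplied log-posterior that wraps the physics-based eruption model: a random-walk Metropolis variant of the Markov-Chain Monte Carlo algorithm mentioned above. The actual study's sampler and parameterization may differ.

```python
import numpy as np

def metropolis(log_posterior, theta0, step, n=50000):
    """Random-walk Metropolis sampler for a physics-based inversion.

    log_posterior : function(theta) -> log of prior times likelihood, where
                    the likelihood compares model-predicted GPS displacements
                    and extrusion flux with the observations
    theta0        : initial parameter vector (chamber volume, depth, ...)
    step          : proposal standard deviation per parameter
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n, theta.size))
    for i in range(n):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```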
Passive Optical Technique to Measure Physical Properties of a Vibrating Surface
2014-01-01
It is not necessary to understand the details of a non-Lambertian BRDF to detect surface vibration phenomena, though an accurate model incorporating physics… To summarize the discussion of BRDF: while a physics-based BRDF model is not necessary to use scattered light as a surface vibration diagnostic, it may…
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
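For reference, the cost function minimized by the variational approaches mentioned above takes the generic form (a sketch; operational systems add further constraint terms):

```latex
J(\mathbf{x}) =
\tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+\tfrac{1}{2}\sum_{k}\bigl(\mathbf{y}_k - H_k[\mathbf{x}]\bigr)^{\mathsf T}\mathbf{R}_k^{-1}\bigl(\mathbf{y}_k - H_k[\mathbf{x}]\bigr)
```

Here x_b is the background state, B and R_k are the background- and observation-error covariances, y_k are the observations at time k, and H_k is the observation operator, which in the four-dimensional case includes integration of the forecast model to the observation time.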
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved by exploiting modal analysis and, depending on the model's physical characteristics, a different skeleton is produced each time, generating in this way a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches, without the need for any skeleton pruning method.
NASA Astrophysics Data System (ADS)
Havens, S.; Marks, D. G.; Kormos, P.; Hedrick, A. R.; Johnson, M.; Robertson, M.; Sandusky, M.
2017-12-01
In the Western US, operational water supply managers rely on statistical techniques to forecast the volume of water left to enter the reservoirs. As the climate changes and demand increases for stored water utilized for irrigation, flood control, power generation, and ecosystem services, water managers have begun to move from statistical techniques towards physically based models. To assist with the transition, a new open source framework was developed, the Spatial Modeling for Resources Framework (SMRF), to automate and simplify the most common forcing data distribution methods. SMRF is computationally efficient and can be implemented for both research and operational applications. Currently, SMRF is able to generate all of the forcing data required to run physically based snow or hydrologic models at 50-100 m resolution over regions of 500-10,000 km2, and has been successfully applied in real-time and historical applications for the Boise River Basin in Idaho, USA, the Tuolumne and San Joaquin River Basins in California, USA, and the Reynolds Creek Experimental Watershed in Idaho, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input data. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of physics-based snow and hydrologic models possible.
Physical Modeling of Microtubules Network
NASA Astrophysics Data System (ADS)
Allain, Pierre; Kervrann, Charles
2014-10-01
Microtubules (MT) are highly dynamic tubulin polymers that are involved in many cellular processes such as mitosis, intracellular cell organization and vesicular transport. Nevertheless, modeling the cytoskeleton and MT dynamics on the basis of physical properties is difficult to achieve. Using the Euler-Bernoulli beam theory, we propose to model the rigidity of microtubules on a physical basis using forces, mass and acceleration. In addition, we link microtubule growth and shrinkage to the presence of molecules (e.g. GTP-tubulin) in the cytosol. The overall model enables linking the cytosol to microtubule dynamics in a constant state space, thus allowing the usage of data assimilation techniques.
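For reference, the dynamic Euler-Bernoulli beam equation on which this rigidity model rests (the specific discretization and force terms of the model above are not reproduced here):

```latex
\rho A\,\frac{\partial^{2} w}{\partial t^{2}}
\;+\;
EI\,\frac{\partial^{4} w}{\partial x^{4}}
\;=\;
f(x,t)
```

Here w(x,t) is the transverse deflection along the microtubule axis, ρA the mass per unit length, EI the flexural rigidity, and f(x,t) the applied force density.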
“In vitro” Implantation Technique Based on 3D Printed Prosthetic Prototypes
NASA Astrophysics Data System (ADS)
Tarnita, D.; Boborelu, C.; Geonea, I.; Malciu, R.; Grigorie, L.; Tarnita, D. N.
2018-06-01
In this paper, a Rapid Prototyping ZCorp 310 system, based on a high-performance composite powder and a high-strength resin infiltration system, and three-dimensional printing as a manufacturing method, are used to obtain physical prototypes of orthopaedic implants and prototypes of complex functional prosthetic systems directly from the 3D CAD data. These prototypes are useful for in vitro experimental tests and measurements to optimize and obtain final physical prototypes. Using a new elbow prosthesis prototype obtained by 3D printing, the surgical technique of implantation is established. Surgical implantation was performed on a male cadaver elbow joint.
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be easily adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.
Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann
Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; the advantages and disadvantages with respect to purely data-driven approaches; and the practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
MERINOVA: Meteorological risks as drivers of environmental innovation in agro-ecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Oger, Robert; Marlier, Catherine; Van De Vijver, Hans; Vandermeulen, Valerie; Van Huylenbroeck, Guido; Zamani, Sepideh; Curnel, Yannick; Mettepenningen, Evi
2013-04-01
The BELSPO-funded project 'MERINOVA' deals with risks associated with extreme weather phenomena and with risks of biological origin such as pests and diseases. The major objectives of the proposed project are to characterise extreme meteorological events, assess their impact on Belgian agro-ecosystems, characterise the vulnerability and resilience of these systems to such events, and explore innovative adaptation options for agricultural risk management. The project comprises five major parts that reflect the chain of risks: (i) Hazard: assessing the likely frequency and magnitude of extreme meteorological events by means of probability density functions; (ii) Impact: analysing the potential bio-physical and socio-economic impact of extreme weather events on agro-ecosystems in Belgium using process-based modelling techniques commensurate with the regional scale; (iii) Vulnerability: identifying the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (iv) Risk Management: uncovering innovative risk management and adaptation options using actor-network theory and fuzzy cognitive mapping techniques; and (v) Communication: communicating to research, policy and practitioner communities using web-based techniques. The different tasks of the MERINOVA project require expertise in several scientific disciplines: meteorology, statistics, spatial database management, agronomy, bio-physical impact modelling, socio-economic modelling, actor-network theory, and fuzzy cognitive mapping techniques. This expertise is shared by the four scientific partners, each of whom leads one work package. The MERINOVA project will concentrate on promoting a robust and flexible framework by demonstrating its performance across Belgian agro-ecosystems and by ensuring its relevance to policy makers and practitioners. Impacts derived from physically based models will not only provide information on the state of the damage at any given time, but will also assist in understanding the links between the different factors causing damage and in determining bio-physical vulnerability. Socio-economic impacts will enlarge the basis for vulnerability mapping, risk management and adaptation options. A strong expert and end-user network will be established to help disseminate and exploit project results to meet user needs.
System equivalent model mixing
NASA Astrophysics Data System (ADS)
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, of either numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it emphasizes the practicality of the method.
NASA Astrophysics Data System (ADS)
Bellugi, D. G.; Tennant, C.; Larsen, L.
2016-12-01
Catchment and climate heterogeneity complicate prediction of runoff across time and space, and the resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes, remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g., precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing were done using a high-quality dataset of daily runoff and climate forcing data spanning 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure needs further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and in ones where it performed poorly, thereby enabling inference about governing processes. This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
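A minimal sketch of an SVR runoff predictor of the kind described above, using scikit-learn; the synthetic forcing data, hyperparameters, and train/test split are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of SVR prediction of daily runoff from climate forcing
# (precipitation, temperature, PET); all data and settings are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_days = 1000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n_days),    # precipitation (mm/day)
    rng.normal(10.0, 8.0, n_days),  # temperature (deg C)
    rng.uniform(0.0, 6.0, n_days),  # potential evapotranspiration (mm/day)
])
# Synthetic runoff: a nonlinear function of the forcing plus noise
y = (0.4 * X[:, 0] - 0.05 * np.maximum(X[:, 1], 0.0)
     - 0.2 * X[:, 2] + rng.normal(0.0, 0.5, n_days))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:800], y[:800])            # train on the first 800 days
print(model.score(X[800:], y[800:]))   # R^2 on the held-out 200 days
```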
Psychophysically based model of surface gloss perception
NASA Astrophysics Data System (ADS)
Ferwerda, James A.; Pellacini, Fabio; Greenberg, Donald P.
2001-06-01
In this paper we introduce a new model of surface appearance that is based on quantitative studies of gloss perception. We use image synthesis techniques to conduct experiments that explore the relationships between the physical dimensions of glossy reflectance and the perceptual dimensions of glossy appearance. The product of these experiments is a psychophysically-based model of surface gloss, with dimensions that are both physically and perceptually meaningful and scales that reflect our sensitivity to gloss variations. We demonstrate that the model can be used to describe and control the appearance of glossy surfaces in synthesized images, allowing prediction of gloss matches and quantification of gloss differences. This work represents some initial steps toward developing psychophysical models of the goniometric aspects of surface appearance to complement widely-used colorimetric models.
Flipping the Physical Examination: Web-Based Instruction and Live Assessment of Bedside Technique.
Williams, Dustyn E; Thornton, John W
2016-01-01
Physicians' skill in teaching the physical examination has declined, with newer faculty underperforming compared to their seniors. Improved methods of instruction with an emphasis on physical examination are necessary both to improve the quality of medical education and to alleviate the teaching burden on faculty physicians. We developed a curriculum that combines web-based instruction with real-life practice and features individualized feedback. This innovative medical education model should allow the physical examination to be taught and assessed in an effective manner. The model is under study at Baton Rouge General Medical Center. Our goals are to limit faculty burden, maximize student involvement as learners and evaluators, and effectively develop students' critical skills in performing bedside assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loveday, D.L.; Craggs, C.
Box-Jenkins-based multivariate stochastic modeling is carried out using data recorded from a domestic heating system. The system comprises an air-source heat pump sited in the roof space of a house, solar assistance being provided by the conventional tile roof acting as a radiation absorber. Multivariate models are presented which illustrate the time-dependent relationships between three air temperatures - at external ambient, at entry to, and at exit from, the heat pump evaporator. Using a deterministic modeling approach, physical interpretations are placed on the results of the multivariate technique. It is concluded that the multivariate Box-Jenkins approach is a suitable technique for building thermal analysis. Application to multivariate model-based control is discussed, with particular reference to building energy management systems. It is further concluded that stochastic modeling of data drawn from a short monitoring period offers a means of retrofitting an advanced model-based control system in existing buildings, which could be used to optimize energy savings. An approach to system simulation is suggested.
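For readers unfamiliar with multivariate stochastic modeling of this kind, here is a minimal sketch using a vector autoregression (VAR) from statsmodels as a simpler stand-in for full Box-Jenkins transfer-function models; the three synthetic temperature series and lag settings are illustrative assumptions.

```python
# Minimal sketch of multivariate time-series modeling of three coupled air
# temperatures (ambient, evaporator inlet, evaporator outlet); a VAR model
# stands in for the fuller Box-Jenkins approach, and all data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 500
ambient = 10 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.5, n)
evap_in = np.empty(n)
evap_out = np.empty(n)
evap_in[0] = evap_out[0] = 15.0
for t in range(1, n):
    # Roof space warms with ambient; evaporator outlet lags the inlet
    evap_in[t] = 0.7 * evap_in[t-1] + 0.3 * ambient[t] + rng.normal(0, 0.3)
    evap_out[t] = 0.8 * evap_out[t-1] + 0.15 * evap_in[t] + rng.normal(0, 0.3)

data = pd.DataFrame({"ambient": ambient, "evap_in": evap_in, "evap_out": evap_out})
results = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
print(results.summary())
```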
R symmetries and a heterotic MSSM
NASA Astrophysics Data System (ADS)
Kappl, Rolf; Nilles, Hans Peter; Schmitz, Matthias
2015-02-01
We employ powerful techniques based on Hilbert and Gröbner bases to analyze particle physics models derived from string theory. Individual models are shown to have a huge landscape of vacua that differ in their phenomenological properties. We explore the (discrete) symmetries of these vacua, the new R symmetry selection rules and their consequences for moduli stabilization.
Development and comparison of projection and image space 3D nodule insertion techniques
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan
2016-04-01
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules into CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.
Ahn, Hyung Soo; DiAngelo, Denis J
2007-05-15
This article describes a computer model of a cadaver cervical spine specimen and virtual biomechanical testing. The goals were to develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of the cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprising intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of the motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results correlated with the findings of in vitro tests and published data. For kinetic evaluation, the forces of the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment, and allowed for analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique provides vivid and intuitive information.
NASA Astrophysics Data System (ADS)
Vasina, A. V.
2017-01-01
The author shares pedagogical experience in implementing interdisciplinary connections among the basic school courses of informatics, technology, and physics through student research activity, using specialized programs for developing and studying computer models of physical processes. The technique is based on the principles of independent student work and on interdisciplinary connections among the disciplines of technology, physics, and informatics; it helps develop students' research activity and gives education a professional, practical orientation. As an example, a lesson on modeling flotation using the "1C Physical simulator" environment is considered.
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.
2016-10-01
Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as the evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, in many cases with extremely effective exploration capabilities, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the modeling of a specific real phenomenon, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for tackling hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can ease the implementation of these algorithms.
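Simulated annealing, modeled on the slow cooling of solids in statistical mechanics, is perhaps the oldest physics-inspired meta-heuristic of the kind reviewed above; a minimal sketch follows, with an illustrative objective function and cooling schedule.

```python
# Minimal sketch of simulated annealing, a classic physics-inspired
# meta-heuristic; the objective function, move size, and geometric cooling
# schedule are illustrative choices, not from the review.
import math
import random

def objective(x: float) -> float:
    return x**2 + 10 * math.sin(3 * x)  # multimodal test function

def simulated_annealing(x0=5.0, t_init=10.0, cooling=0.995, n_iter=5000):
    x, fx = x0, objective(x0)
    best_x, best_fx = x, fx
    temp = t_init
    for _ in range(n_iter):
        candidate = x + random.gauss(0.0, 0.5)      # local random move
        fc = objective(candidate)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta/T), as in thermal equilibrium
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling                              # geometric cooling
    return best_x, best_fx

print(simulated_annealing())
```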
On the physical basis of a theory of human thermoregulation.
NASA Technical Reports Server (NTRS)
Iberall, A. S.; Schindler, A. M.
1973-01-01
Theoretical study of the physical factors which are responsible for thermoregulation in nude resting humans in a physical steady state. The behavior of oxidative metabolism, evaporative and convective thermal fluxes, fluid heat transfer, internal and surface temperatures, and evaporative phase transitions is studied by physiological/physical modeling techniques. The modeling is based on the theories that the body has a vital core with autothermoregulation, that the vital core contracts longitudinally, that the temperature of peripheral regions and extremities decreases towards the ambient, and that a significant portion of the evaporative heat may be lost underneath the skin. A theoretical basis is derived for a consistent modeling of steady-state thermoregulation on the basis of these theories.
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique is applied to modeled dust data. In this case, vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields is presented.
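A minimal sketch of the structure tensor idea applied to motion estimation between two frames; the synthetic image pair, smoothing scale, and single-point flow solve are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a 2D structure tensor built from spatial and temporal
# derivatives of an image pair, then solved locally for apparent motion;
# all data and the smoothing scale are synthetic/illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_flow(frame0, frame1, sigma=2.0):
    """Gaussian-averaged tensor components from image derivatives."""
    iy, ix = np.gradient(frame0)   # spatial derivatives (axis 0 = y, axis 1 = x)
    it = frame1 - frame0           # temporal derivative between the frames
    smooth = lambda a: gaussian_filter(a, sigma)
    return smooth(ix*ix), smooth(ix*iy), smooth(ix*it), smooth(iy*iy), smooth(iy*it)

# Synthetic frames: a Gaussian blob shifted by one pixel in x between frames
x, y = np.meshgrid(np.arange(64), np.arange(64))
f0 = np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 50.0)
f1 = np.exp(-((x - 31.0)**2 + (y - 32.0)**2) / 50.0)
jxx, jxy, jxt, jyy, jyt = structure_tensor_flow(f0, f1)

# Local least-squares velocity estimate at the blob centre (optical flow)
A = np.array([[jxx[32, 30], jxy[32, 30]],
              [jxy[32, 30], jyy[32, 30]]])
b = -np.array([jxt[32, 30], jyt[32, 30]])
print(np.linalg.solve(A, b))  # approximately (1, 0) pixels per frame
```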
Apps to promote physical activity among adults: a review and content analysis.
Middelweerd, Anouk; Mollee, Julia S; van der Wal, C Natalie; Brug, Johannes; Te Velde, Saskia J
2014-07-25
In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found. The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.
Polyenergetic known-component reconstruction without prior shape models
NASA Astrophysics Data System (ADS)
Zhang, C.; Zbijewski, W.; Zhang, X.; Xu, S.; Stayman, J. W.
2017-03-01
Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g., a CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique with the help of morphological structuring elements to build a shape model of metal components in a patient scan based on an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. An investigation of performance as a function of segmentation threshold is performed in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality than conventional reconstruction techniques (e.g., FBP). Moreover, its performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy, with slight under- or over-estimation depending on the shape of the implant. In both simulation and physical data studies we find that the proposed approach can remove most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component. In conjunction with the Poly-KCR technique that does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where information about surgical tools and implants is limited or unavailable.
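A minimal sketch of single-threshold segmentation with morphological cleanup of the kind described above, using scipy.ndimage on a toy image; the threshold value and structuring-element size are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of building a binary shape model of a metal component from
# an initial reconstruction via one threshold plus morphological cleanup;
# the toy image, threshold, and structuring element are illustrative.
import numpy as np
from scipy import ndimage

def metal_shape_model(fbp_image, threshold=2500.0, se_size=3):
    """Binary shape model of a metal component from an initial image."""
    mask = fbp_image > threshold                          # single threshold
    structure = np.ones((se_size, se_size), dtype=bool)
    mask = ndimage.binary_closing(mask, structure=structure)  # fill small gaps
    mask = ndimage.binary_opening(mask, structure=structure)  # drop speckle
    # Keep only the largest connected component as the implant
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Toy image: bright square "implant" on a noisy background
img = np.random.normal(0.0, 100.0, (128, 128))
img[50:70, 40:60] = 3000.0
print(metal_shape_model(img).sum())  # ~400 pixels recovered
```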
Physical principles for DNA tile self-assembly.
Evans, Constantine G; Winfree, Erik
2017-06-19
DNA tiles provide a promising technique for assembling structures with nanoscale resolution through self-assembly by basic interactions rather than top-down assembly of individual structures. Tile systems can be programmed to grow based on logical rules, allowing a small number of tile types to assemble large, complex assemblies that can retain nanoscale resolution. Such algorithmic systems can even assemble different structures using the same tiles, based on inputs that seed the growth. While programming and theoretical analysis of tile self-assembly often make use of abstract logical models of growth, experimentally implemented systems are governed by nanoscale physical processes that can lead to very different behavior, more accurately modeled by taking into account the thermodynamics and kinetics of tile attachment and detachment in solution. This review discusses the relationships between more abstract and more physically realistic tile assembly models. A central concern is how consideration of model differences enables the design of tile systems that robustly exhibit the desired abstract behavior in realistic physical models and in experimental implementations. Conversely, we identify situations where self-assembly in abstract models cannot be well approximated by physically realistic models, putting constraints on the physical relevance of the abstract models. To facilitate the discussion, we introduce a unified model of tile self-assembly that clarifies the relationships between several well-studied models in the literature. Throughout, we highlight open questions regarding the physical principles for DNA tile self-assembly.
Design of high-fidelity haptic display for one-dimensional force reflection applications
NASA Astrophysics Data System (ADS)
Gillespie, Brent; Rosenberg, Louis B.
1995-12-01
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsai, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber physical systems where control software interacts with physical processes. In this work, we present a fully automated safety verification framework for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open source toolbox that can be integrated into the existing Mathworks Simulink Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial scale Stateflow models.
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics (based on finitely correlated or projected entangled pair states) to go beyond the cluster-state-based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In these computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
Control-based continuation: Bifurcation and stability analysis for physical experiments
NASA Astrophysics Data System (ADS)
Barton, David A. W.
2017-02-01
Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. The method is demonstrated using a physical nonlinear tuned mass damper.
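A minimal sketch of ARX identification by least squares on a toy system; unlike the periodic, time-varying ARX model used in the paper, this time-invariant version simply illustrates how the identified AR coefficients yield stability information through their poles.

```python
# Minimal sketch: fit y[t] = sum a_i*y[t-i] + sum b_j*u[t-j] by ordinary
# least squares, then read off stability from the poles of the AR part;
# model orders and the toy system are illustrative assumptions.
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares ARX fit; returns (AR coefficients, input coefficients)."""
    rows = []
    start = max(na, nb)
    for t in range(start, len(y)):
        rows.append(np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]))
    theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
    return theta[:na], theta[na:]

# Toy data: lightly damped discrete oscillator driven by a random input
rng = np.random.default_rng(0)
n = 2000
u = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.6 * y[t-1] - 0.81 * y[t-2] + 0.5 * u[t-1] + rng.normal(0, 0.01)

a, b = fit_arx(y, u)
# Poles of the identified AR part indicate the stability of the dynamics
poles = np.roots(np.concatenate([[1.0], -a]))
print(a, np.abs(poles))  # a ~ [1.6, -0.81]; |poles| ~ 0.9 < 1 -> stable
```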
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques
NASA Astrophysics Data System (ADS)
Spencer, E.; Rao, A.; Horton, W.; Mays, L.
2008-12-01
ACE measurements of the solar wind velocity, IMF and proton density are used to drive a hybrid Physics/Black-Box model of the nightside magnetosphere. The core physics is contained in a low order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet based mappings of the model output field aligned currents onto the ground based magnetometer measurements of the AL index and Dst index. The black box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm, and trained using the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.
Applicability of three-dimensional imaging techniques in fetal medicine
Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson
2016-01-01
Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at the kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition (PGD) techniques, a generalization of proper orthogonal decomposition (POD). Proper generalized decomposition techniques can be considered as a means of a priori model order reduction, and provide a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained in real time. Results are provided that show the potential of the proposed technique, together with benchmark tests that show the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
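Since PGD generalizes POD, a minimal POD sketch via the SVD of a snapshot matrix conveys the underlying model-order-reduction idea; the snapshot data and energy tolerance here are illustrative assumptions.

```python
# Minimal sketch of proper orthogonal decomposition (POD) via the SVD of a
# snapshot matrix, the reduction idea that PGD generalizes; data synthetic.
import numpy as np

x = np.linspace(0, 1, 200)
# Snapshot matrix: each column is one solution of a parameterized problem
snapshots = np.column_stack([
    np.sin(np.pi * x) * mu + 0.1 * np.sin(3 * np.pi * x) * mu**2
    for mu in np.linspace(0.1, 2.0, 50)
])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes keeping 99.99% energy
basis = U[:, :r]                               # reduced-order basis
print(r, basis.shape)  # two modes suffice for this two-mode dataset
```

Online evaluation then reduces to combining a handful of precomputed modes, which is what makes kilohertz feedback rates reachable.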
Endoscopic skull base training using 3D printed models with pre-existing pathology.
Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes
2015-03-01
Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro instrumentation, as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the original MRI and CT imaging data of the patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practice complex procedures in a controlled fashion under the supervision of experts.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters and data lead to both random and systematic error, even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
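A minimal sketch of a complementary DDM that learns and corrects the structured error of a physical model; a distance-weighted k-nearest-neighbors regressor stands in for the paper's instance-based weighting, and all data are synthetic.

```python
# Minimal sketch: train a data-driven model on the residual error of a
# biased "physical" model, then add the predicted error as a correction;
# the k-NN stand-in and all data are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(0, 10, (n, 2))                    # e.g., pumping rate, recharge
true_head = 50 - 2.0 * X[:, 0] + 1.5 * np.sqrt(X[:, 1])
model_head = 50 - 2.1 * X[:, 0] + 1.2 * X[:, 1]   # biased physical model
error = true_head - model_head                    # structured model error

ddm = KNeighborsRegressor(n_neighbors=10, weights="distance")
ddm.fit(X[:400], error[:400])

# Corrected prediction = physical model output + predicted error
corrected = model_head[400:] + ddm.predict(X[400:])
rmse_raw = np.sqrt(np.mean((true_head[400:] - model_head[400:])**2))
rmse_cor = np.sqrt(np.mean((true_head[400:] - corrected)**2))
print(rmse_raw, rmse_cor)  # the DDM correction reduces RMSE
```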
Exploring New Pathways in Precipitation Assimilation
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara Q.
2004-01-01
Precipitation assimilation poses a special challenge in that the forward model for rain in a global forecast system is based on parameterized physics, which can have large systematic errors that must be rectified to use precipitation data effectively within a standard statistical analysis framework. We examine some key issues in precipitation assimilation and describe several exploratory studies in assimilating rainfall and latent heating information in NASA's global data assimilation systems using the forecast model as a weak constraint. We present results from two research activities. The first is the assimilation of surface rainfall data using a time-continuous variational assimilation based on a column model of the full moist physics. The second is the assimilation of convective and stratiform latent heating retrievals from microwave sensors using a variational technique with physical parameters in the moist physics schemes as a control variable. We will show the impact of assimilating these data on analyses and forecasts. Among the lessons learned are (1) that the time-continuous application of moisture/temperature tendency corrections to mitigate model deficiencies offers an effective strategy for assimilating precipitation information, and (2) that the model prognostic variables must be allowed to directly respond to an improved rain and latent heating field within an analysis cycle to reap the full benefit of assimilating precipitation information. Looking to the future, we discuss new research directions, including the assimilation of microwave radiances versus retrieval information in raining areas and initial efforts in developing ensemble techniques such as the Kalman filter/smoother for precipitation assimilation.
Physically based modeling in catchment hydrology at 50: Survey and outlook
NASA Astrophysics Data System (ADS)
Paniconi, Claudio; Putti, Mario
2015-09-01
Integrated, process-based numerical models in hydrology are rapidly evolving, spurred by novel theories in mathematical physics, advances in computational methods, insights from laboratory and field experiments, and the need to better understand and predict the potential impacts of population, land use, and climate change on our water resources. At the catchment scale, these simulation models are commonly based on conservation principles for surface and subsurface water flow and solute transport (e.g., the Richards, shallow water, and advection-dispersion equations), and they require robust numerical techniques for their resolution. Traditional (and still open) challenges in developing reliable and efficient models are associated with heterogeneity and variability in parameters and state variables; nonlinearities and scale effects in process dynamics; and complex or poorly known boundary conditions and initial system states. As catchment modeling enters a highly interdisciplinary era, new challenges arise from the need to maintain physical and numerical consistency in the description of multiple processes that interact over a range of scales and across different compartments of an overall system. This paper first gives an historical overview (past 50 years) of some of the key developments in physically based hydrological modeling, emphasizing how the interplay between theory, experiments, and modeling has contributed to advancing the state of the art. The second part of the paper examines some outstanding problems in integrated catchment modeling from the perspective of recent developments in mathematical and computational science.
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this fitting problem; however, applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique which resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy.
Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G
2011-08-01
Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge of coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify the vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed to reduce the cost of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
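A minimal sketch of the OLS variant of NIPC in one dimension: a probabilists' Hermite expansion fitted by least squares and then used for cheap uncertainty propagation; the placeholder model, expansion order, and sample sizes are illustrative assumptions (not UTSim2).

```python
# Minimal sketch of a non-intrusive polynomial chaos surrogate fitted by
# ordinary least squares: Hermite expansion in one standard-normal input;
# the "simulation model" is a stand-in function, not the physics code.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def simulation_model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2   # placeholder for the physics model

rng = np.random.default_rng(4)
order = 5
xi_train = rng.standard_normal(200)          # sampled standard-normal input
y_train = simulation_model(xi_train)

Psi = hermevander(xi_train, order)           # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# Propagate uncertainty cheaply through the surrogate instead of the model
xi_mc = rng.standard_normal(100_000)
y_surrogate = hermevander(xi_mc, order) @ coeffs
print(y_surrogate.mean(), y_surrogate.std())  # output statistics
```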
A Modal Approach to Compact MIMO Antenna Design
NASA Astrophysics Data System (ADS)
Yang, Binbin
MIMO (Multiple-Input Multiple-Output) technology offers new possibilities for wireless communication through transmission over multiple spatial channels, and enables linear increases in spectral efficiency as the number of transmitting and receiving antennas increases. However, the physical implementation of such systems in compact devices encounters many physical constraints, mainly in the design of the multi-antennas. First, an antenna's bandwidth decreases dramatically as its electrical size reduces, a fact known as the antenna Q limit; second, multiple antennas closely spaced tend to couple with each other, undermining MIMO performance. Though different MIMO antenna designs have been proposed in the literature, there is still a lack of a systematic design methodology and of knowledge of performance limits. In this dissertation, we employ characteristic mode theory (CMT) as a powerful tool for MIMO antenna analysis and design. CMT allows us to examine each physical mode of the antenna aperture, and to access its many physical parameters without even exciting the antenna. For the first time, we propose efficient circuit models for MIMO antennas of arbitrary geometry using this modal decomposition technique. Those circuit models demonstrate the powerful physical insight of CMT for MIMO antenna modeling, and simplify the MIMO antenna design problem to just the design of specific antenna structural modes and a modal feed network, making possible the separate design of antenna aperture and feeds. We therefore develop a feed-independent shape synthesis technique for optimization of broadband multi-mode apertures. Combining the shape synthesis and circuit modeling techniques for MIMO antennas, we propose a shape-first, feed-next design methodology for MIMO antennas, and we designed and fabricated two planar MIMO antennas, each occupying an aperture much smaller than the regular size of λ/2 × λ/2. Facilitated by the newly developed source formulation for antenna stored energy and recently reported work on antenna Q factor minimization, we extend the minimum Q limit to antennas of arbitrary geometry, and show that, given an antenna aperture, any antenna design based on its substructure will result in minimum Q factors larger than or equal to that of the complete structure. This limit is much tighter than Chu's limit based on spherical modes, and applies to antennas of arbitrary geometry. Finally, considering the almost inevitable presence of mutual coupling effects within compact multiport antennas, we develop new decoupling networks (DN) and decoupling network synthesis techniques. An information-theoretic metric, the information mismatch loss (Γinfo), is defined for DN characterization. Based on this metric, the optimization of decoupling networks for broadband system performance is conducted, which demonstrates the limitation of single-frequency decoupling techniques and the room for improvement.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model, but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ the active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
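A minimal sketch of the random-walk Metropolis sampler that DRAM and DREAM extend, calibrating a single decay-rate parameter of a toy model; the prior, proposal width, and data are illustrative assumptions.

```python
# Minimal sketch of random-walk Metropolis for Bayesian model calibration
# of one parameter k in y(t) = exp(-k*t); data, prior, and proposal width
# are illustrative assumptions, not from the dissertation.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
k_true = 2.0
data = np.exp(-k_true * t) + rng.normal(0, 0.02, t.size)  # noisy observations

def log_posterior(k, sigma=0.02):
    if k <= 0:
        return -np.inf                        # flat prior on k > 0
    resid = data - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

k, lp = 1.0, log_posterior(1.0)
chain = []
for _ in range(20000):
    k_prop = k + rng.normal(0, 0.05)          # random-walk proposal
    lp_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        k, lp = k_prop, lp_prop
    chain.append(k)

posterior = np.array(chain[5000:])            # discard burn-in
print(posterior.mean(), posterior.std())      # concentrates near k_true
```

DRAM adds delayed rejection and adaptive proposal covariance to this basic loop, while DREAM runs multiple interacting chains; verifying either against a direct evaluation of Bayes' formula follows the comparison strategy described above.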
NASA Astrophysics Data System (ADS)
Arévalo, Germán. V.; Hincapié, Roberto C.; Sierra, Javier E.
2015-09-01
UDWDM PON is a leading technology oriented to providing ultra-high bandwidth to final users while exploiting the capacity of the physical channels. One of the main drawbacks of the UDWDM technique is that nonlinear effects, such as FWM, become stronger due to the close spectral proximity among channels. This work proposes a model for the optimal deployment of this type of network, taking into account the fiber length limitations imposed by physical restrictions related to the fiber's data transmission, as well as the users' asymmetric distribution in a given region. The proposed model employs the data-transmission-related effects in UDWDM PON as restrictions in the optimization problem, and also considers the users' asymmetric clustering and the subdivision of the user region through a Voronoi geometric partition technique. The dual graph of the Voronoi diagram, i.e., the Delaunay triangulation, is considered as the planar graph for solving the minimum-weight fiber link problem.
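A minimal sketch of the Voronoi/Delaunay step: partitioning synthetic user locations with scipy.spatial and extracting the Delaunay edges as candidate fiber links; the user coordinates are illustrative assumptions.

```python
# Minimal sketch: Voronoi partition of user locations and its dual Delaunay
# triangulation, whose edges serve as candidate fiber links; data synthetic.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(6)
users = rng.uniform(0, 10, (30, 2))     # asymmetric user locations (km)

vor = Voronoi(users)                    # service-region partition
tri = Delaunay(users)                   # dual graph: candidate fiber links

# Collect unique Delaunay edges with their lengths (candidate link weights)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))
lengths = {e: np.linalg.norm(users[e[0]] - users[e[1]]) for e in edges}
print(len(vor.regions), len(edges), min(lengths.values()))
```

A minimum-weight spanning tree (or Steiner tree) over these weighted edges would then approximate the minimum-weight fiber layout; that optimization step is omitted here.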
Model-Based Control using Model and Mechanization Fusion Techniques for Image-Aided Navigation
2009-03-01
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computing a 3D electron density estimate from the measurements alone is an ill-posed problem. Model-based 3D electron density estimates provide physically feasible distributions; however, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is treated as an optimization problem whose optimization variables are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements, and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated; the results show that 7 GPS receiver stations are sufficient in a region as large as Turkey for both calm and storm days of the ionosphere. Since ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
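A hedged sketch of the fitting step only: adjust perturbation parameters so synthetic TEC matches measured TEC. The `synthetic_tec` function is a hypothetical stand-in for the IRI-Plas forward model integrated along receiver-satellite rays.

```python
import numpy as np
from scipy.optimize import minimize

# All data and the forward model are placeholders for illustration.
rng = np.random.default_rng(2)
background = np.linspace(10.0, 40.0, 25)           # TECU, placeholder background
tec_measured = 1.2 * background + 3.0 + rng.normal(0.0, 0.5, 25)

def synthetic_tec(p):
    return p[0] * background + p[1]                # scale + offset perturbation

def misfit(p):
    return np.sum((synthetic_tec(p) - tec_measured) ** 2)

result = minimize(misfit, x0=[1.0, 0.0], method="Nelder-Mead")
p_scale, p_offset = result.x                       # fitted perturbation parameters
```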
High-Accurate, Physics-Based Wake Simulation Techniques
2015-01-27
Fast Inference of Deep Neural Networks in FPGAs for Particle Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duarte, Javier; Han, Song; Harris, Philip
Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.
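A hedged sketch of the core idea behind FPGA inference, not the hls4ml API: quantize a trained dense network to fixed point and evaluate it with integer arithmetic only. Weights here are random placeholders, not a trained jet-substructure classifier.

```python
import numpy as np

# Fixed-point inference of a tiny dense network; placeholders throughout.
rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 5)), rng.normal(size=5)

FRAC = 10                                          # fractional bits

def q(x):
    return np.round(x * (1 << FRAC)).astype(np.int64)  # 64-bit for headroom

def dense_relu(x_q, W_q, b_q):
    acc = x_q @ W_q + (b_q << FRAC)                # accumulator at 2*FRAC bits
    return np.maximum(acc >> FRAC, 0)              # rescale, then ReLU

x_q = q(rng.normal(size=16))
h_q = dense_relu(x_q, q(W1), q(b1))
logits_q = (h_q @ q(W2) + (q(b2) << FRAC)) >> FRAC
logits = logits_q / float(1 << FRAC)               # back to real units
```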
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
Cohen, Noy; Menzel, Andreas; deBotton, Gal
2016-02-01
Owing to the increasing number of industrial applications of electro-active polymers (EAPs), there is a growing need for electromechanical models which accurately capture their behaviour. To this end, we compare the predicted behaviour of EAPs undergoing homogeneous deformations according to three electromechanical models. The first model is a phenomenological continuum-based model composed of the mechanical Gent model and a linear relationship between the electric field and the polarization. The electrical and mechanical responses according to the second model are based on the physical structure of the polymer chain network. The third model incorporates a neo-Hookean mechanical response and a physically motivated, microstructurally based long-chains model for the electrical behaviour. In the microstructurally motivated models, the integration from the microscopic to the macroscopic level is accomplished by the micro-sphere technique. Four types of homogeneous boundary conditions are considered and the behaviours determined according to the three models are compared. For the microstructurally motivated models, these analyses are performed and compared with the widely used phenomenological model for the first time. Some of the aspects revealed in this investigation, such as the dependence of the intensity of the polarization field on the deformation, highlight the need for an in-depth investigation of the relationships between the structure and behaviour of EAPs at the microscopic level and their overall macroscopic response.
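As a hedged sketch of the first model's mechanical ingredient, the Gent response under incompressible uniaxial stretch can be evaluated directly; mu and Jm below are illustrative values, not fitted EAP parameters.

```python
import numpy as np

# Gent hyperelasticity, incompressible uniaxial stretch; illustrative values.
mu, Jm = 50e3, 97.2                   # shear modulus (Pa), extensibility limit

def gent_uniaxial_stress(lam):
    I1 = lam**2 + 2.0 / lam           # first invariant for uniaxial stretch
    return mu * (lam**2 - 1.0 / lam) / (1.0 - (I1 - 3.0) / Jm)

stretches = np.linspace(1.0, 5.0, 50)
cauchy_stress = gent_uniaxial_stress(stretches)   # stiffens near the limit
```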
Operational Space Weather Models: Trials, Tribulations and Rewards
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Thompson, D. C.; Zhu, L.
2009-12-01
There are many empirical, physics-based, and data assimilation models that can be used for space weather applications, and these models cover the entire domain from the surface of the Sun to the Earth's surface. At Utah State University we developed two physics-based data assimilation models of the terrestrial ionosphere as part of a program called Global Assimilation of Ionospheric Measurements (GAIM). One of the data assimilation models is now in operational use at the Air Force Weather Agency (AFWA) in Omaha, Nebraska. This model is a Gauss-Markov Kalman Filter (GAIM-GM) model, and it uses a physics-based model of the ionosphere and a Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) measurements. The physics-based model is the Ionosphere Forecast Model (IFM), which is global and covers the E-region, F-region, and topside ionosphere from 90 to 1400 km. It takes account of five ion species (NO+, O2+, N2+, O+, H+), but the main output of the model is a 3-dimensional electron density distribution at user-specified times. The second data assimilation model uses a physics-based Ionosphere-Plasmasphere Model (IPM) and an ensemble Kalman filter technique as a basis for assimilating a diverse set of real-time (or near real-time) measurements. This full-physics model (GAIM-FP) is global, covers the altitude range from 90 to 30,000 km, includes six ions (NO+, O2+, N2+, O+, H+, He+), and calculates the self-consistent ionospheric drivers (electric fields and neutral winds). The GAIM-FP model is scheduled for delivery in 2012. Both of these GAIM models assimilate bottom-side Ne profiles from a variable number of ionosondes, slant TEC from a variable number of ground GPS/TEC stations, in situ Ne from four DMSP satellites, line-of-sight UV emissions measured by satellites, and occultation data. Quality control algorithms for all of the data types are provided as an integral part of the GAIM models, and these models take account of latent data (up to 3 hours). The trials, tribulations and rewards of constructing and maintaining operational data assimilation models will be discussed.
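A hedged, minimal sketch of one Gauss-Markov Kalman filter cycle (forecast decay toward a background state, then a measurement update); the grid size, noise levels, and observation pattern are illustrative, not GAIM's.

```python
import numpy as np

# One forecast/update cycle; all dimensions and noise levels are placeholders.
n = 50
phi = 0.9                                  # Gauss-Markov decay per step
Q = 0.1 * np.eye(n)                        # process noise covariance
R = 0.05 * np.eye(5)                       # measurement noise covariance
H = np.zeros((5, n))
for i in range(5):
    H[i, 10 * i] = 1.0                     # observe every tenth grid point

x, P = np.zeros(n), np.eye(n)              # deviation from background, covariance
z = np.random.default_rng(4).normal(size=5)    # placeholder measurements

x, P = phi * x, phi**2 * P + Q                 # forecast step
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
x = x + K @ (z - H @ x)                        # measurement update
P = (np.eye(n) - K @ H) @ P
```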
NASA Astrophysics Data System (ADS)
Ulmer, S.; Mooser, A.; Nagahama, H.; Sellner, S.; Smorra, C.
2018-03-01
The BASE collaboration investigates the fundamental properties of protons and antiprotons, such as charge-to-mass ratios and magnetic moments, using advanced cryogenic Penning trap systems. In recent years, we performed the most precise measurement of the magnetic moments of both the proton and the antiproton, and conducted the most precise comparison of the proton-to-antiproton charge-to-mass ratio. In addition, we have set the most stringent constraint on directly measured antiproton lifetime, based on a unique reservoir trap technique. Our matter/antimatter comparison experiments provide stringent tests of the fundamental charge-parity-time invariance, which is one of the fundamental symmetries of the standard model of particle physics. This article reviews the recent achievements of BASE and gives an outlook to our physics programme in the ELENA era. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.
Graham, John; Zheng, Liya; Gonzalez, Cleotilde
2006-06-01
We developed a technique to observe and characterize a novice real-time-strategy (RTS) player's mental model as it shifts with experience. We then tested this technique using an off-the-shelf RTS game, EA Games' Generals. Norman defined mental models as "an internal representation of a target system that provides predictive and explanatory power to the operator." In the case of RTS games, the operator is the player and the target system is expressed by the relationships within the game. We studied five novice participants in laboratory-controlled conditions playing an RTS game. They played Command and Conquer: Generals for 2 h per day over the course of 5 days. A mental model analysis was generated from players' dissimilarity ratings of the game's artificial intelligence (AI) agents, analyzed using multidimensional scaling (MDS) statistical methods. We hypothesized that novices would begin with an impoverished model based on the visible physical characteristics of the game system, and that as they gained experience and insight, their mental models would shift to accommodate the functional characteristics of the AI agents. We found that all five of the novice participants began with the predicted physical-based mental model. However, while their models did qualitatively shift with experience, they did not necessarily change to the predicted functional-based model. This research presents an opportunity for the design of games that are guided by shifts in a player's mental model as opposed to the typical progression through successive performance levels.
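A hedged sketch of the analysis step: embed dissimilarity ratings of AI agents with metric MDS. The rating matrix is a random placeholder, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder dissimilarity ratings embedded in 2-D.
rng = np.random.default_rng(5)
n_agents = 6
D = rng.uniform(1.0, 9.0, size=(n_agents, n_agents))
D = (D + D.T) / 2.0                        # symmetrize ratings
np.fill_diagonal(D, 0.0)                   # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)              # one 2-D point per AI agent
```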
Guo, Jianxin; Kumar, Sandeep; Chipley, Mark; Marcq, Olivier; Gupta, Devansh; Jin, Zhaowei; Tomar, Dheeraj S; Swabowski, Cecily; Smith, Jacquelynn; Starkey, Jason A; Singh, Satish K
2016-03-16
The impact of drug loading and distribution on higher order structure and physical stability of an interchain cysteine-based antibody drug conjugate (ADC) has been studied. An IgG1 mAb was conjugated with a cytotoxic auristatin payload following the reduction of interchain disulfides. The 2-D LC-MS analysis shows that there is a preference for certain isomers within the various drug to antibody ratios (DARs). The physical stability of the unconjugated monoclonal antibody, the ADC, and isolated conjugated species with specific DAR, were compared using calorimetric, thermal, chemical denaturation and molecular modeling techniques, as well as techniques to assess hydrophobicity. The DAR was determined to have a significant impact on the biophysical properties and stability of the ADC. The CH2 domain was significantly perturbed in the DAR6 species, which was attributable to quaternary structural changes as assessed by molecular modeling. At accelerated storage temperatures, the DAR6 rapidly forms higher molecular mass species, whereas the DAR2 and the unconjugated mAb were largely stable. Chemical denaturation study indicates that DAR6 may form multimers while DAR2 and DAR4 primarily exist in monomeric forms in solution at ambient conditions. The physical state differences were correlated with a dramatic increase in the hydrophobicity and a reduction in the surface tension of the DAR6 compared to lower DAR species. Molecular modeling of the various DAR species and their conformers demonstrates that the auristatin-based linker payload directly contributes to the hydrophobicity of the ADC molecule. Higher order structural characterization provides insight into the impact of conjugation on the conformational and colloidal factors that determine the physical stability of cysteine-based ADCs, with implications for process and formulation development.
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point-spread-function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and visually look more pleasant.
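A hedged sketch of such an observation model and its energy minimization: gradient descent stands in for the paper's Hopfield network, and the Gaussian blur is treated as self-adjoint for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# y = subsample(blur(x)) + noise; minimize ||forward(x) - y||^2 by descent.
def forward(x, factor=2, sigma=1.0):
    return gaussian_filter(x, sigma)[::factor, ::factor]   # blur, then subsample

def adjoint(y, shape, factor=2, sigma=1.0):
    up = np.zeros(shape)
    up[::factor, ::factor] = y                             # zero-fill upsample
    return gaussian_filter(up, sigma)

y = np.random.default_rng(6).random((32, 32))              # placeholder low-res image
x = adjoint(y, (64, 64))                                   # initial high-res estimate
for _ in range(100):
    x -= 0.5 * adjoint(forward(x) - y, x.shape)            # gradient step
```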
Satellite-enhanced dynamical downscaling for the analysis of extreme events
NASA Astrophysics Data System (ADS)
Nunes, Ana M. B.
2016-09-01
The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to keep the large-scale features from the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve dynamics, downscaling via regional modeling still aims for better representation of physical processes. With the purpose of improving dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model, originally developed at the National Centers for Environmental Prediction, employs a newly reformulated scale-selective bias correction, together with the 3-hourly assimilation of the satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. The two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here, the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin, through comparisons with global reanalyses and satellite products available over ocean areas.
Reactive solute transport in streams: 1. Development of an equilibrium- based model
Runkel, Robert L.; Bencala, Kenneth E.; Broshears, Robert E.; Chapra, Steven C.
1996-01-01
An equilibrium-based solute transport model is developed for the simulation of trace metal fate and transport in streams. The model is formed by coupling a solute transport model with a chemical equilibrium submodel based on MINTEQ. The solute transport model considers the physical processes of advection, dispersion, lateral inflow, and transient storage, while the equilibrium submodel considers the speciation and complexation of aqueous species, precipitation/dissolution and sorption. Within the model, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (water-borne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach.
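A minimal sketch of the sequential iteration idea, assuming a linear sorption isotherm in place of the full MINTEQ equilibrium solve, and periodic boundaries for brevity: transport the dissolved phase, then re-partition total mass between dissolved and (immobile) sorbed phases.

```python
import numpy as np

# Operator splitting: explicit upwind advection-dispersion, then equilibration.
nx, dx, dt = 100, 10.0, 5.0          # grid cells, m, s (illustrative)
u, D, Kd = 0.5, 1.0, 0.2             # velocity, dispersion, partition coefficient

c = np.zeros(nx); c[0] = 1.0         # dissolved concentration, inlet pulse
s = np.zeros(nx)                     # sorbed concentration

for _ in range(200):
    adv = -u * dt / dx * (c - np.roll(c, 1))                      # upwind advection
    disp = D * dt / dx**2 * (np.roll(c, 1) - 2 * c + np.roll(c, -1))
    total = c + adv + disp + s                                    # transported + sorbed mass
    c, s = total / (1 + Kd), total * Kd / (1 + Kd)                # re-equilibrate phases
```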
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
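A hedged sketch of the proper orthogonal decomposition on synthetic data: snapshots of a travelling wave are factored by SVD and truncated to a few modes.

```python
import numpy as np

# POD via SVD of a snapshot matrix; the travelling wave is a placeholder.
x = np.linspace(0.0, 2.0 * np.pi, 128)
snapshots = np.array([np.sin(x - 0.1 * k) for k in range(60)]).T  # (space, time)

U, svals, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4
basis = U[:, :r]                           # POD modes
coeffs = basis.T @ snapshots               # reduced coordinates
reconstruction = basis @ coeffs            # rank-r approximation
energy_captured = np.sum(svals[:r]**2) / np.sum(svals**2)
```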
Novel Plasmonic and Hyberbolic Optical Materials for Control of Quantum Nanoemitters
2016-12-08
…properties, metal ion implantation techniques, and multi-physics modeling to produce hyperbolic quantum nanoemitters. During the course of this project we studied plasmonic…
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
NASA Technical Reports Server (NTRS)
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As for intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
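A hedged Monte Carlo variant of the propagation idea (the paper derives analytic linear expressions; sampling is used here purely for illustration): perturb the rates of a toy three-level system and record the dispersion of the steady-state level populations.

```python
import numpy as np

# Toy rates and an assumed 10% uncertainty; not the paper's atomic data.
rng = np.random.default_rng(7)
R0 = np.array([[0.0, 2.0, 1.0],            # toy rates R[i, j]: level j -> i
               [3.0, 0.0, 4.0],
               [1.0, 2.0, 0.0]])
rel_unc = 0.10

def populations(R):
    A = R - np.diag(R.sum(axis=0))         # balance: inflow minus outflow
    A[-1, :] = 1.0                         # replace one equation by sum(n) = 1
    b = np.zeros(3); b[-1] = 1.0
    return np.linalg.solve(A, b)

samples = np.array([populations(R0 * (1.0 + rel_unc * rng.normal(size=R0.shape)))
                    for _ in range(2000)])
pop_mean, pop_std = samples.mean(axis=0), samples.std(axis=0)
```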
An architecture for the development of real-time fault diagnosis systems using model-based reasoning
NASA Technical Reports Server (NTRS)
Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday
1992-01-01
Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
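A hedged, illustrative sketch of model-based fault isolation in the spirit described above (all component names and values are hypothetical): simulate expected outputs from a behavioral model, compare with telemetry, and trace causal links upstream before blaming a component.

```python
# Toy behavioral model and causal trace; purely illustrative.
model = {                                   # component -> (inputs, behavior)
    "heater": ([], lambda: 28.0),
    "sensor": (["heater"], lambda t: t),
}
causes = {"sensor": ["heater"]}             # causal relationships

def expected(name):
    inputs, behavior = model[name]
    return behavior(*[expected(i) for i in inputs])

def isolate(name, telemetry, tol=0.5):
    if abs(telemetry[name] - expected(name)) <= tol:
        return []                           # consistent with the model
    upstream = [c for c in causes.get(name, []) if isolate(c, telemetry, tol)]
    return upstream or [name]               # blame upstream faults if any

print(isolate("sensor", {"sensor": 35.0, "heater": 28.2}))   # -> ['sensor']
```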
NASA Technical Reports Server (NTRS)
Macneice, Peter
1995-01-01
This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
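A hedged sketch of one particle-mesh kernel: cloud-in-cell (CIC) charge deposition onto a 1-D periodic grid, the first step of a PIC cycle; sizes and charges are illustrative.

```python
import numpy as np

# CIC deposition: each particle's charge is shared between its two
# neighboring grid points in proportion to proximity.
rng = np.random.default_rng(8)
n_grid, n_part, L = 64, 1000, 1.0
dx = L / n_grid
pos = rng.uniform(0.0, L, n_part)          # particle positions
q = np.full(n_part, 1.0 / n_part)          # particle charges

rho = np.zeros(n_grid)
cell = np.floor(pos / dx).astype(int)
frac = pos / dx - cell                     # offset from the left grid point
np.add.at(rho, cell % n_grid, q * (1.0 - frac) / dx)
np.add.at(rho, (cell + 1) % n_grid, q * frac / dx)
```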
Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.
Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana
2014-06-01
Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
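A hedged sketch of the Gaussian Markov random field machinery the paper relies on: a sparse precision matrix (here, a 1-D random-walk model standing in for the SPDE-derived field) and one sample drawn through its Cholesky factor.

```python
import numpy as np

# GMRF sampling: if Q = L L^T, then x = L^{-T} z has covariance Q^{-1}.
n = 200
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Q[0, 0] = Q[-1, -1] = 1.0                  # free boundaries
Q += 1e-4 * np.eye(n)                      # regularize the null space

L_chol = np.linalg.cholesky(Q)
z = np.random.default_rng(9).normal(size=n)
sample = np.linalg.solve(L_chol.T, z)      # x ~ N(0, Q^{-1})
```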
A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony
Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, although applying optimization introduces difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed technique performs effectively in terms of face recognition accuracy. PMID:25165748
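A hedged sketch of a basic artificial bee colony loop (scout phase included, onlooker selection simplified) minimizing a placeholder objective that stands in for the AAM fitting cost.

```python
import numpy as np

# Basic ABC: greedy local search per food source plus scout restarts.
rng = np.random.default_rng(10)
cost = lambda p: np.sum((p - 3.0) ** 2)    # placeholder fitting cost
n_bees, dim, limit = 20, 5, 30
food = rng.uniform(-10.0, 10.0, size=(n_bees, dim))
trials = np.zeros(n_bees)

for _ in range(200):
    for i in range(n_bees):
        k, j = rng.integers(n_bees), rng.integers(dim)
        cand = food[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (food[i, j] - food[k, j])
        if cost(cand) < cost(food[i]):     # greedy replacement
            food[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > limit:              # scout: abandon a stale source
            food[i] = rng.uniform(-10.0, 10.0, size=dim)
            trials[i] = 0

best = food[np.argmin([cost(f) for f in food])]
```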
NASA Astrophysics Data System (ADS)
Ahmadian, A.; Ismail, F.; Salahshour, S.; Baleanu, D.; Ghaemi, F.
2017-12-01
The analysis of the behavior of physical phenomena is important for discovering significant features of the character and structure of mathematical models. Frequently, the unknown parameters involved in the models are assumed to be unvarying over time. In reality, some of them are uncertain and implicitly depend on several factors. In this study, to account for such uncertainty in the variables of the models, they are characterized based on the fuzzy notion. We propose here a new model based on fractional calculus to deal with the Kelvin-Voigt (KV) equation and a non-Newtonian fluid behavior model with fuzzy parameters. A new and accurate numerical algorithm using a spectral tau technique based on the generalized fractional Legendre polynomials (GFLPs) is developed to solve those problems under uncertainty. Numerical simulations are carried out and the analysis of the results highlights the significant features of the new technique in comparison with previous findings. A detailed error analysis is also carried out and discussed.
NASA Astrophysics Data System (ADS)
Gerszewski, Daniel James
Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, so does the need for simulating real-world materials. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration, and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present several enhancements to model-reduced fluid simulation that allow improved simulation bases and two-way solid-fluid coupling. Specifically, we present a basis enrichment scheme that allows us to combine data-driven or artistically derived bases with more general analytic bases derived from Laplacian eigenfunctions. Additionally, we handle two-way solid-fluid coupling in a time-splitting fashion: we alternately timestep the fluid and rigid body simulators, while taking into account the effects of the fluid on the rigid bodies and vice versa. We employ the vortex panel method to handle solid-fluid coupling and use dynamic pressure to compute the effect of the fluid on rigid bodies. Taken together, these contributions advance the state of the art in physics-based animation and are practical enough to be used in production pipelines.
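A hedged, generic sketch of estimating a per-particle deformation gradient by least squares on neighbor offsets, shown for orientation; note the dissertation's own update avoids storing a rest configuration, which this toy does not reproduce.

```python
import numpy as np

# Least-squares fit of F mapping reference offsets to deformed offsets.
rng = np.random.default_rng(11)
X = rng.random((8, 3))                     # reference neighbor positions
F_true = np.array([[1.2, 0.1, 0.0],
                   [0.0, 0.9, 0.0],
                   [0.0, 0.0, 1.0]])
x = X @ F_true.T                           # deformed neighbor positions

dX = X - X.mean(axis=0)                    # offsets from local centroids
dx = x - x.mean(axis=0)
# least-squares solution of F dX_i = dx_i over all neighbors i
F = (dx.T @ dX) @ np.linalg.inv(dX.T @ dX)
```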
Synthesis of Speaker Facial Movement to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.
1994-01-01
A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.
Integration of Dynamic Models in Range Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge; Thirumalainambi, Rajkumar
2004-01-01
This work addresses the various model interactions in real time needed to make an efficient Internet-based decision-making tool for Shuttle launch. The decision-making tool depends on the launch commit criteria coupled with physical models. Dynamic interaction between a wide variety of simulation applications and techniques, embedded algorithms, and data visualizations is needed to exploit the full potential of modeling and simulation. This paper also discusses in depth the details of web-based 3-D graphics and applications to range safety. The advantages of this dynamic model integration are secure accessibility and distribution of real-time information to other NASA centers.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Starspot detection and properties
NASA Astrophysics Data System (ADS)
Savanov, I. S.
2013-07-01
I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.
NASA Astrophysics Data System (ADS)
Mali, V. K.; Kuiry, S. N.
2015-12-01
Comprehensive understanding of river flow dynamics over varying topography in the field is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to error. Remotely sensed satellite imagery can provide the necessary information over large areas when its resolution is high, but such imagery is expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Despite this, these data are often used to calibrate river flow models, even though such models require highly accurate morpho-dynamic data to predict the flow field precisely. Under these circumstances, the data could be supplemented through experimental observations in a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the close-range photogrammetry (CRP) technique. A number of DSLR Nikon D5300 cameras, mounted 3.5 m above the river bed, were used to capture images of the physical model and of the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. Accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination, and altitude. The CRP technique for a large-scale physical model can significantly reduce time and manual labour and avoids the human errors involved in taking data with a point gauge. The resulting highly accurate DEM and WS profiles can be used in mathematical models for accurate prediction of river dynamics. This study should be helpful for sediment transport studies and can also be extended to real-world cases.
Solving a Higgs optimization problem with quantum annealing for machine learning.
Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria
2017-10-18
The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
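A hedged sketch of the Ising/QUBO mapping: binary spins select weak classifiers, and simulated annealing stands in for the quantum annealer. The correlation vectors C and J are random placeholders, not the paper's kinematic observables.

```python
import numpy as np

# Simulated annealing over a QUBO selecting weak classifiers.
rng = np.random.default_rng(12)
n = 12
C = rng.normal(0.5, 0.2, size=n)                   # classifier-label correlations
J = rng.normal(0.0, 0.1, size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)                           # pairwise classifier correlations
lam = 0.05                                         # redundancy penalty strength

def energy(s):
    return -C @ s + lam * s @ J @ s

s = rng.integers(0, 2, size=n)
for T in np.geomspace(2.0, 0.01, 4000):            # cooling schedule
    i = rng.integers(n)
    cand = s.copy(); cand[i] ^= 1
    dE = energy(cand) - energy(s)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        s = cand

selected = np.flatnonzero(s)                       # weak classifiers kept in the strong one
```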
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double-encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the limits of realizability over a realistic range of the errors. Finally, we discuss the physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
NASA Astrophysics Data System (ADS)
Sunarti, T.; Wasis; Madlazim; Suyidno; Prahani, B. K.
2018-03-01
In previous research, learning material based on the Construction, Production, and Implementation (CPI) model was developed to improve scientific literacy and positive attitude toward science for pre-service physics teachers. The CPI model has 4 phases: 1) Motivation; 2) Construction (Cycle I); 3) Production (Cycle II); and 4) Evaluation. This research aims to analyze the effectiveness of the CPI model for improving Positive Attitude toward Science (PATS) in pre-service physics teachers. The research used a one-group pre-test and post-test design with 160 pre-service physics teachers divided into 4 groups at Lambung Mangkurat University and Surabaya State University (Indonesia), academic year 2016/2017. Data collection was conducted through questionnaires, observation, and interviews. Positive attitude toward science was measured with the Positive Attitude toward Science Evaluation Sheet (PATSES). Data analysis used the Wilcoxon test and n-gain. The results showed a significant increase in positive attitude toward science for pre-service physics teachers at α = 5%, with an n-gain average in the high category. Thus, the CPI model is effective for improving positive attitude toward science for pre-service physics teachers.
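A hedged sketch of the analysis: a Wilcoxon signed-rank test on paired pre/post scores plus the normalized gain <g> = (post - pre) / (max - pre). Scores are random placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired pre/post comparison and normalized gain on placeholder scores.
rng = np.random.default_rng(13)
pre = rng.uniform(40.0, 60.0, size=160)
post = pre + rng.uniform(5.0, 30.0, size=160)
max_score = 100.0

stat, p_value = wilcoxon(pre, post)
n_gain = (post - pre) / (max_score - pre)
print(p_value < 0.05, n_gain.mean())       # significance at 5%, mean n-gain
```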
Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.
2011-01-01
We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788
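A hedged toy coupling of a contaminant field to evacuating agents: agents sense a static Gaussian plume and step down its gradient. A real CFD-ABM hybrid would supply a time-varying plume from the fluid solver; every value here is a placeholder.

```python
import numpy as np

# Agents descend the concentration gradient of a toy static plume.
rng = np.random.default_rng(14)
agents = rng.uniform(-5.0, 5.0, size=(200, 2))
source = np.array([0.0, 0.0])

def concentration(p):
    return np.exp(-np.sum((p - source) ** 2, axis=-1) / 4.0)

eps, step = 1e-3, 0.2
for _ in range(50):
    grad = np.stack(
        [(concentration(agents + [eps, 0.0]) - concentration(agents)) / eps,
         (concentration(agents + [0.0, eps]) - concentration(agents)) / eps],
        axis=1)
    norms = np.linalg.norm(grad, axis=1, keepdims=True) + 1e-9
    agents -= step * grad / norms          # move away from high concentration
```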
Modification of Gaussian mixture models for data classification in high energy physics
NASA Astrophysics Data System (ADS)
Štěpánek, Michal; Franc, Jiří; Kůs, Václav
2015-01-01
In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance are discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and rank selection of individuals are also mentioned. Data pre-processing plays a significant role in the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results for top quark separation from the Tevatron collider are compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
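A hedged sketch of the core idea: fit Gaussian mixtures to signal and background training samples, then classify events via Bayes' rule (equal priors assumed). The data are random placeholders for event feature vectors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Mixture-model discriminant from log-likelihood ratios.
rng = np.random.default_rng(15)
sig = rng.normal(1.0, 1.0, size=(500, 3))
bkg = rng.normal(-1.0, 1.5, size=(500, 3))

gmm_sig = GaussianMixture(n_components=3, random_state=0).fit(sig)
gmm_bkg = GaussianMixture(n_components=3, random_state=0).fit(bkg)

events = rng.normal(0.0, 1.0, size=(10, 3))
log_ratio = gmm_sig.score_samples(events) - gmm_bkg.score_samples(events)
is_signal = log_ratio > 0.0                # log-likelihood-ratio discriminant
```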
Physical models of collective cell motility: from cell to tissue
NASA Astrophysics Data System (ADS)
Camley, B. A.; Rappel, W.-J.
2017-03-01
In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.
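A hedged sketch of the simplest model class reviewed: self-propelled particles with Vicsek-style neighbor alignment and angular noise in a periodic box. Parameter values are illustrative.

```python
import numpy as np

# Vicsek-style alignment dynamics for self-propelled particles.
rng = np.random.default_rng(16)
n, L, r, v0, eta = 100, 10.0, 1.0, 0.05, 0.3
pos = rng.uniform(0.0, L, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)

for _ in range(200):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                       # minimum-image convention
    near = (d ** 2).sum(-1) < r ** 2               # neighbors within radius r
    mean_sin = (near * np.sin(theta)[None, :]).sum(1)
    mean_cos = (near * np.cos(theta)[None, :]).sum(1)
    theta = (np.arctan2(mean_sin, mean_cos)
             + eta * rng.uniform(-np.pi, np.pi, size=n))
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
```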
Advection modes by optimal mass transfer
NASA Astrophysics Data System (ADS)
Iollo, Angelo; Lombardi, Damiano
2014-02-01
Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes. These modes are usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection. This method is based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, Korteweg-de Vries equation, and hurricane Dean advection are discussed.
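A hedged sketch of the one-dimensional Monge-Kantorovich building block: for quadratic cost the optimal map is T = F2^{-1} composed with F1, assembled here from empirical CDFs of two placeholder sample sets.

```python
import numpy as np

# 1-D optimal transport map via empirical CDF composition.
rng = np.random.default_rng(17)
src = rng.normal(0.0, 1.0, size=5000)      # mass distribution before advection
tgt = rng.normal(3.0, 0.5, size=5000)      # mass distribution after advection

src_sorted, tgt_sorted = np.sort(src), np.sort(tgt)

def transport_map(x):
    u = np.searchsorted(src_sorted, x) / len(src_sorted)   # F1(x)
    return np.quantile(tgt_sorted, np.clip(u, 0.0, 1.0))   # F2^{-1}(u)

displaced = transport_map(src[:10])        # where mass at these points moves
```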
From Particle Physics to Medical Applications
NASA Astrophysics Data System (ADS)
Dosanjh, Manjit
2017-06-01
CERN is the world's largest particle physics research laboratory. Since it was established in 1954, it has made an outstanding contribution to our understanding of the fundamental particles and their interactions, and also to the technologies needed to analyse their properties and behaviour. The experimental challenges have pushed the performance of particle accelerators and detectors to the limits of our technical capabilities, and these groundbreaking technologies can also have a significant impact in applications beyond particle physics. In particular, the detectors developed for particle physics have led to improved techniques for medical imaging, while accelerator technologies lie at the heart of the irradiation methods that are widely used for treating cancer. Indeed, many important diagnostic and therapeutic techniques used by healthcare professionals are based either on basic physics principles or the technologies developed to carry out physics research. Ever since the discovery of x-rays by Roentgen in 1895, physics has been instrumental in the development of technologies in the biomedical domain, including the use of ionizing radiation for medical imaging and therapy. Some key examples that are explored in detail in this book include scanners based on positron emission tomography, as well as radiation therapy for cancer treatment. Even the collaborative model of particle physics is proving to be effective in catalysing multidisciplinary research for medical applications, ensuring that pioneering physics research is exploited for the benefit of all.
Mathematical and computational modelling of skin biophysics: a review
NASA Astrophysics Data System (ADS)
Limbert, Georges
2017-07-01
The objective of this paper is to provide a review on some aspects of the mathematical and computational modelling of skin biophysics, with special focus on constitutive theories based on nonlinear continuum mechanics from elasticity, through anelasticity, including growth, to thermoelasticity. Microstructural and phenomenological approaches combining imaging techniques are also discussed. Finally, recent research applications on skin wrinkles will be presented to highlight the potential of physics-based modelling of skin in tackling global challenges such as ageing of the population and the associated skin degradation, diseases and traumas.
Wilson, Lydia J; Newhauser, Wayne D
2015-01-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
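A purely illustrative toy lateral dose profile, not the paper's model: a flat in-field region with error-function penumbra plus an exponential out-of-field tail; every parameter value is a placeholder.

```python
import numpy as np
from scipy.special import erf

# Toy in-field/out-of-field profile shape; all parameters are placeholders.
def dose_profile(x_cm, half_width=5.0, sigma=0.3, tail_frac=0.01, tail_len=10.0):
    in_field = 0.5 * (erf((half_width - x_cm) / (np.sqrt(2.0) * sigma))
                      + erf((half_width + x_cm) / (np.sqrt(2.0) * sigma)))
    tail = tail_frac * np.exp(-np.abs(x_cm) / tail_len)
    return in_field + tail

x = np.linspace(-20.0, 20.0, 401)          # off-axis distance (cm)
d = dose_profile(x)                        # relative dose, ~1 on the axis
```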
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
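The contrast between POD and SPCA bases can be illustrated in a few lines. The following sketch uses scikit-learn's SparsePCA on a synthetic snapshot matrix (dimensions and data are illustrative, not from DYRESM-CAEDYM) to show how the L1 penalty leaves only a few non-zero loadings per basis function:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Illustrative snapshot matrix: 200 snapshots of 50 model state/input
# variables, standing in for output generated with a high-fidelity model.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 50))

# Standard POD/PCA: every basis vector mixes all 50 variables.
pod = PCA(n_components=5).fit(snapshots)

# Sparse PCA: the L1 penalty (alpha) drives most loadings to zero, so each
# basis function involves only a few variables and can be inspected for
# physical meaning.
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(snapshots)

for k, comp in enumerate(spca.components_):
    active = np.flatnonzero(comp)
    print(f"basis {k}: {active.size} non-zero loadings -> variables {active}")
```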
Using HEC-HMS: Application to Karkheh river basin
USDA-ARS?s Scientific Manuscript database
This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...
TGfU Pet-Agogy: Old Dogs, New Tricks and Puppy School
ERIC Educational Resources Information Center
Butler, Joy I.
2005-01-01
How do we encourage teachers to adopt Teaching Games for Understanding (TGfU) so that it becomes part of mainstream practice in physical education and community-based sports programmes worldwide? Why do some teachers adopt a TGfU instructional model and others stick to a technique-based approach? What happens to PETE students when they attempt to…
Neuroimaging Techniques: a Conceptual Overview of Physical Principles, Contribution and History
NASA Astrophysics Data System (ADS)
Minati, Ludovico
2006-06-01
This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.
NASA Astrophysics Data System (ADS)
Havens, Scott; Marks, Danny; Kormos, Patrick; Hedrick, Andrew
2017-12-01
In the Western US and many mountainous regions of the world, critical water resources and climate conditions are difficult to monitor because the observation network is generally very sparse. The critical resource from the mountain snowpack is water flowing into streams and reservoirs that will provide for irrigation, flood control, power generation, and ecosystem services. Water supply forecasting in a rapidly changing climate has become increasingly difficult because of non-stationary conditions. In response, operational water supply managers have begun to move from statistical techniques towards the use of physically based models. As we begin to transition physically based models from research to operational use, we must address the most difficult and time-consuming aspect of model initiation: the need for robust methods to develop and distribute the input forcing data. In this paper, we present a new open source framework, the Spatial Modeling for Resources Framework (SMRF), which automates and simplifies the common forcing data distribution methods. It is computationally efficient and can be implemented for both research and operational applications. We present an example of how SMRF is able to generate all of the forcing data required to run a physically based snow model at 50-100 m resolution over regions of 1000-7000 km². The approach has been successfully applied in real time and historical applications for both the Boise River Basin in Idaho, USA and the Tuolumne River Basin in California, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of a physically based snow model possible.
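SMRF's specific distribution methods are not detailed here; the following is a minimal sketch of one common forcing-distribution technique of the kind such a framework automates, inverse-distance weighting of station measurements onto a model grid, with all station locations and values hypothetical:

```python
import numpy as np

def idw_distribute(stn_xy, stn_vals, grid_x, grid_y, power=2.0):
    """Distribute point station measurements onto a model grid by
    inverse-distance weighting; an illustrative method, not SMRF's code."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for (x, y), v in zip(stn_xy, stn_vals):
        d = np.hypot(gx - x, gy - y)
        w = 1.0 / np.maximum(d, 1e-6) ** power   # avoid divide-by-zero at stations
        out += w * v
        wsum += w
    return out / wsum

# Three hypothetical stations reporting air temperature (deg C)
stations = [(1000.0, 2000.0), (4500.0, 3500.0), (2500.0, 5000.0)]
temps = [1.5, -0.8, 0.3]
grid = idw_distribute(stations, temps,
                      np.arange(0, 6000, 100.0), np.arange(0, 6000, 100.0))
```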
NASA Astrophysics Data System (ADS)
Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.
2014-09-01
This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective velocity of a seismic wave travelling through it. In the numerical experiment, thin section images act as the medium on which wave propagation is simulated. For the modeling, an advanced technique based on artificial neural networks was employed to build the velocity and density profiles, replacing each image RGB pixel value with the seismic velocity and density of the corresponding rock constituent. Then, ultrasonic wave propagation through the thin section image was simulated using the finite difference time domain method, under the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity estimates from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives the closer prediction to the measured velocity as compared to the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach, we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave velocity of rock.
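A minimal sketch of the wave-propagation step follows: a 2D acoustic finite-difference time-domain loop on a velocity grid standing in for the neural-network-mapped thin section image. The grid values, source placement and constituent velocities are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Toy stand-in for a thin-section image: a 2-D velocity model where pixel
# values have been mapped to mineral (4500 m/s) and pore-fluid (1500 m/s)
# velocities, as the paper does with a neural network.
n, h, dt = 300, 1e-4, 1e-8          # grid points, grid spacing (m), time step (s)
c = np.full((n, n), 4500.0)
c[120:180, 120:180] = 1500.0        # a pore region

p_prev = np.zeros((n, n))           # pressure at t - dt
p = np.zeros((n, n))                # pressure at t
p[10, n // 2] = 1.0                 # impulsive source near the top edge

for _ in range(2000):
    # Second-order Laplacian (periodic edges; absorbing boundaries omitted)
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / h**2
    p_next = 2.0 * p - p_prev + (c * dt)**2 * lap
    p_prev, p = p, p_next

# Effective velocity: pick the first-arrival time at a receiver row and
# divide the travel distance by it (receiver processing omitted for brevity).
```

The time step satisfies the 2D CFL condition (c·dt/h ≈ 0.45 < 1/√2) for the fastest constituent, which is what keeps the explicit scheme stable.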
NASA Astrophysics Data System (ADS)
Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.
2012-04-01
Near real-time high resolution and high precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to consider the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, we set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results, discuss problems in modelling and possible solution strategies, and finally address open questions.
NASA Astrophysics Data System (ADS)
Paloma, Cynthia S.
The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
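The scaling-law fitting can be sketched briefly. Assuming a power-law form Xe = k·Te^a·ne^b·q^c (the exact functional form used in the work is not given here), the scaling factors can be optimized with scipy's SLSQP implementation of SQP against synthetic target profiles:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for profile data (the real study fits DIII-D/TRANSP profiles).
rng = np.random.default_rng(1)
Te = rng.uniform(0.5, 5.0, 50)   # keV, illustrative
ne = rng.uniform(1.0, 8.0, 50)   # 10^19 m^-3, illustrative
q = rng.uniform(1.0, 4.0, 50)
chi_target = 0.8 * Te**1.5 * ne**-0.5 * q**1.0   # pretend "measured" diffusivity

def cost(p):
    k, a, b, c = p
    chi = k * Te**a * ne**b * q**c               # scaling-law model for Xe
    return np.sum((chi - chi_target)**2)

# SQP (scipy's SLSQP) with loose bounds on the scaling factor and exponents
res = minimize(cost, x0=[1.0, 1.0, 0.0, 0.0], method="SLSQP",
               bounds=[(1e-3, 10), (-3, 3), (-3, 3), (-3, 3)])
print(res.x)   # recovered (k, a, b, c)
```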
Theoretical Calculations of Atomic Data for Spectroscopy
NASA Technical Reports Server (NTRS)
Bautista, Manuel A.
2000-01-01
Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and to assess the reliability of spectral models based on those data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of a DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in it. The electrical discharges and their properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated at small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuit representation. The lumped components of the PEEC model are used to obtain the input impedance, and the result is also compared to that of the FEM technique over a frequency range of 0-200 MHz.
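As a rough illustration of what the extracted equivalent circuit enables, the sketch below integrates the transient response of a single lumped RLC branch of the kind a PEEC reduction produces; the element values and initial condition are invented, not taken from the accelerator model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One lumped RLC branch of the kind a PEEC extraction yields
# (values illustrative, not from the accelerator model).
R, L, C = 5.0, 2e-6, 1e-9      # ohm, henry, farad
V0 = 1e5                        # initial capacitor voltage before a discharge

def rlc(t, y):
    v, i = y                    # capacitor voltage, inductor current
    return [-i / C, (v - R * i) / L]

sol = solve_ivp(rlc, (0.0, 2e-6), [V0, 0.0], max_step=1e-9)
# sol.y[1] is the discharge current transient through the branch
```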
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor-Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C#, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
NASA Astrophysics Data System (ADS)
Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong
2017-08-01
Monitoring the extent and evolution of contaminant plumes in local and regional groundwater systems from existing landfills is critical in contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to conduct time-lapse data assimilation for monitoring solute transport, based on a solute transport experiment in a bench-scale physical model. The data assimilation combines an evolution step based on a random walk model with an observation-correction step based on the self-potential forward model. Thus, monitoring self-potential data can be inverted by the data assimilation technique. As a result, we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-by-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated with noise-added synthetic time-lapse self-potential data; the result of this numerical experiment demonstrates the validity, accuracy and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics the real-world contaminant monitoring setup. The results of the physical experiments support the idea that the data assimilation method is a potentially useful approach for characterizing the transport of contamination plumes using the unscented Kalman filter (UKF) data assimilation technique applied to field time-lapse self-potential data.
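The core of the assimilation cycle is the unscented transform, which propagates a set of sigma points through the nonlinear forward model instead of linearizing it. A minimal numpy sketch of one predict-correct step follows, with a toy two-parameter state and an invented forward operator standing in for the actual self-potential physics:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented(points, w, f):
    """Propagate sigma points through f; recover mean and covariance."""
    fp = np.array([f(p) for p in points])
    m = w @ fp
    P = (fp - m).T @ np.diag(w) @ (fp - m)
    return fp, m, P

# Stand-in forward operator: maps source parameters to self-potential data.
def forward(x):
    return np.array([x[0] + 0.1 * x[1]**2, x[1]])

x, P = np.array([1.0, 0.5]), np.diag([0.1, 0.1])
x_pred = x                        # random-walk evolution: mean unchanged,
P_pred = P + 0.01 * np.eye(2)     # process noise inflates the covariance

pts, w = sigma_points(x_pred, P_pred, kappa=1.0)
fp, y_mean, Pyy = unscented(pts, w, forward)
Pyy += 0.05 * np.eye(2)                          # observation noise
Pxy = (pts - x_pred).T @ np.diag(w) @ (fp - y_mean)
K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
y_obs = np.array([1.2, 0.4])                     # one time-lapse SP "measurement"
x_new = x_pred + K @ (y_obs - y_mean)            # corrected state
P_new = P_pred - K @ Pyy @ K.T                   # corrected covariance
```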
Modeling and simulation of dust behaviors behind a moving vehicle
NASA Astrophysics Data System (ADS)
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and a graphical user interface (GUI). I have employed several vehicle models including tanks, cars, and jeeps to test and simulate in different scenarios and conditions: calm weather, windy conditions, the vehicle turning left or right, and vehicle simulation controlled by users from the GUI. I have also tested the factors that play against the physical behaviors and graphics appearances of the dust particles through the GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive-variable, pressure-correction approach to solve the three-dimensional incompressible Navier-Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive-over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedural modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping, have been employed in the rendering to achieve realistic-appearing dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time steps. Several algorithms are used to speed up the simulation. For example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both the time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculation needed for the numerical integration of the fluid dynamics to converge, which usually takes about 4-5 minutes to reach steady state.
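The pressure-correction step described above hinges on solving a Poisson equation for pressure. A minimal SOR solver on a square grid is sketched below, with zero Dirichlet boundaries and a point source as illustrative simplifications of the dissertation's setup:

```python
import numpy as np

def sor_poisson(rhs, h, omega=1.7, tol=1e-6, max_iter=10000):
    """Solve the 2-D Poisson pressure equation  lap(p) = rhs  on a square
    grid with zero Dirichlet boundaries by successive-over-relaxation."""
    p = np.zeros_like(rhs)
    n, m = rhs.shape
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = (1 - omega) * p[i, j] + omega * 0.25 * (
                    p[i + 1, j] + p[i - 1, j] + p[i, j + 1] + p[i, j - 1]
                    - h * h * rhs[i, j])
                max_diff = max(max_diff, abs(new - p[i, j]))
                p[i, j] = new
        if max_diff < tol:     # stop once the sweep no longer changes p
            break
    return p

# Point source in the domain centre, as a minimal test
rhs = np.zeros((64, 64)); rhs[32, 32] = 1.0
p = sor_poisson(rhs, h=1.0 / 63)
```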
Zhdanov, Michael S. [Salt Lake City, UT]
2008-01-29
Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.
Hardware-in-the-Loop Modeling and Simulation Methods for Daylight Systems in Buildings
NASA Astrophysics Data System (ADS)
Mead, Alex Robert
This dissertation introduces hardware-in-the-loop modeling and simulation techniques to the daylighting community, with specific application to complex fenestration systems. No prior application of this class of techniques, which optimally combines mathematical modeling and physical-model experimentation, is known to the author from the literature. Daylighting systems in buildings have a large impact on both the energy usage of a building and the occupant experience within a space. As such, a renewed interest has been placed on designing and constructing buildings with an emphasis on daylighting as part of the "green movement." Within daylighting systems, a specific subclass of building envelope is receiving much attention: complex fenestration systems (CFSs). CFSs are unique compared to regular fenestration systems (e.g. glazing) in that they allow for non-specular transmission of daylight into a space. This non-specular nature can be leveraged by designers to "optimize" the times of day and the days of the year that daylight enters a space. Examples of CFSs include Venetian blinds, woven fabric shades, and prismatic window coatings. In order to leverage the non-specular transmission properties of CFSs, however, engineering analysis techniques capable of faithfully representing the physics of these systems are needed. Traditionally, the analysis techniques available to the daylighting community fall broadly into three classes: simplified techniques, mathematical modeling and simulation, and physical modeling and experimentation. Simplified techniques use rule-of-thumb heuristics to provide insights for simple daylighting systems. Mathematical modeling and simulation use complex numerical models to provide more detailed insights into system performance. Finally, physical models can be instrumented and excited using artificial and natural light sources to provide performance insight into a daylighting system. Broadly speaking, however, each class of techniques has advantages and disadvantages with respect to the cost of execution (e.g. money, time, expertise) and the fidelity of the insight it provides into the performance of the daylighting system. This varying tradeoff of cost and insight between the techniques determines which techniques are employed for which projects. Daylighting systems with CFS components, however, defy high-fidelity analysis with these traditional technique classes. Simplified techniques are clearly not applicable. Mathematical models must be very complex to capture the non-specular transmission accurately, which greatly limits their applicability. This leaves physical modeling, the most costly class, as the preferred method for CFSs. While mathematical modeling and simulation methods do exist, they are in general costly and still approximations of the underlying CFS behavior; in practice, measurement is currently the only reliable way to capture the behavior of CFSs. Traditional measurements of CFS transmission and reflection properties are conducted using an instrument called a goniophotometer and produce a measurement in the form of a Bidirectional Scatter Distribution Function (BSDF) on the Klems basis. This measurement must be executed for each possible state of the CFS, so only a subset of the possible behaviors can be captured for CFSs with continuously varying configurations.
In the current era of rapid prototyping (e.g. 3D printing) and automated control of buildings, including daylighting systems, a new analysis technique is needed that can faithfully represent the CFSs now being designed and constructed at an increasing rate. Hardware-in-the-loop modeling and simulation is a perfect fit for this need. In the proposed hardware-in-the-loop approach of this dissertation, physical models of real CFSs are excited using either natural or artificial light. The exiting luminance distribution from these CFSs is measured and used as input to a Radiance mathematical model of the interior of the space proposed to be lit by the CFS-containing daylighting system. Hence, the component of the total daylighting and building system that is not well captured by mathematical models, the CFS, is physically excited and measured, while the component that is modeled well, namely the interior building space, is mathematically modeled. In order to excite and measure CFS behavior, a novel parallel goniophotometer, referred to as the CUBE 2.0, is developed in this dissertation. The CUBE 2.0 measures the input illuminance distribution and the output luminance distribution with respect to a CFS under test. Further, the process is fully automated, allowing for deployable experiments on proposed building sites as well as laboratory-based experiments. In this dissertation, three CFSs, two commercially available and one novel (Twitchell's Textilene 80 Black, Twitchell's Shade View Ebony, and Translucent Concrete Panels, TCP), are simulated on the CUBE 2.0 system for daylong deployments at one-minute time steps. These CFSs are assumed to be placed in the glazing space within the Reference Office Radiance model, for which horizontal illuminance on a work plane at 0.8 m height is calculated for each time step. While Shade View Ebony and TCPs are unmeasured CFSs with respect to BSDF, Textilene 80 Black has been measured previously. As such, a validation of the CUBE 2.0 against the goniophotometer-measured BSDF is presented, with measurement errors of the horizontal illuminance between +3% and -10%. These error levels are considered acceptable within experimental daylighting investigations. Non-validated results are also presented in full for both Shade View Ebony and TCP. Concluding remarks and future directions for hardware-in-the-loop (HWiL) simulation close the dissertation.
Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development
ERIC Educational Resources Information Center
Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.
2010-01-01
This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…
Micromechanics Modeling of Fracture in Nanocrystalline Metals
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Piascik, R. S.; Raju, I. S.; Harris, C. E.
2002-01-01
Nanocrystalline metals have very high theoretical strength, but suffer from a lack of ductility and toughness. Therefore, it is critical to understand the mechanisms of deformation and fracture of these materials before their full potential can be achieved. Because classical fracture mechanics is based on the comparison of computed fracture parameters, such as stress intensity factors, to their empirically determined critical values, it does not adequately describe the fundamental physics of fracture required to predict the behavior of nanocrystalline metals. Thus, micromechanics-based techniques must be considered to quantify the physical processes of deformation and fracture within nanocrystalline metals. This paper discusses fundamental physics-based modeling strategies that may be useful for the prediction of deformation, crack formation and crack growth within nanocrystalline metals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Safety models incorporating graph theory based transit indicators.
Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M
2013-01-01
There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as: connectivity, coverage, overlapping degree and the Local Index of Transit Availability. As well, the models showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, length of bus and 3+ priority lanes.
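The model form described, a generalized linear regression with a negative binomial error structure, can be sketched with statsmodels; the zonal indicators, coefficients and counts below are hypothetical, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical zonal data: collision counts vs. two transit indicators
# (route connectivity and bus density); names are illustrative only.
rng = np.random.default_rng(42)
connectivity = rng.uniform(0.2, 1.0, 200)
bus_density = rng.uniform(1.0, 10.0, 200)
mu = np.exp(0.5 + 1.2 * connectivity + 0.15 * bus_density)
collisions = rng.poisson(mu)                      # toy counts

X = sm.add_constant(np.column_stack([connectivity, bus_density]))
model = sm.GLM(collisions, X, family=sm.families.NegativeBinomial(alpha=0.5))
fit = model.fit()
print(fit.summary())   # log-link coefficients for the zonal indicators
```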
Lagrangian analysis of multiscale particulate flows with the particle finite element method
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Latorre, Salvador; Casas, Guillermo; Rossi, Riccardo; Rojek, Jerzy
2014-05-01
We present a Lagrangian numerical technique for the analysis of flows incorporating physical particles of different sizes. The numerical approach is based on the particle finite element method (PFEM) which blends concepts from particle-based techniques and the FEM. The basis of the Lagrangian formulation for particulate flows and the procedure for modelling the motion of small and large particles that are submerged in the fluid are described in detail. The numerical technique for analysis of this type of multiscale particulate flows using a stabilized mixed velocity-pressure formulation and the PFEM is also presented. Examples of application of the PFEM to several particulate flows problems are given.
Spindle speed variation technique in turning operations: Modeling and real implementation
NASA Astrophysics Data System (ADS)
Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.
2016-11-01
Chatter remains one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among these, techniques based on varying the rotational speed of the spindle (SSV, spindle speed variation) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, SSV can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper conducts a study of the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, the procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between the improved stability margins and acceptable behavior of the spindle is ensured by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
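A minimal sketch of the sinusoidal SSV command signal is given below; RVA and RVF are the amplitude and frequency ratios commonly used to tune sinusoidal SSV, though the paper's exact parameterization may differ:

```python
import numpy as np

def ssv_profile(t, n0, rva, rvf):
    """Sinusoidal spindle speed variation: n0 is the nominal speed (rpm),
    RVA the amplitude ratio (delta_n / n0) and RVF the modulation frequency
    relative to the nominal rotation frequency. Parameter names follow
    common SSV usage, not necessarily this paper's notation."""
    f0 = n0 / 60.0                       # nominal rotation frequency (Hz)
    return n0 * (1.0 + rva * np.sin(2.0 * np.pi * rvf * f0 * t))

t = np.linspace(0.0, 2.0, 2000)
n = ssv_profile(t, n0=1200.0, rva=0.2, rvf=0.3)   # 1200 rpm +/- 20 %
# Feeding n(t) to the spindle interpolator is the machine-limited step
# discussed above; overheating constrains how large rva and rvf can be.
```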
Imaging plasmas at the Earth and other planets
NASA Astrophysics Data System (ADS)
Mitchell, D. G.
2006-05-01
The field of space physics, both at Earth and at other planets, was for decades a science based on local observations. By stitching together measurements of plasmas and fields from multiple locations either simultaneously or for similar conditions over time, and by comparing those measurements against models of the physical systems, great progress was made in understanding the physics of Earth and planetary magnetospheres, ionospheres, and their interactions with the solar wind. However, the pictures of the magnetospheres were typically statistical, and the large-scale global models were poorly constrained by observation. This situation changed dramatically with global auroral imaging, which provided snapshots and movies of the effects of field aligned currents and particle precipitation over the entire auroral oval during quiet and disturbed times. And with the advent of global energetic neutral atom (ENA) and extreme ultraviolet (EUV) imaging, global constraints have similarly been added to ring current and plasmaspheric models, respectively. Such global constraints on global models are very useful for validating the physics represented in those models, physics of energy and momentum transport, electric and magnetic field distribution, and magnetosphere-ionosphere coupling. These techniques are also proving valuable at other planets. For example with Hubble Space Telescope imaging of Jupiter and Saturn auroras, and ENA imaging at Jupiter and Saturn, we are gaining new insights into the magnetic fields, gas-plasma interactions, magnetospheric dynamics, and magnetosphere-ionosphere coupling at the giant planets. These techniques, especially ENA and EUV imaging, rely on very recent and evolving technological capabilities. And because ENA and EUV techniques apply to optically thin media, interpretation of their measurements require sophisticated inversion procedures, which are still under development. We will discuss the directions new developments in imaging are taking, what technologies and mission scenarios might best take advantage of them, and how our understanding of the Earth's and other planets' plasma environments may benefit from such advancements.
A response surface methodology based damage identification technique
NASA Astrophysics Data System (ADS)
Fang, S. E.; Perera, R.
2009-06-01
Response surface methodology (RSM) is a combination of statistical and mathematical techniques used to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in applications such as design optimization, response prediction and model validation, but the literature on its application to structural damage identification (SDI) is scarce. This study therefore presents a systematic SDI procedure comprising four sequential steps: feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization, using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute for the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD provides quantitative estimation of the significance levels of the updating parameters, while the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
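The CCD-plus-quadratic-RS machinery can be sketched compactly: build the design points, evaluate the response at each (here a synthetic stand-in for an FE-computed modal frequency), and fit the second-order polynomial by least squares. All numbers are illustrative:

```python
import numpy as np
from itertools import product

# Face-centred central composite design for two updating parameters
# (coded units): factorial corners, axial points and the centre point.
corners = np.array(list(product([-1.0, 1.0], repeat=2)))
axial = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
X = np.vstack([corners, axial, [[0.0, 0.0]]])

# Stand-in "response": e.g. a modal frequency from an FE run at each point
y = (100.0 - 3.0 * X[:, 0] + 1.5 * X[:, 1]
     - 2.0 * X[:, 0] * X[:, 1] - 4.0 * X[:, 0] ** 2)

# Second-order polynomial RS model: [1, x1, x2, x1*x2, x1^2, x2^2]
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# In the updating loop, this polynomial now stands in for the FE model.
```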
NASA Astrophysics Data System (ADS)
Falconer, R.; Radoslow, P.; Grinev, D.; Otten, W.
2009-04-01
Fungi play a pivotal role in soil ecosystems, contributing to plant productivity. The underlying soil physical and biological processes responsible for community dynamics are interrelated and, at present, poorly understood. If these complex processes can be understood, this knowledge can be managed with the aim of providing more sustainable agriculture. Our understanding of microbial dynamics in soil has long been hampered by a lack of a theoretical framework and by difficulties in observation and quantification. We will demonstrate how the spatial and temporal dynamics of fungi in soil can be understood by linking mathematical modelling with novel techniques that visualise the complex structure of the soil. The combination of these techniques and mathematical models opens up new possibilities to understand how the physical structure of soil affects fungal colony dynamics and also how fungal dynamics affect soil structure. We quantify, using X-ray tomography, soil structure for a range of artificially prepared microcosms, characterising the soil structures using metrics such as porosity, fractal dimension, and the connectivity of the pore volume. Furthermore, we use the individual-based fungal colony growth model of Falconer et al. (2005), which is based on the physiological processes of fungi, to assess the effect of soil structure on microbial dynamics by quantifying biomass abundances and distributions. We demonstrate how soil structure can critically affect fungal species interactions, with consequences for biological control and fungal biodiversity.
Automatic determination of fault effects on aircraft functionality
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1989-01-01
The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done and present a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.
2013-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
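The kriging component can be illustrated with a short Gaussian-process regression over hypothetical historical amplitude data, combining interpolation near observations with extrapolation of a learned distance trend. The kernel and data are assumptions for illustration, not SIG-VISA's actual model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical historical data: log signal amplitude vs. source-station
# distance (km); the GP interpolates history and extrapolates the trend.
rng = np.random.default_rng(7)
dist = rng.uniform(200.0, 2000.0, 40)[:, None]
log_amp = 5.0 - 1.8 * np.log10(dist[:, 0]) + 0.1 * rng.standard_normal(40)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=300.0) + WhiteKernel(noise_level=0.01),
    normalize_y=True).fit(dist, log_amp)

# Prediction with uncertainty at an unobserved distance
mean, std = gp.predict(np.array([[1500.0]]), return_std=True)
```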
Experimental confirmation of a PDE-based approach to design of feedback controls
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Silcox, R. J.; Metcalf, Vern L.
1995-01-01
Issues regarding the experimental implementation of partial differential equation based controllers are discussed in this work. While the motivating application involves the reduction of vibration levels for a circular plate through excitation of surface-mounted piezoceramic patches, the general techniques described here will extend to a variety of applications. The initial step is the development of a PDE model which accurately captures the physics of the underlying process. This model is then discretized to yield a vector-valued initial value problem. Optimal control theory is used to determine continuous-time voltages to the patches, and the approximations needed to facilitate discrete time implementation are addressed. Finally, experimental results demonstrating the control of both transient and steady state vibrations through these techniques are presented.
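The workflow described, discretizing the PDE model into a vector-valued initial value problem xdot = Ax + Bu and applying optimal control theory, can be sketched with a continuous-time LQR solved via the algebraic Riccati equation. The two-state system below is an invented stand-in for the discretized plate model, not the paper's actual matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Stand-in for the discretized plate dynamics: one lightly damped mode.
A = np.array([[0.0, 1.0], [-100.0, -0.2]])
B = np.array([[0.0], [1.0]])                  # one piezoceramic patch input
Q = np.diag([100.0, 1.0])                     # penalize displacement, velocity
R = np.array([[0.01]])                        # penalize patch voltage

P = solve_continuous_are(A, B, Q, R)          # Riccati solution
K = np.linalg.inv(R) @ B.T @ P                # optimal feedback: u = -K x
```

Discrete-time implementation, as addressed in the paper, would replace this with the discrete Riccati equation and a sampled-data realization of the gains.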
Experimental analysis of computer system dependability
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar, K.; Tang, Dong
1993-01-01
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
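Importance sampling, mentioned above as an acceleration technique for Monte Carlo simulation, can be illustrated in a few lines: to estimate a rare failure probability, sample from a shifted proposal density and reweight by the likelihood ratio. The failure threshold and distributions are invented for illustration:

```python
import numpy as np

# Estimate the rare probability P(X > 6) for X ~ N(0, 1). Plain Monte Carlo
# almost never samples the failure region; importance sampling draws from a
# shifted density and reweights, drastically reducing the samples needed.
rng = np.random.default_rng(3)
n, shift = 100_000, 6.0

x = rng.standard_normal(n) + shift                 # proposal: N(shift, 1)
weights = np.exp(-shift * x + 0.5 * shift**2)      # N(0,1)/N(shift,1) ratio
p_hat = np.mean((x > 6.0) * weights)
# p_hat approximates the true value (about 9.9e-10) with far fewer samples
# than naive simulation would require.
```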
García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María
2015-01-01
The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, central segregation was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique, taking the most important physical-chemical parameters into account. The results of the present study are twofold. First, the significance of each physical-chemical variable for segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and when the MARS technique was applied to the experimental dataset, coefficients of determination of 0.93 for continuity factor estimation and 0.95 for average width were obtained. The agreement between experimental data and the model confirmed the good performance of the latter.
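The building block of MARS is the hinge function max(0, ±(x − knot)). The sketch below fits a pair of mirrored hinges at a fixed knot by least squares; a full MARS implementation additionally searches over knots and variables in forward/backward passes, and the data here are synthetic, not the casting data:

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Toy stand-in for one physical-chemical variable (e.g. casting speed)
# versus a segregation index; names and values are illustrative only.
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 2.0, 300)
y = 0.2 + 0.8 * np.maximum(0.0, x - 1.2) + 0.05 * rng.standard_normal(300)

# A fixed pair of mirrored hinges at one candidate knot
knot = 1.2
A = np.column_stack([np.ones_like(x),
                     hinge(x, knot, +1.0),
                     hinge(x, knot, -1.0)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovers intercept ~0.2 and slope ~0.8 above the knot
```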
Chinese research on shock physics. Studies in Chinese Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, N.H.
1992-07-01
Shock wave research encompasses many different disciplines. This monograph limits its scope to Chinese research on solids and is based on available open literature sources. For the purposes of this monograph, the papers are divided into seven groups: review and tutorial; equations of state; phase transitions; geological materials; modeling and simulations; experimental techniques; and mechanical properties. The largest group, experimental techniques, numbers 22 papers, or about 40% of the total sources.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physicochemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain such a circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra, and a two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
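Fitting an equivalent circuit to an EIS spectrum amounts to complex nonlinear least squares. The sketch below fits a generic Randles-type circuit with a constant-phase element, used purely to illustrate the fitting procedure, not as the circuit this study selected:

```python
import numpy as np
from scipy.optimize import least_squares

# Generic Randles-type circuit: ohmic resistance R0 in series with a
# parallel (Rct, CPE) branch -- a common PEMFC building block, here only
# an illustration, not the circuit chosen in the paper.
def z_model(p, w):
    r0, rct, q, n = p
    z_cpe = 1.0 / (q * (1j * w) ** n)           # constant-phase element
    return r0 + (rct * z_cpe) / (rct + z_cpe)

def residuals(p, w, z_meas):
    z = z_model(p, w)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

w = 2 * np.pi * np.logspace(-1, 4, 60)          # 0.1 Hz to 10 kHz
z_meas = z_model([0.01, 0.05, 2.0, 0.9], w)     # synthetic "measured" spectrum

fit = least_squares(residuals, x0=[0.02, 0.1, 1.0, 0.8], args=(w, z_meas),
                    bounds=([0, 0, 0, 0.5], [1, 1, 50, 1]))
print(fit.x)   # recovered (R0, Rct, Q, n)
```

Fitting all candidate circuits this way and comparing determination coefficients and parameter uncertainties mirrors the first selection step described above.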
Hybrid modeling of nitrate fate in large catchments using fuzzy-rules
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Haberlandt, Uwe
2010-05-01
Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal which are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which, compared to other machine-learning techniques, has the advantages of producing models (the fuzzy rules) that are physically interpretable to a certain extent and of allowing the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling for nitrate simulation in large catchments, in particular concerning decision support. Fuzzy-rule based modeling enables the generation of simple, efficient, easily understandable models with nevertheless satisfactory accuracy for problems of decision support. The chosen approach encompasses a hybrid metamodeling, which includes the generation of fuzzy rules with data originating from physically based models as well as a coupling with a physically based water balance model. The ecohydrological model SWAT is employed both to generate the needed training data and as the coupled water balance model. The conceptual model divides the nitrate pathway into three parts. The first fuzzy module calculates nitrate leaching with the percolating water from soil surface to groundwater, the second module simulates groundwater passage, and the final module replaces the in-stream processes. The aim of this modularization is to create flexibility for using each of the modules on its own, and for changing or completely replacing it. For fuzzy-rule based modeling this explicitly means that one of the modules can be re-trained with newly available data without difficulty, while the module assembly does not have to be modified. Apart from the concept of hybrid metamodeling, first results are presented for the fuzzy module for nitrate passage through the unsaturated zone.
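To make the fuzzy-rule idea concrete, here is a minimal Python sketch of a two-rule inference for nitrate leaching with triangular membership functions and weighted-average defuzzification. The variables, membership ranges, and rule consequents are hypothetical stand-ins, not the trained rule base of the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def leaching_rule_base(percolation, n_surplus):
    """Toy two-rule base (hypothetical): IF percolation HIGH AND surplus HIGH
    THEN leaching HIGH; IF percolation LOW THEN leaching LOW."""
    perc_high = tri(percolation, 50, 150, 250)     # mm, illustrative ranges
    perc_low = tri(percolation, -50, 0, 100)
    surplus_high = tri(n_surplus, 20, 60, 100)     # kg N/ha, illustrative

    # Rule firing strengths (min as the AND operator).
    w_high = min(perc_high, surplus_high)
    w_low = perc_low

    # Weighted-average (Sugeno-style) defuzzification with crisp consequents.
    out_high, out_low = 40.0, 5.0                  # kg N/ha leached
    total = w_high + w_low
    return (w_high * out_high + w_low * out_low) / total if total > 0 else 0.0

print(leaching_rule_base(percolation=180.0, n_surplus=70.0))
```

Because every rule reads as an IF-THEN statement over named quantities, an expert can inspect or pre-define rules directly, which is the interpretability advantage the abstract emphasizes.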
Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.
Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O
2012-06-07
Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol⁻¹ and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100x-1000x) and even faster on GPU (up to ~10⁵x). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.
NASA Astrophysics Data System (ADS)
Kavcar, Nevzat; Korkmaz, Cihan
2017-02-01
The purpose of this work is to determine physics teacher candidates' views on the content and general properties of the Physics 10 textbook prepared for the 2013 Secondary School Physics Curriculum. Twenty-three teacher candidates in the 2014-2015 school year constituted the sample of the study, in which a survey model based on qualitative research techniques was used together with document analysis. The data collection tools were forms of 51 and nine open-ended questions covering the subject content and the general properties of the textbook, respectively. It was concluded that the textbook was sufficient in being life-context based, in its language, in its activity-based and student-centered approach, and in developing social and inquiry skills, but insufficient in addressing the learning outcomes of the curriculum and in providing activities, projects and homework for application. Activities and applications addressing the affective domain, and such assessment and evaluation tools as concept maps, concept networks and semantic analysis tables, could be included in the textbook.
How well do we know the incoming solar infrared radiation?
NASA Astrophysics Data System (ADS)
Elsey, Jonathan; Coleman, Marc; Gardiner, Tom; Shine, Keith
2017-04-01
The solar spectral irradiance (SSI) has been identified as a key climate variable by the Global Climate Observing System (Bojinski et al. 2014, Bull. Amer. Meteor. Soc.). It is of importance in the modelling of atmospheric radiative transfer, and the quantification of the global energy budget. However, in the near-infrared spectral region (between 2000 and 10000 cm⁻¹) there exists a discrepancy of 7% between spectra measured from the space-based SOLSPEC instrument (Thuillier et al. 2015, Solar Physics) and those from a ground-based Langley technique (Bolsée et al. 2014, Solar Physics). This same difference is also present between different analyses of the SOLSPEC data. This work aims to reconcile some of these differences by presenting an estimate of the near-infrared SSI obtained from ground-based measurements taken using an absolutely calibrated Fourier transform spectrometer. Spectra are obtained both using the Langley technique and by direct comparison with a radiative transfer model, with appropriate handling of both aerosol scattering and molecular continuum absorption. Particular focus is dedicated to the quantification of uncertainty in these spectra, from both the inherent uncertainty in the measurement setup and that from the use of the radiative transfer code and its inputs.
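The Langley technique itself reduces to a one-line regression: under a stable atmosphere, Beer's law gives ln V = ln V0 - tau*m, where V is the measured signal, m the solar airmass and V0 the extraterrestrial signal, so extrapolating a morning's measurements to m = 0 recovers V0. A self-contained Python illustration with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
tau_true, v0_true = 0.12, 1.85          # illustrative optical depth and signal
airmass = np.linspace(1.5, 6.0, 40)     # one morning's sequence of airmasses
v = v0_true * np.exp(-tau_true * airmass) * (1 + 0.005 * rng.standard_normal(40))

# Linear fit of ln(V) against airmass; the intercept at m = 0 gives ln(V0).
slope, intercept = np.polyfit(airmass, np.log(v), 1)
print(f"tau = {-slope:.4f}  (true {tau_true})")
print(f"V0  = {np.exp(intercept):.4f}  (true {v0_true})")
```

The method's key vulnerability, visible in the sketch, is the assumption that tau stays constant over the measurement sequence, which is one reason ground- and space-based estimates can disagree.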
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al
2017-04-01
Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin, multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environnementale-Surface et Hydrologie (MESH) modelling system across various hydroclimatic conditions in Canada, including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to those of cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through utilizing the Earth Satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF); a state-of-the-art Weather Research and Forecasting (WRF) model and a cloud-resolving model (Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth Satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
A high-frequency warm shallow water acoustic communications channel model and measurements.
Chitre, Mandar
2007-11-01
Underwater acoustic communication is a core enabling technology with applications in ocean monitoring using remote sensors and autonomous underwater vehicles. One of the more challenging underwater acoustic communication channels is the medium-range very shallow warm-water channel, common in tropical coastal regions. This channel exhibits two key features, extensive time-varying multipath and high levels of non-Gaussian ambient noise due to snapping shrimp, both of which limit the performance of traditional communication techniques. A good understanding of the communications channel is key to the design of communication systems. It aids in the development of signal processing techniques as well as in the testing of the techniques via simulation. In this article, a physics-based channel model is developed for the very shallow warm-water acoustic channel at the high frequencies of interest to medium-range communication system developers. The model is based on ray acoustics and includes time-varying statistical effects as well as the non-Gaussian ambient noise statistics observed during channel studies. The model is calibrated and its accuracy validated using measurements made at sea.
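The two channel features named above can each be mimicked in a few lines of Python: the multipath as a sparse tap-delay line, and the snapping-shrimp noise as heavy-tailed symmetric alpha-stable samples (a common statistical model for such noise; scipy's levy_stable is used here). All delays, gains and distribution parameters below are invented for illustration and are not the calibrated values of the paper.

```python
import numpy as np
from scipy.stats import levy_stable

# Sparse time-varying multipath: a few delayed, attenuated ray arrivals.
fs = 40_000                                    # Hz, illustrative sampling rate
delays_s = np.array([0.0, 1.3e-3, 2.9e-3])     # hypothetical ray delays
gains = np.array([1.0, 0.5, 0.3])
h = np.zeros(int(fs * 4e-3))
h[(delays_s * fs).astype(int)] = gains

# Transmit a short probe tone and pass it through the channel.
tx = np.sin(2 * np.pi * 5000 * np.arange(0, 2e-3, 1 / fs))
rx = np.convolve(tx, h)

# Snapping-shrimp ambient noise is impulsive and heavy-tailed; a symmetric
# alpha-stable distribution (alpha < 2) is a standard statistical model.
noise = levy_stable.rvs(alpha=1.6, beta=0.0, scale=0.02, size=rx.size,
                        random_state=12)
rx_noisy = rx + noise
print(rx_noisy[:5])
```

Because alpha-stable noise has infinite variance, receivers designed under Gaussian assumptions degrade sharply, which is why the noise statistics matter as much as the multipath in this channel.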
A Multi-Level Model of Moral Thinking Based on Neuroscience and Moral Psychology
ERIC Educational Resources Information Center
Jeong, Changwoo; Han, Hye Min
2011-01-01
Developments in neurobiology are providing new insights into the biological and physical features of human thinking, and brain-activation imaging methods such as functional magnetic resonance imaging have become the most dominant research techniques to approach the biological part of thinking. With the aid of neurobiology, there also have been…
Cascade process modeling with mechanism-based hierarchical neural networks.
Cong, Qiumei; Yu, Wen; Chai, Tianyou
2010-02-01
Cascade processes, such as wastewater treatment plants, include many nonlinear sub-systems and many variables. When the number of sub-systems is large, the input-output relation between the first block and the last block cannot represent the whole process. In this paper we use two techniques to overcome this problem. First we propose a new neural model, hierarchical neural networks, to identify the cascade process; then we use a serial structural mechanism model based on the physical equations to connect with the neural model. A stable learning algorithm and theoretical analysis are given. Finally, this method is used to model a wastewater treatment plant. Real operational data from the wastewater treatment plant are applied to illustrate the modeling approach.
Lim, Yi-Je; Deo, Dhanannjay; Singh, Tejinder P; Jones, Daniel B; De, Suvranu
2009-06-01
Development of a laparoscopic surgery simulator that delivers high-fidelity visual and haptic (force) feedback, based on the physical models of soft tissues, requires the use of empirical data on the mechanical behavior of intra-abdominal organs under the action of external forces. As experiments on live human patients present significant risks, the use of cadavers presents an alternative. We present techniques of measuring and modeling the mechanical response of human cadaveric tissue for the purpose of developing a realistic model. The major contribution of this paper is the development of physics-based models of soft tissues that range from linear elastic models to nonlinear viscoelastic models which are efficient for application within the framework of a real-time surgery simulator. To investigate the in situ mechanical, static, and dynamic properties of intra-abdominal organs, we have developed a high-precision instrument by retrofitting a robotic device from Sensable Technologies (position resolution of 0.03 mm) with a six-axis Nano 17 force-torque sensor from ATI Industrial Automation (force resolution of 1/1,280 N along each axis), and used it to apply precise displacement stimuli and record the force response of the liver and stomach of ten fresh human cadavers. The mean elastic modulus of liver and stomach is estimated as 5.9359 kPa and 1.9119 kPa, respectively, over the range of indentation depths tested. We have also obtained the parameters of a quasilinear viscoelastic (QLV) model to represent the nonlinear viscoelastic behavior of the cadaver stomach and liver over a range of indentation depths and speeds. The models are found to have an excellent goodness of fit (with R² > 0.99). The data and models presented in this paper, together with additional ones based on the same principles, would result in realistic physics-based surgical simulators.
Acoustic classification of zooplankton
NASA Astrophysics Data System (ADS)
Martin Traykovski, Linda V.
1998-11-01
Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well understood. This thesis describes the development of both feature-based and model-based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature-based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model-based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model-based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature-based and model-based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys.
NASA Technical Reports Server (NTRS)
Greenwood, Eric, II; Schmitz, Fredric H.
2010-01-01
A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for a range of operating conditions based on a small number of measurements taken at different operating conditions.
Photogrammetric techniques for aerospace applications
NASA Astrophysics Data System (ADS)
Liu, Tianshu; Burner, Alpheus W.; Jones, Thomas W.; Barrows, Danny A.
2012-10-01
Photogrammetric techniques have been used for measuring the important physical quantities in both ground and flight testing including aeroelastic deformation, attitude, position, shape and dynamics of objects such as wind tunnel models, flight vehicles, rotating blades and large space structures. The distinct advantage of photogrammetric measurement is that it is a non-contact, global measurement technique. Although the general principles of photogrammetry are well known particularly in topographic and aerial survey, photogrammetric techniques require special adaptation for aerospace applications. This review provides a comprehensive and systematic summary of photogrammetric techniques for aerospace applications based on diverse sources. It is useful mainly for aerospace engineers who want to use photogrammetric techniques, but it also gives a general introduction for photogrammetrists and computer vision scientists to new applications.
NASA Astrophysics Data System (ADS)
Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.
2015-12-01
Most physically-based hydrological models simulate to various extents the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method) as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check, beyond streamflows, whether different modeling approaches could simulate the other processes at the watershed scale in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while equally representing other processes as well, such as evapotranspiration, snow accumulation/melt or infiltration, etc. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model. Potentially, these differences should be the object of further analyses to definitively confirm or reject modeling hypotheses.
NASA Technical Reports Server (NTRS)
Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.
2015-01-01
An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10 GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top-of-the-atmosphere Tbs show correlations to observations of 0.9, biases of 1 K or less, root-mean-square errors on the order of 5 K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.
Simultaneous computation of jet turbulence and noise
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.
1989-01-01
The existing flow computation methods, wave computation techniques, and theories based on noise source models are reviewed in order to assess the capabilities of numerical techniques to compute jet turbulence noise and understand the physical mechanisms governing it over a range of subsonic and supersonic nozzle exit conditions. In particular, attention is given to (1) methods for extrapolating near field information, obtained from flow computations, to the acoustic far field and (2) the numerical solution of the time-dependent Lilley equation.
Microseismic techniques for avoiding induced seismicity during fluid injection
Matzel, Eric; White, Joshua; Templeton, Dennise; ...
2014-01-01
The goal of this research is to develop a fundamentally better approach to geological site characterization and early hazard detection. We combine innovative techniques for analyzing microseismic data with a physics-based inversion model to forecast microseismic cloud evolution. The key challenge is that faults at risk of slipping are often too small to detect during the site characterization phase. Our objective is to devise fast-running methodologies that will allow field operators to respond quickly to changing subsurface conditions.
Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach
NASA Astrophysics Data System (ADS)
Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.
2017-12-01
One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is not only important but also challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations including pressure. However, the resulting accuracy of these methods is limited because of the indirect information they provide, which requires expert interpretation and therefore yields inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates based on a limited number of pressure observations. Compared to conventional data-driven approaches, which can usually be seen as a "black box" procedure, we develop a physics-guided machine learning method to incorporate the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply our method to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
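As a baseline for the regression component, the sketch below trains a plain support vector regression to map pressure anomalies at a few monitoring wells to a leakage rate. Everything here is synthetic and the physics-guided constraints of the paper are not reproduced, so treat this only as the data-driven skeleton of the approach.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: rows are pressure anomalies at a handful of
# monitoring wells (the limited observations), target is the leakage rate.
rng = np.random.default_rng(5)
n_runs, n_wells = 500, 6
leak_rate = rng.uniform(0, 10, n_runs)            # kt CO2 / yr, illustrative
sensitivity = rng.uniform(0.5, 1.5, n_wells)      # per-well pressure response
pressure = np.outer(leak_rate, sensitivity) + rng.normal(0, 0.3, (n_runs, n_wells))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(pressure[:400], leak_rate[:400])
pred = model.predict(pressure[400:])
print("RMSE:", np.sqrt(np.mean((pred - leak_rate[400:]) ** 2)))
```

In the physics-guided version described by the paper, the training data and constraints come from the governing flow physics rather than from an arbitrary synthetic generator like the one above.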
A Mode-Shape-Based Fault Detection Methodology for Cantilever Beams
NASA Technical Reports Server (NTRS)
Tejada, Arturo
2009-01-01
An important goal of NASA's Integrated Vehicle Health Management (IVHM) program is to develop and verify methods and technologies for fault detection in critical airframe structures. A particularly promising new technology under development at NASA Langley Research Center is distributed Bragg fiber optic strain sensors. These sensors can be embedded in, for instance, aircraft wings to continuously monitor surface strain during flight. Strain information can then be used in conjunction with well-known vibrational techniques to detect faults due to changes in the wing's physical parameters or to the presence of incipient cracks. To verify the benefits of this technology, the Formal Methods Group at NASA LaRC has proposed the use of formal verification tools such as PVS. The verification process, however, requires knowledge of the physics and mathematics of the vibrational techniques and a clear understanding of the particular fault detection methodology. This report presents a succinct review of the physical principles behind the modeling of vibrating structures such as cantilever beams (the natural model of a wing). It also reviews two different classes of fault detection techniques and proposes a particular detection method for cracks in wings, which is amenable to formal verification. A prototype implementation of these methods using Matlab scripts is also described and is related to the fundamental theoretical concepts.
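The vibrational techniques referenced here rest on closed-form Euler-Bernoulli beam results. The report's prototype is in Matlab; purely as an illustration, the Python sketch below computes the first cantilever natural frequencies and shows how a stiffness loss (a crude stand-in for a crack) shifts them. All beam dimensions are invented.

```python
import numpy as np

def cantilever_frequencies(E, I, rho, A, L, n_modes=3):
    """Natural frequencies (Hz) of an Euler-Bernoulli cantilever beam:
    f_n = (lambda_n^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)),
    with lambda_n the roots of cos(l)*cosh(l) = -1."""
    lam = np.array([1.8751, 4.6941, 7.8548, 10.9955])[:n_modes]
    return (lam ** 2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L ** 4))

# Illustrative aluminium strip standing in for a wing-like beam.
E, rho = 70e9, 2700.0            # Pa, kg/m^3
b, h, L = 0.05, 0.005, 1.0       # width, thickness, length in m
A, I = b * h, b * h ** 3 / 12

f_healthy = cantilever_frequencies(E, I, rho, A, L)
# A crack reduces local stiffness; as a crude proxy, drop E by 5% and compare
# mode frequencies -- shifts like these are what vibrational detection tracks.
f_damaged = cantilever_frequencies(0.95 * E, I, rho, A, L)
print(f_healthy)
print(f_damaged)
```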
Distillation Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2010-01-01
Gravity-based distillation methods may be applied to the purification of wastewater on the lunar base. These solutions to water processing are robust physical separation techniques, which may be more advantageous than many other techniques for their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation. While the individual components for each of the waste streams will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in humidity condensate and urine wastewater mixed streams.
NASA Astrophysics Data System (ADS)
Greenwald, Jared
Any good physical theory must resolve current experimental data as well as offer predictions for potential searches in the future. The Standard Model of particle physics, Grand Unified Theories, Minimal Supersymmetric Models and Supergravity are all attempts to provide such a framework. However, they all lack the ability to predict many of the parameters that each of the theories utilize. String theory may yield a solution to this naturalness (or self-predictiveness) problem as well as offer a unified theory of gravity. Studies in particle physics phenomenology based on perturbative low energy analysis of various string theories can help determine the candidacy of such models. After a review of principles and problems leading up to our current understanding of the universe, we will discuss some of the best particle physics model building techniques that have been developed using string theory. This will culminate in the introduction of a novel approach to a computational, systematic analysis of the various physical phenomena that arise from these string models. We focus on the necessary assumptions, complexity and open questions that arise while making a fully-automated flat direction analysis program.
NASA Astrophysics Data System (ADS)
Bulovich, S. V.; Smirnov, E. M.
2018-05-01
The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of the governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient has been suggested that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approach to an asymptotic time-dependent solution, without non-physical oscillations.
Dictionary-based image reconstruction for superresolution in integrated circuit imaging.
Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim
2015-06-01
Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
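Sparse reconstruction over an overcomplete dictionary coupled to a forward model is typically solved with greedy or convex sparse solvers. As an illustrative sketch, the snippet below uses scikit-learn's orthogonal matching pursuit on a random toy forward operator; the operator, the dictionary (the identity, for brevity) and the problem sizes are stand-ins, not the confocal microscopy model of the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(9)

# y = Phi @ x: Phi plays the role of the physics-based forward model (here a
# random operator, purely illustrative) and x is sparse in an overcomplete
# dictionary, taken here as the identity for brevity.
n_meas, n_atoms, sparsity = 64, 256, 5
Phi = rng.standard_normal((n_meas, n_atoms)) / np.sqrt(n_meas)
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = Phi @ x_true + 0.01 * rng.standard_normal(n_meas)

# Greedy sparse recovery: pick atoms one at a time by residual correlation.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
omp.fit(Phi, y)
print("support recovered:", np.flatnonzero(omp.coef_))
print("true support     :", np.flatnonzero(x_true))
```

The superresolution effect in the paper comes from the same principle: a sparse representation lets the solver localize features below the classical resolution limit of the measurement operator.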
ROENTGEN: case-based reasoning and radiation therapy planning.
Berger, J.
1992-01-01
ROENTGEN is a design assistant for radiation therapy planning which uses case-based reasoning, an artificial intelligence technique. It learns both from specific problem-solving experiences and from direct instruction from the user. The first sort of learning is the normal case-based method of storing problem solutions so that they can be reused. The second sort is necessary because ROENTGEN does not, initially, have an internal model of the physics of its problem domain. This dependence on explicit user instruction brings to the forefront representational questions regarding indexing, failure definition, failure explanation and repair. This paper presents the techniques used by ROENTGEN in its knowledge acquisition and design activities. PMID:1482869
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features such as precipitation that are suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun
2018-02-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance. However, the utility of TWA, typical, and peak techniques for job design and intervention is dubious.
Geidl, Wolfgang; Semrau, Jana; Pfeifer, Klaus
2014-01-01
The purpose of this perspective is (1) to incorporate recent psychological health behaviour change (HBC) theories into exercise therapeutic programmes, and (2) to introduce the International Classification of Functioning (ICF)-based concept of a behavioural exercise therapy (BET). Relevant personal modifiable factors of physical activity (PA) were identified based on three recent psychological HBC theories. Following the principles of intervention mapping, a matrix of proximal programme objectives specifies desirable parameter values for each personal factor. As a result of analysing reviews on behavioural techniques and intervention programmes of the German rehabilitation setting, we identified exercise-related techniques that impact the personal determinants. Finally, the techniques were integrated into an ICF-based BET concept. Individuals' attitudes, skills, emotions, beliefs and knowledge are important personal factors of PA behaviour. BET systematically addresses these personal factors by a systematic combination of adequate exercise contents with related behavioural techniques. The presented 28 intervention techniques serve as a theory-driven "tool box" for designing complex BET programmes to promote PA. The current paper highlights the usefulness of theory-based integrative research in the field of exercise therapy, offers explicit methods and contents for physical therapists to promote PA behaviour, and introduces the ICF-based conceptual idea of a BET. Implications for Rehabilitation Irrespective of the clients' indication, therapeutic exercise programmes should incorporate effective, theory-based approaches to promote physical activity. Central determinants of physical activity behaviour are a number of personal factors: individuals' attitudes, skills, emotions, beliefs and knowledge. Clinicians implementing exercise therapy should set it within a wider theoretical framework including the personal factors that influence physical activity. To increase exercise-adherence and promote long-term physical activity behaviour change, the concept of a behavioural exercise therapy (BET) offers a theory-based approach to systematically address relevant personal factors with a combination of adequate contents of exercise with exercise-related techniques of behaviour change.
Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model
NASA Astrophysics Data System (ADS)
Arumugam, S.; Libera, D.
2017-12-01
Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job in preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
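The advantage of a multivariate correction is that streamflow and loads are adjusted jointly, preserving their cross-correlation. A minimal Python sketch of that idea using scikit-learn's CCA is below; the two-variable data, the bias factors and the split are all synthetic, not the SWAT/LOADEST data of the study.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)

# Columns: [streamflow, TN load]; rows: months. The "simulated" set carries a
# multiplicative bias plus noise relative to the "observed" set (synthetic).
n = 240
obs = np.column_stack([rng.gamma(2.0, 50.0, n), rng.gamma(1.5, 5.0, n)])
sim = obs * [1.3, 0.7] + rng.normal(0, [5.0, 0.5], (n, 2))

# Fit CCA on a calibration split, then map raw simulations to the observed
# space on the validation split, correcting both variables together.
cca = CCA(n_components=2)
cca.fit(sim[:180], obs[:180])
corrected = cca.predict(sim[180:])

print("raw bias      :", (sim[180:] - obs[180:]).mean(axis=0))
print("corrected bias:", (corrected - obs[180:]).mean(axis=0))
```

A univariate ordinary least squares correction applied to each column separately would shrink the individual biases too, but nothing in it constrains the corrected flow-load relationship, which is the deficiency the abstract points to.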
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Abdeljawad, Fadi; Owen, Steven J.
2016-04-25
Here, the mechanical properties of materials systems are highly influenced by various features at the microstructural level. The ability to capture these heterogeneities and incorporate them into continuum-scale frameworks of the deformation behavior is considered a key step in the development of complex non-local models of failure. In this study, we present a modeling framework that incorporates physically-based realizations of polycrystalline aggregates from a phase field (PF) model into a crystal plasticity finite element (CP-FE) framework. Simulated annealing via the PF model yields ensembles of materials microstructures with various grain sizes and shapes. With the aid of a novel FE meshing technique, FE discretizations of these microstructures are generated, where several key features, such as conformity to interfaces, and triple junction angles, are preserved. The discretizations are then used in the CP-FE framework to simulate the mechanical response of polycrystalline α-iron. It is shown that the conformal discretization across interfaces reduces artificial stress localization commonly observed in non-conformal FE discretizations. The work presented herein is a first step towards incorporating physically-based microstructures in lieu of the overly simplified representations that are commonly used. In broader terms, the proposed framework provides future avenues to explore bridging models of materials processes, e.g. additive manufacturing and microstructure evolution of multi-phase multi-component systems, into continuum-scale frameworks of the mechanical properties.
A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows
NASA Astrophysics Data System (ADS)
Meldi, M.; Poux, A.
2017-10-01
A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified as an increase of 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further, using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
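The forecast/analysis cycle underlying such an estimator is easiest to see on a scalar toy problem. In the Python sketch below (all dynamics and covariances invented, far simpler than a CFD state), an imperfect model is nudged toward sparse noisy observations through the Kalman gain; Q and R play the same roles as the model and observation covariances discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Scalar stand-in for one flow degree of freedom: the "truth" decays with
# rate a_true, the numerical model uses a slightly wrong rate a_model.
a_true, a_model = 0.97, 0.90
q, r = 1e-4, 1e-2              # model covariance Q and observation covariance R
x_true, x_est, p = 1.0, 1.0, 0.1

for step in range(1, 101):
    x_true = a_true * x_true
    # Forecast with the (imperfect) numerical model.
    x_est = a_model * x_est
    p = a_model * p * a_model + q
    if step % 10 == 0:                   # observations available only sparsely
        z = x_true + rng.normal(0, np.sqrt(r))
        k = p / (p + r)                  # Kalman gain
        x_est = x_est + k * (z - x_est)  # analysis: the augmented state
        p = (1 - k) * p

print(f"truth {x_true:.4f}  estimate {x_est:.4f}")
```

In the full CFD setting the state is the discrete velocity field, so the covariance matrices are enormous; that is precisely why the paper's model reduction strategies are needed.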
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
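The second- versus fourth-order trade-off can be demonstrated in a few lines. The sketch below compares the standard three-point and five-point central-difference approximations of a second derivative on a smooth test function (a generic illustration, not the electromagnet solver itself): the fourth-order error falls off much faster as the grid spacing shrinks.

```python
import numpy as np

def d2_second_order(f, x, h):
    """Three-point central difference, O(h^2) error."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_fourth_order(f, x, h):
    """Five-point central difference, O(h^4) error."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

f, x0 = np.sin, 0.7
exact = -np.sin(x0)
for h in (1e-1, 1e-2, 1e-3):
    e2 = abs(d2_second_order(f, x0, h) - exact)
    e4 = abs(d2_fourth_order(f, x0, h) - exact)
    print(f"h={h:.0e}  2nd-order err={e2:.2e}  4th-order err={e4:.2e}")
```

The paper's finding that the second-order scheme with a nonuniform mesh wins for complex geometries reflects the flip side of this demo: high-order stencils need smoothness and wide, regular supports that complicated configurations may not provide.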
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer neural network structure optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of the networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
Materials used to simulate physical properties of human skin.
Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M
2016-02-01
For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
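A minimal sketch of the kD-tree idea, assuming scipy is available: stored single-model MCMC samples are indexed in a tree, and a jump proposal is drawn from a Gaussian matched to the local neighbourhood of a randomly chosen stored sample. This is a crude stand-in for the adaptive density interpolation described in the paper, with an invented 2-D posterior.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)

# Stored single-model MCMC samples (hypothetical correlated 2-D posterior).
samples = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 5000)
tree = cKDTree(samples)

def kd_proposal(k=20):
    """Propose an inter-model jump near a stored posterior sample: pick a
    stored point at random, find its k nearest neighbours in the tree, and
    draw from a Gaussian matched to that local neighbourhood."""
    center = samples[rng.integers(len(samples))]
    _, idx = tree.query(center, k=k)
    neigh = samples[idx]
    return rng.multivariate_normal(neigh.mean(axis=0), np.cov(neigh.T))

print(kd_proposal())
```

Because the proposal tracks the target model's own posterior rather than a fixed wide distribution, inter-model jumps land in high-probability regions far more often, which is the convergence gain the abstract reports.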
Vibroacoustic optimization using a statistical energy analysis model
NASA Astrophysics Data System (ADS)
Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. Using a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied on a typical aeronautical structure: the cabin of a helicopter.
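The SEA energy balance behind this is a small linear system: at a band center frequency, the injected powers relate to the subsystem energies through the ILFs and CLFs. A two-subsystem Python sketch with invented loss factors (illustrative only, not the helicopter cabin model):

```python
import numpy as np

# Two-subsystem SEA power balance at angular frequency w:
#   P1 = w * ((eta1 + eta12) * E1 - eta21 * E2)
#   P2 = w * (-eta12 * E1 + (eta2 + eta21) * E2)
# eta1, eta2 are internal loss factors (ILFs); eta12, eta21 are coupling
# loss factors (CLFs) -- the knobs the optimization procedure acts on.
w = 2 * np.pi * 1000.0                  # 1 kHz band center
eta1, eta2 = 0.01, 0.02                 # ILFs (hypothetical values)
eta12, eta21 = 0.003, 0.001             # CLFs (hypothetical values)
P = np.array([1.0, 0.0])                # watts injected into subsystem 1 only

loss_matrix = w * np.array([[eta1 + eta12, -eta21],
                            [-eta12, eta2 + eta21]])
E = np.linalg.solve(loss_matrix, P)
print("subsystem energies [J]:", E)
```

Re-solving this system while perturbing each CLF is exactly the kind of sensitivity analysis the paper uses to pick which coupling loss factors are worth optimizing.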
NASA Astrophysics Data System (ADS)
Hayat, T.; Ullah, Siraj; Khan, M. Ijaz; Alsaedi, A.; Zaigham Zia, Q. M.
2018-03-01
Here, modeling and computations are presented to introduce the novel concept of Darcy-Forchheimer three-dimensional flow of water-based carbon nanotubes with nonlinear thermal radiation and heat generation/absorption. A bidirectional stretching surface induces the flow. Darcy's law is replaced by the Forchheimer relation. The Xue model is implemented for the nanoliquid transport mechanism. A nonlinear formulation based upon the conservation laws of mass, momentum and energy is first modeled and then solved by the optimal homotopy analysis technique. Optimal estimations of the auxiliary variables are obtained. The importance of influential variables on the velocity and thermal fields is interpreted graphically. Moreover, velocity and temperature gradients are discussed and analyzed. Physical interpretation of influential variables is examined.
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-08-08
The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.
Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun
2017-12-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, the time-weighted-average (TWA), Peak, and Typical exposure techniques for quantifying physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole-job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and the magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28), with 15.5% of disagreements occurring between the low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61% to 93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, the TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting the prevalence of carpal tunnel syndrome.
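The chance-corrected agreement statistic used here is Cohen's kappa, which is straightforward to reproduce; a sketch with invented classifications (not the study's data):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical low/medium/high task classifications from two models.
    si_class  = ["high", "low", "medium", "high", "low", "medium", "high", "low"]
    tlv_class = ["medium", "low", "medium", "high", "medium", "low", "high", "low"]

    agreement = sum(a == b for a, b in zip(si_class, tlv_class)) / len(si_class)
    kappa = cohen_kappa_score(si_class, tlv_class)
    print(f"raw agreement = {agreement:.2f}, kappa = {kappa:.2f}")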
Embree, William N.; Wiltshire, Denise A.
1978-01-01
Abstracts of 177 selected publications on water movement in estuaries, particularly the Hudson River estuary, are compiled for reference in Hudson River studies. Subjects represented are the hydraulic, chemical, and physical characteristics of estuarine waters, estuarine modeling techniques, and methods of water-data collection and analysis. Summaries are presented in five categories: Hudson River estuary studies; hydrodynamic-model studies; water-quality-model studies; reports on data-collection equipment and methods; and bibliographies, literature reviews, conference proceedings, and textbooks. An author index is included. Omitted are most works published before 1965, environmental-impact statements, theses and dissertations, policy or planning reports, regional or economic reports, ocean studies, studies based on physical models, and foreign studies. (Woodard-USGS)
Huang, Hui-Chun; Shanklin, Carol W
2008-05-01
The United States is experiencing remarkable growth in the elderly population, which provides both opportunities and challenges for assisted-living facilities. The objective of this study was to explore how service management influences residents' actual food consumption in assisted-living facilities. Physical factors influencing residents' service evaluation and food consumption also were investigated. A total of 394 questionnaires were distributed to assisted-living residents in seven randomly selected facilities. The questionnaire was developed based on an in-depth literature review and a pilot study. Residents' perceived quality evaluations, satisfaction, and physical constraints were measured. Residents' actual food consumption was measured using a plate waste technique. A total of 118 residents in five facilities completed both questionnaires and food consumption assessments. Descriptive, multivariate analyses and structural equation modeling techniques were employed. Service management, including food and service quality and customer satisfaction, was found to significantly influence residents' food consumption. Physical constraints associated with aging, including a decline in health status, chewing problems, sensory loss, and functional disability, also significantly influenced residents' food consumption. A significant relationship was found between physical constraints and customer satisfaction. Foodservice that provides good food and service quality increases customer satisfaction and affects residents' actual food consumption. Physical constraints also influence residents' food consumption directly, or indirectly through satisfaction. The findings suggest that food and nutrition professionals in assisted-living facilities should consider the physical profiles of their residents to enhance residents' satisfaction and nutrient intake. Recommendations for exploring residents' perspectives are discussed.
NASA Astrophysics Data System (ADS)
England, John F.; Julien, Pierre Y.; Velleux, Mark L.
2014-03-01
Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (≤10^-4, i.e., return periods >10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km² Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.
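The stochastic storm transposition step can be sketched generically: an observed storm footprint is relocated at random within a transposition domain and the basin-average rainfall is recomputed to build a frequency distribution. A toy Python version (all geometry and depths are invented, not from the Arkansas River study):

    import numpy as np

    rng = np.random.default_rng(1)
    basin_x, basin_y, basin_r = 50.0, 50.0, 20.0   # basin centre and radius, km
    storm_depth, storm_r = 200.0, 30.0             # peak depth (mm), storm scale (km)

    def basin_average_depth(cx, cy, n=1000):
        # Monte Carlo average of a Gaussian storm footprint over the basin.
        theta = rng.uniform(0, 2 * np.pi, n)
        rad = basin_r * np.sqrt(rng.uniform(0, 1, n))
        px = basin_x + rad * np.cos(theta)
        py = basin_y + rad * np.sin(theta)
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        return storm_depth * np.exp(-d2 / (2 * storm_r ** 2)).mean()

    # Transpose the storm centre uniformly over a 200 km x 200 km domain.
    centres = rng.uniform(0, 200, size=(2000, 2))
    depths = np.array([basin_average_depth(cx, cy) for cx, cy in centres])
    print("P(basin-average depth > 100 mm | storm) =", (depths > 100).mean())

In the full framework each transposed storm would drive the physically-based runoff model rather than a closed-form footprint.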
2013-01-01
Objective Several physical activity interventions have been effective in improving the health outcomes of breast cancer survivors. However, few interventions have provided detailed descriptions of how such interventions work. To develop evidence-based practice in this field, detailed descriptions of intervention development and delivery are needed. This paper aims to (1) describe the theory- and evidence-based development of the Move More for Life program, a physical activity program for breast cancer survivors; and (2) serve as an exemplar for theory-based applied research. Method The program-planning model outlined by Kreuter and colleagues was used to develop the computer-tailored intervention. Results The tailoring guide developed by Kreuter and colleagues served as a useful program-planning tool for integrating theory and evidence-based best practice into intervention strategies. Overall, participants rated the intervention positively, with the majority reporting that the tailored materials caught their attention, were personally relevant to them, and were useful for helping them to change their behaviour. However, there was considerable room for improvement. Conclusion The Move More for Life program is an example of a theory-based, low-cost and potentially sustainable strategy for physical activity promotion and may stand as an exemplar for Social Cognitive Theory-based applied research. By providing a detailed description of the development of the Move More for Life program, a critical evaluation of the working mechanisms of the intervention is possible, which will guide researchers in the replication or adaptation and re-application of the specified techniques. This has potential implications for researchers examining physical activity promotion among cancer survivors and for researchers exploring distance-based physical activity promotion techniques among other populations. Trial registration: Australian New Zealand Clinical Trials Registry (ANZCTR) identifier: ACTRN12611001061921. PMID:24192320
Supersonic reacting internal flowfields
NASA Astrophysics Data System (ADS)
Drummond, J. P.
The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high-speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.
Supersonic reacting internal flow fields
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
1989-01-01
The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high-speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.
Dwivedi, Dipankar; Mohanty, Binayak P.; Lesikar, Bruce J.
2013-01-01
Microbes have been identified as a major contaminant of water resources. Escherichia coli (E. coli) is a commonly used indicator organism. It is well recognized that the fate of E. coli in surface water systems is governed by multiple physical, chemical, and biological factors. The aim of this work is to provide insight into the physical, chemical, and biological factors, along with their interactions, that are critical in the estimation of E. coli loads in surface streams. There are various models to predict E. coli loads in streams, but they tend to be system or site specific or overly complex without enhancing our understanding of these factors. Hence, a Bayesian Neural Network (BNN) approach is presented for estimating E. coli loads from physical, chemical, and biological factors in streams, based on available data. The BNN has the dual advantage of overcoming the absence of quality data (with regard to consistency in the data) and of determining mechanistic model parameters by employing a probabilistic framework. This study evaluates whether the BNN model can be an effective alternative tool to mechanistic models for E. coli load estimation in streams. For this purpose, a comparison with a traditional model (LOADEST, USGS) is conducted. The models are compared for estimated E. coli loads based on available water quality data in Plum Creek, Texas. All the model efficiency measures suggest that overall E. coli load estimations by the BNN model are better than those by the LOADEST model on all three occasions (three-fold cross validation). Thirteen factors were used for estimating E. coli loads with the exhaustive feature selection technique, which indicated that six of the thirteen factors are important. Physical factors include temperature and dissolved oxygen; chemical factors include phosphate and ammonia; biological factors include suspended solids and chlorophyll. The results highlight that the LOADEST model estimates E. coli loads better in the smaller ranges, whereas the BNN model estimates E. coli loads better in the higher ranges. Hence, the BNN model can be used to design targeted monitoring programs and to implement regulatory standards through TMDL programs. PMID:24511166
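As a rough illustration of uncertainty-aware load estimation (an ensemble proxy for, not a reimplementation of, the authors' BNN; the predictors and coefficients are invented):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Stand-ins for the six selected predictors and (log) E. coli loads.
    X = rng.normal(size=(300, 6))
    y = X @ np.array([0.8, -0.5, 0.3, 0.2, 0.4, -0.2]) + 0.3 * rng.normal(size=300)

    # Spread across ensemble members approximates predictive uncertainty.
    preds = []
    for seed in range(10):
        m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
        m.fit(X[:200], y[:200])
        preds.append(m.predict(X[200:]))
    preds = np.array(preds)
    print("mean:", preds.mean(axis=0)[:3], "std:", preds.std(axis=0)[:3])

A true BNN places distributions over the network weights themselves; the ensemble is only a cheap surrogate for that probabilistic framework.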
Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui
2018-04-16
A fringe-visibility-enhanced fiber-optic Fabry-Perot interferometer-based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm forming a Fabry-Perot interferometer. Owing to the increased spectral sideband slope of the sensor and the smaller Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filter technique is used to interrogate the sensor. The ultrasonic reconstructed images can clearly differentiate the shapes of the models with high resolution.
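The interrogation principle is compact: for a low-finesse Fabry-Perot cavity the reflected spectrum follows the two-beam form

    I_r(\lambda) \approx I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos(4\pi n L/\lambda),

with n the cavity index and L its length. The laser is parked on the steep sideband of a fringe, so an ultrasound-induced change dL maps linearly to intensity through the local spectral slope; enhancing the fringe visibility steepens that slope, which is what raises the SNR.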
Upscaling soil saturated hydraulic conductivity from pore throat characteristics
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Hunt, Allen G.; Skaggs, Todd H.; Jarvis, Nicholas
2017-06-01
Upscaling and/or estimating saturated hydraulic conductivity Ksat at the core scale from microscopic/macroscopic soil characteristics has been under active investigation in the hydrology and soil physics communities for several decades. Numerous models have been developed based on different approaches, such as the bundle of capillary tubes model, pedotransfer functions, etc. In this study, we apply concepts from critical path analysis, an upscaling technique first developed in the physics literature, to estimate saturated hydraulic conductivity at the core scale from microscopic pore throat characteristics reflected in capillary pressure data. With this new model, we find Ksat estimates to be within a factor of 3 of the average measured saturated hydraulic conductivities reported by Rawls et al. (1982) for the eleven USDA soil texture classes.
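The core step of critical path analysis can be stated schematically: the capillary pressure curve identifies the bottleneck (critical) pore radius through the Young-Laplace relation, and the conductivity is controlled by that radius,

    r_c = \frac{2\sigma \cos\theta}{P_c(S_c)}, \qquad K_{sat} \propto r_c^2,

with σ the surface tension, θ the contact angle and S_c the critical saturation; the exact prefactor and exponent depend on the pore-space model, so this is an orientation aid rather than the paper's final expression.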
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Qingge; Song, Gian; Gorti, Sarma B.
Bragg-edge imaging, which is also known as neutron radiography, has recently emerged as a novel crystalline characterization technique. Modelling of this novel technique by incorporating various features of the underlying microstructure (including the crystallographic texture, the morphological texture, and the grain size) of the material remains a subject of considerable research and development. In this paper, Inconel 718 samples made by additive manufacturing were investigated by neutron diffraction and neutron radiography techniques. The specimen features strong morphological and crystallographic textures and a highly heterogeneous microstructure. A 3D statistical full-field model is introduced by taking details of the microstructure into account to understand the experimental neutron radiography results. The Bragg-edge imaging and the total cross section were calculated based on the neutron transmission physics. A good match was obtained between the model predictions and experimental results at different incident beam angles with respect to the sample build direction. The current theoretical approach has the ability to incorporate 3D spatially resolved microstructural heterogeneity information and shows promise in understanding the 2D neutron radiography of bulk samples. With further development to incorporate the heterogeneity in lattice strain in the model, it can be used as a powerful tool in the future to better understand the neutron radiography data.
Xie, Qingge; Song, Gian; Gorti, Sarma B.; ...
2018-02-21
Bragg-edge imaging, which is also known as neutron radiography, has recently emerged as a novel crystalline characterization technique. Modelling of this novel technique by incorporating various features of the underlying microstructure (including the crystallographic texture, the morphological texture, and the grain size) of the material remains a subject of considerable research and development. In this paper, Inconel 718 samples made by additive manufacturing were investigated by neutron diffraction and neutron radiography techniques. The specimen features strong morphological and crystallographic textures and a highly heterogeneous microstructure. A 3D statistical full-field model is introduced by taking details of the microstructure into account to understand the experimental neutron radiography results. The Bragg-edge imaging and the total cross section were calculated based on the neutron transmission physics. A good match was obtained between the model predictions and experimental results at different incident beam angles with respect to the sample build direction. The current theoretical approach has the ability to incorporate 3D spatially resolved microstructural heterogeneity information and shows promise in understanding the 2D neutron radiography of bulk samples. With further development to incorporate the heterogeneity in lattice strain in the model, it can be used as a powerful tool in the future to better understand the neutron radiography data.
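The physics behind the edges is compact: for lattice spacing d_hkl, coherent elastic scattering is possible only for neutron wavelengths λ ≤ 2d_hkl (Bragg's law λ = 2 d_hkl sin θ at the backscattering limit θ = 90°), so the transmitted spectrum rises sharply at

    \lambda_{hkl} = 2 d_{hkl},

and the positions, heights and shapes of these edges encode the phase, texture and strain information that the full-field model reproduces.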
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains 'slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
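The classical baseline that such a learned RG generalizes is the majority-rule block-spin map; a minimal numpy sketch of one real-space RG step on an Ising configuration (the paper's network learns the coarse-graining from mutual information rather than fixing it by hand):

    import numpy as np

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(64, 64))   # a 2D Ising configuration

    def block_spin(s, b=2):
        # One real-space RG step: majority rule over b x b blocks.
        n = s.shape[0] // b
        blocks = s.reshape(n, b, n, b).sum(axis=(1, 3))
        coarse = np.sign(blocks)
        ties = coarse == 0                        # break ties at random
        coarse[ties] = rng.choice([-1, 1], size=int(ties.sum()))
        return coarse

    print(spins.shape, "->", block_spin(spins).shape)   # (64, 64) -> (32, 32)

Iterating such maps and tracking how the couplings flow is what yields critical exponents like the Ising exponent extracted in the paper.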
NASA Astrophysics Data System (ADS)
Saari, Markus; Rossi, Pekka; Blomberg von der Geest, Kalle; Mäkinen, Ari; Postila, Heini; Marttila, Hannu
2017-04-01
High metal concentrations in natural waters are among the key environmental and health problems globally. Continuous in-situ analysis of metals from runoff water is technically challenging but essential for a better understanding of the processes that lead to pollutant transport. Currently, typical analytical methods for monitoring elements in liquids are off-line laboratory methods such as ICP-OES (Inductively Coupled Plasma Optical Emission Spectroscopy) and ICP-MS (ICP combined with a mass spectrometer). A disadvantage of both techniques is the time-consuming sample collection, preparation, and off-line analysis under laboratory conditions; they therefore offer no possibility of real-time monitoring of element transport. We combined novel high-resolution on-line metal concentration monitoring with catchment-scale physical hydrological modelling in the Mustijoki river in Southern Finland in order to study the dynamics of these processes and form a predictive warning system for the leaching of metals. A novel on-line measurement technique based on micro plasma emission spectroscopy (MPES) is tested for on-line detection of selected elements (e.g. Na, Mg, Al, K, Ca, Fe, Ni, Cu, Cd and Pb) in runoff waters. The preliminary results indicate that MPES can adequately detect and monitor metal concentrations in river water. The Soil and Water Assessment Tool (SWAT) catchment-scale model was further calibrated with the high-resolution metal concentration data. We show that by combining high-resolution monitoring and catchment-scale physically based modelling, further process studies and the creation of early warning systems, for example for optimizing drinking-water intake from rivers, can be achieved.
Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility
NASA Astrophysics Data System (ADS)
Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.
2017-12-01
The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On the one hand, using a multi-view sensor set-up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other hand, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
Confocal laser feedback tomography for skin cancer detection
Mowla, Alireza; Du, Benjamin Wensheng; Taimre, Thomas; Bertling, Karl; Wilson, Stephen; Soyer, H. Peter; Rakić, Aleksandar D.
2017-01-01
Tomographic imaging of soft tissue such as skin has a potential role in cancer detection. The penetration of infrared wavelengths makes a confocal approach based on laser feedback interferometry feasible. We present a compact system using a semiconductor laser as both transmitter and receiver. Numerical and physical models based on the known optical properties of keratinocyte cancers were developed. We validated the technique on three phantoms containing macro-structural changes in optical properties. Experimental results were in agreement with numerical simulations and structural changes were evident which would permit discrimination of healthy tissue and tumour. Furthermore, cancer type discrimination was also able to be visualized using this imaging technique. PMID:28966845
Confocal laser feedback tomography for skin cancer detection.
Mowla, Alireza; Du, Benjamin Wensheng; Taimre, Thomas; Bertling, Karl; Wilson, Stephen; Soyer, H Peter; Rakić, Aleksandar D
2017-09-01
Tomographic imaging of soft tissue such as skin has a potential role in cancer detection. The penetration of infrared wavelengths makes a confocal approach based on laser feedback interferometry feasible. We present a compact system using a semiconductor laser as both transmitter and receiver. Numerical and physical models based on the known optical properties of keratinocyte cancers were developed. We validated the technique on three phantoms containing macro-structural changes in optical properties. Experimental results were in agreement with numerical simulations and structural changes were evident which would permit discrimination of healthy tissue and tumour. Furthermore, cancer type discrimination was also able to be visualized using this imaging technique.
Deformation and Fabric in Compacted Clay Soils
NASA Astrophysics Data System (ADS)
Wensrich, C. M.; Pineda, J.; Luzin, V.; Suwal, L.; Kisi, E. H.; Allameh-Haery, H.
2018-05-01
Hydromechanical anisotropy of clay soils in response to deformation or deposition history is related to the micromechanics of platelike clay particles and their orientations. In this article, we examine the relationship between microstructure, deformation, and moisture content in kaolin clay using a technique based on neutron scattering. This technique allows for the direct characterization of microstructure within representative samples using traditional measures such as orientation density and soil fabric tensor. From this information, evidence for a simple relationship between components of the deviatoric strain tensor and the deviatoric fabric tensor emerge. This relationship may provide a physical basis for future anisotropic constitutive models based on the micromechanics of these materials.
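For orientation, the fabric tensor mentioned here is conventionally the second moment of the particle orientation distribution f(n) over the unit sphere,

    F_{ij} = \int_{\Omega} f(\mathbf{n})\, n_i n_j \, d\Omega,

whose deviatoric part quantifies the preferred alignment of the platelike particles; this is the standard definition, stated for context rather than quoted from the article.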
Data-adaptive Harmonic Decomposition and Real-time Prediction of Arctic Sea Ice Extent
NASA Astrophysics Data System (ADS)
Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael
2017-04-01
Decline in the Arctic sea ice extent (SIE) has profound socio-economic implications and is a focus of active scientific research. Of particular interest is prediction of SIE on subseasonal time scales, i.e. from early summer into fall, when sea ice coverage in the Arctic reaches its minimum. However, subseasonal forecasting of SIE is very challenging due to the high variability of the ocean and atmosphere over the Arctic in summer, as well as the short observational record and the inadequacies of physics-based models in simulating sea-ice dynamics. The Sea Ice Outlook (SIO) by the Sea Ice Prediction Network (SIPN, http://www.arcus.org/sipn) is a collaborative effort to facilitate and improve subseasonal prediction of September SIE by physics-based and data-driven statistical models. Data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) techniques [Chekroun and Kondrashov, 2017] have been successfully applied to nonlinear stochastic modeling, as well as retrospective and real-time forecasting, of the Multisensor Analyzed Sea Ice Extent (MASIE) dataset in four key Arctic regions. In particular, DAH-MSLM predictions outperformed most statistical and physics-based models in the real-time 2016 SIO submissions. The key success factors are associated with the ability of DAH to disentangle the complex regional dynamics of MASIE by data-adaptive harmonic spatio-temporal patterns that reduce the data-driven modeling effort to elemental MSLMs stacked per frequency, each with a small, fixed number of model coefficients to estimate.
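For context, a Stuart-Landau oscillator is the normal form of a Hopf bifurcation; a generic form for a complex amplitude z(t) is

    \dot{z} = (\mu + i\omega_0)\, z - (\gamma + i\beta)\, |z|^2 z

(parameter names here are generic, not the authors' notation), and the MSLM approach stacks oscillators of this type, one per data-adaptive frequency, with the small, fixed set of coefficients noted above.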
ERIC Educational Resources Information Center
Storer, I. J.; Campbell, R. I.
2012-01-01
Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…
A study of hydriding kinetics of metal hydrides using a physically based model
NASA Astrophysics Data System (ADS)
Voskuilen, Tyler G.
The reaction of hydrogen with metals to form metal hydrides has numerous potential energy storage and management applications. The metal-hydrogen system has a high volumetric energy density and is often reversible with a high cycle life. The stored hydrogen can be used to produce energy through combustion, reaction in a fuel cell, or electrochemically in metal hydride batteries. The high enthalpy of the metal-hydrogen reaction can also be used for rapid heat removal or delivery. However, improving the often poor gravimetric performance of such systems through the use of lightweight metals usually comes at the cost of reduced reaction rates or the requirement of pressure and temperature conditions far from the desired operating conditions. In this work, a 700 bar Sievert system was developed at the Purdue Hydrogen Systems Laboratory to study the kinetic and thermodynamic behavior of high-pressure hydrogen absorption under near-ambient temperatures. This system was used to determine the kinetic and thermodynamic properties of TiCrMn, an intermetallic metal hydride of interest due to its ambient-temperature performance for vehicular applications. A commonly studied intermetallic hydride, LaNi5, was also characterized as a base case for the phase field model. The analysis of the data obtained from such a system necessitates the use of specialized techniques to decouple the measured reaction rates from experimental conditions. These techniques were also developed as a part of this work. Finally, a phase field model of metal hydride formation in mass-transport-limited interstitial solute reactions, based on the regular solution model, was developed and compared with the measured kinetics of LaNi5 and TiCrMn. This model aided in the identification of key reaction features and was used to verify the proposed technique for the analysis of volumetrically determined gas-solid reaction rates. Additionally, the phase field model provided detailed quantitative predictions of the effects of multidimensional phase growth and of transitions between rate-limiting processes on the experimentally determined reaction rates. Unlike conventional solid-state reaction analysis methods, this model relies fully on rate parameters based on the physical mechanisms occurring in the hydride reaction and can be extended to reactions in any dimension.
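A standard relation underlying such Sievert measurements is the van't Hoff equation for the equilibrium plateau pressure of a metal hydride,

    \ln\left(\frac{P_{eq}}{P_0}\right) = \frac{\Delta H}{R T} - \frac{\Delta S}{R},

with ΔH and ΔS the enthalpy and entropy of the dehydriding reaction (sign conventions vary); tracing pressure-composition isotherms at several temperatures is what yields these thermodynamic properties for materials such as TiCrMn and LaNi5.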
NASA Astrophysics Data System (ADS)
Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.
2017-10-01
In groundwater modelling, robust parameterisation of sub-surface parameters is crucial to obtaining acceptable model performance. The pilot point method is an alternative in the parameterisation step for configuring the distribution of parameters in a model. However, the methodologies given in current studies are considered less practical for application to real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution technique, the Head Zonation-based (HZB) technique, which builds on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) were constructed using the HZB technique for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers some insights into the trade-off between restricting and maximising the number of pilot points, and a new methodology for selecting pilot point properties and distribution method in the development of a physically based groundwater model.
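The zonation idea admits a generic sketch (the grid, head field and point counts below are invented; this illustrates gradient-based zoning, not the authors' exact HZB algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    # Invented head field on a 100 x 100 grid (stands in for a model run).
    x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    head = 10 * np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.05) + 2 * x

    # Hydraulic-gradient magnitude, split into quantile-based zones.
    gy, gx = np.gradient(head)
    grad = np.hypot(gx, gy)
    n_zones = 5
    edges = np.quantile(grad, np.linspace(0, 1, n_zones + 1))
    zone = np.digitize(grad, edges[1:-1])

    # Allocate more pilot points where the hydraulic gradient is stronger.
    for z in range(n_zones):
        idx = np.argwhere(zone == z)
        k = min(2 + 2 * z, len(idx))             # densify with zone rank
        pts = idx[rng.choice(len(idx), size=k, replace=False)]
        print(f"zone {z}: {len(pts)} pilot points")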
Perspective: Reaches of chemical physics in biology.
Gruebele, Martin; Thirumalai, D
2013-09-28
Chemical physics as a discipline contributes many experimental tools, algorithms, and fundamental theoretical models that can be applied to biological problems. This is especially true now as the molecular level and the systems level descriptions begin to connect, and multi-scale approaches are being developed to solve cutting edge problems in biology. In some cases, the concepts and tools got their start in non-biological fields, and migrated over, such as the idea of glassy landscapes, fluorescence spectroscopy, or master equation approaches. In other cases, the tools were specifically developed with biological physics applications in mind, such as modeling of single molecule trajectories or super-resolution laser techniques. In this introduction to the special topic section on chemical physics of biological systems, we consider a wide range of contributions, all the way from the molecular level, to molecular assemblies, chemical physics of the cell, and finally systems-level approaches, based on the contributions to this special issue. Chemical physicists can look forward to an exciting future where computational tools, analytical models, and new instrumentation will push the boundaries of biological inquiry.
Perspective: Reaches of chemical physics in biology
Gruebele, Martin; Thirumalai, D.
2013-01-01
Chemical physics as a discipline contributes many experimental tools, algorithms, and fundamental theoretical models that can be applied to biological problems. This is especially true now as the molecular level and the systems level descriptions begin to connect, and multi-scale approaches are being developed to solve cutting edge problems in biology. In some cases, the concepts and tools got their start in non-biological fields, and migrated over, such as the idea of glassy landscapes, fluorescence spectroscopy, or master equation approaches. In other cases, the tools were specifically developed with biological physics applications in mind, such as modeling of single molecule trajectories or super-resolution laser techniques. In this introduction to the special topic section on chemical physics of biological systems, we consider a wide range of contributions, all the way from the molecular level, to molecular assemblies, chemical physics of the cell, and finally systems-level approaches, based on the contributions to this special issue. Chemical physicists can look forward to an exciting future where computational tools, analytical models, and new instrumentation will push the boundaries of biological inquiry. PMID:24089712
Metallic Rotor Sizing and Performance Model for Flywheel Systems
NASA Technical Reports Server (NTRS)
Moore, Camille J.; Kraft, Thomas G.
2012-01-01
The NASA Glenn Research Center (GRC) is developing flywheel system requirements and designs for terrestrial and spacecraft applications. Several generations of flywheels have been designed and tested at GRC using in-house expertise in motors, magnetic bearings, controls, materials and power electronics. The maturation of a flywheel system from the concept phase to the preliminary design phase is accompanied by maturation of the Integrated Systems Performance model, where estimating relationships are replaced by physics based analytical techniques. The modeling can incorporate results from engineering model testing and emerging detail from the design process.
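The sizing physics being captured is compact: a rotor with moment of inertia I spinning at angular speed ω stores

    E = \tfrac{1}{2} I \omega^2,

and for a given rotor geometry the attainable specific energy is bounded by the material strength-to-density ratio, E/m = K σ/ρ, with K a dimensionless shape factor; physics-based sizing replaces the early estimating relationships with these limits plus detailed stress and rotor-dynamics analysis.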
Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael
2018-05-01
Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works have used different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic spectrometry-based and X-ray-based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant physical properties of the particles, sampling and measuring time, access to available facilities, and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for a given analytical requirement.
Current status and prospects of nuclear physics research based on tracking techniques
NASA Astrophysics Data System (ADS)
Alekseev, V. A.; Alexandrov, A. B.; Bagulya, A. V.; Chernyavskiy, M. M.; Goncharova, L. A.; Gorbunov, S. A.; Kalinina, G. V.; Konovalova, N. S.; Okatyeva, N. M.; Pavlova, T. A.; Polukhina, N. G.; Shchedrina, T. V.; Starkov, N. I.; Tioukov, V. E.; Vladymirov, M. S.; Volkov, A. E.
2017-01-01
Results of nuclear physics research using track detectors are briefly reviewed. The advantages and prospects of the track detection technique in particle physics, neutrino physics, astrophysics and other fields are discussed using the example of results from the search for the direct appearance of tau neutrinos in a muon neutrino beam within the framework of the international OPERA experiment (Oscillation Project with Emulsion-tRacking Apparatus), and of work on the search for superheavy nuclei in nature based on their tracks in meteoritic olivine crystals. The spectra of superheavy elements in galactic cosmic rays are presented. Prospects for using the track detection technique in fundamental and applied research are reported.
Predicting remaining life by fusing the physics of failure modeling with diagnostics
NASA Astrophysics Data System (ADS)
Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.
2004-03-01
Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air Systems Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. An H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture-mechanics lifing models, along with adaptive model-updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.
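A textbook ingredient of the fracture-mechanics lifing referred to here is a Paris-type crack-growth law,

    \frac{da}{dN} = C\,(\Delta K)^m,

with a the crack length, N the cycle count, ΔK the stress-intensity-factor range, and C, m material constants; this 1D idealization only indicates the kind of key failure-mode variables that the adaptive updating can tune as fused vibration features evolve.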
Novel residual-based large eddy simulation turbulence models for incompressible magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Sondak, David
The goal of this work was to develop, introduce, and test a promising computational paradigm for the development of turbulence models for incompressible magnetohydrodynamics (MHD). MHD governs the behavior of an electrically conducting fluid in the presence of an external electromagnetic (EM) field. The incompressible MHD model is used in many engineering and scientific disciplines, from the development of nuclear fusion as a sustainable energy source to the study of space weather and solar physics. Many interesting MHD systems exhibit turbulence, which remains an elusive problem from all scientific perspectives. This work focuses on the computational perspective and proposes techniques that enable the study of systems involving MHD turbulence. Direct numerical simulation (DNS) is not a feasible approach for studying MHD turbulence. In this work, turbulence models for incompressible MHD were developed from the variational multiscale (VMS) formulation, wherein the solution fields are decomposed into resolved and unresolved components. The unresolved components are modeled with a term that is proportional to the residual of the resolved scales. Two additional MHD models were developed based on the VMS formulation: a residual-based eddy viscosity (RBEV) model and a mixed model that partners the VMS formulation with the RBEV model. These models are endowed with several notable numerical and physical features. Among the numerical features is the internal numerical consistency of each of the models. Physically, the new models are able to capture desirable MHD physics such as the inverse cascade of magnetic energy and the subgrid dynamo effect. The models were tested with a Fourier-spectral numerical method and the finite element method (FEM). The primary test problem was the Taylor-Green vortex. Results comparing the performance of the new models with DNS were obtained, and the new models were also compared with classic and cutting-edge dynamic Smagorinsky eddy viscosity (DSEV) models. The new models typically outperform the classical models.
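The residual-based closure at the heart of the formulation can be written compactly: each field is decomposed into resolved and unresolved parts, u = ū + u′, and the unresolved part is modeled as proportional to the resolved-scale residual,

    u' \approx -\tau\, R(\bar{u}),

with τ a fine-scale parameter and R(ū) the residual of the governing incompressible MHD equations evaluated on the resolved scales; the RBEV and mixed models then build an eddy viscosity from the same residual. This is the standard schematic form; the dissertation's precise operators are not reproduced here.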
Optimization of the ANFIS using a genetic algorithm for physical work rate classification.
Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali
2018-03-13
Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was carried out using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95% to 97.92%, and the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ±5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and consideration of inter-individual variability are two other advantages of the presented model.
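The GA wrapper around such a classifier is straightforward to sketch (everything below is generic: the fitness stub stands in for retraining and validating the ANFIS per candidate, and the parameter count is invented):

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(params):
        # Stand-in for ANFIS validation accuracy of a candidate
        # parameter vector; higher is better.
        return -np.sum((params - 0.3) ** 2)

    pop = rng.uniform(0, 1, size=(40, 8))            # 8 invented parameters
    for gen in range(50):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-20:]]      # selection: keep best half
        kids = []
        for _ in range(20):
            a, b = parents[rng.integers(20, size=2)]
            mask = rng.random(8) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            child = child + rng.normal(0, 0.05, 8) * (rng.random(8) < 0.2)
            kids.append(np.clip(child, 0, 1))        # mutation, clipped to bounds
        pop = np.vstack([parents, kids])
    best = max(pop, key=fitness)
    print("best fitness:", fitness(best))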
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia. macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days as based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter estimation based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher
2015-07-01
Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for them are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed, based on the generalized Taylor series formula and the residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For evaluation and validation, the proposed technique was applied to three different models and compared with some well-known methods. The resulting simulations clearly demonstrate the superiority and potential of the proposed technique in terms of performance quality and the accuracy of substructure preservation in the constructed solutions, as well as in the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.
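For reference, the time-fractional derivative in such models is usually taken in the Caputo sense: for 0 < α ≤ 1,

    D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\, ds,

and the technique expands the solution in a generalized Taylor series in powers of t^α, fixing successive coefficients by requiring the residual error function of the equation to vanish.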
Data Assimilation Into Physics-Based Models Via Kalman Filters
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Sojka, J. J.
2002-12-01
The magnetosphere-ionosphere-thermosphere (M-I-T) system is a highly dynamic, coupled, and nonlinear system that can vary significantly from hour to hour at any location. The coupling is particularly strong during geomagnetic storms and substorms, but there are appreciable time delays associated with the transfer of mass, momentum, and energy between the domains. Therefore, both global physics-based models and vast observational data sets are needed to elucidate the dynamics, energetics, and coupling in the M-I-T system. Fortunately, during the coming decade, tens of millions of measurements of the global M-I-T system could become available from a variety of in situ and remote sensing instruments. Some of the measurements will provide direct information about the state variables (densities, drift velocities, and temperatures), while others will provide indirect information, such as optical emissions and magnetic perturbations. The data sources available could include: thousands of ground-based GPS Total Electron Content (TEC) receivers; a world-wide network of ionosondes; hundreds of magnetometers both on the ground and in space; occultations from the COSMIC Satellites, numerous ground-based tomography chains; auroral images from the POLAR Satellite; images of the magnetosphere and plasmasphere from the IMAGE Satellite; SuperDARN radar measurements in the polar regions; the Living With a Star (LWS) Solar Dynamics Observatory and the LWS Radiation Belt and Ionosphere-Thermosphere Storm Probes; and the world-wide network of incoherent scatter radars. To optimize the scientific return and to provide specifications and forecasts for societal applications, the global models and data must be combined in an optimum way. A powerful way of assimilating multiple data types into a time-dependent, physics-based, numerical model is via a Kalman filter. The basic principle of this approach is to combine measurements from multiple instrument types with the information obtained from a physics-based model, taking into account the uncertainties in both the model and measurements. The advantages of this technique and the data sources that might be available will be discussed.
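The basic principle is the standard predict/update cycle, which weights the model forecast against measurements by their respective uncertainties; a generic linear sketch (not the ionospheric implementation, whose state and operators are far larger):

    import numpy as np

    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition model
    H = np.array([[1.0, 0.0]])               # observation operator
    Q = 0.01 * np.eye(2)                     # model error covariance
    R = np.array([[0.5]])                    # measurement error covariance

    x = np.array([0.0, 1.0])                 # state estimate
    P = np.eye(2)                            # state covariance
    z = np.array([1.2])                      # one measurement

    # Predict with the (physics-based) model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update against the measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print("analysis state:", x)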
Correction of defective pixels for medical and space imagers based on Ising Theory
NASA Astrophysics Data System (ADS)
Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer
2014-09-01
We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models, we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation; (ii) ultrasonic medical images degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM in comparison to other filters.
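A compact sketch of the Ising-prior idea on a binary image (generic parameters; the proposed models additionally couple in predictors such as Median and LOCO-I):

    import numpy as np

    rng = np.random.default_rng(0)

    def denoise_ising(noisy, beta=1.5, h=2.0, sweeps=20, T0=2.0):
        # Simulated annealing on an Ising energy: smoothness (beta)
        # versus fidelity to the observed spins (h).
        x = noisy.copy()
        n, m = x.shape
        for s in range(sweeps):
            T = T0 * 0.8 ** s                  # annealing schedule
            for i in range(n):
                for j in range(m):
                    nb = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                          + x[i, (j - 1) % m] + x[i, (j + 1) % m])
                    dE = 2 * x[i, j] * (beta * nb + h * noisy[i, j])
                    if dE < 0 or rng.random() < np.exp(-dE / T):
                        x[i, j] = -x[i, j]
        return x

    clean = np.ones((32, 32), dtype=int)
    clean[8:24, 8:24] = -1
    noisy = clean * np.where(rng.random(clean.shape) < 0.1, -1, 1)
    restored = denoise_ising(noisy)
    print("errors before:", int((noisy != clean).sum()),
          "after:", int((restored != clean).sum()))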
NSF's Perspective on Space Weather Research for Building Forecasting Capabilities
NASA Astrophysics Data System (ADS)
Bisi, M. M.; Pulkkinen, A. A.; Bisi, M. M.; Pulkkinen, A. A.; Webb, D. F.; Oughton, E. J.; Azeem, S. I.
2017-12-01
Space weather research at the National Science Foundation (NSF) is focused on scientific discovery and on deepening knowledge of the Sun-Geospace system. Maturation of the knowledge base is a prerequisite for the development of improved space weather forecast models and for the accurate assessment of potential mitigation strategies. Progress in space weather forecasting requires advancing in-depth understanding of the underlying physical processes, developing better instrumentation and measurement techniques, and capturing the advancements in understanding in large-scale physics-based models that span the entire chain of events from the Sun to the Earth. This presentation will provide an overview of current and planned programs pertaining to space weather research at NSF and discuss the recommendations of the Geospace Section portfolio review panel within the context of space weather forecasting capabilities.
High-energy physics software parallelization using database techniques
NASA Astrophysics Data System (ADS)
Argante, E.; van der Stok, P. D. V.; Willers, I.
1997-02-01
A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than the native message-passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.
An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle
NASA Astrophysics Data System (ADS)
Gidden, Matthew J.
Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent isotopic-quality-based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and the agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
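The optimization at the heart of the DRE is a variant of the classical transportation form: with x_ij the quantity shipped from supplier i to consumer j, c_ij the preference-derived unit cost, s_i the supplies and d_j the demands,

    \min \sum_{i,j} c_{ij} x_{ij} \quad \text{s.t.} \quad \sum_j x_{ij} \le s_i, \quad \sum_i x_{ij} = d_j, \quad x_{ij} \ge 0,

shown here in its single-commodity schematic; the multicommodity variant adds commodity indices and the quality (isotopic) constraints discussed above.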
NASA Technical Reports Server (NTRS)
1997-01-01
This CP contains the extended abstracts and presentation figures of 36 papers presented at the PPM and Other Propulsion R&T Conference. The focus of the research described in these presentations is on materials and structures technologies that are parts of the various projects within the NASA Aeronautics Propulsion Systems Research and Technology Base Program. These projects include Physics and Process Modeling; Smart, Green Engine; Fast, Quiet Engine; High Temperature Engine Materials Program; and Hybrid Hyperspeed Propulsion. Also presented were research results from the Rotorcraft Systems Program and work supported by the NASA Lewis Director's Discretionary Fund. Authors from NASA Lewis Research Center, industry, and universities conducted research in the following areas: material processing, material characterization, modeling, life, applied life models, design techniques, vibration control, mechanical components, and tribology. Key issues, research accomplishments, and future directions are summarized in this publication.
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Polka, Lesley A.; Polycarpou, Anastasis C.
1994-01-01
Formulations for scattering from the coated plate and the coated dihedral corner reflector are included. A coated-plate model based upon the Uniform Theory of Diffraction (UTD) for impedance wedges was presented in the last report. In order to resolve inaccuracies and discontinuities in the patterns predicted by the UTD-based model, an improved model that uses more accurate diffraction coefficients is presented. A Physical Optics (PO) model for the coated dihedral corner reflector is presented as an intermediate step in developing a high-frequency model for this structure. The PO model is based upon the reflection coefficients for a metal-backed lossy material. Preliminary PO results for the dihedral corner reflector suggest that, in addition to being much faster computationally, this model may be more accurate than existing moment method (MM) models. An improved Physical Optics (PO)/Equivalent Currents model for modeling the Radar Cross Section (RCS) of both square and triangular, perfectly conducting, trihedral corner reflectors is presented. The new model uses the PO approximation at each reflection for the first- and second-order reflection terms. For the third-order reflection terms, a Geometrical Optics (GO) approximation is used for the first reflection, and PO approximations are used for the remaining reflections. The previously reported model used GO for all reflections except the terminating reflection. Using PO for most of the reflections results in a computationally slower model because many integrations must be performed numerically, but the advantage is that the RCS predicted by the new model is much more accurate. Comparisons between the two PO models, Finite-Difference Time-Domain (FDTD) results, and experimental data are presented for validation of the new model.
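The PO approximation used throughout replaces the induced surface current on each illuminated facet by its tangent-plane value; for a perfect conductor,

    \mathbf{J}_s \approx 2\, \hat{\mathbf{n}} \times \mathbf{H}_{inc}

on lit surfaces and zero in shadow, with the scattered field obtained by radiating this current. For the coated structures the current is modified through the reflection coefficients of the metal-backed lossy layer, as described above.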
Service Learning In Physics: The Consultant Model
NASA Astrophysics Data System (ADS)
Guerra, David
2005-04-01
Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant-based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper-level physics courses apply their problem-solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girls Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League robotics competition. In their role as consultants, the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service learning provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.
Effects of preprocessing Landsat MSS data on derived features
NASA Technical Reports Server (NTRS)
Parris, T. M.; Cicone, R. C.
1983-01-01
Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions while extracting physically meaningful features from the data. In general, approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and the Normalized Difference.
Hauptmann, C; Roulet, J-C; Niederhauser, J J; Döll, W; Kirlangic, M E; Lysyansky, B; Krachkovskyi, V; Bhatti, M A; Barnikol, U B; Sasse, L; Bührle, C P; Speckmann, E-J; Götz, M; Sturm, V; Freund, H-J; Schnell, U; Tass, P A
2009-12-01
In the past decade deep brain stimulation (DBS)-the application of electrical stimulation to specific target structures via implanted depth electrodes-has become the standard treatment for medically refractory Parkinson's disease and essential tremor. These diseases are characterized by pathological synchronized neuronal activity in particular brain areas. We present an external trial DBS device capable of administering effectively desynchronizing stimulation techniques developed with methods from nonlinear dynamics and statistical physics according to a model-based approach. These techniques exploit either stochastic phase resetting principles or complex delayed-feedback mechanisms. We explain how these methods are implemented into a safe and user-friendly device.
NASA Technical Reports Server (NTRS)
Lee, J.
1994-01-01
A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow; therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required, since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
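As an aside for readers unfamiliar with the pattern, the sketch below shows a generic factor-once, solve-repeatedly LU workflow in Python. It is purely illustrative: the paper's implicit LU diagonal decomposition is an approximate factorization of the flow Jacobian into lower/upper sweeps, not the dense LU shown here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Stand-in for one block of an implicit operator (values invented).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

lu, piv = lu_factor(A)          # factor once...
x = lu_solve((lu, piv), b)      # ...then reuse for each right-hand side
print(x, np.allclose(A @ x, b))
```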
Comparison of Conceptual and Neural Network Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Vidyarthi, V. K.; Jain, A.
2014-12-01
A rainfall-runoff (RR) model is a key component of any water resource application. Two types of techniques are usually employed for RR modeling: physics-based and data-driven. Although physics-based models have been used for operational purposes for a very long time, they provide only reasonable accuracy in modeling and forecasting. Artificial Neural Networks (ANNs), one of the data-driven techniques, became popular for efficient modeling of complex natural systems in the last couple of decades and have been reported to provide superior modeling performance; however, they have not been accepted by practitioners, decision makers, and water resources engineers as operational tools. In this paper, comparative results for conceptual and ANN models in RR modeling are presented. The conceptual models were developed using the rainfall-runoff library (RRL), and a genetic algorithm (GA) was used for their calibration. A feed-forward neural network structure trained by the Levenberg-Marquardt (LM) algorithm was adopted to develop all the ANN models. Daily rainfall, runoff, and various climatic data from the Bird Creek basin, Oklahoma, USA were employed to develop all the models included here. Daily potential evapotranspiration (PET), which was used in conceptual model development, was calculated using the Penman equation. The input variables were selected on the basis of correlation analysis. Performance evaluation statistics such as average absolute relative error (AARE), Pearson's correlation coefficient (R), and threshold statistics (TS) were used for assessing the performance of all the models developed here. The results obtained in this study show that the ANN models outperform the conventional conceptual models due to their ability to learn the non-linearity and complexity inherent in rainfall-runoff data in a more efficient manner. There is a strong need to carry out such studies to establish the superiority of ANN models over conventional methods in an attempt to make them acceptable to the water resources community responsible for the operation of water resources systems.
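The three performance statistics named above are straightforward to compute; a minimal sketch follows, with made-up runoff values and an assumed 25% threshold for TS (the paper does not specify the threshold used).

```python
import numpy as np

def performance_stats(obs, pred, threshold_pct=25.0):
    """AARE, Pearson's R, and threshold statistic TS (the percentage of
    predictions whose absolute relative error falls below a threshold)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    are = np.abs((pred - obs) / obs) * 100.0      # absolute relative error, %
    aare = are.mean()                             # average ARE
    r = np.corrcoef(obs, pred)[0, 1]              # Pearson correlation
    ts = 100.0 * np.mean(are < threshold_pct)     # % of cases under threshold
    return aare, r, ts

# toy daily-runoff example
obs  = np.array([12.0, 30.5, 8.2, 45.1, 22.3])
pred = np.array([13.1, 28.9, 9.0, 41.7, 24.0])
print(performance_stats(obs, pred))
```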
Neural network uncertainty assessment using Bayesian statistics: a remote sensing application
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.
2004-01-01
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. When used for regression fitting, NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
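A toy version of the Monte Carlo procedure for Jacobian robustness might look as follows; the network size, weight covariance, and input are all stand-ins, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer network; w is a flat parameter vector (sizes invented).
def nn(x, w):
    W1, b1 = w[:6].reshape(3, 2), w[6:9]
    W2, b2 = w[9:12].reshape(1, 3), w[12:13]
    return (W2 @ np.tanh(W1 @ x + b1) + b2)[0]

def jacobian(x, w, eps=1e-5):
    # sensitivity of the output to each input, via central differences
    J = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[i] = (nn(x + dx, w) - nn(x - dx, w)) / (2 * eps)
    return J

w_map = rng.normal(size=13)      # stand-in for trained weights
cov = 0.01 * np.eye(13)          # stand-in for the weight posterior covariance
x0 = np.array([0.5, -1.0])

# Monte Carlo over the weight posterior -> a distribution of Jacobians
samples = rng.multivariate_normal(w_map, cov, size=2000)
J = np.array([jacobian(x0, w) for w in samples])
print("Jacobian mean:", J.mean(axis=0), " std:", J.std(axis=0))
```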
Didarloo, Alireza; Shojaeizadeh, Davoud; Ardebili, Hassan Eftekhar; Niknami, Shamsaddin; Hajizadeh, Ebrahim; Alizadeh, Mohammad
2011-10-01
Findings of most studies indicate that the only way to control diabetes and prevent its debilitating effects is through the continuous performance of self-care behaviors. Physical activity is a non-pharmacological method of diabetes treatment and, because of its positive effects on diabetic patients, it is being increasingly considered by researchers and practitioners. This study aimed at determining factors influencing physical activity among diabetic women in Iran, using the extended theory of reasoned action. A sample of 352 women with type 2 diabetes, referred to a diabetes clinic in Khoy, Iran, participated in the study. Appropriate instruments were designed to measure the desired variables (knowledge of diabetes, personal beliefs, subjective norms, perceived self-efficacy, behavioral intention, and physical activity behavior). The reliability and validity of the instruments were examined and approved. Statistical analyses were conducted with inferential techniques (independent t-test, correlations, and regressions) using the SPSS package. The findings of this investigation indicated that, among the constructs of the model, self-efficacy was the strongest predictor of intentions among women with type 2 diabetes and both directly and indirectly affected physical activity. In addition to self-efficacy, diabetic patients' physical activity was also influenced by other variables of the model and by sociodemographic factors. Our findings suggest that the high ability of the theory of reasoned action, extended by self-efficacy, in forecasting and explaining physical activity can be a basis for educational intervention. Educational interventions based on the proposed model are necessary for improving diabetic patients' physical activity behavior and controlling the disease.
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, elastic moduli of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
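The platform-to-head transmissibility calculation generalizes to any base-excited lumped chain; a two-DOF analogue of the paper's 13-DOF model is sketched below, with all masses, stiffnesses, and damping values chosen purely for illustration.

```python
import numpy as np

# Hypothetical 2-DOF stand-in: base (platform) -> m1 (lower body) -> m2 (head).
m1, m2 = 60.0, 5.0              # kg, illustrative
k1, k2 = 8.0e4, 4.0e4           # N/m, illustrative
c1, c2 = 1.2e3, 3.0e2           # N s/m, illustrative

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

freqs = np.linspace(0.5, 25.0, 500)       # Hz, range used in the study
tr = []
for f in freqs:
    w = 2 * np.pi * f
    A = -w**2 * M + 1j * w * C + K        # dynamic stiffness matrix
    F = np.array([k1 + 1j * w * c1, 0.0]) # forcing from unit base displacement
    X = np.linalg.solve(A, F)
    tr.append(abs(X[1]))                  # platform-to-head transmissibility
print("peak TR %.2f at %.1f Hz" % (max(tr), freqs[int(np.argmax(tr))]))
```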
NASA Astrophysics Data System (ADS)
Dufoyer, A.; Lecoq, N.; Massei, N.; Marechal, J. C.
2017-12-01
Physics-based modeling of karst systems remains almost impossible without sufficiently accurate information about the inner physical characteristics. Usually, the only available hydrodynamic information is the flow rate at the karst outlet. Numerous works in the past decades have used and proven the usefulness of time-series analysis and spectral techniques applied to spring flow, precipitation, or even physico-chemical parameters for interpreting karst hydrological functioning. However, identifying or interpreting the physical features of karst systems that control the statistical or spectral characteristics of spring flow variations is still challenging, not to say sometimes controversial. The main objective of this work is to determine how the statistical and spectral characteristics of the hydrodynamic signal at karst springs can be related to inner physical and hydraulic properties. In order to address this issue, we undertake an empirical approach based on the use of both distributed and physics-based models, and on synthetic system responses. The first step of the research is to conduct a sensitivity analysis of time-series/spectral methods to karst hydraulic and physical properties. For this purpose, forward modeling of flow through several simple, constrained, synthetic cases in response to precipitation is undertaken. It allows us to quantify how the statistical and spectral characteristics of flow at the outlet are sensitive to changes (i) in conduit geometries and (ii) in hydraulic parameters of the system (matrix/conduit exchange rate, matrix hydraulic conductivity, and storativity). The flow differential equations are solved by MARTHE, a computer code developed by BRGM that allows karst conduit modeling. From signal processing on simulated spring responses, we hope to determine whether specific frequencies are always modified, using Fourier series and multi-resolution analysis. We also hope to quantify which parameters are the most influential with auto-correlation analysis: first results seem to show higher variations due to conduit conductivity than those due to the matrix/conduit exchange rate. Future steps will be using another computer code, based on a double-continuum approach and allowing turbulent conduit flow, and modeling a natural system.
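As a flavor of the auto-correlation analysis mentioned, the sketch below computes the ACF of a synthetic spring-discharge series generated by a linear-reservoir toy; the 0.2 cutoff for the "memory effect" is a common convention in karst hydrology, not a value taken from this abstract.

```python
import numpy as np

def autocorrelation(q, max_lag=120):
    """Sample autocorrelation function of a daily spring-discharge series."""
    q = np.asarray(q, float) - np.mean(q)
    acf = np.correlate(q, q, mode="full")[q.size - 1:]
    return acf[:max_lag] / acf[0]

# synthetic recession-dominated discharge driven by random rain pulses
rng = np.random.default_rng(1)
rain = rng.random(1000) < 0.1
q = np.zeros(1000)
for t in range(1, 1000):
    q[t] = 0.95 * q[t - 1] + (5.0 if rain[t] else 0.0)  # linear-reservoir toy

acf = autocorrelation(q)
memory = int(np.argmax(acf < 0.2))   # lag where the ACF first drops below 0.2
print("memory effect ~", memory, "days")
```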
A UML-based ontology for describing hospital information system architectures.
Winter, A; Brigl, B; Wendt, T
2001-01-01
To control the heterogeneity inherent in hospital information systems, information management needs appropriate modeling methods or techniques for hospital information systems. This paper shows that, for several reasons, available modeling approaches are not able to answer relevant questions of information management. To overcome this major deficiency we offer a UML-based ontology for describing hospital information system architectures. This ontology comprises three layers - the domain layer, the logical tool layer, and the physical tool layer - and defines the relevant components. The relations between these components, especially between components of different layers, make it possible to answer our information management questions.
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN
NASA Astrophysics Data System (ADS)
Schneider, Evan; Impey, C.
2012-01-01
We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
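The fitting procedure can be illustrated with a deliberately simplified three-component model; the band set and component templates below are hypothetical stand-ins, not the templates used by the authors.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical templates sampled at illustrative photometric bands (micron):
# f_pl (IR power law), f_disk (accretion disk), f_gal (host galaxy).
bands  = np.array([0.36, 0.62, 1.2, 3.6, 8.0])
f_pl   = bands ** 1.3                                  # rising IR power law
f_disk = bands ** -1.0                                 # blue disk spectrum
f_gal  = np.exp(-0.5 * ((bands - 1.0) / 0.8) ** 2)     # stellar-bump stand-in

def model(theta):
    a, b, c = theta                                    # component intensities
    return a * f_pl + b * f_disk + c * f_gal

def red_chi2(theta, flux, sigma, n_free=3):
    resid = (flux - model(theta)) / sigma
    return np.sum(resid ** 2) / (flux.size - n_free)

flux  = model([1.0, 0.5, 2.0]) * (1 + 0.05 * np.random.default_rng(2).normal(size=5))
sigma = 0.05 * flux

fit = minimize(red_chi2, x0=[1, 1, 1], args=(flux, sigma),
               bounds=[(0, None)] * 3)                 # non-negative intensities
print("best-fit intensities:", fit.x, " reduced chi2:", fit.fun)
```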
Utilization of remote sensing techniques for the quantification of fire behavior in two pine stands
Eric V. Mueller; Nicholas Skowronski; Kenneth Clark; Michael Gallagher; Robert Kremens; Jan C. Thomas; Mohamad El Houssami; Alexander Filkov; Rory M. Hadden; William Mell; Albert Simeoni
2017-01-01
Quantification of field-scale fire behavior is necessary to improve the current scientific understanding of wildland fires and to develop and test relevant, physics-based models. In particular, detailed descriptions of individual fires are required, for which the available literature is limited. In this work, two such field-scale experiments, carried out in pine stands...
2012-03-22
The report develops 3-D SAR scattering models and applies hypothesis testing and detection theory to recover the physical characteristics of the shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. A basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance, and multiple shape dictionaries are used.
Learning Physics through Project-Based Learning Game Techniques
ERIC Educational Resources Information Center
Baran, Medine; Maskan, Abdulkadir; Yasar, Seyma
2018-01-01
The aim of the present study, in which project and game techniques are used together, is to examine the impact of project-based learning games on students' physics achievement. Participants of the study consisted of 34 9th-grade students (N = 34). The data were collected using achievement tests and a questionnaire. Throughout the applications, the…
2006-09-27
Information Sciences Department, JHU/Applied Physics Laboratory, 12000 Johns Hopkins Road, Laurel, Maryland 22104. The work concerns per-hop behaviors (PHBs) used to meet the QoS requirements of applications, e.g., (Keshav, 1997). To date, however, no work exists to design and investigate PHB algorithms which simultaneously deliver QoS; existing techniques handle P&P requirements and rely upon standard, well-studied QoS PHBs, e.g., Weighted Round Robin, Class-Based Fair Queuing, etc.
Recent Advances in Ionospheric Modeling Using the USU GAIM Data Assimilation Models
NASA Astrophysics Data System (ADS)
Scherliess, L.; Thompson, D. C.; Schunk, R. W.
2009-12-01
The ionospheric plasma distribution at low and mid latitudes has been shown to display both a background state (climatology) and a disturbed state (weather). Ionospheric climatology has been successfully modeled, but ionospheric weather has been much more difficult to model because the ionosphere can vary significantly on an hour-by-hour basis. Unfortunately, ionospheric weather can have detrimental effects on several human activities and systems, including high-frequency communications, over-the-horizon radars, and survey and navigation systems using Global Positioning System (GPS) satellites. As shown by meteorologists and oceanographers, the most reliable weather models are physics-based, data-driven models that use Kalman filters or other data assimilation techniques. Since the state of a medium (ocean, lower atmosphere, ionosphere) is driven by complex and frequently nonlinear internal and external processes, it is not possible to accurately specify all of the drivers and initial conditions of the medium. Therefore, physics-based models alone cannot provide reliable specifications and forecasts. In an effort to better understand the ionosphere and to mitigate its adverse effects on military and civilian operations, specification and forecast models are being developed that use state-of-the-art data assimilation techniques. Over the past decade, Utah State University (USU) has developed two data assimilation models for the ionosphere as part of the USU Global Assimilation of Ionospheric Measurements (GAIM) program, and one of these models has been implemented at the Air Force Weather Agency for operational use. The USU-GAIM models are also being used for scientific studies, and this should lead to a dramatic advance in our understanding of ionospheric physics, similar to what occurred in meteorology and oceanography after the introduction of data assimilation models in those fields. Both USU-GAIM models are capable of assimilating data from a variety of data sources, including in situ electron densities from satellites, bottomside electron density profiles from ionosondes, total electron content (TEC) measurements between ground receivers and the GPS satellites, occultation data from satellite constellations, and ultraviolet emissions from the ionosphere measured by satellites. We will present the current status of the model development and discuss the employed data assimilation technique. Recent examples of the ionosphere specifications obtained from our model runs will be presented with an emphasis on the ionospheric plasma distribution during the current low solar activity conditions. Various comparisons with independent data will also be shown in an effort to validate the models.
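For readers new to assimilation, one forecast/analysis cycle of a linear Kalman filter, the basic machinery such models build on, can be sketched in a few lines; the two-state "ionosphere" below is a toy, not the GAIM state vector.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One forecast/analysis cycle of a linear Kalman filter (toy version)."""
    # forecast with the physics-based model F
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # analysis: blend the forecast with observations z via the Kalman gain
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(x.size) - K @ H) @ P_f
    return x_a, P_a

# 2-state toy: state = (electron density anomaly, its trend)
F = np.array([[1.0, 1.0], [0.0, 0.95]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])        # only the density is observed (TEC-like)
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
for z in [1.1, 1.3, 0.9, 1.6]:    # synthetic observations
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print("analysis state:", x)
```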
NASA Astrophysics Data System (ADS)
Zlotnik, Sergio
2017-04-01
Information provided by visualisation environments can be greatly increased if the data shown are combined with relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g. Finite Elements, Finite Volumes, Finite Differences, etc.). However, all these traditional techniques require very large computational resources (particularly in 3D), making them useless in a real-time visualization environment - such as VR - because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in one-thousandth of a second, making the solver ideal to be embedded into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems and some simple coupled problems.
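The offline/online split can be illustrated with a projection-based reduced-order model for a 1D parametric diffusion-reaction problem; this is a generic POD/reduced-basis sketch under assumed discretization, not the authors' solver.

```python
import numpy as np

# Model problem: (-d2/dx2 + mu) u = 1 on (0,1), u(0)=u(1)=0, parameter mu.
n = 400
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # stiffness matrix
M = np.eye(n)                                            # reaction (identity)
f = np.ones(n)

def full_solve(mu):                                      # the "expensive" solver
    return np.linalg.solve(K + mu * M, f)

# offline phase (run once): snapshots over the parameter family + POD basis
snaps = np.array([full_solve(mu) for mu in np.linspace(0.1, 50, 20)]).T
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
V = U[:, :5]                                             # 5 modes suffice here
Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f           # precomputed once

def rom_solve(mu):                                       # online: a 5x5 solve
    return V @ np.linalg.solve(Kr + mu * Mr, fr)

mu = 7.3
err = np.linalg.norm(rom_solve(mu) - full_solve(mu)) / np.linalg.norm(full_solve(mu))
print("relative ROM error: %.2e" % err)
```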
Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper
2017-01-23
Advancements in geographic information systems over the past two decades have increased the specificity with which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compare in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally developed set of objective BEMs using three different spatial buffering techniques was used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage', 'detailed-trimmed', and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar across each buffering technique. The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.
Neural and Neural Gray-Box Modeling for Entry Temperature Prediction in a Hot Strip Mill
NASA Astrophysics Data System (ADS)
Barrios, José Angel; Torres-Alvarado, Miguel; Cavazos, Alberto; Leduc, Luis
2011-10-01
In hot strip mills, initial controller set points have to be calculated before the steel bar enters the mill. Calculations rely on the good knowledge of rolling variables. Measurements are available only after the bar has entered the mill, and therefore they have to be estimated. Estimation of process variables, particularly that of temperature, is of crucial importance for the bar front section to fulfill quality requirements, and the same must be performed in the shortest possible time to preserve heat. Currently, temperature estimation is performed by physical modeling; however, it is highly affected by measurement uncertainties, variations in the incoming bar conditions, and final product changes. In order to overcome these problems, artificial intelligence techniques such as artificial neural networks and fuzzy logic have been proposed. In this article, neural network-based systems, including neural-based Gray-Box models, are applied to estimate scale breaker entry temperature, given its importance, and their performance is compared to that of the physical model used in plant. Several neural systems and several neural-based Gray-Box models are designed and tested with real data. Taking advantage of the flexibility of neural networks for input incorporation, several factors which are believed to have influence on the process are also tested. The systems proposed in this study were proven to have better performance indexes and hence better prediction capabilities than the physical models currently used in plant.
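A minimal sketch of the gray-box idea, keeping a crude physical cooling law and letting a neural network learn only its residual, is given below; the variables and the "physics" are invented for illustration and do not reflect the plant model in the article.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def physical_model(T_furnace, transit_time):
    return T_furnace - 1.8 * transit_time          # crude linear heat-loss law

# synthetic data: the true process has an extra effect the physics ignores
n = 500
T_furnace = rng.uniform(1100, 1250, n)             # deg C
transit = rng.uniform(20, 60, n)                   # s
scale_thickness = rng.uniform(0.01, 0.05, n)       # unmodeled factor
T_true = (T_furnace - 1.8 * transit
          - 400 * scale_thickness + rng.normal(0, 2, n))

X = np.column_stack([T_furnace, transit, scale_thickness])
residual = T_true - physical_model(T_furnace, transit)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X, residual)                               # NN learns what physics misses

T_pred = physical_model(T_furnace, transit) + net.predict(X)
print("gray-box RMSE: %.2f deg C" % np.sqrt(np.mean((T_pred - T_true) ** 2)))
```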
NASA Technical Reports Server (NTRS)
Distefano, S.; Rameshan, R.; Fitzgerald, D. J.
1991-01-01
Amorphous iron- and titanium-based alloys containing various amounts of chromium, phosphorus, and boron exhibit high corrosion resistance. Some physical properties of Fe- and Ti-based metallic alloy films deposited on a glass substrate by a dc-magnetron sputtering technique are reported. The films were characterized using differential scanning calorimetry, stress analysis, SEM, XRD, SIMS, electron microprobe, and potentiodynamic polarization techniques.
Mathematical modeling of molecular diffusion through mucus
Cu, Yen; Saltzman, W. Mark
2008-01-01
The rate of molecular transport through the mucus gel can be an important determinant of efficacy for therapeutic agents delivered by oral, intranasal, intravaginal/rectal, and intraocular routes. Transport through mucus can be described by mathematical models based on principles of physical chemistry and known characteristics of the mucus gel, its constituents, and of the drug itself. In this paper, we review mathematical models of molecular diffusion in mucus, as well as the techniques commonly used to measure diffusion of solutes in the mucus gel, mucus gel mimics, and mucosal epithelia. PMID:19135488
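Two textbook relations give a concrete sense of the model class reviewed: Stokes-Einstein for free diffusion and an obstruction-scaling correction for the gel. These are standard forms, not equations quoted from this review; the Amsden-type expression below (solute radius r, fiber radius r_f, mesh spacing ξ) is one common choice.

```latex
% Stokes--Einstein free diffusion of a solute with hydrodynamic radius r:
D_0 = \frac{k_B T}{6 \pi \eta r}

% One common obstruction-scaling form for hindered diffusion in a fiber gel
% (assumed textbook form, not taken from this review):
\frac{D}{D_0} \approx \exp\!\left[ -\pi \left( \frac{r + r_f}{\xi + 2 r_f} \right)^{2} \right]
```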
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
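A sketch of the linear signal-dependent noise model and its use in whitening before a matched-filter test follows; the sensor constants a and b and the target signature are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear signal-dependent noise model: var(n) = a + b * signal (Poisson-like).
a, b = 0.5, 0.02                       # hypothetical sensor constants
bands = 100
background = rng.uniform(50, 200, bands)
target = background + 3.0              # weak additive target spectrum

def measure(spectrum):
    sigma = np.sqrt(a + b * spectrum)  # noise std grows with signal level
    return spectrum + sigma * rng.normal(size=spectrum.size)

# Whiten with the signal-dependent variances before the matched-filter test;
# ignoring the b*signal term would mis-weight the bright bands.
x = measure(target)
s = target - background                # known target signature (idealized)
w = 1.0 / (a + b * background)         # inverse noise variance per band
score = np.sum(w * s * (x - background)) / np.sqrt(np.sum(w * s**2))
print("whitened matched-filter score: %.2f" % score)
```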
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools coupling support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
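The BSS step can be imitated with off-the-shelf tools; the sketch below uses scikit-learn's plain NMF and k-means on synthetic well data, whereas the authors' algorithm uses a customized coupling of the two.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

# Synthetic hydrogeochemistry: 50 wells x 8 analytes, mixed from 3 sources.
true_sources = rng.random((3, 8))                  # end-member compositions
mixing = rng.dirichlet(np.ones(3), size=50)        # per-well mixing ratios
data = mixing @ true_sources + 0.01 * rng.random((50, 8))

# Blind source separation via NMF (non-negativity suits concentrations)...
nmf = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = nmf.fit_transform(data)                        # per-well source weights
H = nmf.components_                                # recovered source signatures

# ...then k-means on the mixing weights groups wells by dominant source
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(W)
print("wells per cluster:", np.bincount(labels))
```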
Social cognitive perspective of gender disparities in undergraduate physics
NASA Astrophysics Data System (ADS)
Kelly, Angela M.
2016-12-01
[This paper is part of the Focused Collection on Gender in Physics.] This article synthesizes sociopsychological theories and empirical research to establish a framework for exploring causal pathways and targeted interventions for the low representation of women in post-secondary physics. The rationale for this article is based upon disproportionate representation among undergraduate physics majors in the United States; women earned only 19.7% of physics undergraduate degrees in 2012. This disparity has been attributed to a variety of factors, including unwelcoming classroom atmospheres, low confidence and self-efficacy, and few female role models in physics academic communities. Recent empirical studies have suggested gender disparities in physics and related STEM fields may be more amenable to social cognitive interventions than previously thought. Social psychologists have found that women improved physics self-concept when adopting a malleable view of intelligence, when they received support and encouragement from family and teachers, and when they experienced interactive learning techniques in communal environments. By exploring research-based evidence for strategies to support women in physics, precollege and university faculty and administrators may apply social cognitive constructs to improve the representation of women in the field.
NASA Astrophysics Data System (ADS)
Hapca, Simona
2015-04-01
Many soil properties and functions emerge from interactions of physical, chemical and biological processes at microscopic scales, which can be understood only by integrating techniques that traditionally are developed within separate disciplines. While recent advances in imaging techniques, such as X-ray computed tomography (X-ray CT), offer the possibility to reconstruct the 3D physical structure at fine resolutions, for the distribution of chemicals in soil, existing methods, based on scanning electron microscopy (SEM) and energy dispersive X-ray detection (EDX), allow for characterization of the chemical composition only on 2D surfaces. At present, direct 3D measurement techniques are still lacking; sequential sectioning of soils, followed by 2D mapping of chemical elements and interpolation to 3D, is an alternative which is explored in this study. Specifically, we develop an integrated experimental and theoretical framework which combines the 3D X-ray CT imaging technique with 2D SEM-EDX and uses spatial statistics methods to map the chemical composition of soil in 3D. The procedure involves three stages: (1) scanning a resin-impregnated soil cube by X-ray CT, followed by precision cutting to produce parallel thin slices, the surfaces of which are scanned by SEM-EDX; (2) alignment of the 2D chemical maps within the internal 3D structure of the soil cube; and (3) development of spatial statistics methods to predict the chemical composition of 3D soil based on the observed 2D chemical and 3D physical data. Specifically, three statistical models consisting of a regression tree, a regression-tree-kriging model, and a cokriging model were used to predict the 3D spatial distribution of carbon, silicon, iron and oxygen in soil, these chemical elements showing a good spatial agreement between the X-ray grayscale intensities and the corresponding 2D SEM-EDX data. Due to the spatial correlation between the physical and chemical data, the regression-tree model showed great potential in predicting chemical composition, in particular for iron, which is generally sparsely distributed in soil. For carbon, silicon and oxygen, which are more densely distributed, the additional kriging of the regression tree residuals improved the prediction significantly, whereas prediction based on cokriging was less consistent across replicates, underperforming regression-tree kriging. The present study shows great potential in integrating geo-statistical methods with imaging techniques to unveil the 3D chemical structure of soil at very fine scales, the framework being suitable to be further applied to other types of imaging data such as images of biological thin sections for characterization of microbial distribution. Key words: X-ray CT, SEM-EDX, segmentation techniques, spatial correlation, 3D soil images, 2D chemical maps.
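The regression-tree-kriging idea can be sketched with a tree plus a Gaussian process on the residuals (a GP with an RBF kernel is a close analogue of ordinary kriging, used here as a stand-in); all data below are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)

# Stand-in data: X-ray grayscale intensity + 3D coordinates -> element content.
n = 400
xyz = rng.uniform(0, 1, (n, 3))
gray = rng.uniform(0, 255, n)
carbon = 0.1 * gray / 255 + 0.05 * np.sin(6 * xyz[:, 2]) + rng.normal(0, .005, n)

X = np.column_stack([gray, xyz])

# Step 1: a regression tree captures the grayscale-composition relation
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, carbon)
resid = carbon - tree.predict(X)

# Step 2: "kriging" of the tree residuals over 3D space via a GP (RBF kernel)
gp = GaussianProcessRegressor(kernel=1.0 * RBF(0.2), alpha=1e-4).fit(xyz, resid)

def predict(gray_new, xyz_new):
    Xn = np.column_stack([gray_new, xyz_new])
    return tree.predict(Xn) + gp.predict(xyz_new)

pred = predict(gray, xyz)
print("RMSE with residual kriging: %.4f" % np.sqrt(np.mean((pred - carbon) ** 2)))
```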
Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.
2015-01-01
We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397
NASA Technical Reports Server (NTRS)
Yueh, Simon H.
2004-01-01
Active and passive microwave remote sensing techniques have been investigated for the remote sensing of ocean surface wind and salinity. We revised an ocean surface spectrum using the CMOD-5 geophysical model function (GMF) for the European Remote Sensing (ERS) C-band scatterometer and the Ku-band GMF for the NASA SeaWinds scatterometer. The predictions of microwave brightness temperatures from this model agree well with satellite, aircraft and tower-based microwave radiometer data. This suggests that the impact of surface roughness on microwave brightness temperatures and radar scattering coefficients of sea surfaces can be consistently characterized by a roughness spectrum, providing physical basis for using combined active and passive remote sensing techniques for ocean surface wind and salinity remote sensing.
Intelligent fault management for the Space Station active thermal control system
NASA Technical Reports Server (NTRS)
Hill, Tim; Faltisco, Robert M.
1992-01-01
The Thermal Advanced Automation Project (TAAP) approach and architecture are described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
NASA Technical Reports Server (NTRS)
Goodrich, Charles C.
1993-01-01
The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and through it the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve month grant period are discussed below.
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry, and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real time on large, complex 3D scenes.
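The synthesis half of the problem is classically handled with modal models; a minimal damped-sinusoid sketch is shown below with invented modal parameters, far simpler than the perceptually driven techniques of the dissertation.

```python
import numpy as np

SR = 44100  # audio sample rate, Hz

def modal_impact(freqs_hz, dampings, gains, dur=1.0, sr=SR):
    """Sum of exponentially damped sinusoids -- the standard modal model for
    the impact sound of a vibrating rigid object (illustrative parameters)."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs_hz, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

# a small struck-bar-like object: a few modes, higher modes decay faster
clip = modal_impact(freqs_hz=[440, 1170, 2280],
                    dampings=[6.0, 14.0, 30.0],
                    gains=[1.0, 0.5, 0.25])
print(clip.shape)
```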
NASA Technical Reports Server (NTRS)
Moin, Parviz; Spalart, Philippe R.
1987-01-01
The use of simulation databases for the examination of turbulent flows is an effective research tool. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers, with a limited field of view and contamination caused by time-history effects. Computed flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation databases also provide much-needed information for phenomenological turbulence modeling. Three-dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulations does not fully address the current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation databases in studies of the mechanics of turbulent flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Owen, Steven J.; Abdeljawad, Fadi F.
In order to better incorporate microstructures in continuum scale models, we use a novel finite element (FE) meshing technique to generate three-dimensional polycrystalline aggregates from a phase field grain growth model of grain microstructures. The proposed meshing technique creates hexahedral FE meshes that capture smooth interfaces between adjacent grains. Three-dimensional realizations of grain microstructures from the phase field model are used in crystal plasticity-finite element (CP-FE) simulations of polycrystalline α-iron. We show that the interface conformal meshes significantly reduce artificial stress localizations in voxelated meshes that exhibit the so-called "wedding cake" interfaces. This framework provides a direct link between two mesoscale models - phase field and crystal plasticity - and for the first time allows mechanics simulations of polycrystalline materials using three-dimensional hexahedral finite element meshes with realistic topological features.
Prakash, Punit; Salgaonkar, Vasant A.; Diederich, Chris J.
2014-01-01
Endoluminal and catheter-based ultrasound applicators are currently under development and in clinical use for minimally invasive hyperthermia and thermal ablation of various tissue targets. Computational models play a critical role in device design and optimization, assessment of therapeutic feasibility and safety, devising treatment monitoring and feedback control strategies, and performing patient-specific treatment planning with this technology. The critical aspects of theoretical modeling, applied specifically to endoluminal and interstitial ultrasound thermotherapy, are reviewed. Principles and practical techniques for modeling acoustic energy deposition, bioheat transfer, thermal tissue damage, and dynamic changes in the physical and physiological state of tissue are reviewed. The integration of these models and applications of simulation techniques in identification of device design parameters, development of real-time feedback-control platforms, assessment of the quality and safety of treatment delivery strategies, and optimization of inverse treatment plans are presented. PMID:23738697
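For concreteness, the bioheat-transfer component of such models is almost always built on the Pennes equation, shown below; the plane-wave absorption source Q_ac ≈ 2αI is a common simplification, not a statement of this review's specific implementation.

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  - \omega_b c_b \left( T - T_a \right)
  + Q_{ac}, \qquad Q_{ac} \approx 2 \alpha I
% \rho, c, k : tissue density, specific heat, thermal conductivity
% \omega_b, c_b, T_a : blood perfusion rate, blood specific heat, arterial temperature
% \alpha, I : acoustic absorption coefficient and acoustic intensity
```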
Simulation of 100-300 GHz solid-state harmonic sources
NASA Technical Reports Server (NTRS)
Zybura, Michael F.; Jones, J. Robert; Jones, Stephen H.; Tait, Gregory B.
1995-01-01
Accurate and efficient simulations of the large-signal time-dependent characteristics of second-harmonic Transferred Electron Oscillators (TEO's) and Heterostructure Barrier Varactor (HBV) frequency triplers have been obtained. This is accomplished by using a novel and efficient harmonic-balance circuit analysis technique which facilitates the integration of physics-based hydrodynamic device simulators. The integrated hydrodynamic device/harmonic-balance circuit simulators allow TEO and HBV circuits to be co-designed from both a device and a circuit point of view. Comparisons have been made with published experimental data for both TEO's and HBV's. For TEO's, excellent correlation has been obtained at 140 GHz and 188 GHz in second-harmonic operation. Excellent correlation has also been obtained for HBV frequency triplers operating near 200 GHz. For HBV's, both a lumped quasi-static equivalent circuit model and the hydrodynamic device simulator have been linked to the harmonic-balance circuit simulator. This comparison illustrates the importance of representing active devices with physics-based numerical device models rather than analytical device models.
Spectrum simulation in DTSA-II.
Ritchie, Nicholas W M
2009-10-01
Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy-to-use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.
IPA (v1): a framework for agent-based modelling of soil water movement
NASA Astrophysics Data System (ADS)
Mewes, Benjamin; Schumann, Andreas H.
2018-06-01
In the last decade, agent-based modelling (ABM) became a popular modelling technique in social sciences, medicine, biology, and ecology. ABM was designed to simulate systems that are highly dynamic and sensitive to small variations in their composition and their state. As hydrological systems, and natural systems in general, often show dynamic and non-linear behaviour, ABM can be an appropriate way to model these systems. Nevertheless, only a few studies have utilized the ABM method for process-based modelling in hydrology. The percolation of water through the unsaturated soil is highly responsive to the current state of the soil system; small variations in composition lead to major changes in the transport system. Hence, we present a new approach for modelling the movement of water through a soil column: autonomous water agents that transport water through the soil while interacting with their environment as well as with other agents under physical laws.
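A minimal water-agent sketch conveys the flavor of the approach; the layer capacity and percolation probabilities below are invented, and the IPA framework's actual interaction rules are richer.

```python
import numpy as np

rng = np.random.default_rng(8)

class WaterAgent:
    """Minimal water 'parcel' percolating down a 1D soil column (sketch only)."""
    def __init__(self, layer=0):
        self.layer = layer

n_layers = 10
capacity = 5                                   # max agents per layer (illustrative)
p_down = np.linspace(0.9, 0.4, n_layers)       # per-layer chance to move down

agents = [WaterAgent() for _ in range(30)]     # a rainfall pulse at the surface
drained = 0
for step in range(200):
    if not agents:
        break
    occ = np.bincount([a.layer for a in agents], minlength=n_layers)
    for a in agents[:]:
        if rng.random() < p_down[a.layer]:
            nxt = a.layer + 1
            if nxt >= n_layers:
                agents.remove(a); drained += 1         # leaves the column bottom
            elif occ[nxt] < capacity:
                occ[a.layer] -= 1; occ[nxt] += 1       # percolate if pore space
                a.layer = nxt

print("drained after 200 steps:", drained, " still stored:", len(agents))
```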
NASA Astrophysics Data System (ADS)
McKane, Alan
2003-12-01
This is a book about the modelling of complex systems and, unlike many books on this subject, concentrates on the discussion of specific systems and gives practical methods for modelling and simulating them. This is not to say that the author does not devote space to the general philosophy and definition of complex systems and agent-based modelling, but the emphasis is definitely on the development of concrete methods for analysing them. This is, in my view, to be welcomed and I thoroughly recommend the book, especially to those with a theoretical physics background who will be very much at home with the language and techniques which are used. The author has developed a formalism for understanding complex systems which is based on the Langevin approach to the study of Brownian motion. This is a mesoscopic description; details of the interactions between the Brownian particle and the molecules of the surrounding fluid are replaced by a randomly fluctuating force. Thus all microscopic detail is replaced by a coarse-grained description which encapsulates the essence of the interactions at the finer level of description. In a similar way, the influences on Brownian agents in a multi-agent system are replaced by stochastic influences which sum up the effects of these interactions on a finer scale. Unlike Brownian particles, Brownian agents are not structureless particles, but instead have some internal states so that, for instance, they may react to changes in the environment or to the presence of other agents. Most of the book is concerned with developing the idea of Brownian agents using the techniques of statistical physics. This development parallels that for Brownian particles in physics, but the author then goes on to apply the technique to problems in biology, economics and the social sciences. This is a clear and well-written book which is a useful addition to the literature on complex systems. It will be interesting to see if the use of Brownian agents becomes a standard tool in the study of complex systems in the future.
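The Langevin starting point is easy to make concrete; the sketch below integrates an Ornstein-Uhlenbeck-type equation with Euler-Maruyama and adds a toy internal state that modulates each agent's mobility, the minimal step from Brownian particle to Brownian agent.

```python
import numpy as np

rng = np.random.default_rng(9)

# Langevin dynamics: dx/dt = -gamma * x + sqrt(2 D) * xi(t), Euler-Maruyama.
# A "Brownian agent" adds internal state; here a 2-state switch modulating
# mobility (purely illustrative).
n_agents, n_steps, dt = 500, 2000, 0.01
gamma, D = 1.0, 0.5

x = np.zeros(n_agents)
active = np.ones(n_agents, dtype=bool)         # internal state per agent

for _ in range(n_steps):
    Deff = np.where(active, D, 0.1 * D)        # state-dependent noise strength
    x += -gamma * x * dt + np.sqrt(2 * Deff * dt) * rng.normal(size=n_agents)
    flip = rng.random(n_agents) < 0.01 * dt    # rare spontaneous state switches
    active ^= flip

# the stationary variance of the all-active Langevin process is D/gamma
print("sample var: %.3f  (theory for all-active: %.3f)" % (x.var(), D / gamma))
```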
NASA Astrophysics Data System (ADS)
McGinty, A. B.
1982-04-01
Contents: The Air Force Geophysics Laboratory; Aeronomy Division--Upper Atmosphere Composition, Middle Atmosphere Effects, Atmospheric UV Radiation, Satellite Accelerometer Density Measurement, Theoretical Density Studies, Chemical Transport Models, Turbulence and Forcing Functions, Atmospheric Ion Chemistry, Energy Budget Campaign, Kwajalein Reference Atmospheres, 1979, Satellite Studies of the Neutral Atmosphere, Satellite Studies of the Ionosphere, Aerospace Instrumentation Division--Sounding Rocket Program, Satellite Support, Rocket and Satellite Instrumentation; Space Physics Division--Solar Research, Solar Radio Research, Environmental Effects on Space Systems, Solar Proton Event Studies, Defense Meteorological Satellite Program, Ionospheric Effects Research, Spacecraft Charging Technology; Meteorology Division--Cloud Physics, Ground-Based Remote-Sensing Techniques, Mesoscale Observing and Forecasting, Design Climatology, Aircraft Icing Program, Atmospheric Dynamics; Terrestrial Sciences Division--Geodesy and Gravity, Geokinetics; Optical Physics Division--Atmospheric Transmission, Remote Sensing, Infrared Background; and Appendices.
Analysis of Ground Motion from An Underground Chemical Explosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, Arben; Mellors, Robert J.; Walter, William R.
2015-09-08
In this paper we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3, recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01-10 Hz frequency range, at epicentral distances of up to 1 km. We used two simulation techniques. One is based on the empirical isotropic Mueller-Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source. However, to match the observations, additional shear waves from scattering, including surface topography and heterogeneous shallow structure, contributed to the amplification of far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan
2017-09-01
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. Twenty-four synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram-affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule's location and orientation. Virtual nodules were voxelized, partial-volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A and image-based Technique B. A third method, Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was also tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed-effects regression model based on the mean, standard deviation, and coefficient of variation (Mean_RHD, STD_RHD and CV_RHD) of the regional Hausdorff distance. Overall, there was close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules, with minimal statistical significance in CV_RHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
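As an aside, the symmetric Hausdorff distance used here to quantify nodule deformation can be computed directly with SciPy; the sketch below uses randomly generated stand-in point clouds rather than the study's nodule surfaces, and the regional statistics (Mean_RHD, etc.) would be computed by repeating this over surface regions:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
# stand-ins for surface point clouds of a CT-derived and a virtual nodule
nodule_ct = rng.normal(size=(500, 3))
nodule_virtual = nodule_ct + rng.normal(scale=0.05, size=(500, 3))

d_ab = directed_hausdorff(nodule_ct, nodule_virtual)[0]
d_ba = directed_hausdorff(nodule_virtual, nodule_ct)[0]
hausdorff = max(d_ab, d_ba)  # symmetric Hausdorff distance
print(f"Hausdorff distance: {hausdorff:.4f}")
```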
Bayesian Approaches for Model and Multi-mission Satellite Data Fusion
NASA Astrophysics Data System (ADS)
Khaki, M.; Forootan, E.; Awange, J.; Kuhn, M.
2017-12-01
Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations with new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties of (unknown) empirical model parameters, and the absence of high-resolution (both spatial and temporal) data. Data assimilation, however, requires knowledge of the physical processes of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach that can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay-coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using Gravity Recovery And Climate Experiment (GRACE) terrestrial water storage (TWS) data, as well as surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with results from the more traditional assimilation scheme based on the Ensemble Square-Root Filter (EnSRF) technique (Khaki et al., 2017), and both are evaluated against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
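The delay-coordinate (Takens) reconstruction underlying the Kalman-Takens formulation can be sketched in a few lines; this is a generic illustration with arbitrary test data, not the authors' implementation:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Takens delay-coordinate reconstruction: map a scalar series x
    to vectors (x[t], x[t-tau], ..., x[t-(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)][::-1])

t = np.linspace(0, 50, 2000)
series = np.sin(t) + 0.1 * np.random.default_rng(2).standard_normal(t.size)
states = delay_embed(series, dim=3, tau=5)
print(states.shape)  # (1990, 3) reconstructed state vectors
```

These reconstructed state vectors stand in for the unavailable model dynamics when the Kalman update is applied.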
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-01-01
Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated the relationship between childhood object control skill proficiency (a composite score of kick, catch and overhand throw) and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%), with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148
Tipping point analysis of atmospheric oxygen concentration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livina, V. N.; Forbes, A. B.; Vaz Martins, T. M.
2015-03-15
We apply tipping point analysis to nine observational oxygen-concentration records around the globe, analyse their dynamics, and perform projections under possible future scenarios, including those leading to oxygen deficiency in the atmosphere. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated using Bayesian and wavelet techniques.
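For illustration, one widely used tipping-point indicator is rising lag-1 autocorrelation in a sliding window; this generic sketch is not the authors' Bayesian/wavelet estimator, and the test series is synthetic:

```python
import numpy as np

def lag1_autocorrelation(window):
    a, b = window[:-1], window[1:]
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def sliding_indicator(series, width=200):
    """Lag-1 autocorrelation in a sliding window; a sustained rise
    towards 1 is a classic early-warning signal of a tipping point."""
    return np.array([lag1_autocorrelation(series[i : i + width])
                     for i in range(len(series) - width)])

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2000)) * 0.01 + rng.standard_normal(2000)
print(sliding_indicator(x)[:5])
```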
Plan View Pattern Control for Steel Plates through Constrained Locally Weighted Regression
NASA Astrophysics Data System (ADS)
Shigemori, Hiroyasu; Nambu, Koji; Nagao, Ryo; Araki, Tadashi; Mizushima, Narihito; Kano, Manabu; Hasebe, Shinji
A technique is proposed for identifying the parameters of a locally weighted regression model, using foresight information on the physical properties of the object of interest as constraints. The method was applied to plan-view pattern control of steel plates, and a reduction of shape nonconformity (crop) at the plate head end was confirmed by computer simulation based on real operation data.
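A minimal, unconstrained locally weighted regression looks as follows; the paper's contribution is to add physical-property constraints to this identification step, which the sketch only notes in a comment, and all data here are synthetic:

```python
import numpy as np

def locally_weighted_fit(X, y, x_query, bandwidth=1.0):
    """Weighted least squares around x_query with Gaussian weights;
    the paper additionally enforces foresight physical-property
    constraints, e.g., via constrained least squares (omitted here)."""
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xb = np.hstack([np.ones((len(X), 1)), X])  # intercept term
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return np.concatenate([[1.0], x_query]) @ beta

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(locally_weighted_fit(X, y, np.array([1.0])))  # close to sin(1) ~ 0.84
```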
11th International Conference of Radiation Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-07-18
Topics discussed in the conference included the following: Radiation Physics, Radiation Chemistry and Modelling--Radiation physics and dosimetry; Electron transfer in biological media; Radiation chemistry; Biophysical and biochemical modelling; Mechanisms of DNA damage; Assays of DNA damage; Energy deposition in micro volumes; Photo-effects; Special techniques and technologies; Oxidative damage. Molecular and cellular effects--Photobiology; Cell cycle effects; DNA damage: Strand breaks; DNA damage: Bases; DNA damage: Non-targeted; DNA damage: Other; Chromosome aberrations: clonal; Chromosomal aberrations: non-clonal; Interactions: Heat/Radiation/Drugs; Biochemical effects; Protein expression; Gene induction; Co-operative effects; ``Bystander'' effects; Oxidative stress effects; Recovery from radiation damage. DNA damage and repair--DNA repair genes; DNA repair deficient diseases; DNA repair enzymology; Epigenetic effects on repair; and Ataxia and ATM.
Search for new physics with a dijet plus missing E(T) signature in pp̄ collisions at √s=1.96 TeV.
Aaltonen, T; Adelman, J; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; d'Errico, M; Deviveiros, P-O; Di Canto, A; di Giovanni, G P; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Dube, S; Ebina, K; Elagin, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Hughes, R E; Hurwitz, M; Husemann, U; Hussein, M; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; 
Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Lovas, L; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramanov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Rutherford, B; Saarikko, H; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Simonenko, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Suh, J S; Sukhanov, A; Suslov, I; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wolfe, H; Wright, T; Wu, X; Würthwein, F; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhang, X; Zheng, Y; Zucchelli, S
2010-09-24
We present results of a signature-based search for new physics using a dijet plus missing transverse energy (E(T)) data sample collected in 2 fb⁻¹ of pp̄ collisions at √s=1.96 TeV with the CDF II detector at the Fermilab Tevatron. We observe no significant event excess with respect to the standard model prediction and extract a 95% C.L. upper limit on the cross section times acceptance for a potential contribution from a non-standard-model process. The search is made using novel, data-driven techniques for estimating backgrounds that are applicable to first searches at the LHC.
NASA Astrophysics Data System (ADS)
Riandry, M. A.; Ismet, I.; Akhsan, H.
2017-09-01
This study aims to produce a valid and practical statistical physics course handout on distribution function materials based on STEM. The Rowntree development model is used to produce this handout; it consists of three stages: planning, development and evaluation. In this study, the evaluation stage used Tessmer formative evaluation, which consists of five stages: self-evaluation, expert review, one-to-one evaluation, small group evaluation and field test. However, the handout was tested only on validity and practicality aspects, so the field test stage was not implemented. The data collection techniques used were walkthroughs and questionnaires. Subjects of this study were students of the 6th and 8th semesters of academic year 2016/2017 in the Physics Education Study Program of Sriwijaya University. The average result of the expert review is 87.31% (very valid category), that of the one-to-one evaluation is 89.42%, and that of the small group evaluation is 85.92%. From the one-to-one and small group evaluation stages, the average student response to this handout is 87.67% (very practical category). Based on these results, it can be concluded that the handout is valid and practical.
NASA Astrophysics Data System (ADS)
Tallapragada, V.
2017-12-01
NOAA's Next Generation Global Prediction System (NGGPS) has provided a unique opportunity to develop and implement a non-hydrostatic global model based on the Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed-Sphere (FV3) dynamic core at the National Centers for Environmental Prediction (NCEP), marking a major advance in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized, with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on the Earth System Modeling Framework (ESMF). More sophisticated coupling among the various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual goal of unifying global and regional models will enable operational global models operating at convection-resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on the creation of a Common Community Physics Package (CCPP) and the Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established, with specific roles and responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resource allocation and prioritization. This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications, with special emphasis on the implementation of the NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.
The use of multiple models in case-based diagnosis
NASA Technical Reports Server (NTRS)
Karamouzis, Stamos T.; Feyock, Stefan
1993-01-01
The work described in this paper has as its goal the integration of a number of reasoning techniques into a unified intelligent information system that will aid flight crews with malfunction diagnosis and prognostication. One of these approaches involves using the extensive archive of information contained in aircraft accident reports, along with various models of the aircraft, as the basis for case-based reasoning about malfunctions. Case-based reasoning (CBR) draws conclusions on the basis of similarities between the present situation and prior experience. We maintain that the ability of a CBR program to reason about physical systems is significantly enhanced by the addition of various models. This paper describes the diagnostic concepts implemented in a prototypical case-based reasoner that operates in the domain of in-flight fault diagnosis, the various models used in conjunction with the reasoner's CBR component, and results from a preliminary evaluation.
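The retrieval step at the heart of case-based reasoning can be sketched as nearest-neighbour matching over a case base; the features and diagnoses below are hypothetical, and the paper's system additionally consults physical models before committing to a conclusion:

```python
import numpy as np

# Toy case base: each past incident is a feature vector plus a diagnosis.
case_features = np.array([
    [1.0, 0.0, 0.3],  # hypothetical features, e.g. engine vibration,
    [0.1, 0.9, 0.8],  # fuel-flow anomaly, EGT deviation
    [0.9, 0.1, 0.5],
])
case_labels = ["fan blade damage", "fuel line leak", "bearing wear"]

def retrieve(query, k=1):
    """Nearest-neighbour retrieval: the core similarity step of
    case-based reasoning, before model-based checking is applied."""
    dists = np.linalg.norm(case_features - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(case_labels[i], float(dists[i])) for i in order]

print(retrieve(np.array([0.95, 0.05, 0.4])))
```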
NASA Technical Reports Server (NTRS)
Kim, Young-Joon; Pak, Kyung S.; Dunbar, R. Scott; Hsiao, S. Vincent; Callahan, Philip S.
2000-01-01
Planetary boundary layer (PBL) models are utilized to enhance directional ambiguity removal skill in scatterometer data processing. The ambiguity in wind direction retrieved from scatterometer measurements is removed with the aid of physical directional information obtained from PBL models. This technique is based on the observation that sea level pressure is a scalar and its field is more coherent than the corresponding wind field. An initial wind field obtained from the scatterometer measurements is used to derive a pressure field with a PBL model. After filtering small-scale noise in the derived pressure field, a wind field is generated with an inverted PBL model. This derived wind information is then used to remove wind vector ambiguities in the scatterometer data. It is found that the ambiguity removal skill can be improved when the new technique is used in conjunction with the median filter used for scatterometer wind dealiasing at JPL. The new technique is applied to regions of cyclone systems, which are important for accurate weather prediction but where the errors of ambiguity removal are often large.
Didarloo, Alireza; Ardebili, Hassan Eftekhar; Niknami, Shamsaddin; Hajizadeh, Ebrahim; Alizadeh, Mohammad
2011-01-01
Background Findings of most studies indicate that the only way to control diabetes and prevent its debilitating effects is through the continuous performance of self-care behaviors. Physical activity is a non-pharmacological method of diabetes treatment and, because of its positive effects on diabetic patients, is being increasingly considered by researchers and practitioners. This study aimed at determining factors influencing physical activity among diabetic women in Iran, using the extended theory of reasoned action. Methods A sample of 352 women with type 2 diabetes, referring to a Diabetes Clinic in Khoy, Iran, participated in the study. Appropriate instruments were designed to measure the desired variables (knowledge of diabetes, personal beliefs, subjective norms, perceived self-efficacy, behavioral intention and physical activity behavior). The reliability and validity of the instruments were examined and approved. Statistical analyses were conducted using inferential statistical techniques (independent t-tests, correlations and regressions) with the SPSS package. Results The findings of this investigation indicated that, among the constructs of the model, self-efficacy was the strongest predictor of intentions among women with type 2 diabetes and both directly and indirectly affected physical activity. In addition to self-efficacy, diabetic patients' physical activity was also influenced by other variables of the model and by sociodemographic factors. Conclusion Our findings suggest that the strong ability of the theory of reasoned action, extended by self-efficacy, to forecast and explain physical activity can be a basis for educational intervention. Educational interventions based on the proposed model are necessary for improving diabetic patients' physical activity behavior and controlling the disease. PMID:22111043
Modelling of physical influences in sea level records for vertical crustal movement detection
NASA Technical Reports Server (NTRS)
Anderson, E. G.
1978-01-01
Attempts to specify and evaluate such physical influences are reviewed with the intention of identifying problem areas and promising approaches. An example of linear modelling based on air/water temperatures, atmospheric pressure, river discharges, geostrophic and/or local wind velocities, and including forced periodic terms to allow for the long-period tides and Chandlerian polar motion, is evaluated and applied to monthly mean sea levels recorded in Atlantic Canada. Refinement of the model to admit phase lag in the response to some of the driving phenomena is demonstrated. Spectral analysis of the residuals is employed to assess the model performance. The results and associated statistical parameters are discussed with emphasis on elucidating the sensitivity of the technique for detecting local episodic and secular vertical crustal movements, the problem areas most critical to this type of approach, and possible further developments.
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require an explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory of research.
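The conjectured extension is easy to evaluate numerically because the Gamma function in the unit-sphere surface-area factor is defined for negative non-integer arguments. A sketch for the Laplace fundamental solution, under one common normalization convention (not necessarily the authors'):

```python
import numpy as np
from scipy.special import gamma

def laplace_fundamental_solution(r, d):
    """Fundamental solution of the d-dimensional Laplace equation,
    Phi(r) = r**(2-d) / ((2-d) * omega_d), omega_d = 2*pi**(d/2) / Gamma(d/2).
    The Gamma function lets the formula be evaluated formally at negative,
    non-integer d, in the spirit of the paper's conjecture (sign and
    normalization conventions vary; d != 2 assumed)."""
    omega_d = 2.0 * np.pi ** (d / 2.0) / gamma(d / 2.0)
    return r ** (2.0 - d) / ((2.0 - d) * omega_d)

r = np.array([0.5, 1.0, 2.0])
for d in (3.0, -1.5):  # a standard and a "negative-dimensional" case
    print(d, laplace_fundamental_solution(r, d))
```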
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
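A simplified stand-in for the greedy step, using networkx's planarity test and ranking candidate edges by absolute correlation (the paper scores edges by the resulting model fit rather than raw correlation, so this is only illustrative):

```python
import itertools
import numpy as np
import networkx as nx

def greedy_planar_graph(corr):
    """Greedily add edges in order of |pairwise correlation|, keeping
    the graph planar at every step."""
    n = corr.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: -abs(corr[e]))
    for u, v in edges:
        G.add_edge(u, v)
        if not nx.check_planarity(G)[0]:
            G.remove_edge(u, v)  # undo edges that break planarity
    return G

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 8))
corr = np.corrcoef(X, rowvar=False)
print(greedy_planar_graph(corr).number_of_edges())  # at most 3n - 6 = 18
```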
Applications of molecular modeling in coal research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, G.A.; Faulon, J.L.
Over the past several years, molecular modeling has been applied to study various characteristics of coal molecular structures. Powerful workstations coupled with molecular force-field-based software packages have been used to study coal and coal-related molecules. Early work involved determination of the minimum-energy three-dimensional conformations of various published coal structures (Given, Wiser, Solomon and Shinn), and the dominant role of van der Waals and hydrogen bonding forces in defining the energy-minimized structures. These studies have been extended to explore various physical properties of coal structures, including density, microporosity, surface area, and fractal dimension. Other studies have related structural characteristics to cross-link density and have explored small molecule interactions with coal. Finally, recent studies using a structural elucidation (molecular builder) technique have constructed statistically diverse coal structures based on quantitative and qualitative data on coal and its decomposition products. This technique is also being applied to study coalification processes based on postulated coalification chemistry.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints, so reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.
NASA Astrophysics Data System (ADS)
Conrads, P. A.; Roehl, E. A.
2010-12-01
Natural-resource managers face the difficult problem of controlling the interactions between hydrologic and man-made systems in ways that preserve resources while optimally meeting the needs of disparate stakeholders. Finding success depends on obtaining and employing detailed scientific knowledge about the cause-effect relations that govern the physics of these hydrologic systems. This knowledge is most credible when derived from large field-based datasets that encompass the wide range of variability in the parameters of interest. The means of converting data into knowledge of the hydrologic system often involves developing computer models that predict the consequences of alternative management practices to guide resource managers towards the best path forward. Complex hydrologic systems are typically modeled using computer programs that implement traditional, generalized, physical equations, which are calibrated to match the field data as closely as possible. This type of model commonly is limited in terms of demonstrable predictive accuracy, development time, and cost. The science of data mining presents a powerful complement to physics-based models. Data mining is a relatively new science that assists in converting large databases into knowledge and is uniquely able to leverage the real-time, multivariate data now being collected for hydrologic systems. In side-by-side comparisons with state-of-the-art physics-based hydrologic models, the authors have found data-mining solutions have been substantially more accurate, less time consuming to develop, and embeddable into spreadsheets and sophisticated decision support systems (DSS), making them easy to use by regulators and stakeholders. Three data-mining applications will be presented that demonstrate how data-mining techniques can be applied to existing environmental databases to address regional concerns of long-term consequences. In each case, data were transformed into information, and ultimately, into knowledge. In each case, DSSs were developed that facilitated the use of simulation models and analysis of model output to a broad range of end users with various technical abilities. When compared to other modeling projects of comparable scope and complexity, these DSSs were able to pass through needed technical reviews much more quickly. Unlike programs such as finite-element flow models, DSSs are by design open systems that are easy to use and readily disseminated directly to decision makers. The DSSs provide direct coupling of predictive models with the real-time databases that drive them, graphical user interfaces for point-and-click program control, and streaming displays of numerical and graphical results so that users can monitor the progress of long-term simulations. Customizations for specific problems include numerical optimization loops that invert predictive models; integrations with a three-dimensional finite-element flow model, GIS packages, and a plant ecology model; and color contouring of simulation output data.
PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments
NASA Astrophysics Data System (ADS)
Gaede, F.; Hegner, B.; Mato, P.
2017-10-01
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from the LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) structures wherever possible, while avoiding deep object hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as support for inter-object relations and automatic memory management, as well as a Python interface. To simplify the creation of efficient data models, PODIO employs code generation from a simple YAML-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example by giving basic support for vectorization techniques.
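The flavour of this code generation can be sketched in a few lines of Python; the embedded YAML below is schematic and simplified, not PODIO's exact markup syntax, and the emitted struct is only a minimal stand-in for the generated classes:

```python
import yaml  # PyYAML

# Schematic datatype description, loosely modelled on PODIO's YAML
# markup (the member syntax here is an illustrative simplification).
MODEL = """
ExampleHit:
  description: "A calorimeter hit"
  members:
    - {type: double, name: x}
    - {type: double, name: y}
    - {type: double, name: energy}
"""

def generate_pod_struct(name, spec):
    """Emit a plain-old-data C++ struct, the flavour of code a
    PODIO-style generator produces from the markup."""
    lines = [f"// {spec['description']}", f"struct {name} {{"]
    lines += [f"  {m['type']} {m['name']};" for m in spec["members"]]
    lines.append("};")
    return "\n".join(lines)

for name, spec in yaml.safe_load(MODEL).items():
    print(generate_pod_struct(name, spec))
```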
Neutron Reflectivity as a Tool for Physics-Based Studies of Model Bacterial Membranes.
Barker, Robert D; McKinley, Laura E; Titmuss, Simon
2016-01-01
The principles of neutron reflectivity and its application as a tool to provide structural information at the (sub-) molecular unit length scale from models for bacterial membranes are described. The model membranes can take the form of a monolayer for a single leaflet spread at the air/water interface, or bilayers of increasing complexity at the solid/liquid interface. Solid-supported bilayers constrain the bilayer to 2D but can be used to characterize interactions with antimicrobial peptides and benchmark high throughput lab-based techniques. Floating bilayers allow for membrane fluctuations, making the phase behaviour more representative of native membranes. Bilayers of varying levels of compositional accuracy can now be constructed, facilitating studies with aims that range from characterizing the fundamental physical interactions, through to the characterization of accurate mimetics for the inner and outer membranes of Gram-negative bacteria. Studies of the interactions of antimicrobial peptides with monolayer and bilayer models for the inner and outer membranes have revealed information about the molecular control of the outer membrane permeability, and the mode of interaction of antimicrobials with both inner and outer membranes.
NASA Astrophysics Data System (ADS)
Jin, Hao; Xu, Rui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2017-10-01
To support China's Mars exploration missions, automated mission planning is required to enhance the security and robustness of deep space probes. Deep space mission planning requires modeling of complex operations constraints and focuses on the temporal state transitions of the involved subsystems. State transitions are ubiquitous in physical systems but have proven elusive to capture in knowledge descriptions. We introduce a modeling approach that copes with these difficulties by taking state transitions into consideration. The key technique we build on is the notion of extended states and state transition graphs. Furthermore, a heuristic based on state transition graphs is proposed to avoid redundant work. Finally, we run comprehensive experiments on selected domains, in which our techniques show excellent performance.
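A minimal illustration of planning over a state transition graph follows; the spacecraft states and actions are hypothetical, and the paper's extended-state formalism and heuristic are far richer than this breadth-first search:

```python
from collections import defaultdict

class StateTransitionGraph:
    """Toy version of the idea: model a subsystem as states plus a
    transition graph, and plan by searching that graph."""
    def __init__(self):
        self.edges = defaultdict(list)  # state -> [(action, next_state)]

    def add(self, s, action, t):
        self.edges[s].append((action, t))

    def plan(self, start, goal):
        # breadth-first search returns the shortest action sequence
        frontier, seen = [(start, [])], {start}
        while frontier:
            s, path = frontier.pop(0)
            if s == goal:
                return path
            for action, t in self.edges[s]:
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, path + [action]))
        return None

g = StateTransitionGraph()
g.add("camera_off", "power_on", "camera_standby")
g.add("camera_standby", "warm_up", "camera_ready")
g.add("camera_ready", "capture", "image_stored")
print(g.plan("camera_off", "image_stored"))
```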
ERIC Educational Resources Information Center
Saleh, Salmiza
2012-01-01
The aim of this study was to assess the effectiveness of Brain Based Teaching Approach in enhancing students' scientific understanding of Newtonian Physics in the context of Form Four Physics instruction. The technique was implemented based on the Brain Based Learning Principles developed by Caine & Caine (1991, 2003). This brain compatible…
NASA Astrophysics Data System (ADS)
Mitchell, C. N.; Rankov, N. R.; Bust, G. S.; Miller, E.; Gaussiran, T.; Calfas, R.; Doyle, J. D.; Teig, L. J.; Werth, J. L.; Dekine, I.
2017-07-01
Ionospheric data assimilation is a technique to evaluate the 3-D time varying distribution of electron density using a combination of a physics-based model and observations. A new ionospheric data assimilation method is introduced that has the capability to resolve traveling ionospheric disturbances (TIDs). TIDs are important because they cause strong delay and refraction to radio signals that are detrimental to the accuracy of high-frequency (HF) geolocation systems. The capability to accurately specify the ionosphere through data assimilation can correct systems for the error caused by the unknown ionospheric refraction. The new data assimilation method introduced here uses ionospheric models in combination with observations of HF signals from known transmitters. The assimilation methodology was tested by the ability to predict the incoming angles of HF signals from transmitters at a set of nonassimilated test locations. The technique is demonstrated and validated using observations collected during 2 days of a dedicated campaign of ionospheric measurements at White Sands Missile Range in New Mexico in January 2014. This is the first time that full HF ionospheric data assimilation using an ensemble run of a physics-based model of ionospheric TIDs has been demonstrated. The results show a significant improvement over HF angle-of-arrival prediction using an empirical model and also over the classic method of single-site location using an ionosonde close to the midpoint of the path. The assimilative approach is extendable to include other types of ionospheric measurements.
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, that method made some simplifying assumptions on the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
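The structure of such a model-based iterative reconstruction, i.e., regularized inversion, can be sketched with a toy linear forward model; the paper's anisotropic ultrasonic physics is far richer than the stand-in operator used here:

```python
import numpy as np

def mbir_reconstruct(A, y, lam=0.1, step=1e-3, n_iter=2000):
    """Toy model-based iterative reconstruction: minimize the
    regularized cost ||y - A x||^2 + lam * ||D x||^2 by gradient
    descent, where A is a (here generic linear) forward model and
    D penalizes roughness in the reconstruction."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)  # first-difference operator
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ D @ x
        x -= step * grad
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((80, 40))                # stand-in forward model
x_true = np.convolve(rng.standard_normal(40), np.ones(5) / 5, mode="same")
y = A @ x_true + 0.05 * rng.standard_normal(80)  # noisy measurements
print(np.linalg.norm(mbir_reconstruct(A, y) - x_true))
```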
Forecasting Lightning Threat using Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.
2008-01-01
Two new approaches are proposed and developed for making time- and space-dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed-phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.
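The two proxy fields can be sketched directly from model output arrays; the fields, units, and grid below are hypothetical stand-ins for WRF output, and the calibration against observed flash-rate densities is omitted:

```python
import numpy as np

def lightning_threat_proxies(rho_w, qi, levels, k15):
    """Two proxies in the spirit of the paper: (1) upward flux of
    precipitating ice at the -15 C model level, (rho * w) * q_ice;
    (2) vertically integrated ice in each grid column. rho_w is the
    product of air density and vertical velocity (kg m-2 s-1), qi is
    ice mixing ratio (kg/kg), levels are model heights in metres,
    k15 is the index of the -15 C level (all hypothetical inputs)."""
    flux_proxy = rho_w[k15] * qi[k15]               # method 1
    dz = np.gradient(levels)
    column_ice = np.tensordot(dz, qi, axes=(0, 0))  # method 2
    return flux_proxy, column_ice

nz, ny, nx = 30, 4, 4
rng = np.random.default_rng(7)
rho_w = rng.uniform(0, 5, (nz, ny, nx))
qi = rng.uniform(0, 2e-3, (nz, ny, nx))
levels = np.linspace(0, 15000, nz)
f1, f2 = lightning_threat_proxies(rho_w, qi, levels, k15=18)
print(f1.shape, f2.shape)  # both (ny, nx) threat maps
```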
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to capture the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
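For reference, a compact additive Holt-Winters recursion of the kind used as the statistical component is sketched below; initialization details vary by implementation, and the synthetic series merely mimics a trend plus a periodic residual:

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=10):
    """Additive Holt-Winters smoothing with season length m: level,
    trend and seasonal components are updated recursively, then
    extrapolated -- the kind of term a hybrid propagator adds on top
    of the analytical approximation to model its residual dynamics."""
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    return np.array([level + (h + 1) * trend + season[(len(y) + h) % m]
                     for h in range(horizon)])

t = np.arange(400)
y = 0.01 * t + np.sin(2 * np.pi * t / 50)  # trend plus periodic residual
print(holt_winters_additive(y, m=50)[:5])
```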
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low-density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique, and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.
Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang
2017-12-12
Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited by the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The performance of the novel technique is further demonstrated in a simply supported beam experiment, in comparison with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, to estimate the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
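A minimal PyTorch sketch of the stated four-layer architecture (two convolutional layers, one fully connected layer, and an output layer) follows; the channel counts, kernel sizes and window length are hypothetical, since the paper's exact hyperparameters are not reproduced here.

```python
# Hedged sketch: map windows of n_ch measured channels x n_t time samples
# to the response window at one unmeasured (virtual) location.
import torch
import torch.nn as nn

class VirtualSensorCNN(nn.Module):
    def __init__(self, n_ch=8, n_t=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_ch, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_t, 64), nn.ReLU(),   # fully connected layer
            nn.Linear(64, n_t),                   # output: response window
        )

    def forward(self, x):            # x: (batch, n_ch, n_t)
        return self.head(self.features(x))

model = VirtualSensorCNN()
y_hat = model(torch.randn(4, 8, 128))   # -> (4, 128)
```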
NASA Technical Reports Server (NTRS)
Weaver, David
2008-01-01
Effectively communicate qualitative and quantitative information orally and in writing. Explain the application of fundamental physical principles to various physical phenomena. Apply appropriate problem-solving techniques to practical and meaningful problems using graphical, mathematical, and written modeling tools. Work effectively in collaborative groups.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9-km² Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large-scale variations in snow depth, while the small-scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54-65% of the observed variance in the depth measurements. The tree-based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree-based modeled depths to produce a combined depth model. The combined depth estimates explained 60-85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow-covered area was determined from high-resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
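The two-stage structure (a tree for large-scale variation, kriging of residuals for small-scale variation) can be sketched as follows with scikit-learn, using Gaussian-process regression on survey coordinates as a stand-in for ordinary kriging; the data are synthetic and all variable choices are illustrative assumptions.

```python
# Hedged sketch of the combined tree + residual-kriging depth model.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X_phys = rng.random((300, 4))   # radiation, elevation, slope, cover (toy)
xy = rng.random((300, 2))       # survey coordinates (toy)
depth = (2.0 * X_phys[:, 1] - X_phys[:, 0]
         + 0.3 * np.sin(8 * xy[:, 0])
         + 0.05 * rng.standard_normal(300))

# Stage 1: regression tree captures large-scale physical structure.
tree = DecisionTreeRegressor(max_depth=4).fit(X_phys, depth)
residual = depth - tree.predict(X_phys)

# Stage 2: spatially interpolate the residuals (kriging analogue).
krig = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(xy, residual)

# Combined estimate = tree prediction + kriged residual.
depth_hat = tree.predict(X_phys) + krig.predict(xy)
```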
Wire Crimp Connectors Verification using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Perey, Daniel F.; Yost, William T.
2007-01-01
The development of a new ultrasonic measurement technique to quantitatively assess wire crimp connections is discussed. The amplitude change of a compressional ultrasonic wave propagating through the junction of a crimp connector and wire is shown to correlate with the results of a destructive pull test, which previously has been used to assess crimp wire junction quality. Various crimp junction pathologies (missing wire strands, incorrect wire gauge, incomplete wire insertion in connector) are ultrasonically tested, and their results are correlated with pull tests. Results show that the ultrasonic measurement technique consistently (as evidenced with pull-testing data) predicts good crimps when ultrasonic transmission is above a certain threshold amplitude level. A physics-based model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model is in agreement within 6% of the ultrasonic measurements. A prototype instrument for applying the technique while wire crimps are installed is also presented.
NASA Astrophysics Data System (ADS)
Jozwiak, Zbigniew Boguslaw
1995-01-01
Ethylene is an important auto-catalytic plant growth hormone. Removal of ethylene from the atmosphere surrounding ethylene-sensitive horticultural products may be very beneficial, allowing an extended period of storage and preventing or delaying the induction of disorders. Various ethylene removal techniques have been studied and put into practice. One technique is based on using low pressure mercury ultraviolet lamps as a source of photochemical energy to initiate chemical reactions that destroy ethylene. Although previous research showed that ethylene disappeared in experiments with mercury ultraviolet lamps, the reactions were not described and the actual cause of ethylene disappearance remained unknown. Proposed causes for this disappearance were the direct action of ultraviolet rays on ethylene, reaction of ethylene with ozone (which is formed when air or gas containing molecular oxygen is exposed to radiation emitted by this type of lamp), or reactions with atomic oxygen leading to formation of ozone. The objective of the present study was to determine the set of physical and chemical actions leading to the disappearance of ethylene from an artificial storage atmosphere under conditions of ultraviolet irradiation. The goal was achieved by developing a static chemical model based on the physical properties of a commercially available ultraviolet lamp, the photochemistry of gases, and the kinetics of chemical reactions. The model was used to perform computer simulations predicting time-dependent concentrations of the chemical species included in the model. Development of the model was accompanied by the design of a reaction chamber used for experimental verification. The model provided a good prediction of the general behavior of the species involved in the chemistry under consideration; however, the model predicted a lower rate of ethylene disappearance than was measured. Possible reasons for the model-experiment disagreement are radiation intensity averaging, the experimental technique, mass transfer in the chamber, and incompleteness of the set of chemical reactions included in the model. The work concludes with guidelines for development of a more complex mathematical model that includes elements of mass transfer inside the reaction chamber, and uses a three-dimensional approach to distribute radiation from the low pressure mercury ultraviolet tube.
Head-mounted active noise control system with virtual sensing technique
NASA Astrophysics Data System (ADS)
Miyazaki, Nobuhiro; Kajikawa, Yoshinobu
2015-03-01
In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
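One common virtual-sensing scheme, the remote microphone technique, illustrates the role of the pre-measured system models; the sketch below estimates the narrowband error at the virtual (eardrum) location from the physical error-microphone phasor through transfer functions measured offline on a HATS. This is a generic illustration under stated assumptions, not necessarily the exact algorithm of the paper, and the complex gains are hypothetical.

```python
# Hedged frequency-domain sketch of virtual-sensor estimation for a
# single narrowband component.
import numpy as np

# Pre-measured offline (with the HATS in place) at the tonal frequency:
H_phys = 0.8 * np.exp(-1j * 0.6)   # primary path -> physical microphone
H_virt = 0.7 * np.exp(-1j * 1.1)   # primary path -> virtual (eardrum) point

def estimate_virtual_error(e_phys):
    """Estimate the complex error phasor at the virtual sensor from the
    physical error-microphone phasor."""
    return e_phys * (H_virt / H_phys)

e_phys = 0.05 * np.exp(1j * 0.3)        # demodulated physical error phasor
e_virt_hat = estimate_virtual_error(e_phys)
# The adaptive feedback controller would then minimize e_virt_hat
# instead of the physical-microphone signal.
```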
NASA Astrophysics Data System (ADS)
Bettencourt, Luis; Kaiser, David
2004-03-01
Based on a historically documented example of scientific discovery - Feynman diagrams as the main calculational tool of theoretical high-energy physics - we map the time evolution of the social network of early adopters in the US, UK, Japan and the USSR. The spread of the technique, measured by the total number of users in each region, is then modelled in terms of epidemic models, highlighting parallel and divergent aspects of this analogy. We also show that transient social arrangements develop as the idea is introduced and learned, which later disappear as the technique becomes common knowledge. This early transient is characterized by anomalously low connectivity-distribution exponents and by high clustering. This interesting early non-equilibrium stage of network evolution is captured by a new dynamical model for network evolution, which coincides in its long-time limit with familiar preferential aggregation dynamics.
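The epidemic analogy can be made concrete with a logistic (deterministic SI-limit) fit to cumulative adopters in one region; the sketch below uses synthetic adoption counts as stand-ins, not the paper's data, and the fitted parameter values are purely illustrative.

```python
# Hedged sketch: fit a logistic adoption curve, the SI-type epidemic
# limit commonly used in idea-diffusion studies.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: saturation level, r: contact rate, t0: midpoint."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1949, 1955, 0.5)
adopters = logistic(years, K=80, r=1.4, t0=1951.2) \
           + np.random.default_rng(2).normal(0, 2, years.size)

(K, r, t0), _ = curve_fit(logistic, years, adopters, p0=(100, 1.0, 1951))
print(f"saturation ~{K:.0f} users, contact rate ~{r:.2f}/yr")
```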
Printing Space: Using 3D Printing of Digital Terrain Models in Geosciences Education and Research
ERIC Educational Resources Information Center
Horowitz, Seth S.; Schultz, Peter H.
2014-01-01
Data visualization is a core component of every scientific project; however, generation of physical models previously depended on expensive or labor-intensive molding, sculpting, or laser sintering techniques. Physical models have the advantage of providing not only visual but also tactile modes of inspection, thereby allowing easier visual…
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
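The optimization skeleton shared by such model-based methods can be sketched in a few lines; the example below minimizes a Tikhonov-regularized least-squares objective by gradient descent with a toy random matrix standing in for the projector, whereas real methods add statistical weights and edge-preserving or prior-image penalties.

```python
# Hedged sketch of the core of model-based iterative reconstruction:
# minimize ||A x - y||^2 + lam ||x||^2 with a toy forward model A.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((200, 100))                 # toy forward model (projector)
x_true = rng.random(100)
y = A @ x_true + 0.01 * rng.standard_normal(200)

lam = 0.1                                  # regularization strength
L = np.linalg.norm(A, 2) ** 2 + lam        # Lipschitz constant of gradient
x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - y) + lam * x     # data-fit term + quadratic prior
    x -= grad / L                          # safe fixed step size
```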
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method has been proposed, which solves pdf transport based on Euler fields and eliminates the necessity to mix Euler and Lagrange techniques or prescribed pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high velocity flow so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danielson, Gary R.; Augustenborg, Elsa C.; Beck, Andrew E.
2010-10-29
The IAEA is challenged with limited availability of human resources for inspection and data analysis while proliferation threats increase. PNNL has a variety of IT solutions and techniques (at varying levels of maturity and development) that take raw data closer to useful knowledge, thereby assisting with and standardizing the analytical processes. This paper highlights some PNNL tools and techniques which are applicable to the international safeguards community, including: • Intelligent in-situ triage of data prior to reliable transmission to an analysis center, resulting in the transmission of smaller and more relevant data sets • Capture of expert knowledge in re-usable search strings tailored to specific mission outcomes • Image-based searching fused with text-based searching • Use of gaming to discover unexpected proliferation scenarios • Process modeling (e.g. Physical Model) as the basis for an information integration portal, which links to data storage locations along with analyst annotations, categorizations, geographic data, search strings and visualization outputs.
Wire Crimp Termination Verification Using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Perey, Daniel F.; Cramer, K. Elliott; Yost, William T.
2007-01-01
The development of a new ultrasonic measurement technique to quantitatively assess wire crimp terminations is discussed. The amplitude change of a compressional ultrasonic wave propagating through the junction of a crimp termination and wire is shown to correlate with the results of a destructive pull test, which is a standard for assessing crimp wire junction quality. Various crimp junction pathologies such as undercrimping, missing wire strands, incomplete wire insertion, partial insulation removal, and incorrect wire gauge are ultrasonically tested, and their results are correlated with pull tests. Results show that the nondestructive ultrasonic measurement technique consistently (as evidenced with destructive testing) predicts good crimps when ultrasonic transmission is above a certain threshold amplitude level. A physics-based model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model is in agreement within 6% of the ultrasonic measurements. A prototype instrument for applying this technique while wire crimps are installed is also presented. The instrument is based on a two-jaw type crimp tool suitable for butt-splice type connections. Finally, an approach for application to multipin indenter type crimps will be discussed.
A physics based approach to the pulse wave velocity prediction in compliant arterial segments.
Liberson, Alexander S; Lillie, Jeffrey S; Day, Steven W; Borkholder, David A
2016-10-03
Pulse wave velocity (PWV) quantification commonly serves as a highly robust prognostic parameter in preventative cardiovascular therapy. Being dependent on arterial elastance, it can serve as a marker of cardiovascular risk. Since it is influenced by blood pressure (BP), the pertaining theory can lay the foundation for developing a technique for noninvasive blood pressure measurement. Previous studies have reported application of PWV, measured noninvasively, for both the estimation of arterial compliance and blood pressure, based on simplified physical or statistical models. A new theoretical model for pulse wave propagation in a compliant arterial segment is presented within the framework of pseudo-elastic deformation of biological tissue undergoing finite deformation. An essential ingredient is the dependence of results on nonlinear aspects of the model: convective fluid phenomena, a hyperelastic constitutive relation, large deformation and a longitudinal pre-stress load. An exact analytical solution for PWV is presented as a function of pressure, flow and pseudo-elastic orthotropic parameters. Results from our model are compared with published in-vivo PWV measurements under diverse physiological conditions. Contributions of each of the nonlinearities are analyzed. It was found that the fully nonlinear model achieves the best match with the experimental data. To retrieve individual vascular information of a patient, the inverse problem of hemodynamics is presented, calculating local orthotropic hyperelastic properties of the arterial wall. The proposed technique can be used for non-invasive assessment of arterial elastance and blood pressure using direct measurement of PWV, accounting for hyperelastic orthotropic properties.
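For orientation, the classical linear, thin-wall baseline that this nonlinear theory generalizes is the Moens-Korteweg relation (the textbook limit, not the paper's closed-form orthotropic result):

$$ c_0 = \sqrt{\frac{E\,h}{2\,\rho\,R}} $$

where $E$ is the Young's modulus of the wall, $h$ the wall thickness, $R$ the vessel radius and $\rho$ the blood density. The paper's exact solution additionally depends on pressure, flow and the pseudo-elastic orthotropic parameters, which is precisely what makes PWV usable as a pressure surrogate.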
NASA Astrophysics Data System (ADS)
Del Gaudio, S.; Lancieri, M.; Hok, S.; Satriano, C.; Chartier, T.; Scotti, O.; Bernard, P.
2016-12-01
Predicting realistic ground motion for potential future earthquakes is a central task for seismologists and the main objective of seismic hazard assessment. On one hand, numerical simulations have become more and more accurate and several different techniques have been developed; on the other hand, ground motion prediction equations (GMPEs) have become a powerful instrument, owing to the great improvement of strong-motion networks, which provide large amounts of data. Nevertheless, GMPEs do not represent the whole variety of source processes, and this can lead to incorrect estimates, especially in near-fault conditions, because of the lack of records of large earthquakes at short distances. In such cases, physics-based ground motion simulations can be a valid tool to complement prediction equations for scenario studies, provided that both source and propagation are accurately described. We present here a comparison between numerical simulations performed in near-fault conditions using two different kinematic source models, which are based on different assumptions and parameterizations: the "k-2 model" and the "fractal model". Wave propagation is taken into account using hybrid Green's functions (HGF), which consist in coupling numerical Green's functions with an empirical Green's function (EGF) approach. The advantage of this technique is that it does not require a very detailed knowledge of the propagation medium, but it does require high quality records of small earthquakes in the target area. The first application we show is to the 2009 M 6.3 L'Aquila earthquake, where the main event records provide a benchmark for the synthetic waveforms. Here we can clearly observe the limitations of these techniques and investigate which physical parameters effectively control the ground motion level. The second application is a blind test on the Upper Rhine Graben (URG), where active faults producing micro-seismic activity are very close to sites of interest, requiring a careful investigation of seismic hazard. Finally, we perform a probabilistic seismic hazard analysis (PSHA) for the URG using numerical simulations to define input ground motion for different scenarios, and compare them with a classical probabilistic study based on GMPEs.
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
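The core idea, a joint likelihood over both observation types with error-scale priors playing the role of objective weights, can be sketched with a toy model and a plain Metropolis sampler; the model equations, data and prior choices below are hypothetical stand-ins, not the HYMOD/BGM formulation of the study.

```python
# Hedged sketch of Bayesian multi-objective calibration: one posterior
# combining streamflow and LAI misfits, sampled with Metropolis MCMC.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 50)

def sim(pars):
    a, b = pars                        # stand-ins for model parameters
    q = a * np.exp(-b * t)             # toy "streamflow"
    lai = b * (1.0 - np.exp(-a * t))   # toy "LAI"
    return q, lai

q_obs, lai_obs = sim((2.0, 1.5))
q_obs = q_obs + 0.05 * rng.standard_normal(t.size)
lai_obs = lai_obs + 0.05 * rng.standard_normal(t.size)

def log_posterior(theta):
    a, b, sig_q, sig_l = theta
    if sig_q <= 0.0 or sig_l <= 0.0:
        return -np.inf
    q_sim, lai_sim = sim((a, b))
    ll_q = -0.5 * np.sum(((q_obs - q_sim) / sig_q) ** 2) - t.size * np.log(sig_q)
    ll_l = -0.5 * np.sum(((lai_obs - lai_sim) / sig_l) ** 2) - t.size * np.log(sig_l)
    # Weak log-normal priors on the error scales act like objective weights.
    prior = -0.5 * (np.log(sig_q) ** 2 + np.log(sig_l) ** 2)
    return ll_q + ll_l + prior

theta = np.array([1.0, 1.0, 0.1, 0.1])
lp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(4)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)   # posterior samples (discard burn-in in practice)
```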
Smith, Lee; Ucci, Marcella; Marmot, Alexi; Spinney, Richard; Laskowski, Marek; Sawyer, Alexia; Konstantatou, Marina; Hamer, Mark; Ambler, Gareth; Wardle, Jane; Fisher, Abigail
2013-11-12
Health benefits of regular participation in physical activity are well documented but population levels are low. Office layout, and in particular the number and location of office building destinations (eg, print and meeting rooms), may influence both walking time and characteristics of sitting time. No research to date has focused on the role that the layout of the indoor office environment plays in facilitating or inhibiting step counts and characteristics of sitting time. The primary aim of this study was to investigate associations between office layout and physical activity, as well as sitting time using objective measures. Active buildings is a unique collaboration between public health, built environment and computer science researchers. The study involves objective monitoring complemented by a larger questionnaire arm. UK office buildings will be selected based on a variety of features, including office floor area and number of occupants. Questionnaires will include items on standard demographics, well-being, physical activity behaviour and putative socioecological correlates of workplace physical activity. Based on survey responses, approximately 30 participants will be recruited from each building into the objective monitoring arm. Participants will wear accelerometers (to monitor physical activity and sitting inside and outside the office) and a novel tracking device will be placed in the office (to record participant location) for five consecutive days. Data will be analysed using regression analyses, as well as novel agent-based modelling techniques. The results of this study will be disseminated through peer-reviewed publications and scientific presentations. Ethical approval was obtained through the University College London Research Ethics Committee (Reference number 4400/001).
Generation of THz Wave with Orbital Angular Momentum by Graphene Patch Reflectarray
2015-07-01
potential to significantly increase spectral efficiency and channel capacity for wireless communication [1]. A few techniques have been reported to... plane wave. The graphene-based OAM generation is very promising for future applications in THz wireless communication.
Principles and Foundations for Fractionated Networked Cyber-Physical Systems
2012-07-13
spectrum between autonomy and cooperation. Our distributed computing model is based on distributed knowledge sharing, and makes very few assumptions but... over the computation without the need for explicit migration. Randomization techniques will make sure that enough diversity is maintained to allow... small UAV testbed consisting of 10 inexpensive quadcopters at SRI. Hardware-wise, we added heat sinks to mitigate the impact of additional heat that
NASA Astrophysics Data System (ADS)
Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.
2016-09-01
Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD, such as probabilistic neural networks and fully informed particle swarms, have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models with the observed optical response. We also empirically demonstrate a first-order approach to mapping the PQD of household, office and commercial equipment to user functions and stress levels. We employ a physics-based image and signal processing approach, demonstrating measured non-invasive (remote sensing) techniques that detect and map the base frequency associated with the power source to the various PQD on a calibrated source.
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research. Discuss the developments and applications of computational phantoms. Know the promises and limitations of computational phantoms in solving complex problems.
Exploring Space Physics Concepts Using Simulation Results
NASA Astrophysics Data System (ADS)
Gross, N. A.
2008-05-01
The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the National Science Foundation, has the goal of developing a suite of integrated, physics-based computer models of the space environment that can follow the evolution of a space weather event from the Sun to the Earth. In addition to its research goals, CISM is committed to training the next generation of space weather professionals who are imbued with a system view of space weather. This view should include an understanding of both heliospheric and geospace phenomena. To this end, CISM offers a yearly Space Weather Summer School targeted at first-year graduate students, although advanced undergraduates and space weather professionals have also attended. This summer school uses a number of innovative pedagogical techniques, including devoting each afternoon to a computer lab exercise that uses results from research-quality simulations and visualization techniques, along with ground-based and satellite data, to explore concepts introduced during the morning lectures. These labs are suitable for use in a wide variety of educational settings, from formal classroom instruction to outreach programs. The goal of this poster is to outline the goals and content of the lab materials so that instructors may evaluate their potential use in the classroom or other settings.
Diffusion of a Sustainable Farming Technique in Sri Lanka: An Agent-Based Modeling Approach
NASA Astrophysics Data System (ADS)
Jacobi, J. H.; Gilligan, J. M.; Carrico, A. R.; Truelove, H. B.; Hornberger, G.
2012-12-01
We live in a changing world - anthropogenic climate change is disrupting historic climate patterns, and social structures are shifting as large-scale population growth and massive migrations place unprecedented strain on natural and social resources. Agriculture in many countries is affected by these changes in the social and natural environments. In Sri Lanka, rice farmers in the Mahaweli River watershed have seen increases in temperature and decreases in precipitation. In addition, a government-led resettlement project has altered the demographics and social practices in villages throughout the watershed. These changes have the potential to impact rice yields in a country where self-sufficiency in rice production is a point of national pride. Studies of the climate can elucidate physical effects on rice production, while research on social behaviors can illuminate the influence of community dynamics on agricultural practices. Only an integrated approach, however, can capture the combined and interactive impacts of these global changes on Sri Lankan agriculture. As part of an interdisciplinary team, we present an agent-based modeling (ABM) approach to studying the effects of physical and social changes on farmers in Sri Lanka. In our research, the diffusion of a sustainable farming technique, the system of rice intensification (SRI), throughout a farming community is modeled to identify factors that either inhibit or promote the spread of a more sustainable approach to rice farming. Inputs into the ABM are both physical and social, and include temperature, precipitation, the Palmer Drought Severity Index (PDSI), community trust, and social networks. Outputs from the ABM demonstrate the importance of meteorology and social structure to the diffusion of SRI throughout a farming community.
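A toy agent-based sketch of such a diffusion mechanism follows: each farmer adopts SRI when the adopting fraction of network neighbors, modulated by trust and a drought signal (a stand-in for PDSI), crosses a personal threshold. The network, thresholds, trust values and climate forcing are all hypothetical, intended only to show the structure of this kind of model.

```python
# Hedged sketch of threshold-based adoption on a village social network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
G = nx.watts_strogatz_graph(200, k=6, p=0.1)       # toy social network
threshold = rng.uniform(0.1, 0.6, 200)             # personal reluctance
trust = rng.uniform(0.5, 1.0, 200)                 # trust in neighbors
adopted = np.zeros(200, bool)
adopted[rng.choice(200, 5, replace=False)] = True  # initial adopters

for season in range(20):
    # Stand-in drought forcing: drought pressure encourages adoption.
    drought = 0.1 * max(0.0, -np.sin(season / 3.0))
    for i in G.nodes:
        if not adopted[i]:
            nbrs = list(G.neighbors(i))
            frac = adopted[nbrs].mean() if nbrs else 0.0
            if trust[i] * frac + drought > threshold[i]:
                adopted[i] = True
    print(season, adopted.mean())   # adoption fraction per season
```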
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
ERIC Educational Resources Information Center
Sliva, Yekaterina
2014-01-01
The purpose of this study was to introduce an instructional technique for teaching complex tasks in physics, test its effectiveness and efficiency, and understand cognitive processes taking place in learners' minds while they are exposed to this technique. The study was based primarily on cognitive load theory (CLT). CLT determines the amount of…
A synthetic seismicity model for the Middle America Trench
NASA Technical Reports Server (NTRS)
Ward, Steven N.
1991-01-01
A novel iterative technique, based on the concept of fault segmentation and computed using 2D static dislocation theory, for building models of seismicity and fault interaction which are physically acceptable and geometrically and kinematically correct, is presented. The technique is applied in two steps to seismicity observed at the Middle America Trench. The first constructs generic models which randomly draw segment strengths and lengths from a 2D probability distribution. The second constructs predictive models in which segment lengths and strengths are adjusted to mimic the actual geography and timing of large historical earthquakes. Both types of models reproduce the statistics of seismicity over five units of magnitude and duplicate other aspects including foreshock and aftershock sequences, migration of foci, and the capacity to produce both characteristic and noncharacteristic earthquakes. Over a period of about 150 yr the complex interaction of fault segments and the nonlinear failure conditions conspire to transform an apparently deterministic model into a chaotic one.
NASA Astrophysics Data System (ADS)
Braman, Kalen; Raman, Venkat
2011-11-01
A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.
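For reference, the simplest closure tested, the gradient diffusion model, can be written in its standard Favre-averaged form (a textbook statement, not necessarily the study's exact formulation):

$$ \overline{\rho}\,\widetilde{u_j'' c''} = -\,\frac{\mu_t}{\mathrm{Sc}_t}\,\frac{\partial \tilde{c}}{\partial x_j} $$

where $\mu_t$ is the eddy viscosity and $\mathrm{Sc}_t$ the turbulent Schmidt number; the anisotropic models replace the scalar diffusivity $\mu_t/\mathrm{Sc}_t$ with a tensor built from the modeled Reynolds stresses.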
Liagkouridis, Ioannis; Cousins, Anna Palm; Cousins, Ian T
2015-08-15
Several groups of flame retardants (FRs) have entered the market in recent years as replacements for polybrominated diphenyl ethers (PBDEs), but little is known about their physical-chemical properties or their environmental transport and fate. Here we make best estimates of the physical-chemical properties and undertake evaluative modelling assessments (indoors and outdoors) for 35 so-called 'novel' and 'emerging' brominated flame retardants (BFRs) and 22 organophosphorus flame retardants (OPFRs). A QSPR (Quantitative Structure-Property Relationship) based technique is used to reduce uncertainty in physical-chemical properties and to aid property selection for modelling, but it is evident that more high-quality property data are required to improve future assessments. Evaluative modelling results show that many of the alternative FRs, mainly alternative BFRs and some of the halogenated OPFRs, behave similarly to the PBDEs both indoors and outdoors. These alternative FRs exhibit high overall persistence (Pov), long-range transport potential (LRTP) and POP-like behaviour, and on that basis cannot be regarded as suitable replacements for PBDEs. A group of low molecular weight alternative BFRs and non-halogenated OPFRs show a potentially better environmental performance based on Pov and LRTP metrics. Results must be interpreted with caution, though, since there are significant uncertainties and limited data to allow for thorough model evaluation. Additional environmental parameters such as toxicity and bioaccumulative potential, as well as functionality issues, should be considered in an industrial substitution strategy.
A modeling approach for aerosol optical depth analysis during forest fire events
NASA Astrophysics Data System (ADS)
Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David
2004-10-01
Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency and sparse spatial frequency, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) inversion algorithms, which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements and the predictions of a particulate matter transport model. This modelling package (AODSEM V2.0, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.
Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Loyola, D. G.
2017-12-01
The satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
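The two FP-ILM phases can be sketched with a generic regressor standing in for the learned inversion operator and a toy function in place of the radiative-transfer code; all names, shapes and the forward model below are assumptions for illustration, not the operational implementation.

```python
# Hedged sketch of FP-ILM's structure: train on synthetic pairs, then
# apply the cheap learned inverse operator to measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def forward_model(states):
    """Toy stand-in for the radiative-transfer code: maps a 2-D state
    (e.g. plume height and SO2 load) to a 5-channel 'radiance' vector."""
    h, u = states[:, 0], states[:, 1]
    k = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
    return u[:, None] * np.sin(h[:, None] * k[None, :])

# Training phase: sample the state space, simulate radiances with the
# full-physics model, and learn the inversion on the synthetic pairs.
states = rng.uniform(0.0, 1.0, (5000, 2))
radiances = forward_model(states) + 1e-3 * rng.standard_normal((5000, 5))
inverse_op = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
inverse_op.fit(radiances, states)

# Operational phase: no iterative forward-model calls, just prediction.
retrieved = inverse_op.predict(radiances[:10])   # stand-in for real spectra
```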
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information-scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods, based on genetic algorithms, that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
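The contrast with stepwise entry/exit thresholds can be illustrated by scoring every candidate subset of regressors with an information criterion and keeping the minimum; the sketch below uses BIC on synthetic data (the dissertation's specific information score is not reproduced here), and with many candidate terms the subset search would be heuristic rather than exhaustive.

```python
# Hedged sketch of information-based variable selection via BIC.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.standard_normal(n)

def bic(X_sub, y):
    """BIC of an ordinary least-squares fit on the given columns."""
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    rss = np.sum((y - X_sub @ beta) ** 2)
    return n * np.log(rss / n) + X_sub.shape[1] * np.log(n)

best = min((bic(X[:, list(s)], y), s)
           for r in range(1, p + 1)
           for s in itertools.combinations(range(p), r))
print("best subset:", best[1])   # expected: (0, 2)
```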
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
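The warping idea can be sketched as a landmark-based mapping between the fixed physical mock-up and each virtual patient; the minimal example below fits a least-squares affine map, whereas real systems use smoother, locally adaptive warps, and the landmark coordinates are hypothetical.

```python
# Hedged sketch: map tracked physical points into one virtual anatomy.
import numpy as np

phys = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1]], float)      # mock-up landmarks
# Corresponding landmarks of a (toy) virtual patient: scaled and shifted.
virt = phys @ np.diag([1.1, 0.9, 1.0]) + np.array([0.05, -0.02, 0.0])

# Solve virt ~ [phys, 1] @ M for the 4x3 affine matrix M.
A = np.hstack([phys, np.ones((len(phys), 1))])
M, *_ = np.linalg.lstsq(A, virt, rcond=None)

def warp(p):
    """Map a tracked physical instrument position into virtual space."""
    return np.append(p, 1.0) @ M

print(warp(np.array([0.5, 0.5, 0.5])))
```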
Winter, Sandra J; Sheats, Jylana L; King, Abby C
2016-01-01
This review examined the use of health behavior change techniques and theory in technology-enabled interventions targeting risk factors and indicators for cardiovascular disease (CVD) prevention and treatment. Articles targeting physical activity, weight loss, smoking cessation and management of hypertension, lipids and blood glucose were sourced from PubMed (November 2010-2015) and coded for use of 1) technology, 2) health behavior change techniques (using the CALO-RE taxonomy), and 3) health behavior theories. Of the 984 articles reviewed, 304 were relevant (240 intervention, 64 review). Twenty-two different technologies were used (M=1.45, SD=0.719). The most frequently used behavior change techniques were self-monitoring and feedback on performance (M=5.4, SD=2.9). Half (52%) of the intervention studies named a theory/model - most frequently Social Cognitive Theory, the Transtheoretical Model, and the Theory of Planned Behavior/Reasoned Action. To optimize technology-enabled interventions targeting CVD risk factors, integrated behavior change theories that incorporate a variety of evidence-based health behavior change techniques are needed. PMID:26902519
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
NASA Astrophysics Data System (ADS)
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it augmented the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature, and is now integrating the real-time EO/IR rendering engine of SE-Workbench called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. Recent developments concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of image-based rendering for dynamic interpolation of plume static signatures, and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results, concerning the flare signature and, above all, the behavior of the stimulated threat.
Explosively driven two-shockwave tools with applications
NASA Astrophysics Data System (ADS)
Buttler, W. T.; Oró, D. M.; Mariam, F. G.; Saunders, A.; Andrews, M. J.; Cherne, F. J.; Hammerberg, J. E.; Hixson, R. S.; Monfared, S. K.; Morris, C.; Olson, R. T.; Preston, D. L.; Stone, J. B.; Terrones, G.; Tupa, D.; Vogan-McNeil, W.
2014-05-01
We present the development of an explosively driven physics tool to generate two mostly uniaxial shockwaves. The tool is being used to extend single-shockwave ejecta models to account for a second shockwave a few microseconds later. We explore techniques to vary the amplitude of both the first and second shockwaves, and we apply the tool experimentally at the Los Alamos National Laboratory Proton Radiography (pRad) facility. The tools have been applied to Sn with perturbations of wavelength λ = 550 μm and various amplitudes that give wavenumber-amplitude products kh ∈ {3/4, 1/2, 1/4, 1/8}, where h is the perturbation amplitude and k = 2π/λ is the wavenumber. The pRad data suggest the development of a second-shock ejecta model based on unstable Richtmyer-Meshkov physics.
NASA Astrophysics Data System (ADS)
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now widely used, especially in the automotive industry, where they combine a mechanical structure and an electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
Modeling of direct detection Doppler wind lidar. I. The edge technique.
McKay, J A
1998-09-20
Analytic models, based on a convolution of a Fabry-Perot etalon transfer function with a Gaussian spectral source, are developed for the shot-noise-limited measurement precision of Doppler wind lidars based on the edge filter technique by use of either molecular or aerosol atmospheric backscatter. The Rayleigh backscatter formulation yields a map of theoretical sensitivity versus etalon parameters, permitting design optimization and showing that the optimal system will have a Doppler measurement uncertainty no better than approximately 2.4 times that of a perfect, lossless receiver. An extension of the models to include the effect of limited etalon aperture leads to a condition for the minimum aperture required to match light collection optics. It is shown that, depending on the choice of operating point, the etalon aperture finesse must be 4-15 to avoid degradation of measurement precision. A convenient, closed-form expression for the measurement precision is obtained for spectrally narrow backscatter and is shown to be useful for backscatter that is spectrally broad as well. The models are extended to include extrinsic noise, such as solar background or the Rayleigh background on an aerosol Doppler lidar. A comparison of the model predictions with experiment has not yet been possible, but a comparison with detailed instrument modeling by McGill and Spinhirne shows satisfactory agreement. The models derived here will be more conveniently implemented than McGill and Spinhirne's and more readily permit physical insights to the optimization and limitations of the double-edge technique.
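A minimal numerical sketch of the model's basic ingredient, the convolution of a Fabry-Perot (Airy) transfer function with a Gaussian backscatter spectrum, is given below; all parameter values are illustrative assumptions, not McKay's:

```python
import numpy as np

def airy(nu, fsr, finesse):
    """Fabry-Perot transmission versus optical frequency offset nu."""
    F = (2 * finesse / np.pi) ** 2               # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(np.pi * nu / fsr) ** 2)

fsr, finesse = 12e9, 8.0                         # illustrative etalon parameters [Hz, -]
nu = np.linspace(-2 * fsr, 2 * fsr, 8001)        # frequency grid [Hz]
dnu = nu[1] - nu[0]

sigma = 1.4e9                                    # Gaussian width of backscatter spectrum [Hz]
gauss = np.exp(-(nu ** 2) / (2 * sigma ** 2))
gauss /= gauss.sum() * dnu                       # normalize to unit area

T = np.convolve(airy(nu, fsr, finesse), gauss, mode="same") * dnu
slope = np.gradient(T, dnu)                      # edge slope, i.e. Doppler sensitivity
```

The operating point on the edge would then be chosen where `slope` is large relative to the transmitted signal level.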
Developing an Automated Method for Detection of Operationally Relevant Ocean Fronts and Eddies
NASA Astrophysics Data System (ADS)
Rogers-Cotrone, J. D.; Cadden, D. D. H.; Rivera, P.; Wynn, L. L.
2016-02-01
Since the early 1990s, the U.S. Navy has utilized an observation-based process for identification of frontal systems and eddies. These Ocean Feature Assessments (OFA) rely on trained analysts to identify and position ocean features using satellite-observed sea surface temperatures. Meanwhile, as enhancements and expansion of the navy's HYbrid Coordinate Ocean Model (HYCOM) and Regional Navy Coastal Ocean Model (RNCOM) domains have proceeded, the Naval Oceanographic Office (NAVO) has provided Tactical Oceanographic Feature Assessments (TOFA) that are based on data-validated model output but also rely on analyst identification of significant features. A recently completed project has migrated OFA production to the ArcGIS-based Acoustic Reach-back Cell Ocean Analysis Suite (ARCOAS), enabling use of additional observational datasets and significantly decreasing production time; however, it has highlighted inconsistencies inherent to this analyst-based identification process. Current efforts are focused on development of an automated method for detecting operationally significant fronts and eddies that integrates model output and observational data on a global scale. Previous attempts to employ techniques from the scientific community have been unable to meet the production tempo at NAVO. Thus, a system that incorporates existing techniques (Marr-Hildreth, Okubo-Weiss, etc.) with internally developed feature identification methods (from model-derived physical and acoustic properties) is required. Ongoing expansions to the ARCOAS toolset have shown promising early results.
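As an illustration of one of the named ingredients, a Marr-Hildreth (Laplacian-of-Gaussian zero-crossing) front detector on a gridded SST field might look like this sketch; the function name and parameter values are ours:

```python
import numpy as np
from scipy import ndimage

def marr_hildreth_fronts(sst, sigma=2.0):
    """Flag candidate front pixels as zero crossings of the LoG of an SST field."""
    log = ndimage.gaussian_laplace(sst, sigma=sigma)   # smooth, then Laplacian
    sign = log > 0
    edges = np.zeros_like(sign)
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]       # vertical sign changes
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]       # horizontal sign changes
    return edges
```

An operational version would additionally threshold on gradient magnitude so that only dynamically significant fronts survive.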
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
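For the ensemble Kalman filter mentioned at the end, a minimal stochastic-EnKF analysis step, assuming a linear observation operator H and Gaussian observation error covariance R (all names here are ours), can be sketched as:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One EnKF analysis step. X: (n_state, n_ens) forecast ensemble,
    y: (n_obs,) observations, H: (n_obs, n_state), R: (n_obs, n_obs)."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observation-space anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + R             # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                  # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T         # perturbed observations
    return X + K @ (Y - HX)
```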
The role of data fusion in predictive maintenance using digital twin
NASA Astrophysics Data System (ADS)
Liu, Zheng; Meyendorf, Norbert; Mrad, Nezih
2018-04-01
The modern aerospace industry is migrating from reactive to proactive and predictive maintenance to increase platform operational availability and efficiency, extend its useful life cycle and reduce its life cycle cost. Multiphysics modeling together with data-driven analytics generates a new paradigm called the "Digital Twin." The digital twin is a living model of the physical asset or system, which continually adapts to operational changes based on the collected online data and information, and can forecast the future of the corresponding physical counterpart. This paper reviews the overall framework to develop a digital twin coupled with the industrial Internet of Things technology to advance aerospace platform autonomy. Data fusion techniques play a particularly significant role in the digital twin framework. The flow of information from raw data to high-level decision making is propelled by sensor-to-sensor, sensor-to-model, and model-to-model fusion. This paper further discusses and identifies the role of data fusion in the digital twin framework for aircraft predictive maintenance.
Runkel, Robert L.
2010-01-01
OTEQ is a mathematical simulation model used to characterize the fate and transport of waterborne solutes in streams and rivers. The model is formed by coupling a solute transport model with a chemical equilibrium submodel. The solute transport model is based on OTIS, a model that considers the physical processes of advection, dispersion, lateral inflow, and transient storage. The equilibrium submodel is based on MINTEQ, a model that considers the speciation and complexation of aqueous species, acid-base reactions, precipitation/dissolution, and sorption. Within OTEQ, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (waterborne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach. The model's ability to simulate pH, precipitation/dissolution, and pH-dependent sorption provides a means of evaluating the complex interactions between instream chemistry and hydrologic transport at the field scale. This report details the development and application of OTEQ. Sections of the report describe model theory, input/output specifications, model applications, and installation instructions. OTEQ may be obtained over the Internet at http://water.usgs.gov/software/OTEQ.
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, S A; Trunov, V I; Pestryakov, Efim V
2013-05-31
We have developed a technique for investigating the evolution of spatial inhomogeneities in high-power laser systems based on multi-stage parametric amplification. A linearised model of the inhomogeneity development is first devised for parametric amplification with small-scale self-focusing taken into account. It is shown that the application of this model gives results consistent, with high accuracy and over a wide range of inhomogeneity parameters, with calculations performed without approximations. Using the linearised model, we have analysed the development of spatial inhomogeneities in a petawatt laser system based on multi-stage parametric amplification, developed at the Institute of Laser Physics, Siberian Branch of the Russian Academy of Sciences (ILP SB RAS).
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique on the dynamical downscaling of summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulations spanning combinations of two shortwave radiation schemes and four land surface model schemes, both of which are known to be crucial for the simulation of surface air temperature. It is found that applying spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations helps obtain more realistic spatiotemporal distributions of land surface variables such as surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that from using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This, in turn, helps the two physical parameterizations produce small-scale features closer to the observed values, leading to a better representation of surface air temperature in the high-resolution downscaled climate.
Advanced Ground Systems Maintenance Physics Models For Diagnostics Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M.
2015-01-01
The project will use high-fidelity physics models and simulations to simulate real-time operations of cryogenic systems and calculate the status/health of the systems. The project enables the delivery of system health advisories to ground system operators. The capability will also be used to conduct planning and analysis of cryogenic system operations. This project will develop and implement high-fidelity physics-based modeling techniques to simulate the real-time operation of cryogenics and other fluids systems and, when compared to the real-time operation of the actual systems, provide assessment of their state. Physics-model-calculated measurements (called “pseudo-sensors”) will be compared to the system real-time data. Comparison results will be utilized to provide systems operators with enhanced monitoring of systems' health and status, identify off-nominal trends and diagnose system/component failures. This capability can also be used to conduct planning and analysis of cryogenics and other fluid systems designs. This capability will be interfaced with the ground operations command and control system as a part of the Advanced Ground Systems Maintenance (AGSM) project to help assure system availability and mission success. The initial capability will be developed for the Liquid Oxygen (LO2) ground loading systems.
NASA Astrophysics Data System (ADS)
Tinoco, R. O.; Goldstein, E. B.; Coco, G.
2016-12-01
We use a machine learning approach to seek accurate, physically sound predictors, to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical, but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high-fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
NASA Astrophysics Data System (ADS)
Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto
2015-09-01
The ability to use multiple representations has been widely studied, but it had not yet been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relationship between these two abilities through the Presentatif Based on Multi-representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. The data collection instruments, an essay-format pre-test and post-test, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model, all had high validity, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Test reliability, assessed with the Cronbach alpha technique, gave a reliability coefficient of 0.494. The research subjects were students of the Department of Physics Education, Unnes. The students' multi-representation tendency, ordered from high to low, was M, D, G, V, whereas the order of accuracy was V, D, G, M. The relationship between multi-representation ability and oral communication skills was proportional. Implementing this relationship generated grounded theory. This approach should be applied to other physics topics, or at other universities, for comparison.
Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David
2016-01-01
Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698038
Cross-sectional mapping for refined beam elements with applications to shell-like structures
NASA Astrophysics Data System (ADS)
Pagani, A.; de Miguel, A. G.; Carrera, E.
2017-06-01
This paper discusses the use of higher-order mapping functions for enhancing the physical representation of refined beam theories. Based on the Carrera unified formulation (CUF), advanced one-dimensional models are formulated by expressing the displacement field as a generic expansion of the generalized unknowns. According to CUF, a novel physically/geometrically consistent model is devised by employing Legendre-like polynomial sets to approximate the generalized unknowns at the cross-sectional level, whereas a local mapping technique based on the blending functions method is used to describe the exact physical boundaries of the cross-section domain. Classical and innovative finite element methods, including hierarchical p-elements and locking-free integration schemes, are utilized to solve the governing equations of the unified beam theory. Several numerical applications accounting for small displacements/rotations and strains are discussed, including beam structures with cross-sectional curved edges, cylindrical shells, and thin-walled aeronautical wing structures with reinforcements. The results from the proposed methodology are widely assessed by comparisons with solutions from the literature and commercial finite element software tools. The attention is focussed on the high computational efficiency and the marked capabilities of the present beam model, which can deal with a broad spectrum of structural problems with unveiled accuracy in terms of geometrical representation of the domain boundaries.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
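A minimal sketch of the core computation described here, one update step that maps modal residuals to physical-parameter changes through a linear sensitivity matrix and solves the system with a truncated SVD, written with our own variable names:

```python
import numpy as np

def parameter_step(S, dm, rel_tol=1e-8):
    """Solve S @ dp ~= dm in the least-squares sense via truncated SVD.
    S: sensitivity matrix d(modal data)/d(parameters);
    dm: measured-minus-model residual vector."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s_inv = np.zeros_like(s)
    keep = s > rel_tol * s.max()          # drop tiny singular values
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ dm))    # parameter update dp
```

Constraints such as physically admissible parameter bounds would be enforced by the optimizer wrapped around this step.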
McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...
2017-05-23
Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. The fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidean distance measures to determine similarity, this technique uses the natural splitting of the fit parameters associated with the basis functions, creating clusters that are similar in terms of physical parameters. The data set used in this work is the publicly available data collected at Indian Pines, Indiana. This data set provides reference data, allowing comparison of the efficacy of different unsupervised data analysis methods. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique, with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. This improvement is also seen in kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
The space shuttle payload planning working groups. Volume 8: Earth and ocean physics
NASA Technical Reports Server (NTRS)
1973-01-01
The findings and recommendations of the Earth and Ocean Physics working group of the space shuttle payload planning activity are presented. The requirements for the space shuttle mission are defined as: (1) precision measurement for earth and ocean physics experiments, (2) development and demonstration of new and improved sensors and analytical techniques, (3) acquisition of surface truth data for evaluation of new measurement techniques, (4) conduct of critical experiments to validate geophysical phenomena and instrumental results, and (5) development and validation of analytical/experimental models for global ocean dynamics and solid earth dynamics/earthquake prediction. Tables of data are presented to show the flight schedule estimated costs, and the mission model.
Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992
1992-01-01
previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applications for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them in uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
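A random-walk Metropolis sampler of the kind underlying such Bayesian parameter estimation can be sketched as below; in practice each log-posterior evaluation would require a Delft3D run (or a surrogate), and all names here are ours:

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples, rng):
    """Random-walk Metropolis sampling of a parameter posterior."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```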
NASA Technical Reports Server (NTRS)
Merola, John A.
1989-01-01
The LANDSAT Thematic Mapper (TM) scanner records reflected solar energy from the earth's surface in six wavelength regions, or bands, and one band that records emitted energy in the thermal region, giving a total of seven bands. Useful information about terrain morphometry was extracted from remote sensing measurements, and this information is used in an image-based terrain model for selected coastal geomorphic features in the Great Salt Lake Desert (GSLD). Technical developments include the incorporation of Aerial Profiling of Terrain System (APTS) data in satellite image analysis, and the production and use of 3-D surface plots of TM reflectance data. Also included in the technical developments are the analysis of the ground control point spatial distribution and its effects on geometric correction, and the terrain mapping procedure, which uses satellite data in a way that eliminates the need to degrade the data by resampling. The most common approach for terrain mapping with multispectral scanner data includes the techniques of pattern recognition and image classification, as opposed to direct measurement of radiance for identification of terrain features. The research approach in this investigation was based on an understanding of the characteristics of reflected light resulting from the variations in moisture and geometry related to terrain, as described by the physical laws of radiative transfer. The image-based terrain model provides quantitative information about the terrain morphometry based on the physical relationship between TM data, the physical character of the GSLD, and the APTS measurements.
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G
2007-01-01
Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon because it requires techniques that differ from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by a surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node near the point of interaction, because this point ensures better accuracy for our simulation model. The next research step will focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
Modeling food matrix effects on chemical reactivity: Challenges and perspectives.
Capuano, Edoardo; Oliviero, Teresa; van Boekel, Martinus A J S
2017-06-29
The same chemical reaction may be different in terms of the position of its equilibrium (i.e., thermodynamics) and its kinetics when studied in different foods. The diversity in the chemical composition of food and in its structural organization at macro-, meso-, and microscopic levels, that is, the food matrix, is responsible for this difference. In this viewpoint paper, the multiple and interconnected ways the food matrix can affect chemical reactivity are summarized. Moreover, mechanistic and empirical approaches to explain and predict the effect of the food matrix on chemical reactivity are described. Mechanistic models aim to quantify the effect of the food matrix based on a detailed understanding of the chemical and physical phenomena occurring in food. Their applicability is limited at the moment to very simple food systems. Empirical modeling based on machine learning combined with data-mining techniques may represent an alternative, useful option to predict the effect of the food matrix on chemical reactivity and to identify chemical and physical properties to be further tested. In this way the mechanistic understanding of the effect of the food matrix on chemical reactions can be improved.
Finite element model correlation of a composite UAV wing using modal frequencies
NASA Astrophysics Data System (ADS)
Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.
2007-04-01
The current work details the implementation of a meta-model-based correlation technique on a composite UAV wing test piece and the associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions of the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental test-piece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update variable combinations in the current configuration, which can lead correlation away from physically probable values.
Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon
2017-10-01
The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring the effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as the types and nature of constraints and sources of data for the constraints, was identified. Thirdly, it was found that including the effects of constraints caused the CEA results to change in these articles. The review found that DES proved to be an effective technique for RM, but there were only a small number of studies applied in HTA. However, these studies showed the important consequences of modelling physical constraints and point to the need for a framework to be developed to guide future applications of this approach.
Evaluating data-driven causal inference techniques in noisy physical and ecological systems
NASA Astrophysics Data System (ADS)
Tennant, C.; Larsen, L.
2016-12-01
Causal inference from observational time series challenges traditional approaches for understanding processes and offers exciting opportunities to gain new understanding of complex systems where nonlinearity, delayed forcing, and emergent behavior are common. We present a formal evaluation of the performance of convergent cross-mapping (CCM) and transfer entropy (TE) for data-driven causal inference under real-world conditions. CCM is based on nonlinear state-space reconstruction, and causality is determined by the convergence of prediction skill with an increasing number of observations of the system. TE is the uncertainty reduction based on transition probabilities of a pair of time-lagged variables; with TE, causal inference is based on asymmetry in information flow between the variables. Observational data and numerical simulations from a number of classical physical and ecological systems, namely atmospheric convection (the Lorenz system), species competition (patch tournaments), and long-term climate change (Vostok ice core), were used to evaluate the ability of CCM and TE to infer causal relationships as data series become increasingly corrupted by observational (instrument-driven) or process (model- or stochastic-driven) noise. While both techniques show promise for causal inference, TE appears to be applicable to a wider range of systems, especially when the data series are of sufficient length to reliably estimate transition probabilities of system components. Both techniques also show a clear effect of observational noise on causal inference. For example, CCM exhibits a negative logarithmic decline in prediction skill as the noise level of the system increases. Changes in TE strongly depend on noise type and on which variable the noise was added to. The ability of CCM and TE to detect driving influences suggests that their application to physical and ecological systems could be transformative for understanding driving mechanisms as Earth systems undergo change.
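For concreteness, a plug-in transfer entropy estimator of the kind described (bin the series, count transition probabilities, compare conditional uncertainties) can be sketched as follows; the binning choice is an illustrative assumption:

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE_{X->Y} in nats from two scalar time series."""
    # discretize each series into equal-width bins, labels 0..bins-1
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=bins)[1:-1])
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]

    # joint distribution p(y_{t+1}, y_t, x_t) by exact counting
    idx = (y1 * bins + y0) * bins + x0
    counts = np.bincount(idx, minlength=bins ** 3).astype(float)
    p = counts.reshape(bins, bins, bins) / counts.sum()
    p_y1y0 = p.sum(axis=2)            # p(y_{t+1}, y_t)
    p_y0x0 = p.sum(axis=0)            # p(y_t, x_t)
    p_y0 = p_y1y0.sum(axis=0)         # p(y_t)

    te = 0.0
    for i, j, k in zip(*np.nonzero(p)):
        te += p[i, j, k] * np.log(
            p[i, j, k] * p_y0[j] / (p_y1y0[i, j] * p_y0x0[j, k]))
    return te
```

Asymmetry between `transfer_entropy(x, y)` and `transfer_entropy(y, x)` is then read as the direction of information flow.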
NASA Astrophysics Data System (ADS)
Pascuet, M. I.; Castin, N.; Becquart, C. S.; Malerba, L.
2011-05-01
An atomistic kinetic Monte Carlo (AKMC) method has been applied to study the stability and mobility of copper-vacancy clusters in Fe. This information, which cannot be obtained directly from experimental measurements, is needed to parameterise models describing the nanostructure evolution under irradiation of Fe alloys (e.g. model alloys for reactor pressure vessel steels). The physical reliability of the AKMC method has been improved by employing artificial intelligence techniques for the regression of the activation energies required by the model as input. These energies are calculated allowing for the effects of local chemistry and relaxation, using an interatomic potential fitted to reproduce them as accurately as possible and the nudged-elastic-band method. The model validation was based on comparison with available ab initio calculations for verification of the used cohesive model, as well as with other models and theories.
Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E
2018-06-01
Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered to adults in clinic-based private practice and outpatient settings to individuals with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 behaviour change techniques in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than did inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in observational studies compared to experimental studies, suggesting physiotherapists use fewer BCTs clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
The physical and biological basis of quantitative parameters derived from diffusion MRI
2012-01-01
Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC) which was first used to detect early ischemic stroke. However this does not take account of the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), parallel and perpendicular diffusivity can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
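For reference, the two most widely used tensor-derived parameters mentioned above follow directly from the eigenvalues λ1, λ2, λ3 of the fitted diffusion tensor:

```latex
\mathrm{MD} = \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},
\qquad
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\frac{\sqrt{(\lambda_1 - \bar{\lambda})^2 + (\lambda_2 - \bar{\lambda})^2 + (\lambda_3 - \bar{\lambda})^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```

With eigenvalues sorted in descending order, parallel diffusivity is λ1 and perpendicular diffusivity is (λ2 + λ3)/2.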
High-Quality 3d Models and Their Use in a Cultural Heritage Conservation Project
NASA Astrophysics Data System (ADS)
Tucci, G.; Bonora, V.; Conti, A.; Fiorini, L.
2017-08-01
Cultural heritage digitization and 3D modelling processes are mainly based on laser scanning and digital photogrammetry techniques to produce complete, detailed and photorealistic three-dimensional surveys: geometric as well as chromatic aspects, in turn testimony of materials, work techniques, state of preservation, etc., are documented using digitization processes. The paper explores the topic of 3D documentation for conservation purposes; it analyses how geomatics contributes in different steps of a restoration process and it presents an overview of different uses of 3D models for the conservation and enhancement of the cultural heritage. The paper reports on the project to digitize the earthenware frieze of the Ospedale del Ceppo in Pistoia (Italy) for 3D documentation, restoration work support, and digital and physical reconstruction and integration purposes. The intent to design an exhibition area suggests new ways to take advantage of 3D data originally acquired for documentation and scientific purposes.
Novel secret key generation techniques using memristor devices
NASA Astrophysics Data System (ADS)
Abunahla, Heba; Shehada, Dina; Yeun, Chan Yeob; Mohammad, Baker; Jaoude, Maguy Abi
2016-02-01
This paper proposes novel secret key generation techniques using memristor devices. The approach depends on using the initial profile of a memristor as a master key. In addition, session keys are generated using the master key and other specified parameters. In contrast to existing memristor-based security approaches, the proposed development is cost effective and power efficient since the operation can be achieved with a single device rather than a crossbar structure. An algorithm is suggested and demonstrated using a physics-based Matlab model. It is shown that the generated keys can have dynamic size, which provides perfect security. Moreover, the proposed encryption and decryption technique using the memristor-based generated keys outperforms Triple Data Encryption Standard (3DES) and Advanced Encryption Standard (AES) in terms of processing time. This paper is enriched by providing characterization results of a fabricated microscale Al/TiO2/Al memristor prototype in order to prove the concept of the proposed approach and study the impacts of process variations. The work proposed in this paper is a milestone towards System On Chip (SOC) memristor-based security.
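The abstract does not spell out the key-derivation algorithm, so the master-key/session-key relationship can only be illustrated generically; the keyed-hash construction below is our hypothetical stand-in, not the authors' scheme:

```python
import hashlib
import hmac
import os

def derive_session_key(master_key: bytes, session_params: bytes, length: int = 32) -> bytes:
    """Hypothetical sketch: derive a session key as a keyed hash of the
    session parameters under the master key (the digitized memristor profile)."""
    return hmac.new(master_key, session_params, hashlib.sha256).digest()[:length]

# hypothetical usage: the master key would come from digitizing the device's
# initial current-voltage profile rather than from a placeholder string
master = hashlib.sha256(b"memristor-initial-profile-readout").digest()
session_key = derive_session_key(master, os.urandom(16))
```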
Modeling of Aerosol Optical Depth Variability during the 1998 Canadian Forest Fire Smoke Event
NASA Astrophysics Data System (ADS)
Aubé, M.; O'Neill, N. T.; Royer, A.; Lavoué, D.
2003-04-01
Monitoring of aerosol optical depth (AOD) is of particular importance due to the significant role of aerosols in the atmospheric radiative budget. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency and sparse spatial frequency, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) inversion algorithms, which extract AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new methodology that links AOD measurements and a particulate matter transport model using a data assimilation approach. This modelling package (AODSEM, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian-Eulerian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution is tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important but crude parameter. We applied this methodology to a significant smoke event that occurred over Canada in August 1998. The results show the potential of this approach inasmuch as residuals between the AODSEM assimilated analysis and measurements are smaller than the typical errors associated with remotely sensed AOD (satellite or ground based). The AODSEM assimilation approach also gives better results than classical interpolation techniques. This improvement is especially evident when the available number of AOD measurements is small.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
NASA Astrophysics Data System (ADS)
Medjahdi, Yahia; Terré, Michel; Ruyet, Didier Le; Roviras, Daniel
2014-12-01
In this paper, we investigate the impact of timing asynchronism on the performance of multicarrier techniques in a spectrum coexistence context. Two multicarrier schemes are considered: cyclic prefix-based orthogonal frequency division multiplexing (CP-OFDM) with a rectangular pulse shape, and filter bank-based multicarrier (FBMC) with the physical layer for dynamic spectrum access and cognitive radio (PHYDYAS) and isotropic orthogonal transform algorithm (IOTA) waveforms. First, we present the general concept of the so-called power spectral density (PSD)-based interference tables, which are commonly used for multicarrier interference characterization in a spectrum-sharing context. After highlighting the limits of this approach, we propose a new family of interference tables called 'instantaneous interference tables'. The proposed tables give the interference power caused by a given interfering subcarrier on a victim one, not only as a function of the spectral distance separating the two subcarriers but also with respect to the timing misalignment between the subcarrier holders. Unlike the PSD-based interference tables, the accuracy of the proposed tables has been validated through various simulation results. Furthermore, due to the better frequency localization of both the PHYDYAS and IOTA waveforms, the FBMC technique is demonstrated to be more robust to timing asynchronism than OFDM. Such a result makes FBMC a potential candidate for the physical layer of future cognitive radio systems.
The Physics of a Gymnastics Flight Element
ERIC Educational Resources Information Center
Contakos, Jonas; Carlton, Les G.; Thompson, Bruce; Suddaby, Rick
2009-01-01
From its inception, performance in the sport of gymnastics has relied on the laws of physics to create movement patterns and static postures that appear almost impossible. In general, gymnastics is physics in motion and can provide an ideal framework for studying basic human modeling techniques and physical principles. Using low-end technology and…
SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices
NASA Astrophysics Data System (ADS)
Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2017-08-01
Recently we demonstrated a novel, simplified model that calculates the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach that still shows predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box, where we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the nonlinear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.
Organizational Constraints and Goal Setting
ERIC Educational Resources Information Center
Putney, Frederick B.; Wotman, Stephen
1978-01-01
Management modeling techniques are applied to setting operational and capital goals using cost analysis techniques in this case study at the Columbia University School of Dental and Oral Surgery. The model was created as a planning tool used in developing a financially feasible operating plan and a 100 percent physical renewal plan. (LBH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roxas, R. M.; Monterola, C.; Carreon-Monterola, S. L.
2010-07-28
We probe the effect of seating arrangement, group composition, and group-based competition on students' performance in physics using a teaching technique adopted from Mazur's peer instruction method. Ninety-eight lectures, involving 2339 students, were conducted across nine learning institutions from February 2006 to June 2009. All the lectures were interspersed with student interaction opportunities (SIO), in which students work in groups to discuss and answer concept tests. Two individual assessments were administered before and after the SIO. The ratio of the post-assessment score to the pre-assessment score and the Hake factor were calculated to establish the improvement in student performance. Using actual assessment results and neural network (NN) modeling, an optimal seating arrangement for a class was determined based on student seating location. The NN model also provided a quantifiable method for sectioning students. Lastly, the study revealed that competition-driven interactions increase within-group cooperation and lead to higher improvement in the students' performance.
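The Hake factor referenced above is the normalized learning gain standard in physics education research; a minimal sketch of its computation, with made-up scores, is:

```python
def hake_gain(pre_pct: float, post_pct: float) -> float:
    """Normalized gain <g> = (post - pre) / (100 - pre), scores in percent."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# A class moving from 40% to 70% recovers half of the achievable gain.
print(hake_gain(40.0, 70.0))  # 0.5
```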
Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T
2017-12-15
Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which differed only with respect to the candidate predictor variables: (i) a full set of predictors, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predictors, including only the self-reported factors and software-recorded computer usage patterns, which are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R² and root mean squared (RMS) error values, and in terms of exposure classification agreement with low-, medium-, and high-exposure categories (for the practical models only). The full models had R² values ranging from 0.16 to 0.80, whereas values for the practical models ranged from 0.05 to 0.43. Interquartile ranges were similar for the two model types, indicating that the full models performed better only for some physical exposures. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical models. When the predicted physical exposures were classified into low, medium, and high, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, thus show better predictive quality than the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied greatly across the different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore recommended only for physical exposures that can be reasonably well predicted.
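For readers unfamiliar with the reported metrics, the sketch below computes R² and the relative RMS error for one predicted exposure; the synthetic numbers are illustrative stand-ins, not study data.

```python
import numpy as np

def predictive_quality(y_true, y_pred):
    """R^2 and RMS error expressed relative to the mean measured value."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    rms = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 1.0 - ss_res / ss_tot, rms / y_true.mean()

rng = np.random.default_rng(0)
y = rng.normal(50.0, 10.0, 200)        # stand-in for a measured exposure
y_hat = y + rng.normal(0.0, 5.0, 200)  # hypothetical model predictions
print(predictive_quality(y, y_hat))
```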
Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.
Kilic, Niyazi; Hosgormez, Erkan
2016-03-01
Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature-set models were constructed from different physical parameters and fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting, and the random subspace method (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature-set models. The patients were classified into three groups, osteoporosis, osteopenia, and control (healthy), using the ensemble classifiers. Total classification accuracy and F-measure were used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients could be warned before a bone fracture occurs, by examining only some physical parameters that can easily be measured without invasive operations.
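A minimal sketch of this kind of ensemble set-up, using scikit-learn as a stand-in toolchain (the study's own implementation is not specified): the synthetic features stand in for the BMD/T-score feature-set models, and bagging with feature subsampling approximates the random subspace method.

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier  # IBk is k-NN in Weka terms
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in for a feature-set model: ten densitometry-like features, 3 classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=1)

# Feature subsampling (max_features < 1.0) approximates RSM over RF base learners.
rsm_rf = BaggingClassifier(RandomForestClassifier(n_estimators=50),
                           n_estimators=10, max_features=0.5, random_state=1)
ibk = KNeighborsClassifier(n_neighbors=3)

for name, clf in [("RSM-RF", rsm_rf), ("IBk", ibk)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```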
NASA Astrophysics Data System (ADS)
Sun, Jiajia; Li, Yaoguo
2017-02-01
Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data because there often exist more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
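As an illustration of the clustering component, here is a minimal fuzzy c-means iteration in which the distance function is a pluggable hook; supplying a distance to a line or curve instead of the Euclidean default is where shape-specific petrophysical relationships would enter. The code and the synthetic density-velocity crossplot are illustrative, not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, dist=None, seed=0):
    """Minimal FCM. `dist(X, V)` returns point-to-center distances (n, c);
    swapping in a shape-aware distance handles non-point-cluster relationships."""
    if dist is None:
        dist = lambda X, V: np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]      # fuzzy cluster centers
        D = np.maximum(dist(X, V), 1e-12)
        p = 2.0 / (m - 1.0)
        U = D ** (-p) / np.sum(D ** (-p), axis=1, keepdims=True)
    return U, V

# Hypothetical density-velocity crossplot with two petrophysical trends.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2.3, 3.5], 0.05, (100, 2)),
               rng.normal([2.7, 5.0], 0.05, (100, 2))])
U, V = fuzzy_c_means(X, c=2)
print(V)
```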
Snelson, Catherine M.; Abbott, Robert E.; Broome, Scott T.; ...
2013-07-02
A series of chemical explosions, called the Source Physics Experiments (SPE), is being conducted under the auspices of the U.S. Department of Energy's National Nuclear Security Administration (NNSA) to develop a new, more physics-based paradigm for nuclear test monitoring. Currently, monitoring relies on semi-empirical models to discriminate explosions from earthquakes and to estimate key parameters such as yield. While these models have been highly successful in monitoring established test sites, there is concern that future tests could occur in media and at scaled depths of burial outside of our empirical experience. This is highlighted by the North Korean tests, which exhibit poor performance of a reliable discriminant, mb:Ms (Selby et al., 2012), possibly due to source emplacement and differences in the seismic responses of nascent and established test sites. The goal of SPE is to replace these semi-empirical relationships with numerical techniques grounded in a physical basis and thus applicable to any geologic setting or depth.
NASA Technical Reports Server (NTRS)
Tin, Padetha; deGroh, Henry C., III.
2003-01-01
Succinonitrile has been and is being used extensively in NASA's Microgravity Materials Science and Fluid Physics programs, as well as in several ground-based and microgravity studies, including the Isothermal Dendritic Growth Experiment (IDGE). Succinonitrile (SCN) is useful as a model for the study of metal solidification: although it is an organic material, it has a BCC crystal structure and solidifies dendritically like a metal. It is also transparent and has a low melting point (58.08 °C). Previous measurements of the surface tensions of succinonitrile and succinonitrile-acetone alloys are extremely limited. Using the surface light scattering technique, we have determined non-invasively the surface tension and viscosity of SCN and SCN-acetone alloys at different temperatures. This relatively new and unique technique has several advantages over classical methods: it is non-invasive, has good accuracy, and measures the surface tension and viscosity simultaneously. The accuracy of the interfacial energy values obtained from this technique is better than 2%, and that of the viscosity about 10%. Succinonitrile and succinonitrile-acetone alloys are well-established model materials with several essential physical properties accurately known, except the liquid/vapor surface tension at elevated temperatures. We will present the experimentally determined liquid/vapor surface energy and liquid viscosity of succinonitrile and succinonitrile-acetone alloys in the temperature range from their melting point to around 100 °C, obtained using this non-invasive technique. We will also discuss the measurement technique and new developments of the surface light scattering spectrometer.
Control volume based hydrocephalus research; analysis of human data
NASA Astrophysics Data System (ADS)
Cohen, Benjamin; Wei, Timothy; Voorhees, Abram; Madsen, Joseph; Anor, Tomer
2010-11-01
Hydrocephalus is a neuropathophysiological disorder primarily diagnosed by increased cerebrospinal fluid volume and pressure within the brain. To date, utilization of clinical measurements has been limited to understanding the relative amplitude and timing of flow, volume, and pressure waveforms; these are qualitative approaches without a clear framework for meaningful quantitative comparison. Pressure-volume models and electric circuit analogs enforce volume conservation principles in terms of pressure. Control volume analysis, through the integral mass and momentum conservation equations, ensures that pressure and volume are accounted for using first-principles fluid physics. This approach is able to directly incorporate the diverse measurements obtained by clinicians into a simple, direct, and robust mechanics-based framework. Clinical data obtained for analysis are discussed, along with the data processing techniques used to extract terms in the conservation equations. Control volume analysis provides a non-invasive, physics-based approach to extracting pressure information from magnetic resonance velocity data that cannot be measured directly by pressure instrumentation.
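In its simplest form, the control volume idea reduces to integrating mass conservation over a cardiac cycle: dV/dt equals inflow minus outflow across the control surface. The sketch below, with hypothetical waveforms, shows that bookkeeping; it illustrates the principle only, not the authors' processing pipeline.

```python
import numpy as np

def net_volume_change(t, q_in, q_out, v0=0.0):
    """Integrate dV/dt = Q_in(t) - Q_out(t) with the trapezoidal rule."""
    q_net = q_in - q_out
    dv = np.concatenate(([0.0],
                         np.cumsum(np.diff(t) * 0.5 * (q_net[1:] + q_net[:-1]))))
    return v0 + dv

t = np.linspace(0.0, 1.0, 100)                 # one cardiac cycle (s)
q_in = 10 * np.sin(2 * np.pi * t) ** 2         # hypothetical inflow (mL/s)
q_out = 10 * np.sin(2 * np.pi * (t - 0.05)) ** 2  # delayed outflow (mL/s)
print(net_volume_change(t, q_in, q_out)[-1])   # net volume stored over the cycle
```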
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can carry huge computational costs that limit their application, not least due to the requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. The result is a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross-hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
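The workflow can be sketched as a standard Metropolis sampler whose likelihood calls a fast emulator instead of the full solver. Everything below is illustrative: `emulator` stands in for the trained network, and `sigma_total` for the combined data-plus-modeling-error scale the paper quantifies.

```python
import numpy as np

def metropolis(log_like, m0, step, n=10000, seed=0):
    """Random-walk Metropolis sampling of a model vector m."""
    rng = np.random.default_rng(seed)
    m, ll, chain = m0, log_like(m0), []
    for _ in range(n):
        m_new = m + step * rng.standard_normal(m.shape)
        ll_new = log_like(m_new)
        if np.log(rng.random()) < ll_new - ll:   # accept/reject
            m, ll = m_new, ll_new
        chain.append(m.copy())
    return np.array(chain)

def log_like_factory(emulator, d_obs, sigma_total):
    """Gaussian likelihood; sigma_total includes the emulator modeling error."""
    return lambda m: -0.5 * np.sum(((emulator(m) - d_obs) / sigma_total) ** 2)

# Toy demo with a linear stand-in for the trained emulator.
G = np.array([[1.0, 0.5], [0.3, 2.0]])
emulator = lambda m: G @ m
d_obs = emulator(np.array([1.0, -1.0]))
chain = metropolis(log_like_factory(emulator, d_obs, 0.1), np.zeros(2), 0.05)
print(chain.mean(axis=0))   # posterior mean near [1, -1]
```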
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks, and the subsequent simulation of physical phenomena in digitally constructed rock models, are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, as in carbonates and tight sandstones. Multi-scale imaging, and the construction of hybrid models that encompass images acquired at multiple scales and resolutions, have been proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated from images. A helical X-ray micro-CT scanner with a high cone angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor, and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and against laboratory tests.
Biological applications of confocal fluorescence polarization microscopy
NASA Astrophysics Data System (ADS)
Bigelow, Chad E.
Fluorescence polarization microscopy is a powerful modality capable of sensing changes in the physical properties and local environment of fluorophores. In this thesis we present new applications of the technique in cancer diagnosis and treatment and explore the limits of the modality in scattering media. We describe modifications to our custom-built confocal fluorescence microscope that enable dual-color imaging, optical fiber-based confocal spectroscopy, and fluorescence polarization imaging. Experiments are presented that establish the performance of the instrument for all three modalities. The limits of confocal fluorescence polarization imaging in scattering media are explored, and the microscope parameters necessary for accurate polarization images in this regime are determined. A Monte Carlo routine is developed to model the effect of scattering on images. Included in it are routines to track the polarization state of light using the Mueller-Stokes formalism, and a model for fluorescence generation that includes sampling of the excitation light polarization ellipse, Brownian motion of excited-state fluorophores in solution, and dipole fluorophore emission. Results from this model are compared to experiments performed on a fluorophore-embedded polymer rod in a turbid medium consisting of polystyrene microspheres in aqueous suspension. We demonstrate the utility of the fluorescence polarization imaging technique for the removal of contaminating autofluorescence and for imaging photodynamic therapy drugs in cell monolayers. Images of cells expressing green fluorescent protein are extracted from contaminating fluorescein emission. The distribution of meta-tetrahydroxyphenylchlorin in an EMT6 cell monolayer is also presented. A new technique for imaging enzyme activity is presented that is based on observing changes in the anisotropy of fluorescently labeled substrates. Proof-of-principle studies are performed in a model system consisting of fluorescently labeled bovine serum albumin attached to sepharose beads. The action of trypsin and proteinase K on the albumin is monitored to demonstrate the validity of the technique. Images of the processing of the albumin in J774 murine macrophages are also presented, indicating large intercellular differences in enzyme activity. Future directions for the technique are also presented, including the design of enzyme probes specific for prostate-specific antigen based on fluorescently labeled dendrimers. A technique for enzyme imaging based on extracellular autofluorescence is also proposed.
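The Mueller-Stokes formalism used in the Monte Carlo routine propagates a four-component Stokes vector through 4x4 Mueller matrices. A minimal, self-contained example (an ideal linear polarizer, not the thesis's full scattering code) follows:

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1,     c,     s, 0],
                           [c, c * c, c * s, 0],
                           [s, c * s, s * s, 0],
                           [0,     0,     0, 0]])

S_in = np.array([1.0, 1.0, 0.0, 0.0])      # horizontally polarized light
S_out = linear_polarizer(np.pi / 2) @ S_in  # crossed analyzer
print(S_out[0])                             # transmitted intensity ~ 0 (Malus' law)
```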
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Cortes Arevalo, Juliette; Alfonso, Leonardo; Wehn, Uta; Norbiato, Daniele; Monego, Martina; Ferri, Michele; Solomatine, Dimitri
2017-04-01
In past years, a number of methods have been proposed to reduce uncertainty in flood prediction by means of model updating techniques. Traditional physical observations are usually integrated into hydrological and hydraulic models to improve model performance and the consequent flood predictions. Nowadays, low-cost sensors can be used for crowdsourced observations, and different types of social sensors can measure, in a more distributed way, physical variables such as precipitation and water level. However, these crowdsourced observations are not integrated in real time into water-system models, due to their varying accuracy and random spatio-temporal coverage. We assess the effect on model performance of assimilating crowdsourced observations of water level. Our method consists of (1) implementing a Kalman filter in a cascade of hydrological and hydraulic models; (2) defining observation errors depending on the type of sensor, physical or social, with randomly distributed errors based on accuracy ranges that improve slightly with the citizens' expertise level; and (3) using a simplified social model to realistically represent citizen engagement levels based on population density and citizens' motivation scenarios. To test our method, we synthetically derive crowdsourced observations for different citizen engagement levels from a distributed network of physical and social sensors. The observations are assimilated during a particular flood event that occurred in the Bacchiglione catchment, Italy. The results of this study demonstrate that sharing crowdsourced water level observations (often motivated by a feeling of belonging to a community of friends) can help improve flood prediction. In addition, growing participation by individual citizens or weather enthusiasts sharing hydrological observations in cities can help improve model performance. This study is a first step towards assessing the effects of crowdsourced observations on flood model predictions. Effective communication and feedback about the quality of observations, from water authorities to engaged citizens, are further required to minimize their intrinsically low and variable accuracy.
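Steps (1)-(2) can be sketched as a scalar Kalman filter measurement update in which the observation error variance depends on the sensor type; the error magnitudes and expertise levels below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical error standard deviations by sensor type (m): physical gauges
# are accurate; crowdsourced errors shrink with the citizen's expertise level.
sigma = {"physical": 0.02, "expert_citizen": 0.10, "novice_citizen": 0.30}

x = np.array([2.0])          # forecast water level (m)
P = np.eye(1) * 0.5          # forecast error variance
H = np.eye(1)
for sensor, z in [("physical", 2.15), ("novice_citizen", 2.40)]:
    R = np.array([[sigma[sensor] ** 2]])
    x, P = kf_update(x, P, np.array([z]), H, R)
print(x, P)                  # noisier sensors pull the estimate less
```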
Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping
2015-09-15
Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors' goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair-based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward projection and backprojection; and an oversampled, ray-driven method to perform high-resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on a stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer-based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast objects and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
3D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time and with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about the status of cultural heritage is crucial for its interpretation and conservation, and during restoration processes. The application of digital-imaging solutions for feature extraction, image data analysis, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results, and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware are evolving rapidly, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that allows interaction with a physical object for its study and analysis, using 3D-tracking-based augmented reality techniques.
NASA Astrophysics Data System (ADS)
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and at numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology, and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting: starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be described simply as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves the relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources (for example, the UK Met Office operates the largest operational high-performance computer in Europe), and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and machine learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of machine learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
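A toy version of physics emulation: fit a regressor mapping a grid-column state to its physics tendency, then use the fitted map in place of the parametrisation. Everything below (data, architecture, shapes) is an illustrative stand-in, not the Met Office's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
state = rng.normal(size=(5000, 20))       # hypothetical column state vectors
# Stand-in for the expensive "physics": a fixed nonlinear map of the state.
tendency = np.tanh(state @ rng.normal(size=(20, 20)))

# Train the emulator on most of the data, hold out the rest for validation.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
emulator.fit(state[:4000], tendency[:4000])
print("held-out R^2:", emulator.score(state[4000:], tendency[4000:]))

# In use, emulator.predict(column_state) replaces the parametrisation call.
```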
Statistical and engineering methods for model enhancement
NASA Astrophysics Data System (ADS)
Chang, Chia-Jung
Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control, and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytical models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring, and decision optimization. Among data-driven enhancement approaches, the Gaussian process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications that demonstrate the proposed methodologies. In the first application, where polymer composite quality is the focus, nanoparticle dispersion has been identified as a crucial factor affecting mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, quantitatively representing the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted process.
In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and then applied to various applications. These research activities produced engineering-compliant models for adequate system prediction based on observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring, and real-time control.
Durant, Nefertiti H; Joseph, Rodney P; Cherrington, Andrea; Cuffee, Yendelela; Knight, BernNadette; Lewis, Dwight; Allison, Jeroan J
2014-01-16
Innovative approaches are needed to promote physical activity among young adult overweight and obese African American women. We sought to describe key elements that African American women desire in a culturally relevant Internet-based tool to promote physical activity among overweight and obese young adult African American women. A mixed-method approach combining nominal group technique and traditional focus groups was used to elicit recommendations for the development of an Internet-based physical activity promotion tool. Participants, ages 19 to 30 years, were enrolled in a major university. Nominal group technique sessions were conducted to identify themes viewed as key features for inclusion in a culturally relevant Internet-based tool. Confirmatory focus groups were conducted to verify and elicit more in-depth information on the themes. Twenty-nine women participated in nominal group (n = 13) and traditional focus group sessions (n = 16). Features that emerged to be included in a culturally relevant Internet-based physical activity promotion tool were personalized website pages, diverse body images on websites and in videos, motivational stories about physical activity and women similar to themselves in size and body shape, tips on hair care maintenance during physical activity, and online social support through social media (eg, Facebook, Twitter). Incorporating existing social media tools and motivational stories from young adult African American women in Internet-based tools may increase the feasibility, acceptability, and success of Internet-based physical activity programs in this high-risk, understudied population.
Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins
NASA Astrophysics Data System (ADS)
Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter
2017-10-01
Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.
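The TASEP referenced above is straightforward to simulate directly. The open-boundary, random-sequential-update sketch below (with illustrative entry, hopping, and exit rates alpha, p_hop, beta) shows the exclusion rule that generates the crowding effects discussed; it is a textbook baseline, not the paper's extended model with finite interaction ranges and crowding-dependent detachment.

```python
import numpy as np

def tasep(L=200, alpha=0.3, beta=0.3, p_hop=1.0, steps=20000, seed=0):
    """Open-boundary TASEP: motors enter at rate alpha, hop right at rate
    p_hop only if the next site is empty (exclusion), and exit at rate beta."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=int)      # 1 = site occupied by a motor
    for _ in range(steps):
        i = rng.integers(-1, L)           # -1 encodes the entry move
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0 and rng.random() < p_hop:
            lattice[i], lattice[i + 1] = 0, 1
    return lattice.mean()                 # average motor density on the filament

print(tasep())
```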
Design Mining Interacting Wind Turbines.
Preen, Richard J; Bull, Larry
2016-01-01
An initial study has recently been presented of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced; and alternative coevolution collaboration schemes are examined.
Modeling of Microstructure Evolution During Alloy Solidification
NASA Astrophysics Data System (ADS)
Zhu, Mingfang; Pan, Shiyan; Sun, Dongke
In recent years, considerable advances have been achieved in the numerical modeling of microstructure evolution during solidification. This paper presents the models based on the cellular automaton (CA) technique and lattice Boltzmann method (LBM), which can reproduce a wide variety of solidification microstructure features observed experimentally with an acceptable computational efficiency. The capabilities of the models are addressed by presenting representative examples encompassing a broad variety of issues, such as the evolution of dendritic structure and microsegregation in two and three dimensions, dendritic growth in the presence of convection, divorced eutectic solidification of spheroidal graphite irons, and gas porosity formation. The simulations offer insights into the underlying physics of microstructure formation during alloy solidification.
Uncertainty Quantification in Aeroelasticity
NASA Astrophysics Data System (ADS)
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate, and compare new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within a simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need for and requirements of computational phantoms in medical physics research. Discuss the developments and applications of computational phantoms. Know the promises and limitations of computational phantoms in solving complex problems.
Visualizing and Quantifying Pore Scale Fluid Flow Processes With X-ray Microtomography
NASA Astrophysics Data System (ADS)
Wildenschild, D.; Hopmans, J. W.; Vaz, C. M.; Rivers, M. L.
2001-05-01
When using mathematical models based on Darcy's law, it is often necessary to simplify the geometry, the physics, or both; the capillary bundle-of-tubes approach, for instance, neglects a fundamentally important characteristic of porous solids, namely the interconnectedness of the pore space. New approaches to pore-scale modeling that arrange capillary tubes in two- or three-dimensional pore space have been and are still under development. Network models generally represent the pore bodies by spheres, while the pore throats are usually represented by cylinders or conical shapes. Lattice Boltzmann approaches numerically solve the Navier-Stokes equations in a realistic, microscopically disordered geometry, which offers the ability to study the microphysical basis of macroscopic flow without the need for simplified geometry or physics. In addition to these developments in numerical modeling techniques, new theories have proposed that interfacial area should be considered a primary variable in the modeling of multi-phase flow systems. In the wake of this progress emerges an increasing need for new ways of evaluating pore-scale models, and for techniques that can resolve and quantify phase interfaces in porous media. The mechanisms operating at the pore scale cannot be measured with traditional experimental techniques; however, x-ray computerized microtomography (CMT) provides non-invasive observation of, for instance, changing fluid phase content and distribution at the pore scale. Interfacial areas have thus far been measured indirectly, but with the advances in high-resolution imaging using CMT it is possible to track interfacial area and curvature as functions of phase saturation or capillary pressure. We present results obtained at the synchrotron-based microtomography facility (GSECARS, sector 13) at the Advanced Photon Source at Argonne National Laboratory. Cylindrical sand samples of either 6 or 1.5 mm diameter were scanned at different stages of drainage and for varying boundary conditions. A significant difference in fluid saturation and phase distribution was observed for different drainage conditions, clearly showing preferential flow and a dependence on the applied flow rate. For the 1.5 mm sample, individual pores and water/air interfaces could be resolved and quantified using image analysis techniques. Use of the Advanced Photon Source was supported by the U.S. Department of Energy, Basic Energy Sciences, Office of Science, under Contract No. W-31-109-Eng-38.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
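In outline, the method solves a linear sensitivity relation for the parameter changes, with a singular value decomposition guarding against ill-conditioning. A minimal sketch with hypothetical dimensions follows; the matrix sizes and tolerance are illustrative, not values from the study.

```python
import numpy as np

def update_parameters(S, residual, tol=1e-6):
    """Solve S dp = residual for parameter changes dp via a truncated-SVD
    pseudo-inverse, suppressing directions with negligible singular values."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ residual))

# Hypothetical: 6 measured modal quantities, 3 physical parameters.
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 3))            # linear sensitivity matrix
dp_true = np.array([0.1, -0.05, 0.02])
residual = S @ dp_true                 # measured-minus-model differences
print(update_parameters(S, residual))  # recovers dp_true
```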
EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice
NASA Astrophysics Data System (ADS)
Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.
2016-12-01
Very rough ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total volume. The commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough-surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations while estimating the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied the EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 underflight by the NASA IceBridge mission. Results show that not only does the waveform model fit the measured radar waveforms very well, but the roughness parameters derived independently from the LiDAR and radar data also agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.
Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.
Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D
2011-05-01
Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models; additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and in experiments on an artificial heart. Providing higher accuracy than standard model-based methods, it successfully copes with occlusions and maintains high performance even when not all measurements are available. Combining the physical and stochastic descriptions of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael S. Zhdanov
2005-03-09
The research during the first year of the project was focused on developing the foundations of a new geophysical technique for mineral exploration and mineral discrimination, based on electromagnetic (EM) methods. The proposed technique examines spectral induced polarization (IP) effects in electromagnetic data using modern distributed acquisition systems and advanced methods of 3-D inversion. The analysis of IP phenomena is usually based on models with a frequency-dependent complex conductivity distribution, one of the most popular being the Cole-Cole relaxation model. In this progress report we have constructed and analyzed a different physical and mathematical model of the IP effect, based on effective-medium theory. We have developed a rigorous mathematical model of multi-phase conductive media, which can provide a quantitative tool for evaluating the type of mineralization using the conductivity relaxation model parameters. The parameters of the new conductivity relaxation model can be used for discrimination of different types of rock formations, which is an important goal in mineral exploration. The solution of this problem requires the development of an effective numerical method for EM forward modeling in 3-D inhomogeneous media. During the first year of the project we developed a prototype 3-D IP modeling algorithm using the integral equation (IE) method. Our IE forward modeling code, INTEM3DIP, is based on the contraction IE method, which improves the convergence rate of the iterative solvers. This code can handle various types of sources and receivers to compute the effect of a complex resistivity model. We have tested the working version of the INTEM3DIP code by computer simulation of IP data for several models, including a southwest US porphyry model and a Kambalda-style nickel sulfide deposit. The numerical modeling study clearly demonstrates how the various complex resistivity models manifest differently in the observed EM data. These modeling studies lay the groundwork for future development of the IP inversion method, directed at determining the electrical conductivity and intrinsic chargeability distributions, as well as the other parameters of the relaxation model, simultaneously. The new technology envisioned in this proposal will be used for the discrimination of different rocks, and in this way will provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization and geothermal resources.
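For reference, the Cole-Cole relaxation model mentioned above is a one-line formula; the sketch below evaluates Pelton's complex-resistivity form, with illustrative parameter values (chargeability eta, time constant tau, frequency exponent c):

```python
import numpy as np

def cole_cole_rho(omega, rho0, eta, tau, c):
    """Pelton's Cole-Cole complex resistivity:
    rho(w) = rho0 * (1 - eta * (1 - 1 / (1 + (i*w*tau)**c)))."""
    return rho0 * (1 - eta * (1 - 1.0 / (1 + (1j * omega * tau) ** c)))

freq = np.logspace(-2, 4, 7)          # Hz
omega = 2 * np.pi * freq
print(cole_cole_rho(omega, rho0=100.0, eta=0.2, tau=0.5, c=0.6))
```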
NASA Astrophysics Data System (ADS)
Lotfy, K.; Sarkar, N.
2017-11-01
In this work, a novel generalized model of photothermal theory with two-temperature thermoelasticity based on memory-dependent derivative (MDD) theory is developed. A one-dimensional problem for an elastic semiconductor material with isotropic and homogeneous properties is considered. The problem is solved with the new (MDD) model under the influence of a mechanical force with photothermal excitation. The Laplace transform technique is used to remove the time-dependent terms in the governing equations, and the general solutions of some physical fields are obtained. The surface under consideration is traction free and subjected to a time-dependent thermal shock. Numerical Laplace inversion is used to obtain the numerical results for the physical quantities of the problem. Finally, the obtained results are presented and discussed graphically.
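For context, the first-order memory-dependent derivative on which MDD theories are built is usually defined over a sliding window (a standard definition; the kernel choice is model specific):

    D_\omega f(t) = \frac{1}{\omega} \int_{t-\omega}^{t} K(t-\xi)\, f'(\xi)\, d\xi

where \omega is the time delay and K(t-\xi) a kernel chosen freely on [t-\omega, t], which gives the model its memory effect.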
Liao, David; Tlsty, Thea D.
2014-01-01
The use of mathematical equations to analyse population dynamics measurements is being increasingly applied to elucidate complex dynamic processes in biological systems, including cancer. Purely ‘empirical’ equations may provide sufficient accuracy to support predictions and therapy design. Nevertheless, interpretation of fitting equations in terms of physical and biological propositions can provide additional insights that can be used both to refine models that prove inconsistent with data and to understand the scope of applicability of models that validate. The purpose of this tutorial is to assist readers in mathematically associating interpretations with equations and to provide guidance in choosing interpretations and experimental systems to investigate based on currently available biological knowledge, techniques in mathematical and computational analysis and methods for in vitro and in vivo experiments. PMID:25097752
Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics
ERIC Educational Resources Information Center
Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano
2017-01-01
We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…
Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F
2014-02-01
This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to distinguish between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
Electromagnetomechanical elastodynamic model for Lamb wave damage quantification in composites
NASA Astrophysics Data System (ADS)
Borkowski, Luke; Chattopadhyay, Aditi
2014-03-01
Physics-based wave propagation computational models play a key role in structural health monitoring (SHM) and the development of improved damage quantification methodologies. Guided waves (GWs), such as Lamb waves, provide the capability to monitor large plate-like aerospace structures with limited actuators and sensors and are sensitive to small scale damage; however due to the complex nature of GWs, accurate and efficient computation tools are necessary to investigate the mechanisms responsible for dispersion, coupling, and interaction with damage. In this paper, the local interaction simulation approach (LISA) coupled with the sharp interface model (SIM) solution methodology is used to solve the fully coupled electro-magneto-mechanical elastodynamic equations for the piezoelectric and piezomagnetic actuation and sensing of GWs in fiber reinforced composite material systems. The final framework provides the full three-dimensional displacement as well as electrical and magnetic potential fields for arbitrary plate and transducer geometries and excitation waveform and frequency. The model is validated experimentally and proven computationally efficient for a laminated composite plate. Studies are performed with surface bonded piezoelectric and embedded piezomagnetic sensors to gain insight into the physics of experimental techniques used for SHM. The symmetric collocation of piezoelectric actuators is modeled to demonstrate mode suppression in laminated composites for the purpose of damage detection. The effect of delamination and damage (i.e., matrix cracking) on the GW propagation is demonstrated and quantified. The developed model provides a valuable tool for the improvement of SHM techniques due to its proven accuracy and computational efficiency.
3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.
2015-01-01
3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be captured with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722
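Patient-specific motion models of this kind are often built from a principal component analysis of the deformation vector fields extracted from the 4D images; the abstract gives no implementation details, so the following numpy sketch is purely illustrative (array names such as dvfs are hypothetical):

    import numpy as np

    def build_motion_model(dvfs, n_modes=3):
        # dvfs: (n_phases, n_voxels*3) deformation fields from 4DCBCT registration
        mean = dvfs.mean(axis=0)
        # PCA via SVD of the mean-centred deformation fields
        _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
        return mean, vt[:n_modes]        # mean field and dominant motion modes

    def reconstruct(mean, modes, weights):
        # 3D fluoroscopic estimate: mean deformation plus weighted motion modes;
        # the weights are fitted so the deformed volume matches each 2D kV projection
        return mean + weights @ modes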
Perceptually relevant parameters for virtual listening simulation of small room acoustics
Zahorik, Pavel
2009-01-01
Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
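Both perceptual dimensions correspond to quantities computable directly from a BRIR; a minimal sketch, assuming h, left and right are impulse-response arrays at sampling rate fs:

    import numpy as np

    def t60_schroeder(h, fs):
        # Reverberation time from backward (Schroeder) integration of the decay
        edc = np.cumsum(h[::-1] ** 2)[::-1]
        edc_db = 10 * np.log10(edc / edc.max())
        # fit the -5 dB to -35 dB portion and extrapolate to -60 dB (T30 estimate)
        i5, i35 = np.argmax(edc_db <= -5), np.argmax(edc_db <= -35)
        slope = (edc_db[i35] - edc_db[i5]) / ((i35 - i5) / fs)  # dB per second
        return -60.0 / slope

    def iacc(left, right):
        # Peak of the normalized interaural cross-correlation
        xc = np.correlate(left, right, mode="full")
        return np.abs(xc).max() / np.sqrt(np.sum(left**2) * np.sum(right**2))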
Gumí-Audenis, Berta; Costa, Luca; Carlá, Francesco; Comin, Fabio; Sanz, Fausto; Giannotti, Marina I
2016-12-19
Biological membranes mediate several biological processes that are directly associated with their physical properties but sometimes difficult to evaluate. Supported lipid bilayers (SLBs) are model systems widely used to characterize the structure of biological membranes. Cholesterol (Chol) plays an essential role in the modulation of membrane physical properties. It directly influences the order and mechanical stability of the lipid bilayers, and it is known to laterally segregate in rafts in the outer leaflet of the membrane together with sphingolipids (SLs). Atomic force microscope (AFM) is a powerful tool as it is capable to sense and apply forces with high accuracy, with distance and force resolution at the nanoscale, and in a controlled environment. AFM-based force spectroscopy (AFM-FS) has become a crucial technique to study the nanomechanical stability of SLBs by controlling the liquid media and the temperature variations. In this contribution, we review recent AFM and AFM-FS studies on the effect of Chol on the morphology and mechanical properties of model SLBs, including complex bilayers containing SLs. We also introduce a promising combination of AFM and X-ray (XR) techniques that allows for in situ characterization of dynamic processes, providing structural, morphological, and nanomechanical information.
Gumí-Audenis, Berta; Costa, Luca; Carlá, Francesco; Comin, Fabio; Sanz, Fausto; Giannotti, Marina I.
2016-01-01
Biological membranes mediate several biological processes that are directly associated with their physical properties but sometimes difficult to evaluate. Supported lipid bilayers (SLBs) are model systems widely used to characterize the structure of biological membranes. Cholesterol (Chol) plays an essential role in the modulation of membrane physical properties. It directly influences the order and mechanical stability of the lipid bilayers, and it is known to laterally segregate in rafts in the outer leaflet of the membrane together with sphingolipids (SLs). Atomic force microscope (AFM) is a powerful tool as it is capable to sense and apply forces with high accuracy, with distance and force resolution at the nanoscale, and in a controlled environment. AFM-based force spectroscopy (AFM-FS) has become a crucial technique to study the nanomechanical stability of SLBs by controlling the liquid media and the temperature variations. In this contribution, we review recent AFM and AFM-FS studies on the effect of Chol on the morphology and mechanical properties of model SLBs, including complex bilayers containing SLs. We also introduce a promising combination of AFM and X-ray (XR) techniques that allows for in situ characterization of dynamic processes, providing structural, morphological, and nanomechanical information. PMID:27999368
Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.
Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup
2017-02-01
We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100 % accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
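The HMM strategy can be prototyped with an off-the-shelf library; a sketch using hmmlearn (our choice of library, not named in the paper), where each trial is a sequence of motion-feature vectors:

    import numpy as np
    from hmmlearn import hmm

    def train_class_model(sequences, n_states=4):
        # One Gaussian HMM per experience level, trained on that class's sequences
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(seq, expert_model, novice_model):
        # Label by whichever class model gives the higher log-likelihood
        return "expert" if expert_model.score(seq) > novice_model.score(seq) else "novice"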
Modeling of the radiation belt magnetosphere in decisional timeframes
Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.
2013-04-23
Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as a physics-based model, at many times the speed, by training a surrogate model to reproduce the physics-based model. The trained model can then beneficially process input data falling within the training range of the surrogate model. The surrogate model can be a feedforward neural network and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use the parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles; accordingly, a surrogate model having a closed drift shell can be selected from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden-layer units and at least one output unit for the value of L*.
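A minimal sketch of the surrogate idea in scikit-learn terms (the patent specifies a feedforward network; the layer size and the training files here are placeholders):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # X: inputs of the physics-based model (geomagnetic/solar-wind parameters,
    # location, pitch angle); y: L* values computed by the physics-based model
    X, y = np.load("model_inputs.npy"), np.load("model_lstar.npy")  # hypothetical

    surrogate = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    surrogate.fit(X, y)                      # learn to reproduce the physics model
    lstar_fast = surrogate.predict(X[:10])   # evaluated at a fraction of the cost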
Generating a Multiphase Equation of State with Swarm Intelligence
NASA Astrophysics Data System (ADS)
Cox, Geoffrey
2017-06-01
Hydrocode calculations require knowledge of the variation of pressure of a material with density and temperature, which is given by the equation of state. An accurate model needs to account for discontinuities in energy, density and properties of a material across a phase boundary. When generating a multiphase equation of state the modeller attempts to balance the agreement between the available data for compression, expansion and phase boundary location. However, this can prove difficult because minor adjustments in the equation of state for a single phase can have a large impact on the overall phase diagram. Recently, Cox and Christie described a method for combining statistical-mechanics-based condensed matter physics models with a stochastic analysis technique called particle swarm optimisation. The models produced show good agreement with experiment over a wide range of pressure-temperature space. This talk details the general implementation of this technique, shows example results, and describes the types of analysis that can be performed with this method.
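Particle swarm optimisation itself is compact; a generic numpy sketch of the update rule (the hyperparameters and the misfit function, which would score agreement with compression, expansion and phase-boundary data, are illustrative):

    import numpy as np

    def pso(misfit, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
        x = np.random.uniform(lo, hi, (n_particles, len(lo)))  # EOS parameters
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
        gbest = pbest[pbest_f.argmin()]
        for _ in range(n_iter):
            r1, r2 = np.random.rand(2, *x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([misfit(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()]
        return gbest  # parameter set best balancing the available data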
Biophysical interactions between plant and soil: theory and practice
NASA Astrophysics Data System (ADS)
van der Ploeg, Martine
2016-04-01
Vegetation plays an essential role in the hydrological cycle, as it regulates the water flux to the atmosphere through evapotranspiration while depending on adequate water supply. Vegetation shapes the land surface by changing infiltration characteristics as a result of root growth, and controls soil moisture storage, which in turn affects runoff characteristics and groundwater recharge. Vegetation and the underlying geology are in constant interaction, wherein water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. Models are a useful tool to explore interactions and feedbacks between vegetation, soil and landscape. Plants respond biochemically to their environment, while the models used for hydrology are often based on physical interactions. Gene expression and genotype adaptation may complicate our modelling efforts in, for example, climate change impact studies. The combination of new techniques to assess soil and plant properties facilitates the assessment of biophysical interactions. This poster will review these techniques and compare the obtained insights into soil-plant relationships with current modelling approaches.
SEPEM: A tool for statistical modeling of the solar energetic particle environment
NASA Astrophysics Data System (ADS)
Crosby, Norma; Heynderickx, Daniel; Jiggens, Piers; Aran, Angels; Sanahuja, Blai; Truscott, Pete; Lei, Fan; Jacobs, Carla; Poedts, Stefaan; Gabriel, Stephen; Sandberg, Ingmar; Glover, Alexi; Hilgers, Alain
2015-07-01
Solar energetic particle (SEP) events are a serious radiation hazard for spacecraft as well as a severe health risk to humans traveling in space. Indeed, accurate modeling of the SEP environment constitutes a priority requirement for astrophysics and solar system missions and for human exploration in space. The European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) application server is a World Wide Web interface to a complete set of cross-calibrated data ranging from 1973 to 2013 as well as new SEP engineering models and tools. Both statistical and physical modeling techniques have been included, in order to cover the environment not only at 1 AU but also in the inner heliosphere ranging from 0.2 AU to 1.6 AU using a newly developed physics-based shock-and-particle model to simulate particle flux profiles of gradual SEP events. With SEPEM, SEP peak flux and integrated fluence statistics can be studied, as well as durations of high SEP flux periods. Furthermore, effects tools are also included to allow calculation of single event upset rate and radiation doses for a variety of engineering scenarios.
Thomas, Matthew A.; Mirus, Benjamin B.; Collins, Brian D.; Lu, Ning; Godt, Jonathan W.
2018-01-01
Rainfall-induced shallow landsliding is a persistent hazard to human life and property. Despite the observed connection between infiltration through the unsaturated zone and shallow landslide initiation, there is considerable uncertainty in how estimates of unsaturated soil-water retention properties affect slope stability assessment. This source of uncertainty is critical to evaluating the utility of physics-based hydrologic modeling as a tool for landslide early warning. We employ a numerical model of variably saturated groundwater flow parameterized with an ensemble of texture-, laboratory-, and field-based estimates of soil-water retention properties for an extensively monitored landslide-prone site in the San Francisco Bay Area, CA, USA. Simulations of soil-water content, pore-water pressure, and the resultant factor of safety show considerable variability across and within these different parameter estimation techniques. In particular, we demonstrate that with the same permeability structure imposed across all simulations, the variability in soil-water retention properties strongly influences predictions of positive pore-water pressure coincident with widespread shallow landsliding. We also find that the ensemble of soil-water retention properties imposes an order-of-magnitude and nearly two-fold variability in seasonal and event-scale landslide susceptibility, respectively. Despite the reduced factor of safety uncertainty during wet conditions, parameters that control the dry end of the soil-water retention function markedly impact the ability of a hydrologic model to capture soil-water content dynamics observed in the field. These results suggest that variability in soil-water retention properties should be considered for objective physics-based simulation of landslide early warning criteria.
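The soil-water retention function at issue is commonly parameterized in the van Genuchten form (a standard choice; the paper may use a different parameterization):

    \theta(\psi) = \theta_r + \frac{\theta_s - \theta_r}{\left[ 1 + (\alpha |\psi|)^n \right]^{1 - 1/n}}

where \theta_r and \theta_s are the residual and saturated water contents, \psi the pressure head, and \alpha and n fitting parameters; \theta_r and n largely control the dry end of the curve that the study found so influential.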
The Kadomtsev–Petviashvili equation as a source of integrable model equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maccari, A.
1996-12-01
A new integrable nonlinear partial differential equation (PDE) in 2+1 dimensions is obtained, by an asymptotically exact reduction method based on Fourier expansion and spatiotemporal rescaling, from the Kadomtsev–Petviashvili equation. The integrability property is explicitly demonstrated by exhibiting the corresponding Lax pair, which is obtained by applying the reduction technique to the Lax pair of the Kadomtsev–Petviashvili equation. This model equation is likely to be of applicative relevance, because it may be considered a consistent approximation of a large class of nonlinear evolution PDEs. © 1996 American Institute of Physics.
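For reference, the Kadomtsev–Petviashvili equation from which the reduction starts is usually written as:

    \partial_x \left( u_t + 6 u u_x + u_{xxx} \right) + 3 \sigma^2 u_{yy} = 0, \qquad \sigma^2 = \pm 1

with \sigma^2 = +1 (KP-II) and \sigma^2 = -1 (KP-I) distinguishing the two physically relevant cases.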
Multi-physics CFD simulations in engineering
NASA Astrophysics Data System (ADS)
Yamamoto, Makoto
2013-08-01
Nowadays Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD has matured sufficiently from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon is typically complex, and it is difficult to predict simply by adding other physics to the flow phenomenon. Therefore, multi-physics CFD techniques are still under research and development. This is because the processing speed of current computers is not fast enough to conduct a multi-physics simulation, and furthermore physical models other than flow physics have not been suitably established. Therefore, in the near future, we have to develop various physical models and efficient CFD techniques in order to make multi-physics simulations in engineering successful. In the present paper, I describe the present state of multi-physics CFD simulations, and then show some numerical results obtained in my laboratory, such as ice accretion on, and the electro-chemical machining process of, a three-dimensional compressor blade. Multi-physics CFD simulation will be a key technology in the near future.
Arbabi, Vahid; Pouran, Behdad; Campoli, Gianni; Weinans, Harrie; Zadpoor, Amir A
2016-03-21
One of the most widely used techniques to determine the mechanical properties of cartilage is based on indentation tests and interpretation of the obtained force-time or displacement-time data. In current computational approaches, one needs to simulate the indentation test with finite element models and use an optimization algorithm to estimate the mechanical properties of cartilage. The modeling procedure is cumbersome, and the simulations need to be repeated for every new experiment. For the first time, we propose a method for fast and accurate estimation of the mechanical and physical properties of cartilage as a poroelastic material with the aid of artificial neural networks. In our study, we used finite element models to simulate indentation for poroelastic materials with a wide range of combinations of mechanical and physical properties. The obtained force-time curves are then divided into three parts: the first two parts of the data are used for training and validation of an artificial neural network, while the third part is used for testing the trained network. The trained neural network receives the force-time curves as input and provides the properties of cartilage as output. We observed that the trained network could accurately predict the properties of cartilage within the range of properties for which it was trained. The mechanical and physical properties of cartilage can therefore be estimated very quickly, since no additional finite element modeling is required once the neural network is trained. The robustness of the trained artificial neural network in determining the properties of cartilage based on noisy force-time data was assessed by introducing noise into the simulated force-time data. We found that the training procedure could be optimized so as to maximize the robustness of the neural network against noisy force-time data. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Whankhom, Prawit; Phusawisot, Pilanut; Sayankena, Patcharanon
2016-01-01
The aim of this research is to develop and verify the effectiveness of an instructional model of English reading strategies for students of the Mahasarakham Institute of Physical Education in the Northeastern region through a survey. Classroom action research techniques were used with two sample groups of 34 sophomore physical education students as a control…
Price, Owen; Baker, John; Bee, Penny; Lovell, Karina
2018-01-01
De-escalation techniques are recommended to manage violence and aggression in mental health settings yet restrictive practices continue to be frequently used. Barriers and enablers to the implementation and effectiveness of de-escalation techniques in practice are not well understood. To obtain staff descriptions of de-escalation techniques currently used in mental health settings and explore factors perceived to influence their implementation and effectiveness. Qualitative, semi-structured interviews and Framework Analysis. Five in-patient wards including three male psychiatric intensive care units, one female acute ward and one male acute ward in three UK Mental Health NHS Trusts. 20 ward-based clinical staff. Individual semi-structured interviews were digitally recorded, transcribed verbatim and analysed using a qualitative data analysis software package. Participants described 14 techniques used in response to escalated aggression applied on a continuum between support and control. Techniques along the support-control continuum could be classified in three groups: 'support' (e.g. problem-solving, distraction, reassurance) 'non-physical control' (e.g. reprimands, deterrents, instruction) and 'physical control' (e.g. physical restraint and seclusion). Charting the reasoning staff provided for technique selection against the described behavioural outcome enabled a preliminary understanding of staff, patient and environmental influences on de-escalation success or failure. Importantly, the more coercive 'non-physical control' techniques are currently conceptualised by staff as a feature of de-escalation techniques, yet, there was evidence of a link between these and increased aggression/use of restrictive practices. Risk was not a consistent factor in decisions to adopt more controlling techniques. Moral judgements regarding the function of the aggression; trial-and-error; ingrained local custom (especially around instruction to low stimulus areas); knowledge of the patient; time-efficiency and staff anxiety had a key role in escalating intervention. This paper provides a new model for understanding staff intervention in response to escalated aggression, a continuum between support and control. It further provides a preliminary explanatory framework for understanding the relationship between patient behaviour, staff response and environmental influences on de-escalation success and failure. This framework reveals potentially important behaviour change targets for interventions seeking to reduce violence and use of restrictive practices through enhanced de-escalation techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.
Agent independent task planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
Simulation of an Asynchronous Machine by using a Pseudo Bond Graph
NASA Astrophysics Data System (ADS)
Romero, Gregorio; Felez, Jesus; Maroto, Joaquin; Martinez, M. Luisa
2008-11-01
For engineers, computer simulation is a basic tool, since it enables them to understand how systems work without actually needing to see them. They can learn how systems behave in different circumstances and optimize their design with considerably less cost in terms of time and money than if they had to carry out tests on a physical system. However, if computer simulation is to be reliable, it is essential for the simulation model to be validated. There is a wide range of commercial products on the market for electrical-domain simulation (SPICE, LabVIEW, PSCAD, Dymola, Simulink, Simplorer, ...). These are powerful tools, but they require the engineer to have a thorough knowledge of the electrical field. This paper shows an alternative methodology for simulating an asynchronous machine using the multidomain Bond Graph technique and applying it in any program that permits the simulation of models based on this technique; no extraordinary knowledge of bond graphs or of the electrical field is required to understand the process.
3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds
NASA Technical Reports Server (NTRS)
Leckey, C.; Hinders, M.
2011-01-01
Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool in understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Rojas, M.; Adamowski, J. F.; Gálvez, J.; Tuy, H. A.; Melgar-Quiñonez, H.
2015-12-01
While cropping models represent the biophysical aspects of agricultural systems, system dynamics modelling offers the possibility of representing the socioeconomic (including social and cultural) aspects of these systems. The two types of models can then be coupled in order to include the socioeconomic dimensions of climate change adaptation in the predictions of cropping models. We develop a dynamically coupled socioeconomic-biophysical model of agricultural production and its repercussions on food security in two case studies from Guatemala (a market-based, intensive agricultural system and a low-input, subsistence crop-based system). Through the specification of the climate inputs to the cropping model, the impacts of climate change on the entire system can be analysed, and the participatory nature of the system dynamics model-building process, in which stakeholders from NGOs to local governmental extension workers were included, helps ensure local trust in and use of the model. However, the analysis of climate variability's impacts on agroecosystems includes uncertainty, especially in the case of joint physical-socioeconomic modelling, and the explicit representation of this uncertainty in the participatory development of the models is important to ensure appropriate use of the models by the end users. In addition, standard model calibration, validation, and uncertainty interval estimation techniques used for physically-based models are impractical in the case of socioeconomic modelling. We present a methodology for the calibration and uncertainty analysis of coupled biophysical (cropping) and system dynamics (socioeconomic) agricultural models, using survey data and expert input to calibrate and evaluate the uncertainty of the system dynamics as well as of the overall coupled model. This approach offers an important tool for local decision makers to evaluate the potential impacts of climate change and their feedbacks through the associated socioeconomic system.
Coarse Grid CFD for underresolved simulation
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.
2010-11-01
CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
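Schematically, the method solves the inviscid Euler equations augmented with a volumetric source that carries the unresolved physics (our notation, not the paper's):

    \partial_t U + \nabla \cdot F(U) = S_{sub}(U)

where U is the vector of conserved variables, F the inviscid flux, and S_{sub} the correlation-based source term representing viscosity and other sub-grid effects, tabulated from the fully resolved representative simulations.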
Coupled two-dimensional edge plasma and neutral gas modeling of tokamak scrape-off-layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maingi, Rajesh
1992-08-01
The objective of this study is to devise a detailed description of the tokamak scrape-off-layer (SOL), which includes the best available models of both the plasma and neutral species and the strong coupling between the two in many SOL regimes. A good estimate of both particle flux and heat flux profiles at the limiter/divertor target plates is desired. Peak heat flux is one of the limiting factors in determining the survival probability of plasma-facing-components at high power levels. Plate particle flux affects the neutral flux to the pump, which determines the particle exhaust rate. A technique which couples a two-dimensional (2-D) plasma and a 2-D neutral transport code has been developed (coupled code technique), but this procedure requires large amounts of computer time. Relevant physics has been added to an existing two-neutral-species model which takes the SOL plasma/neutral coupling into account in a simple manner (molecular physics model), and this model is compared with the coupled code technique mentioned above. The molecular physics model is benchmarked against experimental data from a divertor tokamak (DIII-D), and a similar model (single-species model) is benchmarked against data from a pump-limiter tokamak (Tore Supra). The models are then used to examine two key issues: free-streaming-limits (ion energy conduction and momentum flux) and the effects of the non-orthogonal geometry of magnetic flux surfaces and target plates on edge plasma parameter profiles.
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Dwyer, John L.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
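Per pixel, linear spectral unmixing reduces to a constrained least-squares problem; a minimal scipy sketch, assuming an endmember matrix E (n_bands x n_endmembers) of reference mineral spectra:

    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel_reflectance, E):
        # Solve pixel = E @ a with a >= 0 (non-negative least squares);
        # each column of E is one endmember mineral spectrum
        abundances, _ = nnls(E, pixel_reflectance)
        return abundances  # optionally renormalized to sum to one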
NASA Astrophysics Data System (ADS)
Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.
2016-06-01
The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived products (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis, to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits to the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient was increased from 0.6 (relative to the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value of the assimilation was found in the rainfall season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine scale resolution in a small basin, can be improved with the assimilation of coarse-scale satellite-derived data.
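The nudging update is the simplest of assimilation operators; schematically (our notation):

    x_a = x_b + G \left( y_{obs} - H x_b \right)

where x_b is the model (background) state, y_{obs} the satellite-derived observation, H the operator mapping model state to observation space, and G a nudging gain that pulls the state toward the data.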
A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.
Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J
2018-02-01
This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.
Specification of the Surface Charging Environment with SHIELDS
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, J. D.; Vernon, L.; Woodroffe, J. R.; Brito, T.; Toth, G.; Welling, D. T.; Yu, Y.; Albert, J.; Birn, J.; Borovsky, J.; Denton, M.; Horne, R. B.; Lemon, C.; Markidis, S.; Thomsen, M. F.; Young, S. L.
2016-12-01
Predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure, i.e. "space weather", remains a big space physics challenge. A recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and microscale. Important physics questions related to rapid particle injection and acceleration associated with magnetospheric storms and substorms as well as plasma waves are investigated. These challenging problems are addressed using a team of world-class experts in the fields of space science and computational plasma physics, and state-of-the-art models and computational facilities. In addition to physics-based models (like RAM-SCB, BATS-R-US, and iPIC3D), new data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed. Simulations with the SHIELDS framework of the near-Earth space environment where operational satellites reside are presented. Further model development and the organization of a "Spacecraft Charging Environment Challenge" by the SHIELDS project at LANL in collaboration with the NSF Geospace Environment Modeling (GEM) Workshop and the multi-agency Community Coordinated Modeling Center (CCMC) to assess the accuracy of SCE predictions are discussed.
Winter, Sandra J; Sheats, Jylana L; King, Abby C
2016-01-01
This review examined the use of health behavior change techniques and theory in technology-enabled interventions targeting risk factors and indicators for cardiovascular disease (CVD) prevention and treatment. Articles targeting physical activity, weight loss, smoking cessation and management of hypertension, lipids and blood glucose were sourced from PubMed (November 2010-2015) and coded for use of 1) technology, 2) health behavior change techniques (using the CALO-RE taxonomy), and 3) health behavior theories. Of the 984 articles reviewed, 304 were relevant (240=intervention, 64=review). Twenty-two different technologies were used (M=1.45, SD=+/-0.719). The most frequently used behavior change techniques were self-monitoring and feedback on performance (M=5.4, SD=+/-2.9). Half (52%) of the intervention studies named a theory/model - most frequently Social Cognitive Theory, the Trans-theoretical Model, and the Theory of Planned Behavior/Reasoned Action. To optimize technology-enabled interventions targeting CVD risk factors, integrated behavior change theories that incorporate a variety of evidence-based health behavior change techniques are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Thompson, L.G.
1997-02-01
Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
NASA Astrophysics Data System (ADS)
Kavcar, Nevzat; Özen, Ali Ihsan
2017-02-01
The purpose of this work is to determine physics teacher candidates' views on the content and general properties of the Physics 11 textbook aligned with the 2013 Secondary School Physics Curriculum. Twenty-four teacher candidates in the 2015-2016 school year constituted the sample of the study, in which a survey model based on qualitative research techniques was used and document analysis was performed. The data collection tool of the research consisted of files with 51 and 28 open-ended questions covering the subject content and the general properties of the textbook, respectively. It was concluded that the textbook was sufficient in terms of discussion, investigation, daily-life context, visual elements and traces of permanent learning, but insufficient in its design elements, with only one project in the Electricity and Magnetism unit. Affective-domain activities could be included in the textbook, a teacher guide book and a teaching packet for the book could be provided, and the highlighted issues and the quality of the textbook could be improved.
NASA Astrophysics Data System (ADS)
Didiş Körhasan, Nilüfer; Eryılmaz, Ali; Erkoç, Şakir
2016-01-01
Mental models are coherently organized knowledge structures used to explain phenomena. They interact with social environments and evolve with the interaction. Lacking daily experience with phenomena, the social interaction gains much more importance. In this part of our multiphase study, we investigate how instructional interactions influenced students’ mental models about the quantization of physical observables. Class observations and interviews were analysed by studying students’ mental models constructed in a modern physics course during an academic semester. The research revealed that students’ mental models were influenced by (1) the manner of teaching, including instructional methodologies and content specific techniques used by the instructor, (2) order of the topics and familiarity with concepts, and (3) peers.
A Simulation and Modeling Framework for Space Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olivier, S S
This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.
Forecasting Lightning Threat using Cloud-resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.
2009-01-01
As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
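Both proxies are easily computed from model output; a numpy sketch (field names and the calibration constants k1, k2 are placeholders, with the calibration done against observed peak flash rate densities as described above):

    import numpy as np

    def flash_rate_proxy_1(w, q_graupel, k1=1.0):
        # Method 1: product of updraft speed and graupel mixing ratio
        # on the -15 degC model level
        return k1 * w * q_graupel

    def flash_rate_proxy_2(rho, q_ice, dz, k2=1.0):
        # Method 2: vertically integrated ice content per grid column;
        # rho, q_ice, dz are (nz, ny, nx) density, ice mixing ratio, layer depth
        return k2 * np.sum(rho * q_ice * dz, axis=0)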
NASA Astrophysics Data System (ADS)
Hansen, U.; Rodgers, S.; Jensen, K. F.
2000-07-01
A general method for modeling ionized physical vapor deposition is presented. As an example, the method is applied to growth of an aluminum film in the presence of an ionized argon flux. Molecular dynamics techniques are used to examine the surface adsorption, reflection, and sputter reactions taking place during ionized physical vapor deposition. We predict their relative probabilities and discuss their dependence on energy and incident angle. Subsequently, we combine the information obtained from molecular dynamics with a line-of-sight transport model in a two-dimensional feature, incorporating all effects of reemission and resputtering. This provides a complete growth rate model that allows inclusion of energy- and angular-dependent reaction rates. Finally, a level-set approach is used to describe the morphology of the growing film. We thus arrive at a computationally highly efficient and accurate scheme for modeling the growth of thin films. We demonstrate the capabilities of the model by predicting the major differences in Al film topography between conventional and ionized sputter deposition techniques, studying thin-film growth under ionized physical vapor deposition conditions with different Ar fluxes.
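The film morphology step advances the surface with the standard level-set equation (general form; the deposition-specific speed comes from the growth-rate model described above):

    \phi_t + F \, |\nabla \phi| = 0

where \phi is the level-set function whose zero contour is the film surface and F the local growth speed obtained from the flux and surface-reaction model.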
NASA Astrophysics Data System (ADS)
Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter
2014-03-01
As part of a project to predict the full-field dynamic strain in rotating structures (e.g. wind turbines and helicopter blades), an experimental measurement was performed on a wind turbine attached to a 500-lb steel block and excited using a mechanical shaker. In this paper, the dynamic displacement of several optical targets mounted to a turbine placed in a semi-built-in configuration was measured by using three-dimensional point tracking. Using an expansion algorithm in conjunction with a finite element model of the blades, the measured displacements were expanded to all finite element degrees of freedom. The calculated displacements were applied to the finite element model to extract dynamic strain on the surface as well as within the interior points of the structure. To validate the technique for dynamic strain prediction, the physical strain at eight locations on the blades was measured during excitation using strain-gages. The expansion was performed by using both structural modes of an individual cantilevered blade and using modes of the entire structure (three-bladed wind turbine and the fixture) and the predicted strain was compared to the physical strain-gage measurements. The results demonstrate the ability of the technique to predict full-field dynamic strain from limited sets of measurements and can be used as a condition based monitoring tool to help provide damage prognosis of structures during operation.
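The expansion step is essentially a modal least-squares fit; a numpy sketch of the idea (SEREP-style, with hypothetical array names):

    import numpy as np

    def expand_displacements(u_measured, phi_measured, phi_full):
        # Fit modal coordinates from the few optical-target DOFs, then expand:
        # u_full = Phi_full @ pinv(Phi_measured) @ u_measured
        q = np.linalg.pinv(phi_measured) @ u_measured  # modal coordinates
        return phi_full @ q                            # full-field displacements

The expanded displacements are then applied to the finite element model to recover surface and interior strain, as the abstract describes.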
Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria
ERIC Educational Resources Information Center
Emetere, Moses E.
2014-01-01
This study was designed to investigate the validity of the stress complexity model (SCM) for the teaching and learning of school physics in the Abuja Municipal Area Council of Abuja, Nigeria. About two hundred students were randomly selected by a simple random sampling technique from schools within the Abuja Municipal Area Council. A survey research…
The analysis of cable forces based on natural frequency
NASA Astrophysics Data System (ADS)
Suangga, Made; Hidayat, Irpan; Juliastuti; Bontan, Darwin Julius
2017-12-01
A cable is a flexible structural member that is effective at resisting tensile forces. Cables are used in a variety of structures that exploit their unique characteristics to create efficient tension-member designs. The state of the cable forces in a cable-supported structure is an important indication of whether the structure is in good condition. Several methods have been developed to measure cable forces on site. The vibration technique, which uses the correlation between natural frequency and cable force, is a simple method for determining in situ cable forces; however, the method needs accurate information on the boundary conditions, cable mass, and cable length. The natural frequency of the cable is determined by applying the FFT (Fast Fourier Transform) technique to an acceleration record of the cable. Based on the natural frequency obtained, the cable forces can then be determined analytically or by a finite element program. This research focuses on the vibration technique for determining cable forces, on understanding the effect of the cable's physical parameters, and on modelling techniques for the natural frequency and cable forces.
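A minimal sketch of the vibration technique under the simplest (taut-string) assumption, which ignores the sag and bending-stiffness effects that make the boundary, mass, and length information so important; identifying the dominant spectral peak with the chosen mode is also an assumption:

```python
import numpy as np

def cable_tension(accel, fs, length, mass_per_len, mode=1):
    """Estimate cable tension from an acceleration record (taut-string model).

    accel        : 1-D acceleration time series measured on the cable
    fs           : sampling frequency [Hz]
    length       : cable length [m]
    mass_per_len : cable mass per unit length [kg/m]
    mode         : mode number assumed for the identified peak
    """
    # FFT of the acceleration record; pick the dominant spectral peak.
    spec = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    fn = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin

    # Taut-string relation: f_n = (n / 2L) * sqrt(T / m), hence
    tension = 4.0 * mass_per_len * length**2 * (fn / mode) ** 2
    return fn, tension
```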
Dry coating of solid dosage forms: an overview of processes and applications.
Foppoli, Anastasia Anna; Maroni, Alessandra; Cerea, Matteo; Zema, Lucia; Gazzaniga, Andrea
2017-12-01
Dry coating techniques enable manufacturing of coated solid dosage forms with no, or very limited, use of solvents. As a result, major drawbacks associated with both organic solvents and aqueous coating systems can be overcome, such as toxicological, environmental, and safety-related issues on the one hand as well as costly drying phases and impaired product stability on the other. The considerable advantages related to solventless coating have prompted strong research interest in this field of pharmaceutics. In this article, processes and applications relevant to techniques intended for dry coating are analyzed and reviewed. Based on the physical state of the coat-forming agents, liquid- and solid-based techniques are distinguished. The former include hot-melt coating and coating by photocuring, while the latter encompass press coating and powder coating. Moreover, solventless techniques, such as injection molding and three-dimensional printing by fused deposition modeling, which are not purposely conceived for coating, are also discussed in that they would open new perspectives in the manufacturing of coated-like dosage forms.
A reinforcement learning-based architecture for fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate reasoning based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine-tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning, it learns faster than systems that train networks from scratch. The approach is applied to a cart-pole balancing system.
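ARIC's critic is a neural network, but the underlying temporal-difference idea can be shown with a tabular TD(0) update; this is an illustrative sketch, not the ARIC implementation:

```python
def td_update(v, state, next_state, reward, alpha=0.1, gamma=0.95):
    """One TD(0) update of a tabular state-value estimate v (a dict)."""
    target = reward + gamma * v.get(next_state, 0.0)
    error = target - v.get(state, 0.0)      # temporal-difference error
    v[state] = v.get(state, 0.0) + alpha * error
    # In ARIC-like schemes, this same error signal also fine-tunes the
    # fuzzy control knowledge base.
    return error
```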
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data, such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kusumawati, Intan, E-mail: intankusumawati10@gmail.com; Marwoto, Putut, E-mail: pmarwoto@yahoo.com; Linuwih, Suharto, E-mail: suhartolinuwih@gmail.com
Multi-representation ability has been widely studied, but it has not previously been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relation between these two abilities through the Presentatif Based on Multi-representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. The data-collection instruments, namely essay-form pre-tests and post-tests, oral communication skill observation sheets, and PBM learning-model assessment sheets, were all rated in the high validity category (3.91, 4.22, 4.13, and 3.88, respectively). Test reliability, estimated with the Cronbach's alpha technique, gave a reliability coefficient of 0.494. Students of the Department of Physics Education, Unnes, served as the research subjects. The students' multi-representation tendency, ordered from high to low, was M, D, G, V, whereas in order of accuracy the grouping was V, D, G, M. Multi-representation ability and oral communication skills were found to be proportional. Implementing the connection between them generated grounded theory. This study should be replicated with other physics material, or at other universities, for comparison.
Comparative physical mapping between wheat chromosome arm 2BL and rice chromosome 4.
Lee, Tong Geon; Lee, Yong Jin; Kim, Dae Yeon; Seo, Yong Weon
2010-12-01
Physical maps of chromosomes provide a framework for organizing and integrating diverse genetic information. DNA microarrays are a valuable technique for physical mapping and can also be used to facilitate the discovery of single feature polymorphisms (SFPs). Wheat chromosome arm 2BL was physically mapped using a Wheat Genome Array onto near-isogenic lines (NILs) with the aid of wheat-rice synteny and mapped wheat EST information. Using high variance probe set (HVP) analysis, 314 HVPs constituting genes present on 2BL were identified. The 314 HVPs were grouped into 3 categories: HVPs that match only rice chromosome 4 (298 HVPs), those that match only wheat ESTs mapped on 2BL (1), and those that match both rice chromosome 4 and wheat ESTs mapped on 2BL (15). All HVPs were converted into gene sets, which represented either unique rice gene models or mapped wheat ESTs that matched identified HVPs. Comparative physical maps were constructed for 16 wheat gene sets and 271 rice gene sets. Of the 271 rice gene sets, 257 were mapped to the 18-35 Mb regions on rice chromosome 4. Based on HVP analysis and sequence similarity between the gene models in the rice chromosomes and mapped wheat ESTs, the outermost rice gene model that limits the translocation breakpoint to orthologous regions was identified.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields with sufficient accuracy for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of the different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
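One standard way to make a POD basis respect Dirichlet data exactly is to split off a boundary-condition-satisfying reference field so that the basis spans only the homogeneous subspace. The sketch below illustrates that idea only; the paper's extension is more general and also treats Neumann and Cauchy boundaries:

```python
import numpy as np

def pod_basis_with_dirichlet(snapshots, h_bc, n_modes):
    """POD basis that preserves fixed Dirichlet boundary values.

    snapshots : (n_nodes, n_snap) head-field snapshots from the full model
    h_bc      : (n_nodes,) a reference field satisfying the Dirichlet BCs
    n_modes   : number of POD modes to retain
    """
    # Subtracting a BC-satisfying field leaves snapshots that vanish on
    # the Dirichlet boundary, so any reduced solution of the form
    # h_bc + phi @ a satisfies the boundary condition exactly.
    x = snapshots - h_bc[:, None]
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_modes]       # reduced model: h(t) ~ h_bc + phi @ a(t)
```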
Joseph, Rodney P.; Cherrington, Andrea; Cuffee, Yendelela; Knight, BernNadette; Lewis, Dwight; Allison, Jeroan J.
2014-01-01
Introduction Innovative approaches are needed to promote physical activity among young adult overweight and obese African American women. We sought to describe key elements that African American women desire in a culturally relevant Internet-based tool to promote physical activity among overweight and obese young adult African American women. Methods A mixed-method approach combining nominal group technique and traditional focus groups was used to elicit recommendations for the development of an Internet-based physical activity promotion tool. Participants, ages 19 to 30 years, were enrolled in a major university. Nominal group technique sessions were conducted to identify themes viewed as key features for inclusion in a culturally relevant Internet-based tool. Confirmatory focus groups were conducted to verify and elicit more in-depth information on the themes. Results Twenty-nine women participated in nominal group (n = 13) and traditional focus group sessions (n = 16). Features that emerged to be included in a culturally relevant Internet-based physical activity promotion tool were personalized website pages, diverse body images on websites and in videos, motivational stories about physical activity and women similar to themselves in size and body shape, tips on hair care maintenance during physical activity, and online social support through social media (eg, Facebook, Twitter). Conclusion Incorporating existing social media tools and motivational stories from young adult African American women in Internet-based tools may increase the feasibility, acceptability, and success of Internet-based physical activity programs in this high-risk, understudied population. PMID:24433625
Torque limit of PM motors for field-weakening region operation
Royak, Semyon [Beachwood, OH; Harbaugh, Mark M [Richfield, OH
2012-02-14
The invention includes a motor controller and technique for controlling a permanent magnet motor. In accordance with one aspect of the present technique, a permanent magnet motor is controlled by receiving a torque command, determining a physical torque limit based on a stator frequency, determining a theoretical torque limit based on a maximum available voltage and motor inductance ratio, and limiting the torque command to the smaller of the physical torque limit and the theoretical torque limit. Receiving the torque command may include normalizing the torque command to obtain a normalized torque command, determining the physical torque limit may include determining a normalized physical torque limit, determining a theoretical torque limit may include determining a normalized theoretical torque limit, and limiting the torque command may include limiting the normalized torque command to the smaller of the normalized physical torque limit and the normalized theoretical torque limit.
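The claimed limiting logic reduces to clamping the (normalized) torque command to the smaller of the two limits; a minimal sketch, assuming the two limits are computed elsewhere from the stator frequency and from the available voltage and motor inductance ratio:

```python
def limit_torque(torque_cmd, t_phys, t_theory):
    """Clamp a normalized torque command to the smaller of two limits.

    t_phys   : physical torque limit (from stator frequency)
    t_theory : theoretical torque limit (from max voltage / inductance ratio)
    """
    t_max = min(t_phys, t_theory)
    # Symmetric clamp so negative (braking) commands are limited too.
    return max(-t_max, min(torque_cmd, t_max))
```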
Sylvester, B.D.; Zammit, K.; Fong, A.J.; Sabiston, C.M.
2017-01-01
Background Cancer centre Web sites can be a useful tool for distributing information about the benefits of physical activity for breast cancer (bca) survivors, and they hold potential for supporting health behaviour change. However, the extent to which cancer centre Web sites use evidence-based behaviour change techniques to foster physical activity behaviour among bca survivors is currently unknown. The aim of our study was to evaluate the presentation of behaviour-change techniques on Canadian cancer centre Web sites to promote physical activity behaviour for bca survivors. Methods All Canadian cancer centre Web sites (n = 39) were evaluated by two raters using the Coventry, Aberdeen, and London–Refined (calo-re) taxonomy of behaviour change techniques and the eEurope 2002 Quality Criteria for Health Related Websites. Descriptive statistics were calculated. Results The most common behaviour change techniques used on Web sites were providing information about consequences in general (80%), suggesting goal-setting behaviour (56%), and planning social support or social change (46%). Overall, Canadian cancer centre Web sites presented an average of M = 6.31 behaviour change techniques (of 40 that were coded) to help bca survivors increase their physical activity behaviour. Evidence of quality factors ranged from 90% (sites that provided evidence of readability) to 0% (sites that provided an editorial policy). Conclusions Our results provide preliminary evidence that, of 40 behaviour-change techniques that were coded, fewer than 20% were used to promote physical activity behaviour to bca survivors on cancer centre Web sites, and that the most effective techniques were inconsistently used. On cancer centre Web sites, health promotion specialists could focus on emphasizing knowledge mobilization efforts using available research into behaviour-change techniques to help bca survivors increase their physical activity. PMID:29270056
Wood and Wood-Based Materials as Sensors—A Review of the Piezoelectric Effect in Wood
Robert J. Ross; Jiangming Kan; Xiping Wang; Julie Blankenburg; Janet I. Stockhausen; Roy F. Pellerin
2012-01-01
A variety of techniques have been investigated for use in assessing the physical and mechanical properties of wood products and structures. Ultrasound, transverse vibration, and stress-wave based methods are all techniques that have shown promise for many nondestructive evaluation applications. These techniques and others rely on the use of measurement systems to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, J.H.
1984-05-01
Model design, implementation and quality assurance procedures can have a significant impact on the effectiveness and long-term utility of any modeling approach. The Regional Oxidant Modeling System (ROMS) is exceptionally complex because it treats all chemical and physical processes thought to affect ozone concentration on a regional scale. Thus, to effectively illustrate useful design and implementation techniques, this paper describes the general modeling framework which forms the basis of the ROMS. This framework is flexible enough to allow straightforward update or replacement of the chemical kinetics mechanism and/or any theoretical formulations of the physical processes. Use of the Jackson Structured Programming (JSP) method to implement this modeling framework has not only increased programmer productivity and the quality of the resulting programs, but also has provided standardized program design, dynamic documentation, and easily maintainable and transportable code. A summary of the JSP method is presented to encourage modelers to pursue this technique in their own model development efforts. In addition, since data preparation is such an integral part of a successful modeling system, the ROMS processor network is described with emphasis on the internal quality control techniques.
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges, in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are these non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is really the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis with real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
Nian, Qiong; Callahan, Michael; Saei, Mojib; Look, David; Efstathiadis, Harry; Bailey, John; Cheng, Gary J.
2015-01-01
A new method combining aqueous solution printing with UV laser crystallization (UVLC) and post-annealing is developed to deposit highly transparent and conductive aluminum-doped zinc oxide (AZO) films. This technique is able to rapidly produce large-area AZO films with better structural and optoelectronic properties than most high-vacuum deposition methods, suggesting a potential large-scale manufacturing technique. The optoelectronic performance improvement is attributed to the decrease in grain boundary density and the passivation of electron traps at grain boundaries induced by UVLC and forming gas annealing (FMG). The physical model and computational simulation developed in this work could be applied to thermal treatment of many other metal oxide films. PMID:26515670
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems, as well as the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to predict the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
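A minimal sketch of a non-intrusive, point-collocation polynomial chaos fit for a single standard-normal input; the degree, sampling, and variance formula (E[He_k^2] = k! for probabilists' Hermite polynomials) follow standard PCE practice rather than this work's specific implementation. A multifidelity variant would add a second, low-order expansion fitted to the discrepancy between sparse high-fidelity samples and the low-fidelity surrogate:

```python
from math import factorial

import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pce_fit(xi, y, degree):
    """Point-collocation PCE fit for one standard-normal input.

    xi : (n_samples,) standard-normal collocation points
    y  : (n_samples,) model outputs evaluated at those points
    """
    psi = hermevander(xi, degree)            # probabilists' Hermite basis
    coeffs, *_ = np.linalg.lstsq(psi, y, rcond=None)
    mean = coeffs[0]
    # Var = sum_k c_k^2 * E[He_k^2], with E[He_k^2] = k!
    var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, degree + 1))
    return coeffs, mean, var
```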
Xu, Shidong; Sun, Guanghui; Sun, Weichao
2017-01-01
In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Different from most existing strategies, the T-S fuzzy approximation approach is used to model the nonlinear dynamics of flexible spacecraft. Simultaneously, the physical constraints of the system, such as input delay, input saturation, and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and convex optimization technique, a novel robust controller is proposed to implement rest-to-rest attitude maneuvers for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example, together with simulation results, is provided to confirm the applicability and merits of the developed control strategy.
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
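The "decide as soon as the data justify it" principle can be illustrated with a simple sequential likelihood-ratio test on photon inter-arrival times; the actual processor is Bayesian over physics model parameters and also uses energy deposits, so this sketch, which assumes known background and source-present rates, conveys only the sequential decision idea:

```python
import math

def sequential_detector(event_times, bkg_rate, src_rate, odds_threshold=100.0):
    """Sequential likelihood-ratio test on photon arrival times.

    Each inter-arrival interval dt updates the log-likelihood ratio between
    a 'background + source' Poisson rate (src_rate) and a background-only
    rate (bkg_rate); a detection is declared as soon as the accumulated
    odds exceed the threshold, rather than after a fixed counting interval.
    """
    log_lr, t_prev = 0.0, 0.0
    for i, t in enumerate(event_times):
        dt = t - t_prev
        t_prev = t
        # Log-ratio of exponential inter-arrival likelihoods l*exp(-l*dt).
        log_lr += math.log(src_rate / bkg_rate) - (src_rate - bkg_rate) * dt
        if log_lr >= math.log(odds_threshold):
            return i + 1, log_lr        # detection after i + 1 events
    return None, log_lr                 # no decision within the record
```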
Gene therapy progress and prospects: magnetic nanoparticle-based gene delivery.
Dobson, J
2006-02-01
The recent emphasis on the development of non-viral transfection agents for gene delivery has led to new physics- and chemistry-based techniques, which take advantage of charge interactions and energetic processes. One of these techniques, which shows much promise for both in vitro and in vivo transfection, involves the use of biocompatible magnetic nanoparticles for gene delivery. In these systems, therapeutic or reporter genes are attached to magnetic nanoparticles, which are then focused to the target site/cells via high-field/high-gradient magnets. The technique promotes rapid transfection and, as more recent work indicates, excellent overall transfection levels as well. The advantages and difficulties associated with magnetic nanoparticle-based transfection will be discussed, as will the underlying physical principles, recent studies and potential future applications.
Distillation and Air Stripping Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2009-01-01
Air stripping and distillation are two different gravity-based methods that may be applied to the purification of wastewater at a lunar base. These gravity-based solutions to water processing are robust physical separation techniques, which may be preferable to many other techniques for their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation models and air stripping models. While the individual components of each of the waste streams will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for the recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Distillation processes are modeled separately and in tandem with air stripping to demonstrate the potential effectiveness and utility of these methods in recycling wastewater on the Moon. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated to determine the minimum number of stages necessary for each of 65 components in mixed humidity-condensate and urine wastewater streams. Components of the wastewater streams are ranked by Henry's law constant, and the suitability of air stripping for the purification of wastewater, in terms of component removal, is evaluated. Scaling factors for distillation and air stripping columns are presented to account for the difference in the lunar gravitational environment. Commercially available distillation and air stripping units considered suitable for Exploration Life Support are presented. The advantages of the various designs are summarized with respect to water purity levels, power consumption, and processing rates.
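Ranking components by Henry's law constant, as described above, is straightforward; a minimal sketch with illustrative placeholder values (not mission wastewater data):

```python
def rank_for_air_stripping(henrys_constants):
    """Rank components by dimensionless Henry's law constant; the most
    volatile components, at the top, are the best air-stripping candidates."""
    return sorted(henrys_constants.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative placeholder values only.
print(rank_for_air_stripping({'component_a': 1.6e-3,
                              'component_b': 6.1e-4,
                              'component_c': 2.1e-4}))
```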
Zhai, Peng-Wang; Hu, Yongxiang; Trepte, Charles R; Lucker, Patricia L
2009-02-16
A vector radiative transfer model has been developed for coupled atmosphere and ocean systems based on the Successive Order of Scattering (SOS) method. The emphasis of this study is to make the model easy to use and computationally efficient. This model provides the full Stokes vector at arbitrary locations, which can be conveniently specified by users. The model is capable of tracking and labeling different sources of the photons that are measured, e.g. water-leaving radiances and reflected skylight. This model also has the capability to separate fluorescence from multiply scattered sunlight. The delta-fit technique has been adopted to reduce the computational time associated with strongly forward-peaked scattering phase matrices. The exponential-linear approximation has been used to reduce the number of discretized vertical layers while maintaining accuracy. This model is developed to serve the remote sensing community in harvesting physical parameters from multi-platform, multi-sensor measurements that target different components of the atmosphere-ocean system.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2014-05-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
Material Encounters with Mathematics: The Case for Museum Based Cross-Curricular Integration
ERIC Educational Resources Information Center
de Freitas, Elizabeth; Bentley, Sean J.
2012-01-01
This paper reports on research from a network of high school and museum partnerships designed to explore techniques for integrating mathematics and physics learning experiences during the first year of high school. The foundation of the curriculum is a problem-based, museum-based, and hands-on approach to mathematics and physics. In this paper, we…
ERIC Educational Resources Information Center
Koka, Andre
2017-01-01
This study examined the effectiveness of a brief theory-based intervention on muscular strength among adolescents in a physical education setting. The intervention adopted a process-based mental simulation technique. The self-reported frequency of practising for and actual levels of abdominal muscular strength/endurance as one component of…
NASA Astrophysics Data System (ADS)
Brezeanu, G.; Pristavu, G.; Draghici, F.; Badila, M.; Pascu, R.
2017-08-01
In this paper, a characterization technique for 4H-SiC Schottky diodes with varying levels of metal-semiconductor contact inhomogeneity is proposed. A macro-model with discrete barrier height non-uniformity, suitable for high-temperature evaluation of SiC Schottky contacts, is introduced in order to determine the temperature interval and bias domain where the electrical behavior of the devices can be described by thermionic emission theory (i.e., where they exhibit quasi-ideal performance). A minimal set of parameters is associated with the model: the effective barrier height and the non-uniformity factor peff. Model-extracted parameters are discussed in comparison with literature-reported results based on existing inhomogeneity approaches, in terms of complexity and physical relevance. Special consideration is given to models based on a Gaussian distribution of barrier heights over the contact surface. The proposed methodology is validated by electrical characterization of nickel silicide Schottky contacts on silicon carbide (4H-SiC), where a discrete barrier distribution can be assumed. The same method is applied to inhomogeneous Pt/4H-SiC contacts. The forward characteristics measured at different temperatures are accurately reproduced using this inhomogeneous barrier model. Quasi-ideal behavior is identified over intervals spanning 200 °C for all measured Schottky samples, with both Ni and Pt contact metals. A predictable exponential current-voltage variation over at least two orders of magnitude is also demonstrated, with a stable barrier height and effective area for temperatures up to 400 °C. This application-oriented characterization technique is confirmed by using the model parameters to fit a SiC Schottky high-temperature sensor's response.
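A minimal sketch of the discrete-barrier thermionic emission idea: two barrier heights weighted by a non-uniformity factor give the effective saturation current. The parameter names and the exact weighting are assumptions; the paper's macro-model may be parameterized differently:

```python
import numpy as np

K_B = 8.617e-5          # Boltzmann constant [eV/K]

def schottky_iv(v, temp, area, a_star, phi_b, phi_low, p_eff, n=1.05):
    """Thermionic-emission I-V with a discrete barrier-height distribution.

    The nominal barrier phi_b [eV] covers a fraction (1 - p_eff) of the
    contact, while a lower patch barrier phi_low [eV] covers p_eff.
    a_star : Richardson constant [A cm^-2 K^-2]; area in cm^2; temp in K.
    """
    j_sat = a_star * temp ** 2 * (
        (1.0 - p_eff) * np.exp(-phi_b / (K_B * temp))
        + p_eff * np.exp(-phi_low / (K_B * temp)))
    return area * j_sat * (np.exp(v / (n * K_B * temp)) - 1.0)
```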
Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves
NASA Astrophysics Data System (ADS)
Wattanakasiwich, P.; Ananta, S.
2010-07-01
In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Therefore many multiple-choice instruments have been developed to probe students' conceptual understanding of various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses to the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities that students use such knowledge in a range of equivalent contexts. Model analysis consists of two algorithms: the concentration factor and model estimation. This paper presents only results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state lies in the misconception region. An item response curve (IRC), derived from item response theory, plots the percentage of students selecting a particular choice against their total score. Pros and cons of both techniques are compared and discussed.
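An item response curve is simple to compute: for each total-score bin, take the fraction of students choosing each option. A minimal sketch:

```python
import numpy as np

def item_response_curves(scores, choices, n_options):
    """Fraction of students selecting each choice, binned by total score.

    scores    : (n_students,) total test scores (integers)
    choices   : (n_students,) index of the option chosen on one item
    n_options : number of answer options for that item
    """
    curves = {}
    for s in np.unique(scores):
        sel = choices[scores == s]
        curves[int(s)] = [float(np.mean(sel == k)) for k in range(n_options)]
    return curves   # plot fraction vs. score, one curve per option
```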
DAWN (Design Assistant Workstation) for advanced physical-chemical life support systems
NASA Technical Reports Server (NTRS)
Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.
1989-01-01
This paper reports the results of a project supported by the National Aeronautics and Space Administration, Office of Aeronautics and Space Technology (NASA-OAST) under the Advanced Life Support Development Program. It is an initial attempt to integrate artificial intelligence techniques (via expert systems) with conventional quantitative modeling tools for advanced physical-chemical life support systems. The addition of artificial intelligence techniques will assist the designer in the definition and simulation of loosely/well-defined life support processes/problems as well as assist in the capture of design knowledge, both quantitative and qualitative. Expert system and conventional modeling tools are integrated to provide a design workstation that assists the engineer/scientist in creating, evaluating, documenting and optimizing physical-chemical life support systems for short-term and extended duration missions.
The impulsive hard X-rays from solar flares
NASA Technical Reports Server (NTRS)
Leach, J.
1984-01-01
A technique for determining the physical arrangement of a solar flare during the impulsive phase was developed based upon a nonthermal model interpretation of the emitted hard X-rays. Accurate values are obtained for the flare parameters, including those which describe the magnetic field structure and the beaming of the energetic electrons, parameters which have hitherto been mostly inaccessible. The X-ray intensity height structure can be described readily with a single expression based upon a semi-empirical fit to the results from many models. Results show that the degree of linear polarization of the X-rays from a flaring loop does not exceed 25 percent and can easily and naturally be as low as the polarization expected from a thermal model. This is a highly significant result in that it supersedes those based upon less thorough calculations of the electron beam dynamics and requires that a reevaluation of hopes of using polarization measurements to discriminate between categories of flare models.
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
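A rough sketch of the community-detection step for point vortices, using modularity maximization on a graph whose edge weights encode induced-velocity coupling; the symmetric weight and the circulation-weighted centroid are illustrative assumptions, not the paper's exact formulation:

```python
import itertools

import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

def vortical_communities(positions, strengths):
    """Group point vortices into communities by induced-velocity coupling.

    positions : (n, 2) array of vortex positions
    strengths : (n,) array of circulations
    """
    positions = np.asarray(positions, dtype=float)
    strengths = np.asarray(strengths, dtype=float)
    g = nx.Graph()
    g.add_nodes_from(range(len(strengths)))
    for i, j in itertools.combinations(range(len(strengths)), 2):
        r = np.linalg.norm(positions[i] - positions[j])
        # Symmetric Biot-Savart-like coupling as the edge weight.
        w = (abs(strengths[i]) + abs(strengths[j])) / (2.0 * np.pi * r)
        g.add_edge(i, j, weight=w)
    communities = greedy_modularity_communities(g, weight='weight')
    # Distill each community into a circulation-weighted centroid.
    centroids = [np.average(positions[list(c)], axis=0,
                            weights=np.abs(strengths[list(c)]))
                 for c in communities]
    return communities, centroids
```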
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license. PMID:22768292
Forecast of geomagnetic storms using CME parameters and the WSA-ENLIL model
NASA Astrophysics Data System (ADS)
Moon, Y.; Lee, J.; Jang, S.; Na, H.; Lee, J.
2013-12-01
Intense geomagnetic storms are caused by coronal mass ejections (CMEs) from the Sun and their forecast is quite important in protecting space- and ground-based technological systems. The onset and strength of geomagnetic storms depend on the kinematic and magnetic properties of CMEs. Current forecast techniques mostly use solar wind in-situ measurements that provide only a short lead time. On the other hand, techniques using CME observations near the Sun have the potential to provide 1-3 days of lead time before the storm occurs. Therefore, one of the challenging issues is to forecast interplanetary magnetic field (IMF) southward components and hence geomagnetic storm strength with a lead-time on the order of 1-3 days. We are going to answer the following three questions: (1) when does a CME arrive at the Earth? (2) what is the probability that a CME can induce a geomagnetic storm? and (3) how strong is the storm? To address the first question, we forecast the arrival time and other physical parameters of CMEs at the Earth using the WSA-ENLIL model with three CME cone types. The second question is answered by examining the geoeffective and non-geoeffective CMEs depending on CME observations (speed, source location, earthward direction, magnetic field orientation, and cone-model output). The third question is addressed by examining the relationship between CME parameters and geomagnetic indices (or IMF southward component). The forecast method will be developed with a three-stage approach, which will make a prediction within four hours after the solar coronagraph data become available. We expect that this study will enable us to forecast the onset and strength of a geomagnetic storm a few days in advance using only CME parameters and the physics-based models.
NASA Astrophysics Data System (ADS)
Blasch, Erik; Salerno, John; Kadar, Ivan; Yang, Shanchieh J.; Fenstermacher, Laurie; Endsley, Mica; Grewe, Lynne
2013-05-01
During the SPIE 2012 conference, panelists convened to discuss "Real world issues and challenges in Human Social/Cultural/Behavioral modeling with Applications to Information Fusion." Each panelist presented their current trends and issues. The panel had agreement on advanced situation modeling, working with users for situation awareness and sense-making, and HSCB context modeling in focusing research activities. Each panelist added different perspectives based on the domain of interest such as physical, cyber, and social attacks from which estimates and projections can be forecasted. Also, additional techniques were addressed such as interest graphs, network modeling, and variable length Markov Models. This paper summarizes the panelists discussions to highlight the common themes and the related contrasting approaches to the domains in which HSCB applies to information fusion applications.
Presentation of growth velocities of rural Haitian children using smoothing spline techniques.
Waternaux, C; Hebert, J R; Dawson, R; Berggren, G G
1987-01-01
The examination of monthly (or quarterly) increments in weight or length is important for assessing the nutritional and health status of children. Growth velocities are widely thought to be more informative than the actual weight or length measurements per se. However, there are no standards by which clinicians, researchers, or parents can gauge a child's growth. This paper describes a method for computing growth velocities (monthly increments) from physical growth measurements with substantial measurement error and irregular spacing over time. These features are characteristic of data collected in the field, where conditions are less than ideal. The technique of smoothing by splines provides a powerful tool to deal with the variability and irregularity of the measurements. The technique consists of approximating the observed data by a smooth curve, as a clinician might have drawn on the child's growth chart. Spline functions are particularly appropriate for describing biophysical processes such as growth, for which no model can be postulated a priori. This paper describes how the technique was used for the analysis of a large database collected on preschool-aged children in rural Haiti. The sex-specific length and weight velocities derived from the spline-smoothed data are presented as reference data for researchers and others interested in longitudinal growth of children in the Third World.
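A minimal sketch of spline-smoothed growth velocities using a standard smoothing spline; the smoothing parameter is an assumption, and measurement ages must be strictly increasing (coincident visits would need averaging first):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def growth_velocity(age_months, weight_kg, smooth=1.0):
    """Spline-smoothed growth curve and its derivative (monthly velocity).

    Handles irregular measurement times and measurement error via the
    smoothing parameter, rather than interpolating every noisy point.
    """
    order = np.argsort(age_months)
    spline = UnivariateSpline(np.asarray(age_months)[order],
                              np.asarray(weight_kg)[order],
                              k=3, s=smooth)
    velocity = spline.derivative()
    return spline, velocity     # velocity(t) gives kg/month at age t
```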
NASA Astrophysics Data System (ADS)
Clegg, Richard A.; Hayhurst, Colin J.
1999-06-01
Ceramic materials, including glass, are commonly used as ballistic protection materials. The response of a ceramic to impact, perforation and penetration is complex, and difficult and/or expensive to instrument for obtaining detailed physical data. This paper demonstrates how a hydrocode, such as AUTODYN, can be used to aid in the understanding of the response of brittle materials to high pressure impact loading and thus promote an efficient and cost-effective design process. Hydrocode simulations cannot be made without appropriate characterisation of the material. Because of the complexity of the response of ceramic materials, this often requires a number of complex material tests. Here we present a methodology for using the results of flyer plate tests, in conjunction with numerical simulations, to derive input to the Johnson-Holmquist material model for ceramics. Most of the research effort in relation to the development of hydrocode material models for ceramics has concentrated on the material behaviour under compression and shear. While the penetration process is dominated by these aspects of the material response, the final damaged state of the material can be significantly influenced by the tensile behaviour. Modelling of the final damage state is important since this is often the only physical information which is available. In this paper we present a unique implementation, in a hydrocode, for improved modelling of brittle materials in the tensile regime. Tensile failure initiation is based on any combination of principal stress or strain, while the post-failure tensile response of the material is controlled through a Rankine plasticity damaging failure surface. The tensile failure surface can be combined with any of the traditional plasticity and/or compressive damage models. Finally, the models and data are applied in both traditional grid-based Lagrangian and Eulerian solution techniques and the relatively new SPH (Smooth Particle Hydrodynamics) meshless technique. Simulations of long rod impacts onto ceramic-faced armour and hypervelocity impacts on glass solar array space structures are presented and compared with experiments.
A physics-based model for maintenance of the pH gradient in the gastric mucus layer.
Lewis, Owen L; Keener, James P; Fogelson, Aaron L
2017-12-01
It is generally accepted that the gastric mucus layer provides a protective barrier between the lumen and the mucosa, shielding the mucosa from acid and digestive enzymes and preventing autodigestion of the stomach epithelium. However, the precise mechanisms that contribute to this protective function are still up for debate. In particular, it is not clear what physical processes are responsible for transporting hydrogen protons, secreted within the gastric pits, across the mucus layer to the lumen without acidifying the environment adjacent to the epithelium. One hypothesis is that hydrogen may be bound to the mucin polymers themselves as they are convected away from the mucosal surface and eventually degraded in the stomach lumen. It is also not clear what mechanisms prevent hydrogen from diffusing back toward the mucosal surface, thereby lowering the local pH. In this work we investigate a physics-based model of ion transport within the mucosal layer based on a Nernst-Planck-like equation. Analysis of this model shows that the mechanism of transporting protons bound to the mucus gel is capable of reproducing the trans-mucus pH gradients reported in the literature. Furthermore, when coupled with ion exchange at the epithelial surface, our analysis shows that bicarbonate secretion alone is capable of neutralizing the epithelial pH, even in the face of enormous diffusive gradients of hydrogen. Maintenance of the pH gradient is found to be robust to a wide array of perturbations in both physiological and phenomenological model parameters, suggesting a robust physiological control mechanism. NEW & NOTEWORTHY This work combines modeling techniques based on physical principles, as well as novel numerical simulations, to test the plausibility of one hypothesized mechanism for proton transport across the gastric mucus layer. Results show that this mechanism is able to maintain the extreme pH gradient seen in in vivo experiments and suggests a highly robust regulation mechanism to maintain this gradient in the face of dynamic lumen composition.
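The role of outward mucus convection in confining acidity near the lumen can be illustrated with the steady 1-D advection-diffusion profile for free protons; this is a much-reduced caricature of the paper's Nernst-Planck model with mucin binding (it assumes a constant outward velocity and a nonzero Péclet number):

```python
import numpy as np

def steady_ph_profile(D, v, L, c_epi, c_lumen, n=200):
    """Steady 1-D advection-diffusion profile of free protons across mucus.

    Analytic solution of D c'' - v c' = 0 with fixed concentrations at the
    epithelium (x = 0) and the lumen (x = L); the Peclet number Pe = v L / D
    sets how sharply acidity is confined near the lumen. Assumes Pe != 0.
    """
    x = np.linspace(0.0, L, n)
    pe = v * L / D
    # c(x) = c_epi + (c_lumen - c_epi) * (exp(Pe x/L) - 1) / (exp(Pe) - 1)
    c = c_epi + (c_lumen - c_epi) * np.expm1(pe * x / L) / np.expm1(pe)
    return x, c, -np.log10(c)    # positions, [H+] in mol/L, pH
```

For large Péclet number (fast outward mucus transport relative to proton diffusion), the profile stays near the neutral epithelial value across most of the layer and becomes acidic only close to the lumen, which is the qualitative behavior the abstract describes.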
NASA Astrophysics Data System (ADS)
Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.
2016-05-01
This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) system that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in execution time when compared to its software-based counterpart model.
Linking the proteins--elucidation of proteome-scale networks using mass spectrometry.
Pflieger, Delphine; Gonnet, Florence; de la Fuente van Bentem, Sergio; Hirt, Heribert; de la Fuente, Alberto
2011-01-01
Proteomes are intricate. Typically, thousands of proteins interact through physical association and post-translational modifications (PTMs) to give rise to the emergent functions of cells. Understanding these functions requires one to study proteomes as "systems" rather than collections of individual protein molecules. The abstraction of the interacting proteome to "protein networks" has recently gained much attention, as networks are effective representations that lose specific molecular details but provide the ability to see the proteome as a whole. Two aspects of the proteome have mostly been represented by network models: proteome-wide physical protein-protein-binding interactions organized into Protein Interaction Networks (PINs), and proteome-wide PTM relations organized into Protein Signaling Networks (PSNs). Mass spectrometry (MS) techniques have been shown to be essential to reveal both of these aspects on a proteome-wide scale. Techniques such as affinity purification followed by MS have been used to elucidate protein-protein interactions, and MS-based quantitative phosphoproteomics is critical to understand the structure and dynamics of signaling through the proteome. We here review the current state-of-the-art MS-based analytical pipelines for the purpose of characterizing proteome-scale networks.
Spin formalism and applications to new physics searches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, H.E.
1994-12-01
An introduction to spin techniques in particle physics is given. Among the topics covered are: helicity formalism and its applications to the decay and scattering of spin-1/2 and spin-1 particles, techniques for evaluating helicity amplitudes (including projection operator methods and the spinor helicity method), and density matrix techniques. The utility of polarization and spin correlations for untangling new physics beyond the Standard Model at future colliders such as the LHC and a high-energy e⁺e⁻ linear collider is then considered. A number of detailed examples are explored, including the search for low-energy supersymmetry, a non-minimal Higgs boson sector, and new gauge bosons beyond the W± and Z.
Multiscale simulation of molecular processes in cellular environments.
Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone
2016-11-13
We describe the recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology that allows biosystems of unprecedented size to be simulated.
A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling
NASA Astrophysics Data System (ADS)
Moore, Chandler; Akiki, Georges; Balachandar, S.
2017-11-01
This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using further DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
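A minimal sketch of the hybrid idea: superimpose a learned regression of the physics model's residual (against DNS) on top of the physics-based prediction. The regressor choice is an assumption, since the study's specific regression algorithm is not named here:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class HybridForceModel:
    """Physics-based prediction plus a learned correction for its residual."""

    def __init__(self, physics_model):
        self.physics = physics_model            # e.g., a pairwise PIEP model
        self.correction = RandomForestRegressor(n_estimators=200)

    def fit(self, features, dns_forces):
        # Learn only what the physics model gets wrong.
        residual = np.asarray(dns_forces) - self.physics(features)
        self.correction.fit(features, residual)

    def predict(self, features):
        # Superimpose the physical prediction and the learned correction.
        return self.physics(features) + self.correction.predict(features)
```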
The Practicality of Statistical Physics Handout Based on KKNI and the Constructivist Approach
NASA Astrophysics Data System (ADS)
Sari, S. Y.; Afrizon, R.
2018-04-01
Observation of the statistical physics lectures shows that: 1) the performance of lecturers, the social climate, students' competence, and the soft skills needed at work are only in the adequate category; 2) students find the statistical physics lectures difficult to follow because the material is abstract; 3) 40.72% of students need additional support in the form of repetition, practice questions, and structured tasks; and 4) the depth of the statistical physics material needs to be increased gradually and in a structured way. This indicates that learning materials aligned with the Indonesian National Qualification Framework (Kerangka Kualifikasi Nasional Indonesia, KKNI) and an appropriate learning approach are needed to help lecturers and students. The author has designed statistical physics handouts that meet the criteria for high validity (90.89%) according to expert judgment. In addition, the practicality of the handouts must also be considered, so that they are easy to use, interesting, and efficient in lectures. The purpose of this research is to determine the practicality of a statistical physics handout based on KKNI and a constructivist approach. This research is part of a research-and-development study using the 4-D model developed by Thiagarajan and has reached the development-testing portion of the Develop stage. Data were collected using a questionnaire distributed to lecturers and students and were analyzed descriptively as percentages. The questionnaire analysis shows that the statistical physics handout meets the criteria for being highly practical. The conclusion of this study is that statistical physics handouts based on KKNI and a constructivist approach are practical for use in lectures.