NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models (PBDHMs) discretize the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells; they are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
Transforming for Distribution Based Logistics
2005-05-26
distribution process, and extracts elements of distribution and distribution management. Finally, characteristics of an effective Army distribution...eventually evolve into a Distribution Management Element. Each organization is examined based on its ability to provide centralized command, with an...distribution and distribution management that together form the distribution system. Clearly all of the physical distribution activities including
Physical Foundations of Plasma Microwave Sources Based on Anomalous Doppler Effect
2007-09-17
International Science and Technology Center (ISTC), Moscow. ISTC Project A-1512p, Physical Foundations of Plasma Microwave Sources Based on Anomalous Doppler Effect (ISTC Registration No. A-1512p; reporting period ...07 – 31-Aug-07). Distribution/Availability Statement: Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded to have the potential to improve the catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models are assumed to derive model parameters from the terrain properties directly, so there is no need to calibrate model parameters. However, unfortunately the uncertainties associated with this model derivation are very high, which impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using particle swarm optimization (PSO) algorithm and to test its competence and to improve its performances; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. 
This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm could be used for the Liuxihe model parameter optimization effectively and could improve the model capability largely in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It also has been found that the appropriate particle number and the maximum evolution number of PSO algorithm used for the Liuxihe model catchment flood forecasting are 20 and 30 respectively.
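The improved PSO described in this record can be sketched in a few lines. The inertia-weight and acceleration-coefficient schedules below are common textbook forms of the named strategies, not necessarily the paper's exact formulas, and the quadratic objective is a stand-in for the Liuxihe model's calibration error; the defaults use the particle number (20) and evolution number (30) reported in the abstract.

```python
import numpy as np

def pso(f, bounds, n_particles=20, n_iter=30, w_max=0.9, w_min=0.4,
        c_ini=2.5, c_fin=0.5, seed=0):
    """Minimize f over a box with PSO using a linearly decreasing
    inertia weight and arccosine-scheduled acceleration coefficients
    (assumed forms; the paper's exact schedules may differ)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / (n_iter - 1)       # linear decrease
        s = np.arccos(1.0 - 2.0 * t / (n_iter - 1)) / np.pi  # 0 -> 1
        c1 = c_ini + (c_fin - c_ini) * s                     # cognitive: decays
        c2 = c_fin + (c_ini - c_fin) * s                     # social: grows
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# usage: calibrate two hypothetical parameters of a toy "model"
best, err = pso(lambda p: ((p - 0.3) ** 2).sum(),
                (np.zeros(2), np.ones(2)))
```

The early iterations favor exploration (large inertia, large cognitive coefficient); the arccosine schedule shifts weight toward the social term late in the run, which is the usual motivation for that strategy.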
A geometric theory for Lévy distributions
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-08-01
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
A geometric theory for Lévy distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2014-08-15
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
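As a numerical aside to this record: the hallmark of the Lévy stable regime covered by the Generalized CLT is that averaging does not tame the spread. A minimal sketch using the standard Cauchy law (a symmetric stable law with α = 1), for which the mean of n draws has the same distribution as a single draw:

```python
import numpy as np

rng = np.random.default_rng(1)

def iqr_of_means(n, trials=20000):
    """Interquartile range of the mean of n i.i.d. standard Cauchy
    draws. For a finite-variance law this would shrink like
    1/sqrt(n); for Cauchy the mean of n draws is again standard
    Cauchy, so the spread never shrinks."""
    means = rng.standard_cauchy((trials, n)).mean(axis=1)
    q75, q25 = np.percentile(means, [75, 25])
    return q75 - q25

# the standard Cauchy quartiles are +/-1, so the IQR is exactly 2
print(iqr_of_means(1), iqr_of_means(100))  # both close to 2
```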
New Educational Modules Using a Cyber-Distribution System Testbed
Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching; ...
2018-03-30
At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation-based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. This paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.
New Educational Modules Using a Cyber-Distribution System Testbed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching
At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation-based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. This paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.
NASA Astrophysics Data System (ADS)
Shokri, Ali
2017-04-01
The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in distributed physically based hydrological model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about the ambiguity of the SCS-CN method in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, which is a fully distributed, physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed, and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable solution to runoff prediction through SCS-CN and DrainFlow, the variance between runoff predictions by the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation to CN and Ia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California
Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.
2006-01-01
The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area of approximately 5,000 km² in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes, such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory-determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data; a database 'readme' file, which describes the database contents; and FGDC metadata for the spatial map information. Spatial data are distributed as an Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in DBF3 (.DBF) file format. Map graphics files are distributed as PostScript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.
Physics-based and human-derived information fusion for analysts
NASA Astrophysics Data System (ADS)
Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael
2017-05-01
Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.
The Role and Distribution of Written Informal Communication in Theoretical High Energy Physics.
ERIC Educational Resources Information Center
LIBBEY, MILES A.; ZALTMAN, GERALD
This study of "preprint" distribution in theoretical high energy physics used a questionnaire circulated to all known high energy theorists. A second questionnaire was sent to a representative sample of "preprint librarians" at various institutions in the U.S. and abroad. Based on this data, the study concluded that an experiment with centralized…
Size Distributions of Solar Proton Events: Methodological and Physical Restrictions
NASA Astrophysics Data System (ADS)
Miroshnichenko, L. I.; Yanke, V. G.
2016-12-01
Based on the new catalogue of solar proton events (SPEs) for the period of 1997 - 2009 (Solar Cycle 23) we revisit the long-studied problem of the event-size distributions in the context of those constructed for other solar-flare parameters. Recent results on the problem of size distributions of solar flares and proton events are briefly reviewed. Even a cursory acquaintance with this research field reveals a rather mixed and controversial picture. We concentrate on three main issues: i) SPE size distribution for >10 MeV protons in Solar Cycle 23; ii) size distribution of >1 GV proton events in 1942 - 2014; iii) variations of annual numbers for >10 MeV proton events on long time scales (1955 - 2015). Different results are critically compared; most of the studies in this field are shown to suffer from vastly different input datasets as well as from insufficient knowledge of underlying physical processes in the SPEs under consideration. New studies in this field should be made on more distinct physical and methodological bases. It is important to note the evident similarity in size distributions of solar flares and superflares in Sun-like stars.
Launch Vehicle Debris Models and Crew Vehicle Ascent Abort Risk
NASA Technical Reports Server (NTRS)
Gee, Ken; Lawrence, Scott
2013-01-01
For manned space launch systems, a reliable abort system is required to reduce the risks associated with a launch vehicle failure during ascent. Understanding the risks associated with failure environments can be achieved through the use of physics-based models of these environments. Debris fields due to destruction of the launch vehicle are one such environment. To better analyze the risk posed by debris, a physics-based model for generating launch vehicle debris catalogs has been developed. The model predicts the mass distribution of the debris field based on formulae developed from analysis of explosions. Imparted velocity distributions are computed using a shock-physics code to model the explosions within the launch vehicle. A comparison of the debris catalog with an existing catalog for the Shuttle external tank shows good agreement in the debris characteristics and the predicted debris strike probability. The model is used to analyze the effects of the number of debris pieces and velocity distributions on the strike probability and risk.
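The strike-probability calculation in this record can be caricatured with a Monte Carlo toy. Everything below (isotropic Gaussian imparted velocities, straight-line flight, the stand-off distance and effective radius) is an invented illustrative setup, not NASA's model, which derives its mass and velocity distributions from explosion analysis and a shock-physics code.

```python
import numpy as np

def strike_probability(n_pieces=200, n_trials=2000, sigma_v=40.0,
                       stand_off=500.0, radius=50.0, t=10.0, seed=2):
    """Toy Monte Carlo of debris strike risk. Each trial gives every
    debris piece an isotropic Gaussian imparted velocity (m/s) and
    flies it in a straight line for t seconds; a trial counts as a
    strike if any piece passes within `radius` m of a crew vehicle
    assumed to sit `stand_off` m away along +x. All numbers are
    hypothetical."""
    rng = np.random.default_rng(seed)
    target = np.array([stand_off, 0.0, 0.0])
    hits = 0
    for _ in range(n_trials):
        pos = rng.normal(0.0, sigma_v, (n_pieces, 3)) * t
        if np.any(np.linalg.norm(pos - target, axis=1) < radius):
            hits += 1
    return hits / n_trials
```

Sweeping `n_pieces` or `sigma_v` in this sketch mimics the paper's sensitivity study of strike probability to debris count and velocity distribution.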
A physically based catchment partitioning method for hydrological analysis
NASA Astrophysics Data System (ADS)
Menduni, Giovanni; Riboni, Vittoria
2000-07-01
We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.
Phillips, Glenn A; Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-11-01
The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient's perspective. To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing-remitting MS (RRMS) participating in a clinical trial. Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. © The Author(s), 2014.
USDA-ARS?s Scientific Manuscript database
To represent the effects of frozen soil on hydrology in cold regions, a new physically based distributed hydrological model has been developed by coupling the simultaneous heat and water model (SHAW) with the geomorphology based distributed hydrological model (GBHM), under the framework of the water...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorissen, BL; Giantsoudi, D; Unkelbach, J
Purpose: Cell survival experiments suggest that the relative biological effectiveness (RBE) of proton beams depends on linear energy transfer (LET), leading to higher RBE near the end of range. With intensity-modulated proton therapy (IMPT), multiple treatment plans that differ in the dose contribution per field may yield a similar physical dose distribution, but the RBE-weighted dose distribution may be disparate. RBE models currently do not have the required predictive power to be included in an optimization model due to the variations in experimental data. We propose an LET-based planning method that guides IMPT optimization models towards plans with reduced RBE-weighted dose in surrounding organs at risk (OARs) compared to inverse planning based on physical dose alone. Methods: Optimization models for physical dose are extended with a term for dose times LET (doseLET). Monte Carlo code is used to generate the physical dose and doseLET distribution of each individual pencil beam. The method is demonstrated for an atypical meningioma patient where the target volume abuts the brainstem and partially overlaps with the optic nerve. Results: A reference plan optimized based on physical dose alone yields high doseLET values in parts of the brainstem and optic nerve. Minimizing doseLET in these critical structures as an additional planning goal reduces the risk of high RBE-weighted dose. The resulting treatment plan avoids the distal fall-off of the Bragg peaks for shaping the dose distribution in front of critical structures. The maximum dose in the OARs evaluated with RBE models from the literature is reduced by 8-14% with our method compared to conventional planning. Conclusion: LET-based inverse planning for IMPT offers the ability to reduce the RBE-weighted dose in OARs without sacrificing target dose. This project was in part supported by NCI - U19 CA 21239.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
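The Tikhonov zero-order baseline that the STRE model is compared against is straightforward to sketch. The forward operator below is a random ill-conditioned stand-in for the torso-heart transfer matrix (which is not reproduced here); the point is only that regularization controls the noise amplification of the naive inverse.

```python
import numpy as np

def tikhonov_zero_order(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam ||x||^2, solved via the normal
    equations (A^T A + lam I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# random ill-conditioned stand-in for the torso-heart transfer matrix
rng = np.random.default_rng(3)
U, _, Vt = np.linalg.svd(rng.normal(size=(30, 30)))
A = U @ np.diag(np.logspace(0, -8, 30)) @ Vt

x_true = np.sin(np.linspace(0.0, np.pi, 30))   # "heart-surface" potentials
b = A @ x_true + 1e-5 * rng.normal(size=30)    # noisy "body-surface" data

x_reg = tikhonov_zero_order(A, b, lam=1e-8)
x_naive = np.linalg.solve(A, b)  # unregularized inverse: noise blows up
```

The STRE model of the paper goes beyond this baseline by adding spatial and temporal regularization terms; the sketch only fixes the reference point.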
Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-01-01
Background: The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient’s perspective. Objective: To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing–remitting MS (RRMS) participating in a clinical trial. Methods: Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results: Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). Conclusion: These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. PMID:24740371
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
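The exponential-interval/Poisson-count relationship demonstrated with nuclear counting in this record can also be reproduced in a short simulation of a constant-rate Markov process:

```python
import numpy as np

rng = np.random.default_rng(4)

rate, t_total = 5.0, 10000.0   # events/s, total observation time (s)

# exponential waiting times -> event arrival times (a Poisson process)
dt = rng.exponential(1.0 / rate, int(2 * rate * t_total))
arrivals = np.cumsum(dt)
arrivals = arrivals[arrivals < t_total]

# counts in 1-second windows are then Poisson(rate): mean equals variance
counts = np.histogram(arrivals, bins=np.arange(0.0, t_total + 1.0))[0]
print(counts.mean(), counts.var())  # both close to 5
```

The equality of the count mean and variance is the Poisson signature; overdispersion (variance exceeding the mean), as modeled by the negative binomial in the record, would show up here as `counts.var()` drifting above `counts.mean()`.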
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Botas, Pablo; Faculty of Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg
Purpose: We describe a treatment plan optimization method for intensity modulated proton therapy (IMPT) that avoids high values of linear energy transfer (LET) in critical structures located within or near the target volume while limiting degradation of the best possible physical dose distribution. Methods and Materials: To allow fast optimization based on dose and LET, a GPU-based Monte Carlo code was extended to provide dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of LET and physical dose (LET×D). To first approximation, LET×D represents a measure of the additional biological dose that is caused by high LET. Results: The method is effective for treatments where serial critical structures with maximum dose constraints are located within or near the target. We report on 5 patients with intracranial tumors (high-grade meningiomas, base-of-skull chordomas, ependymomas) in whom the target volume overlaps with the brainstem and optic structures. In all cases, high LET×D in critical structures could be avoided while minimally compromising physical dose planning objectives. Conclusion: LET-based reoptimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose-based and relative biological effectiveness (RBE)-based planning. The method makes IMPT treatments safer by mitigating a potentially increased risk of side effects resulting from elevated RBE of proton beams near the end of range.
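The two-stage idea in this record (first optimize physical dose, then push down LET×D while holding the dose objective near its optimum) can be sketched with a soft-constraint penalty in place of the paper's prioritized scheme. The influence matrices below are random placeholders for the Monte Carlo dose and dose-averaged-LET matrices; SciPy's `lsq_linear` handles the nonnegative beam weights.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)

# x: pencil-beam weights; A: weights -> physical dose in target voxels;
# B: weights -> LETxD in an OAR (random placeholders for Monte Carlo data)
n_beams, n_vox, n_oar = 40, 60, 15
A = rng.random((n_vox, n_beams))
B = rng.random((n_oar, n_beams)) * 0.1
d = np.ones(n_vox)                         # prescribed dose

# stage 1: plan optimized on physical dose alone (nonnegative weights)
x0 = lsq_linear(A, d, bounds=(0.0, np.inf)).x

# stage 2: add a LETxD penalty while staying close to the stage-1 dose
# fit -- a soft-constraint stand-in for the paper's prioritized scheme
gamma = 0.1
A2 = np.vstack([A, np.sqrt(gamma) * B])
d2 = np.concatenate([d, np.zeros(n_oar)])
x1 = lsq_linear(A2, d2, bounds=(0.0, np.inf)).x

print(np.linalg.norm(B @ x0), np.linalg.norm(B @ x1))  # stage 2 no larger
```

Because stage 2 minimizes the dose objective plus the LET×D penalty, and stage 1 already minimized the dose objective alone, the penalized solution can only reduce the OAR LET×D norm relative to stage 1.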
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Beta-binomial(n, α + active days, β + inactive days) assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Beta-binomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Beta-binomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Beta-binomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
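The conditional probability above is easy to evaluate directly. A minimal sketch, using log-gamma for the Beta function; the shape parameters here are illustrative stand-ins, not the maximum-likelihood estimates fitted from the survey:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, alpha, beta):
    """P(X = k) when X ~ Beta-binomial(n, alpha, beta):
    p ~ Beta(alpha, beta), then X | p ~ Binomial(n, p)."""
    return comb(n, k) * exp(log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))

# Probability of meeting the guideline on all 7 of 7 assessed days,
# for illustrative (not fitted) shape parameters alpha=2, beta=1:
p_all7 = betabinom_pmf(7, 7, 2.0, 1.0)
```

A quick sanity check on the parameterization: with alpha = beta = 1 the Beta-binomial reduces to a uniform distribution over 0..n, so each of the 8 counts has probability 1/8.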
Adaptive Modeling of Details for Physically-Based Sound Synthesis and Propagation
2015-03-21
The interface ensures the consistency and validity of the solution given by the two methods; transfer functions are used to model two-way ... Approved for public release; distribution is unlimited. Keywords: applied sciences, adaptive modeling, physically-based, sound synthesis, propagation, virtual world.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
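The unscented-transform step can be sketched in one dimension as follows; the EOL simulator is stood in for by a simple nonlinear function, and all names here are illustrative rather than the authors' code. With κ = 2 the sigma-point weights match Gaussian fourth moments, so quadratic nonlinearities are handled exactly:

```python
import math

def unscented_transform_1d(m, p, f, kappa=2.0):
    """Approximate mean/variance of f(X), X ~ N(m, p), from
    2n+1 = 3 sigma points (n = 1). kappa = 2 matches the Gaussian
    fourth moment, so a quadratic f is propagated exactly."""
    h = math.sqrt((1.0 + kappa) * p)
    points = [m, m + h, m - h]
    weights = [kappa / (1.0 + kappa),
               0.5 / (1.0 + kappa),
               0.5 / (1.0 + kappa)]
    ys = [f(x) for x in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    var = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, var

# Stand-in for an EOL simulation: a quadratic map.
ut_mean, ut_var = unscented_transform_1d(2.0, 0.25, lambda x: x * x)
# Exact values for X ~ N(2, 0.25): E[X^2] = 4.25, Var[X^2] = 4.125
```

Only three function evaluations are needed here, versus thousands for a Monte Carlo estimate, which is precisely the computational saving the paper exploits for EOL prediction.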
Denison, Stephanie; Trikutam, Pallavi; Xu, Fei
2014-08-01
A rich tradition in developmental psychology explores physical reasoning in infancy. However, no research to date has investigated whether infants can reason about physical objects that behave probabilistically, rather than deterministically. Physical events are often quite variable, in that similar-looking objects can be placed in similar contexts with different outcomes. Can infants rapidly acquire probabilistic physical knowledge, such as some leaves fall and some glasses break by simply observing the statistical regularity with which objects behave and apply that knowledge in subsequent reasoning? We taught 11-month-old infants physical constraints on objects and asked them to reason about the probability of different outcomes when objects were drawn from a large distribution. Infants could have reasoned either by using the perceptual similarity between the samples and larger distributions or by applying physical rules to adjust base rates and estimate the probabilities. Infants learned the physical constraints quickly and used them to estimate probabilities, rather than relying on similarity, a version of the representativeness heuristic. These results indicate that infants can rapidly and flexibly acquire physical knowledge about objects following very brief exposure and apply it in subsequent reasoning. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston
2016-01-01
Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...
An Evaluation Method for C2 Cyber-Physical Systems Reliability Based on Deep Learning
2014-06-01
the reliability testing data of the system, we obtain the prior distribution of the reliability as an LG distribution. By Bayes' theorem ...
Investigating the Conceptual Variation of Major Physics Textbooks
NASA Astrophysics Data System (ADS)
Stewart, John; Campbell, Richard; Clanton, Jessica
2008-04-01
The conceptual problem content of the electricity and magnetism chapters of seven major physics textbooks was investigated. The textbooks presented a total of 1600 conceptual electricity and magnetism problems. The solution to each problem was decomposed into its fundamental reasoning steps. These fundamental steps are then used to quantify the distribution of conceptual content among the set of topics common to the texts. The variation of the distribution of conceptual coverage within each text is studied, as is the variation between the major groupings of the textbooks (conceptual, algebra-based, and calculus-based). A measure of the conceptual complexity of the problems in each text is presented.
Sullivan, Tami P.; Titus, Jennifer A.; Holt, Laura J.; Swan, Suzanne C.; Fisher, Bonnie S.; Snow, David L.
2010-01-01
This study is among the first attempts to address a frequently articulated, yet unsubstantiated, claim that a sample inclusion criterion based on women’s physical aggression or victimization will yield different distributions of severity and type of partner violence and injury. Independent samples of African-American women participated in separate studies based on an inclusion criterion of either women’s physical aggression or victimization. Between-groups comparisons showed that the samples did not differ in physical, sexual, or psychological aggression; physical, sexual, or psychological victimization; or inflicted or sustained injury. Therefore, an inclusion criterion based on physical aggression or victimization did not yield unique samples of “aggressors” and “victims.” PMID:19949230
Tactical Behavior Mining of a Soldier-Based Gaming Environment (Briefing Charts)
2016-05-23
U.S. Army Tank Automotive Research, Development and Engineering Center. Tactical Behavior Mining of a Soldier-Based Gaming Environment, 5/23/2016. Distribution Statement A: approved for public release; distribution is unlimited. One solution: use a physics-based game environment (TARDEC Virtual Experiments Capability). VBS3 training game and soldier experiments: 2-3 days = several refights; Likert subjective questionnaires; ESP engine.
Key rate for calibration robust entanglement based BB84 quantum key distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittsovich, O.; Moroder, T.
2014-12-04
We apply the approach of verifying entanglement based on the sole knowledge of the dimension of the underlying physical system to the entanglement-based version of the BB84 quantum key distribution protocol. We show that the familiar one-way key rate formula already holds under the single assumption that one of the parties measures a qubit; no further assumptions about the measurement are needed.
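In its standard asymptotic form, the familiar one-way key rate formula is r = 1 − h(e_bit) − h(e_phase), with h the binary entropy. A minimal sketch (the error rates fed in below are illustrative, and finite-key corrections are ignored):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bb84_one_way_rate(e_bit, e_phase):
    """Asymptotic one-way key rate r = 1 - h(e_bit) - h(e_phase),
    clipped at zero; error rates here are illustrative inputs."""
    return max(0.0, 1.0 - h2(e_bit) - h2(e_phase))

r = bb84_one_way_rate(0.05, 0.05)   # positive rate at 5% symmetric error
```

For symmetric errors the rate vanishes near the well-known 11% threshold, which the clipped formula reproduces.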
Effects of large vessel on temperature distribution based on photothermal coupling interaction model
NASA Astrophysics Data System (ADS)
Li, Zhifang; Zhang, Xiyang; Li, Zuoran; Li, Hui
2016-10-01
This paper applies the finite element analysis method to study the effects of a large blood vessel on the temperature distribution, based on a photothermal coupling interaction model: the physical field of optical transmission is coupled with the physical field of heat transfer in biological tissue using the COMSOL Multiphysics 4.4 software. The results demonstrate the cooling effect of a large blood vessel, which has potential application in the treatment of liver tumors.
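The optical-thermal coupling described is conventionally written as light transport feeding a source term into the Pennes bioheat equation; the form below is the standard textbook one, not necessarily the paper's exact implementation:

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \rho_b c_b \omega_b (T_a - T)
  + Q_{\mathrm{laser}},
\qquad
Q_{\mathrm{laser}} = \mu_a \, \Phi(\mathbf{r})
```

Here Φ is the local light fluence rate supplied by the optical-transmission field, μ_a the tissue absorption coefficient, and the ω_b perfusion term (augmented by convection at a large vessel wall) carries the cooling effect the paper reports.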
NASA Astrophysics Data System (ADS)
Ramos, Elvira; Puente, Araceli; Juanes, José Antonio; Neto, João M.; Pedersen, Are; Bartsch, Inka; Scanlan, Clare; Wilkes, Robert; Van den Bergh, Erika; Ar Gall, Erwan; Melo, Ricardo
2014-06-01
A methodology to classify rocky shores along the North East Atlantic (NEA) region was developed. Previously, biotypes and the variability of environmental conditions within these were recognized based on abiotic data. A biological validation was required in order to support the ecological meaning of the physical typologies obtained. A database of intertidal macroalgae species occurring in the coastal area between Norway and the South Iberian Peninsula was generated. Semi-quantitative abundance data of the most representative macroalgal taxa were collected in three levels: common, rare or absent. Ordination and classification multivariate analyses revealed a clear latitudinal gradient in the distribution of macroalgae species resulting in two distinct groups: one northern and one southern group, separated at the coast of Brittany (France). In general, the results based on biological data coincided with the results based on physical characteristics. The ecological meaning of the coastal waters classification at a broad scale shown in this work demonstrates that it can be valuable as a practical tool for conservation and management purposes.
Consistent Measurement and Physical Character of the DSD: Disdrometer to Satellite
NASA Technical Reports Server (NTRS)
Petersen, Walt; Thurai, Merhala; Gatlin, Patrick; Tokay, Ali; Morris, Bob; Wolff, David; Pippitt, Jason; Marks, David; Berendes, Todd
2017-01-01
Objective: Validate GPM (Global Precipitation Measurement) Drop Size Distribution Retrievals: Drop size distributions (DSD) are critical to GPM DPR (Dual-frequency Precipitation Radar)-based rainfall retrievals; NASA GPM Science Requirements stipulate that the GPM Core observatory radar estimation of D (sub m) (mean diameter) shall be within plus or minus 0.5 millimeters of GV (Ground Validation); GV translates disdrometer measurements to polarimetric radar-based DSD and precipitation type retrievals (e.g., convective vs. stratiform (C/S)) for coincident match-up to GPM core overpasses; How well do we meet the requirement across product versions, rain types (e.g., C/S partitioning), and rain rates (heavy, light) and is behavior physically and internally consistent?
Distribution and determinants of maximal physical work capacity of Korean male metal workers.
Kang, D; Woo, J H; Shin, Y C
2007-12-01
The distribution of maximal physical work capacity (MPWC) can be used to establish an upper limit for energy expenditure during work (EEwork). If physically demanding work has wearing effects, there will be a negative relationship between MPWC and workload. This study was conducted to investigate the distribution of MPWC among Korean metal workers and to examine the relationship between workload and MPWC. MPWC was estimated with a bicycle ergometer using a submaximal test. Energy expenditure was estimated by measuring heart rates during work. The study subjects were 507 male employees from several metal industries in Korea. They had a lower absolute VO2max than the Caucasian populations described in previous studies. The older workers had a lower physical capacity and a greater overload at work. A negative relationship was found between MPWC and workload across all age groups. Upper limits for EEwork for all age groups and for older age groups are recommended based on the 5th percentile value of MPWC.
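Anchoring a recommended EEwork limit at the 5th percentile of MPWC is a simple computation once the distribution is in hand; a sketch with stand-in data (the sample values and the interpolation rule are illustrative, not the study's):

```python
def percentile(values, p):
    """Linear-interpolation percentile, 0 <= p <= 100."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical MPWC sample; the 5th percentile anchors the upper
# limit for EEwork so it protects nearly all workers in the group.
mpwc_sample = list(range(1, 101))   # stand-in data, not survey values
limit_anchor = percentile(mpwc_sample, 5)
```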
Free-Space Quantum Key Distribution using Polarization Entangled Photons
NASA Astrophysics Data System (ADS)
Kurtsiefer, Christian
2007-06-01
We report on a complete experimental implementation of a quantum key distribution protocol through a free space link using polarization-entangled photon pairs from a compact parametric down-conversion source [1]. Based on a BB84-equivalent protocol, we generated a secret key without interruption over 10 hours across a free-space optical link distance of 1.5 km, with a rate of up to 950 bits per second after error correction and privacy amplification. Our system is based on two time stamp units and relies on no specific hardware channel for coincidence identification besides an IP link. For that, initial clock synchronization with an accuracy of better than 2 ns is achieved, based on a conventional NTP protocol and a tiered cross correlation of time tags on both sides. Time tags are used to servo a local clock, allowing a streamed measurement on correctly identified photon pairs. Contrary to the majority of quantum key distribution systems, this approach does not require a trusted large-bandwidth random number generator, but integrates that into the physical key generation process. We discuss our current progress in implementing a key distribution via an atmospheric link during daylight conditions, and possible attack scenarios on a physical timing information side channel to an entanglement-based key distribution system. [1] I. Marcikic, A. Lamas-Linares, C. Kurtsiefer, Appl. Phys. Lett. 89, 101122 (2006).
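The tiered cross-correlation of time tags can be illustrated by its coarse first tier: bin each side's arrival times and slide one histogram against the other until the correlation peaks. A simplified sketch; the function name, bin width, and data are invented, not the experiment's software:

```python
import random

def coarse_offset(tags_a, tags_b, bin_width, max_lag_bins):
    """Coarse clock-offset estimate between two time-tag streams:
    histogram the arrival times, then find the lag that maximizes
    the histogram cross-correlation (first tier of a tiered search)."""
    n_bins = int(max(max(tags_a), max(tags_b)) / bin_width) + 1
    ha = [0] * n_bins
    hb = [0] * n_bins
    for t in tags_a:
        ha[int(t / bin_width)] += 1
    for t in tags_b:
        hb[int(t / bin_width)] += 1
    best_lag, best_score = 0, -1
    for lag in range(-max_lag_bins, max_lag_bins + 1):
        score = sum(ha[i] * hb[i + lag]
                    for i in range(n_bins) if 0 <= i + lag < n_bins)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * bin_width

random.seed(1)
alice = sorted(random.uniform(0, 1000) for _ in range(2000))
bob = [t + 7.0 for t in alice]          # simulated 7-unit clock offset
offset = coarse_offset(alice, bob, 1.0, 20)
```

Finer tiers would repeat the search with progressively smaller bin widths around the recovered lag, which is how nanosecond-level alignment can be reached from a coarse NTP starting point.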
Outcome regimes of binary raindrop collisions
NASA Astrophysics Data System (ADS)
Testik, Firat Y.
2009-11-01
This study delineates the physical conditions that are responsible for the occurrence of the main outcome regimes (i.e., bounce, coalescence, and breakup) for binary drop collisions from a precipitation microphysics perspective. Physical considerations based on the collision kinetic energy and the surface energies of the colliding drops lead to the development of a theoretical regime diagram for the drop/raindrop collision outcomes in the We-p plane (We: Weber number; p: raindrop diameter ratio). This theoretical regime diagram is supported by laboratory observations of drop collisions using high-speed imaging. Results of this fundamental study provide new insights into the quantitative understanding of drop dynamics, with applications extending beyond precipitation microphysics. In particular, results of this drop collision study are expected to give impetus to the physics-based dynamic modeling of the drop size distributions that is essential for various typical modern engineering applications, including numerical modeling of the evolution of the raindrop size distribution in a rain shaft.
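The coordinates of the regime diagram are straightforward to compute; a sketch in SI units (which drop diameter and which relative velocity enter We is a convention that may differ from the paper's):

```python
def weber_number(rho, v_rel, d, sigma):
    """We = rho * v_rel**2 * d / sigma: collision kinetic energy
    relative to surface energy (all quantities in SI units)."""
    return rho * v_rel ** 2 * d / sigma

def diameter_ratio(d_small, d_large):
    """p = d_s / d_L, the raindrop diameter ratio (0 < p <= 1)."""
    return d_small / d_large

# A 1 mm drop colliding with a 3 mm drop at 2 m/s relative speed,
# with water density and a water-air surface tension of 0.072 N/m:
we = weber_number(rho=1000.0, v_rel=2.0, d=1.0e-3, sigma=0.072)
p = diameter_ratio(1.0e-3, 3.0e-3)
```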
Methodology to model the energy and greenhouse gas emissions of electronic software distributions.
Williams, Daniel R; Tang, Yinshan
2012-01-17
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. In order to counteract the use of high level, top-down modeling efforts, and to increase result accuracy, a focus upon device details and data routes was taken. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides ESD and physical distribution options. The ESD method included the calculation of power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
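The structure of such a comparison can be sketched as a small accounting model; every figure below is invented purely to show the mechanics (the paper's model resolves device utilization, virtualization, and data hops in far more detail):

```python
def esd_co2e_kg(gb, server_kwh_per_gb, network_kwh_per_gb,
                client_kwh_per_gb, grid_kg_per_kwh):
    """CO2e of delivering `gb` gigabytes electronically:
    server + network + client energy, times grid carbon intensity.
    All per-GB figures are hypothetical placeholders."""
    kwh = gb * (server_kwh_per_gb + network_kwh_per_gb + client_kwh_per_gb)
    return kwh * grid_kg_per_kwh

def saving_percent(esd_kg, physical_kg):
    """Relative CO2e saving of ESD over physical distribution."""
    return 100.0 * (physical_kg - esd_kg) / physical_kg

# Invented inputs, chosen only to exercise the mechanics:
esd = esd_co2e_kg(5.0, 0.01, 0.05, 0.02, 0.5)   # hypothetical kg CO2e
physical = 1.0                                  # hypothetical kg CO2e
saving = saving_percent(esd, physical)
```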
Assessing Expertise in Introductory Physics Using Categorization Task
ERIC Educational Resources Information Center
Mason, Andrew; Singh, Chandralekha
2011-01-01
The ability to categorize problems based upon underlying principles, rather than surface features or contexts, is considered one of several proxy predictors of expertise in problem solving. With inspiration from the classic study by Chi, Feltovich, and Glaser, we assess the distribution of expertise among introductory physics students by asking…
Research on stress distribution regularity of cement sheaths of radial well based on ABAQUS
NASA Astrophysics Data System (ADS)
Shi, Jihui; Cheng, Yuanfang; Li, Xiaolong; Xiao, Wen; Li, Menglai
2017-12-01
To ensure a desirable outcome of hydraulic fracturing based on ultra-short radius radial systems (URRS), it is required to investigate the stress distribution regularity and stability of the cement sheath. On the basis of the theoretical model of the cement sheath stress distribution, a reservoir mechanical model was built using the finite element software ABAQUS, according to the physical properties of a certain oil reservoir of the Shengli oilfield. The stress distribution of the casing-cement-sheath-formation system under practical conditions was simulated, based on which analyses were conducted from multiple points of view. Results show that the stress on the internal interface of the cement sheath exceeds that on the external interface, and fluctuates with higher amplitudes, which means that the internal interface is the most failure-prone. The unevenness of the cement sheath stress distribution grows with the increasing horizontal principal stress ratio, and so does the variation magnitude. This indicates that higher horizontal principal stress ratios are unfavourable for the structural stability of the cement sheath. Both the wellbore quantity of the URRS and the physical properties of the material can affect the cement sheath stress distribution. It is suggested to optimize the quantity of radial wellbores and to use cement with a lower elastic modulus and higher Poisson’s ratio. Finally, the impact level of each of the above factors was analysed with the help of grey correlation analysis.
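The grey correlation (grey relational) analysis used in the final step can be sketched as follows; the sequences below are invented, and real use would normalize each factor sequence before comparing:

```python
def grey_relational_grades(reference, factors, rho=0.5):
    """Grey relational analysis: score how closely each factor
    sequence tracks the reference sequence. rho is the
    distinguishing coefficient, conventionally 0.5."""
    deltas = [[abs(r - x) for r, x in zip(reference, seq)] for seq in factors]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))   # relational grade
    return grades

# Invented data: factor 0 matches the reference exactly, factor 1 opposes it.
grades = grey_relational_grades([1.0, 2.0, 3.0],
                                [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
```

Ranking the grades then orders the factors by influence, which is how an "impact level" for each factor can be read off.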
Extended Hamiltonian approach to continuous tempering
NASA Astrophysics Data System (ADS)
Gobbo, Gianpaolo; Leimkuhler, Benedict J.
2015-06-01
We introduce an enhanced sampling simulation technique based on continuous tempering, i.e., on continuously varying the temperature of the system under investigation. Our approach is mathematically straightforward, being based on an extended Hamiltonian formulation in which an auxiliary degree of freedom, determining the effective temperature, is coupled to the physical system. The physical system and its temperature evolve continuously in time according to the equations of motion derived from the extended Hamiltonian. Due to the Hamiltonian structure, it is easy to show that a particular subset of the configurations of the extended system is distributed according to the canonical ensemble for the physical system at the correct physical temperature.
Lumped versus distributed thermoregulatory control: results from a three-dimensional dynamic model.
Werner, J; Buse, M; Foegen, A
1989-01-01
In this study we use a three-dimensional model of the human thermal system, the spatial grid of which is 0.5 ... 1.0 cm. The model is based on well-known physical heat-transfer equations, and all parameters of the passive system have definite physical values. According to the number of substantially different areas and organs, 54 spatially different values are attributed to each physical parameter. Compatibility of simulation and experiment was achieved solely on the basis of physical considerations and physiological basic data. The equations were solved using a modification of the alternating direction implicit method. On the basis of this complex description of the passive system close to reality, various lumped and distributed parameter control equations were tested for control of metabolic heat production, blood flow and sweat production. The simplest control equations delivering results on closed-loop control compatible with experimental evidence were determined. It was concluded that it is essential to take into account the spatial distribution of heat production, blood flow and sweat production, and that at least for control of shivering, distributed controller gains different from the pattern of distribution of muscle tissue are required. For sweat production this is not so obvious, so that for simulation of sweating control after homogeneous heat load a lumped parameter control may be justified. Based on these conclusions three-dimensional temperature profiles for cold and heat load and the dynamics for changes of the environmental conditions were computed. In view of the exact simulation of the passive system and the compatibility with experimentally attainable variables there is good evidence that those values extrapolated by the simulation are adequately determined. The model may be used both for further analysis of the real thermoregulatory mechanisms and for special applications in environmental and clinical health care.
2017-01-01
Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution, and spectral and sensing range. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within acceptable frequency range for dynamic vibration, and temperature detection. There are many applications, especially in geophysical and mining engineering where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications. PMID:29104259
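In any OTDR-based distributed sensor, position along the fiber is recovered from the round-trip time of backscattered light; a minimal sketch (the group index is a typical silica value, not a figure from this article):

```python
def otdr_distance(round_trip_time_s, group_index=1.468):
    """Map an OTDR round-trip time to position along the fiber:
    z = c * t / (2 * n_g). group_index is a typical value for
    silica fiber, assumed here for illustration."""
    c = 299_792_458.0   # speed of light in vacuum, m/s
    return c * round_trip_time_s / (2.0 * group_index)

# A backscatter event seen 10 microseconds after launch sits about 1 km out:
z = otdr_distance(10e-6)
```

The same relation fixes the spatial resolution: a pulse of duration τ smears each event over c·τ/(2n_g) of fiber, which is the resolution/range trade-off the article discusses.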
Miah, Khalid; Potter, David K
2017-11-01
2012-01-01
Background Long-lasting insecticidal nets (LLINs) reduce malaria transmission by protecting individuals from infectious bites, and by reducing mosquito survival. In recent years, millions of LLINs have been distributed across sub-Saharan Africa (SSA). Over time, LLINs decay physically and chemically and are destroyed, making repeated interventions necessary to prevent a resurgence of malaria. Because its effects on transmission are important (more so than the effects of individual protection), estimates of the lifetime of mass distribution rounds should be based on the effective length of epidemiological protection. Methods Simulation models, parameterised using available field data, were used to analyse how the distribution's effective lifetime depends on the transmission setting and on LLIN characteristics. Factors considered were the pre-intervention transmission level, initial coverage, net attrition, and both physical and chemical decay. An ensemble of 14 stochastic individual-based model variants for malaria in humans was used, combined with a deterministic model for malaria in mosquitoes. Results The effective lifetime was most sensitive to the pre-intervention transmission level, with a lifetime of almost 10 years at an entomological inoculation rate of two infectious bites per adult per annum (ibpapa), but of little more than 2 years at 256 ibpapa. The LLIN attrition rate and the insecticide decay rate were the next most important parameters. The lifetime was surprisingly insensitive to physical decay parameters, but this could change as physical integrity gains importance with the emergence and spread of pyrethroid resistance. Conclusions The strong dependency of the effective lifetime on the pre-intervention transmission level indicated that the required distribution frequency may vary more with the local entomological situation than with LLIN quality or the characteristics of the distribution system. 
This highlights the need for malaria monitoring both before and during intervention programmes, particularly since there are likely to be strong variations between years and over short distances. The majority of SSA's population falls into exposure categories where the lifetime is relatively long, but because exposure estimates are highly uncertain, it is necessary to consider subsequent interventions before the end of the expected effective lifetime based on an imprecise transmission measure. PMID:22244509
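The idea of an effective lifetime can be illustrated with a deliberately toy decay model; the half-lives and protection threshold below are invented, and the paper's ensemble of stochastic transmission models is far richer:

```python
def effective_coverage(t, c0, net_halflife, insecticide_halflife):
    """Toy protection index after t years: initial coverage c0
    decayed by net attrition and by insecticide loss (both modelled
    as simple half-life decays; hypothetical parameters)."""
    nets_left = c0 * 0.5 ** (t / net_halflife)
    potency = 0.5 ** (t / insecticide_halflife)
    return nets_left * potency

def effective_lifetime(c0, net_hl, ins_hl, threshold, dt=0.01, t_max=50.0):
    """First time the protection index drops to the threshold."""
    t = 0.0
    while t < t_max and effective_coverage(t, c0, net_hl, ins_hl) > threshold:
        t += dt
    return t

# Invented numbers: 80% initial coverage, 2-year net half-life,
# 3-year insecticide half-life, protection threshold 0.4.
lt = effective_lifetime(0.8, 2.0, 3.0, 0.4)
```

Even this toy model reproduces the qualitative point of the abstract: the lifetime depends jointly on attrition and insecticide decay, and raising the required protection level (as a higher-transmission setting effectively does) shortens it.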
Briët, Olivier J T; Hardy, Diggory; Smith, Thomas A
2012-01-13
NASA Technical Reports Server (NTRS)
Hilland, Jeffrey E.; Collins, Donald J.; Nichols, David A.
1991-01-01
The Distributed Active Archive Center (DAAC) at the Jet Propulsion Laboratory will support scientists specializing in physical oceanography and air-sea interaction. As part of the NASA Earth Observing System Data and Information System Version 0 the DAAC will build on existing capabilities to provide services for data product generation, archiving, distribution and management of information about data. To meet scientist's immediate needs for data, existing data sets from missions such as Seasat, Geosat, the NOAA series of satellites and the Global Positioning Satellite system will be distributed to investigators upon request. In 1992, ocean topography, wave and surface roughness data from the Topex/Poseidon radar altimeter mission will be archived and distributed. New data products will be derived from Topex/Poseidon and other sensor systems based on recommendations of the science community. In 1995, ocean wind field measurements from the NASA Scatterometer will be supported by the DAAC.
Takada, Kenta; Sato, Tatsuhiko; Kumada, Hiroaki; Koketsu, Junichi; Takei, Hideyuki; Sakurai, Hideyuki; Sakae, Takeji
2018-01-01
The microdosimetric kinetic model (MKM) is widely used for estimating relative biological effectiveness (RBE)-weighted doses for various radiotherapies because it can determine the surviving fraction of irradiated cells based on only the lineal energy distribution, and it is independent of the radiation type and ion species. However, the applicability of the method to proton therapy has not yet been investigated thoroughly. In this study, we validated the RBE-weighted dose calculated by the MKM in tandem with the Monte Carlo code PHITS for proton therapy by considering the complete simulation geometry of the clinical proton beam line. The physical dose, lineal energy distribution, and RBE-weighted dose for a 155 MeV mono-energetic and spread-out Bragg peak (SOBP) beam of 60 mm width were evaluated. In estimating the physical dose, the calculated depth dose distribution by irradiating the mono-energetic beam using PHITS was consistent with the data measured by a diode detector. A maximum difference of 3.1% in the depth distribution was observed for the SOBP beam. In the RBE-weighted dose validation, the calculated lineal energy distributions generally agreed well with the published measurement data. The calculated and measured RBE-weighted doses were in excellent agreement, except at the Bragg peak region of the mono-energetic beam, where the calculation overestimated the measured data by ~15%. This research has provided a computational microdosimetric approach based on a combination of PHITS and MKM for typical clinical proton beams. The developed RBE-estimator function has potential application in the treatment planning system for various radiotherapies. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
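Once the MKM has supplied linear-quadratic (LQ) coefficients from the lineal energy distribution, the RBE-weighted dose is obtained by inverting the photon LQ curve; a sketch with illustrative coefficients (the MKM step itself, which derives alpha from the dose-mean lineal energy, is not reproduced here):

```python
import math

def lq_effect(alpha, beta, dose):
    """-ln(survival) in the linear-quadratic model."""
    return alpha * dose + beta * dose ** 2

def rbe_weighted_dose(alpha, beta, dose, alpha_x, beta_x):
    """Photon dose giving the same LQ survival as `dose` delivered
    with coefficients (alpha, beta): solve
    alpha_x * D + beta_x * D**2 = lq_effect for D."""
    eff = lq_effect(alpha, beta, dose)
    return (-alpha_x + math.sqrt(alpha_x ** 2 + 4.0 * beta_x * eff)) / (2.0 * beta_x)

# Illustrative coefficients (Gy^-1, Gy^-2), not the paper's values:
d_rbe = rbe_weighted_dose(alpha=0.35, beta=0.05, dose=2.0,
                          alpha_x=0.2, beta_x=0.05)
```

As a consistency check, when the beam and reference radiation share the same coefficients the RBE-weighted dose must equal the physical dose.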
Pattern dependence in high-speed Q-modulated distributed feedback laser.
Zhu, Hongli; Xia, Yimin; He, Jian-Jun
2015-05-04
We investigate the pattern dependence in a high-speed Q-modulated distributed feedback (DFB) laser based on its complete physical structure and material properties. The structure parameters of the gain section, as well as those of the modulation and phase sections, are all taken into account in simulations based on an integrated traveling-wave model. Using this model, we show that an example Q-modulated DFB laser can achieve an extinction ratio of 6.8 dB with a jitter of 4.7 ps and a peak intensity fluctuation of less than 15% for a 40 Gbps RZ modulation signal. The simulation method proves very useful for complex laser structure design and high-speed performance optimization, as well as for providing physical insight into the operation mechanism.
Physics collaboration and communication through emerging media: podcasts, blogs and wikis
NASA Astrophysics Data System (ADS)
Clark, Charles W.; Williams, Jamie
2006-05-01
The entertainment and news industries are being transformed by the emergence of innovative, internet-based media tools. Audio and video downloads are beginning to compete with traditional entertainment distribution channels, and the blogosphere has become an alternative press with demonstrated news-making power of its own. The scientific community, and physics in particular, is just beginning to experiment with these tools. We believe that they have great potential for enhancing the quality and effectiveness of collaboration and communication, and that the coming generation of physicists will expect them to be used creatively. We will report on our experience in producing seminar podcasts (google "QIBEC" or search "quantum" on Apple iTunes), and on operating a distributed research institute using a group-based blog.
Ott, Alina; Trautschold, Brian; Sandhu, Devinder
2011-01-01
Soybean is a major crop that is an important source of oil and proteins. A number of genetic linkage maps have been developed in soybean. Specifically, hundreds of simple sequence repeat (SSR) markers have been developed and mapped. Recent sequencing of the soybean genome resulted in the generation of vast amounts of genetic information. The objectives of this investigation were to use SSR markers in developing a connection between genetic and physical maps and to determine the physical distribution of recombination on soybean chromosomes. A total of 2,188 SSRs were used for sequence-based physical localization on soybean chromosomes. Linkage information was used from different maps to create an integrated genetic map. Comparison of the integrated genetic linkage maps and sequence-based physical maps revealed that the distal 25% of each chromosome was the most marker-dense, containing an average of 47.4% of the SSR markers and 50.2% of the genes. The proximal 25% of each chromosome contained only 7.4% of the markers and 6.7% of the genes. At the whole genome level, the marker density and gene density showed a high correlation (R²) of 0.64 and 0.83, respectively, with the physical distance from the centromere. Recombination followed a similar pattern, with comparisons indicating that recombination is high in telomeric regions, though the correlation between crossover frequency and distance from the centromeres is low (R² = 0.21). Most of the centromeric regions were low in recombination. The crossover frequency for the entire soybean genome was 7.2%, with extremes much higher and lower than average. The number of recombination hotspots varied from 1 to 12 per chromosome. A high correlation of 0.83 between the distribution of SSR markers and genes suggested close association of SSRs with genes. The knowledge of distribution of recombination on chromosomes may be applied in characterizing and targeting genes.
The vertical structure of aerosol optical and physical properties was measured by Lidar in Eastern Kyrgyzstan, Central Asia, from June 2008 to May 2009. Lidar measurements were supplemented with surface-based measurements of PM2.5 and PM10 mass and chemical ...
Probing condensed matter physics with magnetometry based on nitrogen-vacancy centres in diamond
NASA Astrophysics Data System (ADS)
Casola, Francesco; van der Sar, Toeno; Yacoby, Amir
2018-01-01
The magnetic fields generated by spins and currents provide a unique window into the physics of correlated-electron materials and devices. First proposed only a decade ago, magnetometry based on the electron spin of nitrogen-vacancy (NV) defects in diamond is emerging as a platform that is excellently suited for probing condensed matter systems; it can be operated from cryogenic temperatures to above room temperature, has a dynamic range spanning from direct current to gigahertz and allows sensor-sample distances as small as a few nanometres. As such, NV magnetometry provides access to static and dynamic magnetic and electronic phenomena with nanoscale spatial resolution. Pioneering work has focused on proof-of-principle demonstrations of its nanoscale imaging resolution and magnetic field sensitivity. Now, experiments are starting to probe the correlated-electron physics of magnets and superconductors and to explore the current distributions in low-dimensional materials. In this Review, we discuss the application of NV magnetometry to the exploration of condensed matter physics, focusing on its use to study static and dynamic magnetic textures and static and dynamic current distributions.
NASA Astrophysics Data System (ADS)
Nadarajah, Saralees; Kotz, Samuel
2007-04-01
Various q-type distributions have appeared in the physics literature in recent years; see, e.g., L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions long known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above.
Recent progress of quantum communication in China (Conference Presentation)
NASA Astrophysics Data System (ADS)
Zhang, Qiang
2016-04-01
Quantum communication, based on quantum physics, can provide information-theoretic security. Building a global quantum network is an ultimate goal of quantum information research. This talk will review the progress of quantum communication in China, including quantum key distribution over a metropolitan area with an untrusted relay, a field test of quantum entanglement swapping over a metropolitan network, the 2000 km quantum key distribution main trunk line, and satellite-based quantum communication.
FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.
Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora
2013-09-01
In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network (CNN) paradigm, which allows us to divide the solving of a differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we implement DCMARK on a single FPGA, designing the single processor to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and compare simulations conducted on a high-performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on a programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
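The cell-per-equation idea behind DCMARK can be mimicked in software. Below is a minimal sketch (not the paper's FPGA implementation): a KdV solver on a periodic grid where each cell updates only from its nearest neighbors, as in a locally interconnected CNN array. The grid size, soliton initial condition, and RK4 time-stepper are illustrative assumptions.

```python
import math

def kdv_rhs(u, dx):
    """Right-hand side of u_t = -3*(u^2)_x - u_xxx on a periodic grid.
    Each cell i depends only on neighbors i-2..i+2, mirroring the
    locally interconnected cells of the CNN paradigm."""
    n = len(u)
    du = [0.0] * n
    for i in range(n):
        im2, im1 = u[i - 2], u[i - 1]            # negative indices wrap in Python
        ip1, ip2 = u[(i + 1) % n], u[(i + 2) % n]
        conv = 3.0 * (ip1 * ip1 - im1 * im1) / (2.0 * dx)              # 6*u*u_x, conservation form
        disp = (ip2 - 2.0 * ip1 + 2.0 * im1 - im2) / (2.0 * dx ** 3)  # u_xxx, central difference
        du[i] = -(conv + disp)
    return du

def rk4_step(u, dx, dt):
    """Classical 4th-order Runge-Kutta step; all cells advance in parallel."""
    k1 = kdv_rhs(u, dx)
    k2 = kdv_rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)], dx)
    k3 = kdv_rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)], dx)
    k4 = kdv_rhs([ui + dt * ki for ui, ki in zip(u, k3)], dx)
    return [ui + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

# One-soliton initial condition on a 200-cell periodic domain
N, L = 200, 40.0
dx = L / N
dt = 5e-4
u = [2.0 / math.cosh(dx * i - L / 2) ** 2 for i in range(N)]
mass0 = sum(u) * dx                   # discretely conserved by the scheme
for _ in range(200):
    u = rk4_step(u, dx, dt)
mass = sum(u) * dx
```

Because both spatial terms are discretized in conservation form, the total "mass" sum(u)*dx telescopes to zero change on a periodic grid, which gives a cheap correctness check for the solver.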
Whose Culture?: Monolithic Cultures and Subcultures in Early Childhood Settings
ERIC Educational Resources Information Center
Halttunen, Leena
2017-01-01
In Finland, day care centre directors have traditionally led only a single unit, but after the recent merging of many units, most directors simultaneously lead several, physically separate units. These organizations are called distributed organizations. This study was carried out in two distributed day care organizations. The findings are based on…
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates users from the database and from the administrators who perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
NASA Technical Reports Server (NTRS)
Pordes, Ruth (Editor)
1989-01-01
Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.
Phase Distribution Phenomena for Simulated Microgravity Conditions: Experimental Work
NASA Technical Reports Server (NTRS)
Singhal, Maneesh; Bonetto, Fabian J.; Lahey, R. T., Jr.
1996-01-01
This report summarizes the work accomplished at Rensselaer to study phase distribution phenomena under simulated microgravity conditions. Our group at Rensselaer has been able to develop sophisticated analytical models to predict phase distribution in two-phase flows under a variety of conditions. These models are based on physics and data obtained from carefully controlled experiments that are being conducted here. These experiments also serve to verify the models developed.
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, Seiji
2015-04-01
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte-Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted-Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
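The stationary case reviewed here can be sketched as follows, assuming the commonly stated form of the Sobol rejection algorithm (temperature theta in units of mc²); the boosting/flipping step for shifted Maxwellians is omitted, and the sample size and seed are illustrative.

```python
import math
import random

def sample_juttner(theta, rng):
    """Draw the magnitude u = gamma*beta of the 4-velocity from a
    Maxwell-Juttner distribution with temperature theta = kT/(m c^2),
    using the Sobol rejection algorithm: u is Gamma(3)-distributed with
    scale theta, and the auxiliary variable eta converts the exp(-u/theta)
    weight into the relativistic exp(-gamma/theta) weight on acceptance."""
    while True:
        x1, x2, x3, x4 = (rng.random() for _ in range(4))
        u = -theta * math.log(x1 * x2 * x3)
        eta = -theta * math.log(x1 * x2 * x3 * x4)
        if eta * eta - u * u > 1.0:   # accept when eta exceeds gamma = sqrt(1 + u^2)
            return u

rng = random.Random(42)
samples = [sample_juttner(1.0, rng) for _ in range(30000)]
mean_gamma = sum(math.sqrt(1.0 + u * u) for u in samples) / len(samples)
# For theta = 1, theory gives <gamma> = 3*theta + K1(1/theta)/K2(1/theta) ≈ 3.37
```

The acceptance step works because P(eta > gamma | u) = exp(-(gamma - u)/theta), which multiplies the Gamma(3) density u² exp(-u/theta) into the Maxwell-Juttner density ∝ u² exp(-gamma/theta); a direction drawn uniformly on the sphere then completes an isotropic sample.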
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. 
Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
A model for foreign exchange markets based on glassy Brownian systems
Trinidad-Segovia, J. E.; Clara-Rahola, J.; Puertas, A. M.; De las Nieves, F. J.
2017-01-01
In this work we extend a well-known model from arrested physical systems and employ it to efficiently describe the price fluctuation distributions of different currency pairs in the foreign exchange market. We consider the exchange rate price in the time range between 2010 and 2016 at yearly time intervals, resolved at one-minute frequency. We then fit the experimental datasets with this model, and find a significant qualitative similarity between price fluctuation distributions from the currency market and those of colloidal particle positions in arrested states. The main contribution of this paper is a well-known physical model that does not necessarily assume the independent and identically distributed (i.i.d.) restrictive condition. PMID:29206868
NASA Astrophysics Data System (ADS)
Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei
2017-06-01
In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
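The final step described above, recovering temperature from the reconstructed refractive index via the Gladstone-Dale equation, can be sketched as follows. The constants below are for air and are illustrative assumptions only; the paper's propane flame would require a mixture-dependent Gladstone-Dale constant.

```python
# Sketch: temperature from a reconstructed refractive index via the
# Gladstone-Dale relation n - 1 = K * rho and the ideal gas law T = P / (rho * R).
K = 2.26e-4      # Gladstone-Dale constant for air, m^3/kg (visible light) -- assumed
R = 287.0        # specific gas constant for air, J/(kg*K) -- assumed
P = 101325.0     # ambient pressure, Pa, assumed uniform across the flame

def temperature_from_index(n):
    """Convert one reconstructed refractive-index value to temperature."""
    rho = (n - 1.0) / K      # gas density from the Gladstone-Dale relation
    return P / (rho * R)     # ideal gas law

T_cold = temperature_from_index(1.0002712)  # index of air near room conditions
T_hot = temperature_from_index(1.0001)      # hotter gas: n closer to 1
```

Applied pointwise to the 3D refractive-index field from the filtered back-projection step, this mapping yields the 3D temperature distribution.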
Autonomous perception and decision making in cyber-physical systems
NASA Astrophysics Data System (ADS)
Sarkar, Soumik
2011-07-01
The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture for autonomous perception-based decision and control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation, and control, as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space.
It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.
The Effect of Rheological Properties of Foods on Bolus Characteristics After Mastication
Hwang, Junah; Bae, Jung Hyun; Kang, Si Hyun; Seo, Kyung Mook; Kim, Byong Ki; Lee, Sook Young
2012-01-01
Objective To evaluate the effects of physical properties of foods on the changes of viscosity and mass as well as the particle size distribution after mastication. Method Twenty subjects with no masticatory disorders were recruited. Six grams of four solid foods of different textures (banana, tofu, cooked-rice, cookie) were provided, and the viscosity and mass after 10, 20, and 30 cycles of mastication and just before swallowing were measured. The physical properties of foods, such as hardness, cohesiveness, and adhesiveness, were measured with a texture analyzer. Wet sieving and laser diffraction were used to determine the distribution of food particle size. Results When we measured the physical characteristics of foods, the cookie was the hardest food, and the banana exhibited marked adhesiveness. Tofu and cooked-rice exhibited a highly cohesive nature. As the number of mastication cycles increased, the masses of all foods were significantly increased (p<0.05), and the viscosity was significantly decreased in the case of banana, tofu, and cooked-rice (p<0.05). The mass and viscosity of all foods were significantly different between the foods after mastication (p<0.05). Analyzing the distribution of the particle size, that of the bolus was different between foods. However, the curves representing the particle size distribution for each food were superimposable for most subjects. Conclusion The viscosity and particle size distribution of the bolus were different between solid foods that have different physical properties. Based on this result, the mastication process and food bolus formation were affected by the physical properties of the food. PMID:23342309
Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared
NASA Technical Reports Server (NTRS)
Smith, J. A.; Ballard, J. R., Jr.
1999-01-01
We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.
DEVELOPMENT OF A RATIONALLY BASED DESIGN PROTOCOL FOR THE ULTRAVIOLET LIGHT DISINFECTION PROCESS
A protocol is demonstrated for the design and evaluation of ultraviolet (UV) disinfection systems based on a mathematical model. The disinfection model incorporates the system's physical dimensions, the residence time distribution of the reactor and dispersion characteristics, th...
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
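Benford's FSD law and the kind of empirical comparison described above can be sketched as follows. A log-uniform synthetic sample stands in for the income micro data here, which is an illustrative assumption (such samples follow Benford's law in expectation).

```python
import math
import random

def benford_prob(d):
    """Benford's law: probability that the first significant digit equals d."""
    return math.log10(1.0 + 1.0 / d)

def fsd(x):
    """First significant digit of a positive number, by repeated scaling
    (avoids floating-point edge cases of log10 near powers of ten)."""
    while x >= 10.0:
        x /= 10.0
    while x < 1.0:
        x *= 10.0
    return int(x)

# Synthetic stand-in for the micro data: mantissas log-uniform over 5 decades.
rng = random.Random(0)
sample = [10.0 ** rng.uniform(0.0, 5.0) for _ in range(100000)]
freq = {d: 0 for d in range(1, 10)}
for x in sample:
    freq[fsd(x)] += 1
observed_p1 = freq[1] / len(sample)   # should be close to log10(2) ≈ 0.301
```

Replacing `sample` with a real income series gives the observed FSD frequencies against which the Benford reference distribution (or an entropy-based divergence from it) can be evaluated.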
NASA Astrophysics Data System (ADS)
Korzeniewska, Ewa; Szczesny, Artur; Krawczyk, Andrzej; Murawski, Piotr; Mróz, Józef; Seme, Sebastian
2018-03-01
In this paper, the authors describe the distribution of temperatures around electroconductive pathways created by a physical vacuum deposition process on flexible textile substrates used in elastic electronics and textronics. Cordura material was chosen as the substrate. Silver with 99.99% purity was used as the deposited metal. This research was based on thermographic photographs of the produced samples. Analysis of the temperature field around the electroconductive layer was carried out using Image ThermaBase EU software. The analysis of the temperature distribution highlights the software's usefulness in determining the homogeneity of the created metal layer. Higher local temperatures and non-uniform distributions can negatively affect the operation of the textronic system.
Novel Physical Model for DC Partial Discharge in Polymeric Insulators
NASA Astrophysics Data System (ADS)
Andersen, Allen; Dennison, J. R.
The physics of DC partial discharge (DCPD) continues to pose a challenge to researchers. We present a new physically-motivated model of DCPD in amorphous polymers based on our dual-defect model of dielectric breakdown. The dual-defect model is an extension of standard static mean field theories, such as the Crine model, that describe avalanche breakdown of charge carriers trapped on uniformly distributed defect sites. It assumes the presence of both high-energy chemical defects and low-energy thermally-recoverable physical defects. We present our measurements of breakdown and DCPD for several common polymeric materials in the context of this model. Improved understanding of DCPD and how it relates to eventual dielectric breakdown is critical to the fields of spacecraft charging, high voltage DC power distribution, high density capacitors, and microelectronics. This work was supported by a NASA Space Technology Research Fellowship.
Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems
2017-05-24
Deliverables status (recovered from a report table):
- Functional Modeling Compiler (SCCT): FM Compiler and Key Performance Indicators (KPIs), May 2018, pending.
- Model Management Backbone (SCCT): MMB demonstration; implements the agent-based distributed runtime; KPIs for single/multicore controllers and temporal/spatial domains; integration of the model management ...
- Distributed Runtime (UCI): not started.
- Model Management Backbone (SCCT): not started.
Siemens Corporation, Corporate Technology. Unrestricted.
A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)
Glen E. Liston; Kelly Elder
2006-01-01
An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...
A development optical course based on optical fiber white light interference
NASA Astrophysics Data System (ADS)
Jiang, Haili; Sun, Qiuhua; Zhao, Yancheng; Li, Qingbo
2017-08-01
The Michelson interferometer is a very important instrument in the optics portion of college physics teaching, but most students only know the instrument itself and do not know how to apply it to practical engineering problems. A case study on optical fiber white-light interference, drawn from engineering practice, was introduced into the optics teaching of college physics, and a development course for the physical optics portion of university physics was designed around it. The system, based on low-coherence white-light interferometric technology, can be used to measure distributed strain or temperature, and can also be operated in a temperature-compensation mode. This teaching design uses the knowledge-transfer rule to enable students to apply basic knowledge from university physics to a new knowledge domain, which can promote students' ability to use scientific methods to solve complex engineering problems.
2015-03-13
A. Lee, "A Programming Model for Time-Synchronized Distributed Real-Time Systems," in: Proceedings of the Real-Time and Embedded Technology and Applications Symposium, 2007, pp. 259-268. "From MetroII to Metronomy: Designing a Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems."
NASA Technical Reports Server (NTRS)
Ishizaka, Joji
1990-01-01
Surface phytoplankton biomass of the southeastern U.S. continental shelf area is discussed based on coastal zone color scanner (CZCS) images obtained in April 1980. Data of chlorophyll distributions are analyzed in conjunction with concurrent flow and temperature fields. Lagrangian particle tracing experiments show that the particles move consistently with the evolution of the chlorophyll patterns. A four-component physical-biological model for a horizontal plane at a nominal depth of 17 m is presented. Model simulations using various physical-biological dynamics and boundary conditions show that the variability of chlorophyll distributions is controlled by horizontal advection. Phytoplankton and nutrient fluxes, calculated using the model, show considerable variability with time. The chlorophyll distributions obtained from the CZCS images are assimilated into the model to improve the phytoplankton flux estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
VOLTTRON is an agent execution platform providing services to its agents that allow them to easily communicate with physical devices and other resources. VOLTTRON delivers an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions. VOLTTRON can independently manage a wide range of applications, such as HVAC systems, electric vehicles, distributed energy or entire building loads, leading to improved operational efficiency.
Distributed Optical Fiber Sensors Based on Optical Frequency Domain Reflectometry: A review
Ding, Zhenyang; Wang, Chenhuan; Liu, Kun; Jiang, Junfeng; Yang, Di; Pan, Guanyi; Pu, Zelin; Liu, Tiegen
2018-01-01
Distributed optical fiber sensors (DOFS) offer unprecedented features, the most distinctive of which is the ability to monitor variations of physical and chemical parameters with spatial continuity along the fiber. Among these distributed sensing techniques, optical frequency domain reflectometry (OFDR) has received tremendous attention because of its high spatial resolution and large dynamic range. DOFS based on OFDR have been used to sense many parameters. In this review, we survey the key technologies for improving the sensing range, spatial resolution and sensing performance of OFDR-based DOFS. We also introduce the sensing mechanisms and applications of OFDR-based DOFS, including strain, stress, vibration, temperature, 3D shape, flow, refractive index, magnetic field, radiation, gas and others. PMID:29614024
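The core of OFDR is that a reflection at distance z along the fiber beats against the local oscillator at a frequency proportional to z. A minimal numerical sketch (all instrument parameters below are illustrative assumptions, not values from the review) locates a single reflector from its beat tone:

```python
import numpy as np

c = 3e8            # speed of light, m/s
n = 1.468          # assumed group index of silica fiber
gamma = 1e12       # assumed laser sweep rate, Hz/s
z_true = 50.0      # reflector position along the fiber, m

tau = 2 * n * z_true / c          # round-trip delay to the reflector
f_beat = gamma * tau              # beat frequency produced by that delay

fs = 1e6                          # detector sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)     # 100 ms acquisition
signal = np.cos(2 * np.pi * f_beat * t)

# Locate the beat tone with an FFT and map it back to distance.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_peak = freqs[np.argmax(spectrum)]
z_est = c * f_peak / (2 * n * gamma)
print(f"estimated reflector position: {z_est:.2f} m")
```

With these numbers the frequency resolution of the 100 ms record (10 Hz) corresponds to roughly a millimeter of range, which is the sense in which OFDR achieves high spatial resolution.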
Zhang, Yunting; Zhang, Donglan; Jiang, Yanrui; Sun, Wanqi; Wang, Yan; Chen, Wenjuan; Li, Shenghui; Shi, Lu; Shen, Xiaoming; Zhang, Jun; Jiang, Fan
2015-01-01
Introduction: A growing body of literature reveals the causal pathways between physical activity and brain function, indicating that increasing physical activity among children could improve rather than undermine their scholastic performance. However, past studies of physical activity and scholastic performance among students often relied on parent-reported grade information, and did not explore whether the association varied across different levels of scholastic performance. Our study among fifth-grade students in Shanghai sought to determine the association between regular physical activity and teacher-reported academic performance scores (APS), with special attention to the differential associational patterns across different strata of scholastic performance. Method: A total of 2,225 students were chosen through stratified random sampling, and a complete sample of 1,470 observations was used for analysis. We used quantile regression analysis to explore whether the association between physical activity and teacher-reported APS differs across the distribution of APS. Results: Minimal-intensity physical activity such as walking was positively associated with academic performance scores (β = 0.13, SE = 0.04). The magnitude of the association tends to be larger at the lower end of the APS distribution (β = 0.24, SE = 0.08) than at the higher end (β = 0.00, SE = 0.07). Conclusion: Based upon teacher-reported student academic performance, there is no evidence that spending time on frequent physical activity would undermine students' APS. Students who are below average in academic performance could be worse off if they give up minimal-intensity physical activity. Therefore, cutting physical activity time in schools could hurt scholastic performance among those students already at higher risk of dropping out due to inadequate APS. PMID:25774525
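The quantile regression used in this study rests on the pinball (check) loss. A self-contained sketch with synthetic scores (not the study's data; the distribution is invented) shows that minimizing this loss over a constant recovers the sample quantile, which is what lets the method probe the lower and upper ends of the APS distribution separately:

```python
import numpy as np

def pinball_loss(y, theta, tau):
    """Check (pinball) loss of a constant predictor theta at quantile tau."""
    r = y - theta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
y = rng.normal(70, 10, size=1470)   # synthetic APS-like scores, same n as the study

tau = 0.1                           # lower end of the score distribution
grid = np.linspace(y.min(), y.max(), 4001)
losses = [pinball_loss(y, th, tau) for th in grid]
theta_star = grid[int(np.argmin(losses))]
print(theta_star, np.quantile(y, tau))  # the two agree up to grid resolution
```

Replacing the constant with a linear function of covariates and minimizing the same loss gives the quantile-specific slopes (the β values reported per stratum above).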
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
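At inference time the HBGM approach reduces to Bayes' rule over states of nature, with forward-model-derived measurement distributions standing in for a physical model. A toy sketch (the two states, priors, and Gaussian likelihood parameters below are invented for illustration, not taken from the cement-evaluation application):

```python
import numpy as np

# Assumed states of nature and their priors (domain-knowledge stand-ins).
priors = {"bonded": 0.7, "debonded": 0.3}
# Assumed forward-model measurement distributions per state: (mean, std).
likelihood_params = {"bonded": (5.0, 1.0), "debonded": (8.0, 1.5)}

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior(x):
    """Posterior over states given one new measurement x (Bayes' rule)."""
    unnorm = {s: priors[s] * gaussian_pdf(x, *likelihood_params[s]) for s in priors}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior(7.5)
print(post)  # a measurement near 8 favors "debonded"
```

The hierarchical part of the approach amounts to placing distributions (e.g., Gaussian mixtures) over the likelihood parameters themselves and learning them offline from forward-model runs; the classifier applied to new data in real time is this same posterior computation with the learned parameters.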
NASA Astrophysics Data System (ADS)
Riandry, M. A.; Ismet, I.; Akhsan, H.
2017-09-01
This study aims to produce a valid and practical statistical physics course handout on distribution function materials based on STEM. The Rowntree development model is used to produce this handout; it consists of three stages: planning, development and evaluation. In this study, the evaluation stage used Tessmer formative evaluation, which consists of five stages: self-evaluation, expert review, one-to-one evaluation, small group evaluation and field test. However, the handout was tested only on validity and practicality aspects, so the field test stage was not implemented. The data collection techniques used were walkthroughs and questionnaires. Subjects of this study are students of the 6th and 8th semesters of academic year 2016/2017 in the Physics Education Study Program of Sriwijaya University. The average result of the expert review is 87.31% (very valid category). One-to-one evaluation yielded an average of 89.42%, and small group evaluation 85.92%. From the one-to-one and small group evaluation stages, the average student response to the handout is 87.67% (very practical category). Based on these results, it can be concluded that the handout is valid and practical.
S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao
2012-01-01
Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...
Electrical conductivity modeling and experimental study of densely packed SWCNT networks.
Jack, D A; Yeh, C-S; Liang, Z; Li, S; Park, J G; Fielding, J C
2010-05-14
Single-walled carbon nanotube (SWCNT) networks have become a subject of interest due to their ability to support structural, thermal and electrical loadings, but to date their application has been hindered due, in large part, to the inability to model macroscopic responses in an industrial product with any reasonable confidence. This paper seeks to address the relationship between macroscale electrical conductivity and the nanostructure of a dense network composed of SWCNTs and presents a uniquely formulated physics-based computational model for electrical conductivity predictions. The proposed model incorporates physics-based stochastic parameters for the individual nanotubes to construct the nanostructure such as: an experimentally obtained orientation distribution function, experimentally derived length and diameter distributions, and assumed distributions of chirality and registry of individual CNTs. Case studies are presented to investigate the relationship between macroscale conductivity and nanostructured variations in the bulk stochastic length, diameter and orientation distributions. Simulation results correspond nicely with those available in the literature for case studies of conductivity versus length and conductivity versus diameter. In addition, predictions for the increasing anisotropy of the bulk conductivity as a function of the tube orientation distribution are in reasonable agreement with our experimental results. Examples are presented to demonstrate the importance of incorporating various stochastic characteristics in bulk conductivity predictions. Finally, a design consideration for industrial applications is discussed based on localized network power emission considerations and may lend insight to the design engineer to better predict network failure under high current loading applications.
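The macroscale-conductivity question above is, at its core, a random resistor-network problem: junction conductances drawn from stochastic distributions, solved with Kirchhoff's laws. A small surrogate (network size, connection probability, and the lognormal contact-conductance spread are all assumed, not the paper's physics-based parameters) computes an effective conductance between two electrodes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SWCNT-network surrogate: N junction nodes, a percolating backbone plus
# random tube-to-tube contacts, each with a lognormal contact conductance.
N = 30
G = np.zeros((N, N))
for i in range(N - 1):                         # backbone guarantees connectivity
    G[i, i + 1] = G[i + 1, i] = rng.lognormal()
for i in range(N):
    for j in range(i + 2, N):
        if rng.random() < 0.1:                 # extra random contacts
            G[i, j] = G[j, i] = rng.lognormal()

L = np.diag(G.sum(axis=1)) - G                 # Kirchhoff (Laplacian) matrix

# Electrodes: node 0 held at 1 V, node N-1 at 0 V; solve interior voltages.
interior = list(range(1, N - 1))
A = L[np.ix_(interior, interior)]
b = -L[interior, 0] * 1.0                      # contribution of the 1 V boundary
v = np.zeros(N)
v[0] = 1.0
v[interior] = np.linalg.solve(A, b)

# Macroscale (effective) conductance = current injected at the 1 V electrode.
g_eff = float((G[0] * (v[0] - v)).sum())
print(f"effective conductance: {g_eff:.3f} (arbitrary units)")
```

Sampling the contact conductances from orientation-, length- and chirality-dependent distributions, as the paper does, changes the statistics of G but not this solve.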
Physics-based distributed snow models in the operational arena: Current and future challenges
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.
2017-12-01
The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include those of the WSL-SLF across Switzerland, the ASO in California, and the USDA-ARS in Idaho. While physics-based approaches offer many advantages, limitations and modeling challenges remain. The most evident limitation remains computation times, which often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amidst the assumption that these models require little to no calibration because of their foundation on physical principles. Yet all energy balance snow models contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snow-cover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, show the need for advanced and spatially flexible methods and parameterizations, and prompt the community toward open dialogue and future collaborations to further modeling capabilities.
Huang, Jie; Lan, Xinwei; Luo, Ming; Xiao, Hai
2014-07-28
This paper reports a spatially continuous distributed fiber optic sensing technique using optical carrier-based microwave interferometry (OCMI), in which many optical interferometers with the same or different optical path differences are interrogated in the microwave domain and their locations can be unambiguously determined. The concept is demonstrated using cascaded weak optical reflectors along a single optical fiber, where any two arbitrary reflectors are paired to define a low-finesse Fabry-Perot interferometer. While spatially continuous (i.e., no dark zone), fully distributed strain measurement was used as an example to demonstrate the capability, the proposed concept may also be implemented on other types of waveguide or free-space interferometers and used for distributed measurement of various physical, chemical and biological quantities.
Methodology and application of combined watershed and ground-water models in Kansas
Sophocleous, M.; Perkins, S.P.
2000-01-01
Increased irrigation in Kansas and other regions during the last several decades has caused serious water depletion, making the development of comprehensive strategies and tools to resolve such problems increasingly important. This paper makes the case for an intermediate complexity, quasi-distributed, comprehensive, large-watershed model, which falls between the fully distributed, physically based hydrological modeling system of the type of the SHE model and the lumped, conceptual rainfall-runoff modeling system of the type of the Stanford watershed model. This is achieved by integrating the quasi-distributed watershed model SWAT with the fully-distributed ground-water model MODFLOW. The advantage of this approach is the appreciably smaller input data requirements and the use of readily available data (compared to the fully distributed, physically based models), the statistical handling of watershed heterogeneities by employing the hydrologic-response-unit concept, and the significantly increased flexibility in handling stream-aquifer interactions, distributed well withdrawals, and multiple land uses. The mechanics of integrating the component watershed and ground-water models are outlined, and three real-world management applications of the integrated model from Kansas are briefly presented. Three different aspects of the integrated model are emphasized: (1) management applications of a Decision Support System for the integrated model (Rattlesnake Creek subbasin); (2) alternative conceptual models of spatial heterogeneity related to the presence or absence of an underlying aquifer with shallow or deep water table (Lower Republican River basin); and (3) the general nature of the integrated model linkage by employing a watershed simulator other than SWAT (Wet Walnut Creek basin). 
These applications demonstrate the practicality and versatility of this relatively simple and conceptually clear approach, making public acceptance of the integrated watershed modeling system much easier. This approach also enhances model calibration and thus the reliability of model results. (C) 2000 Elsevier Science B.V.
Quantum key distribution protocol based on contextuality monogamy
NASA Astrophysics Data System (ADS)
Singh, Jaskaran; Bharti, Kishor; Arvind
2017-06-01
The security of quantum key distribution (QKD) protocols hinges upon features of physical systems that are uniquely quantum in nature. We explore the role of quantumness, as qualified by quantum contextuality, in a QKD scheme. A QKD protocol based on the Klyachko-Can-Binicioğlu-Shumovsky (KCBS) contextuality scenario using a three-level quantum system is presented. We explicitly show the unconditional security of the protocol by a generalized contextuality monogamy relationship based on the no-disturbance principle. This protocol provides a new framework for QKD which has conceptual and practical advantages over other protocols.
Metal-matrix radiation-protective composite materials based on aluminum
NASA Astrophysics Data System (ADS)
Cherdyntsev, V. V.; Gorshenkov, M. V.; Danilov, V. D.; Kaloshkin, S. D.; Gul'bin, V. N.
2013-05-01
A method of mechanical activation providing a homogeneous distribution of reinforcing boron-bearing components and tungsten nanopowder in the matrix is recommended for making an aluminum-based radiation-protective material. Joint mechanical activation and subsequent extrusion are used to produce aluminum-based composites. The structure and the physical, mechanical and tribological characteristics of the composite materials are studied.
Perceived distributed effort in team ball sports.
Beniscelli, Violeta; Tenenbaum, Gershon; Schinke, Robert Joel; Torregrosa, Miquel
2014-01-01
In this study, we explored the multifaceted concept of perceived mental and physical effort in team sport contexts where athletes must invest individual and shared efforts to reach a common goal. Semi-structured interviews were conducted with a convenience sample of 15 Catalan professional coaches (3 women and 12 men, 3 each from the following sports: volleyball, basketball, handball, soccer, and water polo) to gain their views of three perceived effort-related dimensions: physical, psychological, and tactical. From a theoretical thematic analysis, it was found that the perception of effort is closely related to how effort is distributed within the team. Moreover, coaches viewed physical effort in relation to the frequency and intensity of the players' involvement in the game. They identified psychological effort in situations where players pay attention to proper cues, and manage emotions under difficult circumstances. Tactical effort addressed the decision-making process of players and how they fulfilled their roles while taking into account the actions of their teammates and opponents. Based on these findings, a model of perceived distributed effort was developed, which delineates the elements that compose each of the aforementioned dimensions. Implications of perceived distributed effort in team coordination and shared mental models are discussed.
Zhang, Jun; Shoham, David A.; Tesdahl, Eric
2015-01-01
Objectives. We studied simulated interventions that leveraged social networks to increase physical activity in children. Methods. We studied a real-world social network of 81 children (average age = 7.96 years) who lived in low socioeconomic status neighborhoods, and attended public schools and 1 of 2 structured afterschool programs. The sample was ethnically diverse, and 44% were overweight or obese. We used social network analysis and agent-based modeling simulations to test whether implementing a network intervention would increase children’s physical activity. We tested 3 intervention strategies. Results. The intervention that targeted opinion leaders was effective in increasing the average level of physical activity across the entire network. However, the intervention that targeted the most sedentary children was the best at increasing their physical activity levels. Conclusions. Which network intervention to implement depends on whether the goal is to shift the entire distribution of physical activity or to influence those most adversely affected by low physical activity. Agent-based modeling could be an important complement to traditional project planning tools, analogous to sample size and power analyses, to help researchers design more effective interventions for increasing children’s physical activity. PMID:25689202
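The opinion-leader strategy tested above can be caricatured in a few lines of agent-based simulation. Everything below is an illustrative assumption (random friendship network, clamp-the-leaders intervention, linear peer-influence rule), not the authors' calibrated model; it only shows the mechanism by which boosting high-degree nodes raises network-wide activity:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 81                                          # same n as the study's network
upper = np.triu(rng.random((N, N)) < 0.06, 1).astype(float)
adj = upper + upper.T                           # symmetric friendship network
degree = adj.sum(axis=1)

activity = rng.uniform(0.0, 1.0, N)             # baseline physical activity
pre_mean = activity.mean()

leaders = np.argsort(degree)[-5:]               # "opinion leaders" = most connected
alpha = 0.3                                     # assumed peer-influence strength
for _ in range(50):
    activity[leaders] = 1.0                     # sustained intervention on leaders
    peer_mean = np.where(degree > 0,
                         adj @ activity / np.maximum(degree, 1),
                         activity)              # isolated agents are unaffected
    activity = (1 - alpha) * activity + alpha * peer_mean

print(pre_mean, activity.mean())                # mean activity rises network-wide
```

Swapping `leaders` for the least-active agents reproduces the paper's alternative strategy; which target set "wins" depends on whether the outcome of interest is the network mean or the lower tail.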
Frey, Jennifer K.; Lewis, Jeremy C.; Guy, Rachel K.; Stuart, James N.
2013-01-01
Simple Summary We evaluated the influence of occurrence records with different reliability on predicted distribution of a unique, rare mammal in the American Southwest, the white-nosed coati (Nasua narica). We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. Abstract Species distributions are usually inferred from occurrence records. However, these records are prone to errors in spatial precision and reliability. Although influence of spatial errors has been fairly well studied, there is little information on impacts of poor reliability. Reliability of an occurrence record can be influenced by characteristics of the species, conditions during the observation, and observer’s knowledge. Some studies have advocated use of anecdotal data, while others have advocated more stringent evidentiary standards such as only accepting records verified by physical evidence, at least for rare or elusive species. Our goal was to evaluate the influence of occurrence records with different reliability on species distribution models (SDMs) of a unique mammal, the white-nosed coati (Nasua narica) in the American Southwest. We compared SDMs developed using maximum entropy analysis of combined bioclimatic and biophysical variables and based on seven subsets of occurrence records that varied in reliability and spatial precision. We found that the predicted distribution of the coati based on datasets that included anecdotal occurrence records were similar to those based on datasets that only included physical evidence. 
Coati distribution in the American Southwest was predicted to occur in southwestern New Mexico and southeastern Arizona and was defined primarily by evenness of climate and Madrean woodland and chaparral land-cover types. Coati distribution patterns in this region suggest a good model for understanding the biogeographic structure of range margins. We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. PMID:26487405
NASA Astrophysics Data System (ADS)
Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.
2011-12-01
The key to advancing the predictability of rainfall-triggered landslides is to use physically based slope-stability models that simulate the dynamic response of subsurface moisture to the spatiotemporal variability of rainfall in complex terrain. An early warning system applying such physical models has been developed to predict rainfall-induced shallow landslides over Java Island in Indonesia and in Honduras. The prototype early warning system integrates three major components: (1) a susceptibility mapping or hotspot identification component based on a land surface geospatial database (topographical information, maps of soil properties, a local landslide inventory, etc.); (2) a satellite-based precipitation monitoring system (http://trmm.gsfc.nasa.gov) and a precipitation forecasting model (i.e., Weather Research and Forecasting); and (3) a physically based, rainfall-induced landslide prediction model, SLIDE (SLope-Infiltration-Distributed Equilibrium). The system uses the modified physical model to calculate a factor of safety (FS) that accounts for the contribution of rainfall infiltration and partial saturation to the shear strength of the soil in topographically complex terrain. The system's prediction performance has been evaluated against a local landslide inventory. In Java Island, Indonesia, evaluation of SLIDE modeling results against local news reports shows that the system successfully predicted landslides in correspondence with the time of occurrence of the real landslide events. SLIDE was further studied in Honduras, where Hurricane Mitch triggered widespread landslides in 1998. Results show that within the approximately 1,200-square-kilometer study areas, hit rates reached as high as 78% and 75%, while the error indices were 35% and 49%.
Despite the positive model performance, the SLIDE model is limited in the early warning system by several assumptions, including the use of general parameter calibration rather than in situ tests, and the neglect of geologic information. Advantages and limitations of this model will be discussed with respect to future applications of landslide assessment and prediction over large scales. In conclusion, the integration of spatially distributed remote sensing precipitation products, in situ datasets, and physical models in this prototype system enables us to further develop a regional early warning tool for forecasting storm-induced landslides.
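The factor of safety computed by models of this kind builds on the classic infinite-slope form. A generic textbook sketch (illustrative soil parameters; SLIDE adds its own infiltration and partial-saturation terms on top of this) shows how rising saturation pushes FS toward failure:

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m):
    """Infinite-slope factor of safety.
    c: cohesion (kPa), phi_deg: friction angle, gamma: soil unit weight (kN/m^3),
    z: failure-plane depth (m), beta_deg: slope angle, m: saturation ratio 0..1."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    gamma_w = 9.81                                   # unit weight of water, kN/m^3
    tau = gamma * z * np.sin(beta) * np.cos(beta)    # driving shear stress
    sigma_eff = (gamma - m * gamma_w) * z * np.cos(beta) ** 2  # effective normal stress
    return (c + sigma_eff * np.tan(phi)) / tau

# A gentle dry slope is stable (FS > 1); saturating it lowers FS toward 1.
fs_dry = factor_of_safety(c=5.0, phi_deg=30, gamma=18.0, z=2.0, beta_deg=20, m=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=30, gamma=18.0, z=2.0, beta_deg=20, m=1.0)
print(fs_dry, fs_wet)
```

An early warning system of the kind described drives `m` (or a pore-pressure equivalent) with satellite-observed and forecast rainfall, flagging cells where FS drops below 1.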
NASA Astrophysics Data System (ADS)
Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier
2012-07-01
Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the numerical approach relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results by a magnitude as large as physical factors, such as friction. Furthermore, results proved to be less sensitive to roughness spatial distribution than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
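The SCS Curve Number method evaluated in this abstract converts event rainfall to direct runoff in closed form. A minimal implementation of the standard formulation (metric units, conventional initial abstraction of 0.2S; the example storm and CN value are illustrative):

```python
def scs_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (mm) for event rainfall p_mm.
    cn: curve number (0 < cn <= 100). Uses the standard Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction, mm
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff(50.0, 75))           # runoff from a 50 mm storm on CN-75 soil
```

Because this lumps infiltration into a single event total, coupling it to a transient shallow-water routing model (as the paper investigates) mixes incompatible time scales, which is one plausible reading of the negative result reported above.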
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
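For a linear physical law, the "least violation" idea has a closed form: project the measurement vector onto the law's solution set. A toy sketch with an invented Kirchhoff-type constraint (not the paper's methane-hydrate application, and only the linear special case of the method):

```python
import numpy as np

# Three measurements of junction currents that should obey i1 - i2 - i3 = 0.
m = np.array([10.3, 6.1, 3.7])        # raw (noisy) measurements
a = np.array([1.0, -1.0, -1.0])       # linear physical law: a @ x = 0

# Least-violation fusion = orthogonal projection of m onto {x : a @ x = 0},
# i.e. the closest parameter vector that exactly satisfies the law.
fused = m - a * (a @ m) / (a @ a)
print(fused, a @ fused)
```

The fused estimate distributes the measured law violation (here 0.5) across the parameters; the paper's general method handles nonlinear and nonsmooth laws, multiple sensors per parameter, and gives finite-sample performance bounds rather than this simple projection.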
FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER PHYSICAL SYSTEMS
2018-02-23
Final report, University of Texas at Arlington, February 2018 (period covered: April 2015 - April 2017). This project studied emergent behavior in distributed cyber-physical systems (DCPS).
Ponomarev, Artem L; Costes, Sylvain V; Cucinotta, Francis A
2008-11-01
We computed the probabilities of multiple double-strand breaks (DSBs) produced in DNA on a regional scale, rather than in close vicinity, in volumes matching the size of DNA damage foci, of a large chromatin loop, and of the physical volume of DNA containing the HPRT (human hypoxanthine phosphoribosyltransferase) locus. The model is based on a Monte Carlo description of DSB formation by heavy ions in the spatial context of the entire human genome contained within the cell nucleus, as well as at the gene-sequence level. We showed that a finite physical volume corresponding to a visible DNA repair focus, commonly believed to be associated with one DSB, can contain multiple DSBs due to heavy-ion track structure and the supercoiled topography of DNA. A corrective distribution was introduced: a conditional probability of excess DSBs in a focus volume, given that one is already present. The corrective distribution was calculated for 19.5 MeV/amu N ions, 3.77 MeV/amu alpha-particles, 1000 MeV/amu Fe ions, and X-rays. The corrected initial DSB yield was then calculated from the experimental data on DNA repair foci. The corrective function converts the focus yield into a DSB yield that is comparable with the DSB yield based on earlier PFGE experiments. The distribution of DSBs within the physical limits of the HPRT gene was analyzed by a similar method. This corrective procedure demonstrates the applicability of the model and gives researchers a tool to better analyze focus statistics, enabling analysis of DSB yields from focus statistics in real experimental situations that lack a one-to-one focus-to-DSB correspondence.
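The shape of such a focus-to-DSB correction can be illustrated with a deliberately simplified assumption: if DSB counts per focus volume were Poisson with mean λ (the paper's Monte Carlo replaces this with track-structure-dependent statistics), the conversion factor from focus yield to DSB yield is the conditional mean E[N | N ≥ 1] = λ / (1 − e^(−λ)):

```python
import numpy as np

def dsb_per_focus(lam):
    """E[N | N >= 1] for Poisson DSB counts with mean lam: the factor that
    converts a focus yield into a DSB yield under this Poisson simplification."""
    return lam / (1.0 - np.exp(-lam))

for lam in (0.1, 1.0, 3.0):
    print(f"mean DSBs per visible focus at lam={lam}: {dsb_per_focus(lam):.3f}")
```

The factor tends to 1 as λ → 0 (sparse damage: one focus, one DSB) and grows with λ, which is the qualitative behavior that makes focus counts underestimate DSB yields for densely ionizing heavy ions.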
An alternative approach to the Boltzmann distribution through the chemical potential
NASA Astrophysics Data System (ADS)
D'Anna, Michele; Job, Georg
2016-05-01
The Boltzmann distribution is one of the most significant results of classical physics. Despite its importance and its wide range of application, at the high school level it is mostly presented without any derivation or link to basic ideas. In this contribution we present an approach based on the chemical potential that allows one to derive it directly from the basic idea of thermodynamic equilibrium.
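A minimal sketch of the chemical-potential route, in standard textbook form (not necessarily the authors' exact notation): for a dilute system whose particles can occupy levels of energy $E_i$ with populations $n_i$,

```latex
% Chemical potential of population n_i at energy level E_i:
\mu_i = \mu_0 + k_B T \ln\frac{n_i}{n_0} + E_i .
% Equilibrium (no net particle exchange between levels) requires \mu_i = \mu_j:
\mu_0 + k_B T \ln\frac{n_i}{n_0} + E_i
  = \mu_0 + k_B T \ln\frac{n_j}{n_0} + E_j
\quad\Longrightarrow\quad
\frac{n_i}{n_j} = e^{-(E_i - E_j)/k_B T},
```

which is the Boltzmann distribution $n_i \propto e^{-E_i/k_B T}$.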
Hai Ren; Qianmei Zhang; Zhengfeng Wang; Qinfeng Guo; June Wang; Nan Liu; Kaiming Liang
2010-01-01
The distribution of the rare and endangered perennial herb Primulina tabacum Hance is restricted to eight karst caves in southern China. To conserve P. tabacum and to evaluate possible reintroduction, we studied its historical distribution and conducted field surveys of both its biotic and physical environment. We used detrended...
Camposeo, Andrea; Del Carro, Pompilio; Persano, Luana; Cyprych, Konrad; Szukalski, Adam; Sznitko, Lech; Mysliwiec, Jaroslaw; Pisignano, Dario
2014-10-28
Room-temperature nanoimprinted, DNA-based distributed feedback (DFB) laser operation at 605 nm is reported. The laser is made of a pure DNA host matrix doped with gain dyes. At high excitation densities, the emission of the untextured dye-doped DNA films is characterized by a broad emission peak with an overall line width of 12 nm and superimposed narrow peaks, characteristic of random lasing. Moreover, direct patterning of the DNA films is demonstrated with a resolution down to 100 nm, enabling the realization of both surface-emitting and edge-emitting DFB lasers with a typical line width of <0.3 nm. The resulting emission is polarized, with a ratio between the TE- and TM-polarized intensities exceeding 30. In addition, the nanopatterned devices dissolve in water within less than 2 min. These results demonstrate the possibility of realizing various physically transient nanophotonics and laser architectures, including random lasing and nanoimprinted devices, based on natural biopolymers.
Monte Carlo calculations of positron emitter yields in proton radiotherapy.
Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F
2012-03-21
Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β+-decay of radionuclides produced by nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β+-activity with the distribution predicted from the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions, so phenomenological models are typically used, based on parameters derived from experimental data. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models; hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performance of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity, at therapeutic energies, against the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β+-emitters produced in tissue-like media depends on the physics model and cross-section data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aimed at supporting improvements of MC modelling for clinical application of PET monitoring. © 2012 Institute of Physics and Engineering in Medicine
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and of nutrient transport and turnover using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical, semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based, fully distributed numerical model. This work shows that both models satisfactorily reproduce water and nitrate export at the watershed scale on annual and daily time scales, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, because MOHID was the only model able to reproduce adequately the spatial variation of soil hydrological conditions and water-table fluctuations, it proved to be the only model capable of reproducing the spatial variation of nutrient cycling processes that depend on soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models in simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Physical and Biological Controls of Copepod Aggregation and Baleen Whale Distribution
2010-09-30
DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. The objectives of this study are to elucidate the mechanisms of copepod aggregation in the Great South Channel and their relation to baleen whale distribution...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
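The single-Gaussian case rests on the standard conjugate result for a linear forward operator with Gaussian prior and Gaussian noise. The sketch below shows that textbook update (variable names and dimensions are illustrative, not the paper's impedance/porosity parameterization):

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    """Analytic posterior for the linear-Gaussian inverse problem d = G m + e,
    with prior m ~ N(m0, Cm) and noise e ~ N(0, Cd).
    Returns the posterior mean and covariance."""
    # Kalman-style gain: prior covariance filtered through the forward operator
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
    mu_post = m0 + K @ (d - G @ m0)   # update prior mean toward the data
    C_post = Cm - K @ G @ Cm          # data always shrinks the uncertainty
    return mu_post, C_post
```

With nearly noise-free data and an identity forward operator, the posterior mean collapses onto the observations, which is a quick sanity check of the algebra.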
Federated software defined network operations for LHC experiments
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon
2013-09-01
The Large Hadron Collider (LHC), the best-known high-energy physics collaboration and an e-Science endeavour, has been facing several challenges presented by its extraordinary instruments in the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being addressed by adopting an advanced Internet technology called software-defined networking (SDN). Stable SDN operation and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may achieve enhanced data delivery performance based on data traffic offloading with delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
Price Based Local Power Distribution Management System (Local Power Distribution Manager) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BROWN, RICHARD E.; CZARNECKI, STEPHEN; SPEARS, MICHAEL
2016-11-28
A trans-active energy micro-grid controller is implemented in the VOLTTRON distributed control platform. The system uses the price of electricity as the mechanism for conducting transactions that are used to manage energy use and to balance supply and demand. In order to allow testing and analysis of the control system, the implementation is designed to run completely as a software simulation, while allowing the inclusion of selected hardware that physically manages power. Equipment to be integrated with the micro-grid controller must have an IP (Internet Protocol)-based network connection and a software "driver" must exist to translate data communications between the device and the controller.
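The price-as-mechanism idea can be illustrated with a toy market-clearing loop. This is a generic sketch, not the VOLTTRON implementation: supply and demand responses are hypothetical linear functions, and the clearing price is found by bisection.

```python
def clearing_price(supply, demand, p_lo=0.0, p_hi=1.0, iters=60):
    """Find the price at which supply(p) == demand(p) by bisection,
    assuming supply is nondecreasing and demand nonincreasing in price."""
    for _ in range(iters):
        p = 0.5 * (p_lo + p_hi)
        if supply(p) < demand(p):
            p_lo = p          # shortage: raise the price
        else:
            p_hi = p          # surplus: lower the price
    return 0.5 * (p_lo + p_hi)

# Hypothetical linear responses for a toy micro-grid:
supply = lambda p: 10.0 * p          # generators offer more at higher prices
demand = lambda p: 8.0 - 6.0 * p     # loads curtail as the price rises

p_star = clearing_price(supply, demand)
```

At the returned price the offered generation equals the curtailed demand, which is exactly the balancing role the controller assigns to the price signal.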
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.
2015-12-01
Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
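The final step described above reduces simulated pore pressures to a factor of safety via the infinite slope approximation. The sketch below uses the classic effective-stress form of that calculation (a simplification: the study works with suction stress in variably saturated soil, and all parameter values here are illustrative):

```python
import math

def factor_of_safety(c, phi_deg, gamma, H, beta_deg, u):
    """Infinite-slope factor of safety against shallow sliding.
    c: effective cohesion [kPa], phi_deg: friction angle [deg],
    gamma: soil unit weight [kN/m^3], H: slip-surface depth [m],
    beta_deg: slope angle [deg], u: pore-water pressure on the slip plane [kPa]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma_n = gamma * H * math.cos(beta) ** 2          # total normal stress
    tau = gamma * H * math.sin(beta) * math.cos(beta)  # driving shear stress
    # Mohr-Coulomb resisting strength over driving stress:
    return (c + (sigma_n - u) * math.tan(phi)) / tau
```

Rising pore pressure reduces effective normal stress and hence the factor of safety, which is why the monitored pressure-head time series feeds directly into the landslide-potential assessment.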
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
Gourdain, P-A
2017-05-01
In recent years, our understanding of high energy density plasmas has played an important role in improving inertial fusion confinement and in emerging new fields of physics, such as laboratory astrophysics. Each new idea has required developing innovative experimental platforms at high power laser facilities, such as OMEGA or NIF. These facilities, designed to focus all their beams onto spherical targets or hohlraum windows, are now required to shine them on more complex targets. While the pointing on planar geometries is relatively straightforward, it becomes problematic for cylindrical targets or targets with more complex geometries. This publication describes how the distribution of laser beams on a cylindrical target can be done simply by using a set of physical laws as a pointing procedure. The advantage of the method is threefold. First, it is straightforward, requiring no mathematical enterprise besides solving ordinary differential equations. Second, it will converge if a local optimum exists. Finally, it is computationally inexpensive. Experimental results show that this approach produces a geometrical beam distribution that yields cylindrically symmetric implosions.
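The physics-as-pointing-procedure idea can be caricatured in a few lines: treat the beam-pointing centers as mutually repelling charges constrained to the target surface and integrate their equations of motion until they settle. The sketch below is a toy analogue under that assumption (illustrative force law and integrator, not the published procedure):

```python
import math
import random

def distribute_on_cylinder(n, radius=1.0, height=2.0, steps=400, dt=0.02):
    """Spread n pointing centers over a cylinder by integrating mutual
    Coulomb-like repulsion with explicit Euler steps, re-projecting each
    point onto the cylindrical surface after every step."""
    random.seed(0)                       # deterministic toy example
    pts = []
    for _ in range(n):
        t = random.uniform(0.0, 2.0 * math.pi)
        pts.append([radius * math.cos(t), radius * math.sin(t),
                    random.uniform(-height / 2, height / 2)])
    for _ in range(steps):
        forces = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = [pts[i][k] - pts[j][k] for k in range(3)]
                r2 = sum(x * x for x in d) + 1e-9      # softened distance^2
                r = math.sqrt(r2)
                for k in range(3):
                    forces[i][k] += d[k] / (r2 * r)    # 1/r^2 repulsion
                    forces[j][k] -= d[k] / (r2 * r)
        for i in range(n):
            for k in range(3):
                pts[i][k] += dt * forces[i][k]
            rho = math.hypot(pts[i][0], pts[i][1])     # project back to surface
            pts[i][0] *= radius / rho
            pts[i][1] *= radius / rho
            pts[i][2] = max(-height / 2, min(height / 2, pts[i][2]))
    return pts
```

As the abstract notes, nothing beyond solving ordinary differential equations is required; the relaxation converges toward a locally even coverage whenever such an optimum exists.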
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium and is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program that implements the Web services; and 2) a software developer's kit comprising the WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
NASA Astrophysics Data System (ADS)
Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv
2018-02-01
New concepts and techniques are replacing traditional methods of measuring water quality parameters. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrating five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a large number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. The proposed framework therefore also aims to simplify the obtained sensor data matrices and to support decision making by water engineers. The goal of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using a CPS.
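The fuzzy decision step can be sketched minimally: map each raw reading to a [0, 1] "abnormality" membership and combine the memberships with a fuzzy rule. All membership shapes, thresholds and the OR-rule below are illustrative assumptions, not the paper's calibrated system:

```python
def ramp_up(x, a, b):
    """Fuzzy ramp membership: 0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def contamination_risk(turbidity_ntu, chlorine_mgl, ph):
    """Combine three sensor readings into a [0, 1] contamination-risk score
    using simple memberships and a max (OR) rule; thresholds are illustrative."""
    high_turbidity = ramp_up(turbidity_ntu, 1.0, 10.0)      # rising NTU
    low_chlorine = 1.0 - ramp_up(chlorine_mgl, 0.1, 0.3)    # residual loss
    ph_abnormal = max(ramp_up(6.5 - ph, 0.0, 1.5),          # too acidic
                      ramp_up(ph - 8.5, 0.0, 1.5))          # too alkaline
    return max(high_turbidity, low_chlorine, ph_abnormal)
```

A water engineer then only has to threshold one scalar score instead of interpreting the full sensor data matrix, which is the simplification the framework aims for.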
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most of the cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of temperature dependence of the continuous energy nuclear data has been investigated.
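The feedback interchange between a neutronics code and a thermal-hydraulics code is typically organized as a fixed-point (Picard) iteration. The sketch below shows that scheme with toy stand-ins for the physics; the single-valued "power" and "temperature", the feedback coefficients, and the update functions are all illustrative, not MCNP or SUBCHANFLOW:

```python
def picard_coupling(power0, tol=1e-8, max_iter=100):
    """Alternate a toy 'neutronics' solve (power from temperature) and a toy
    'thermal-hydraulics' solve (temperature from power) until the power
    stops changing -- the fixed-point scheme used to couple such codes."""
    def thermal_hydraulics(power):
        return 300.0 + 0.5 * power            # temperature rises with power
    def neutronics(temperature):
        # Doppler-like feedback: power falls as the fuel heats up
        return 1000.0 / (1.0 + 0.001 * (temperature - 300.0))
    power = power0
    for it in range(max_iter):
        temperature = thermal_hydraulics(power)
        new_power = neutronics(temperature)
        if abs(new_power - power) < tol:
            return new_power, temperature, it + 1
        power = new_power
    raise RuntimeError("Picard iteration did not converge")
```

Because the feedback here is negative and contractive, the loop settles on the self-consistent power/temperature pair in a handful of iterations; in the real coupled system each "solve" is a full transport or sub-channel calculation.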
Integrated Joule switches for the control of current dynamics in parallel superconducting strips
NASA Astrophysics Data System (ADS)
Casaburi, A.; Heath, R. M.; Cristiano, R.; Ejrnaes, M.; Zen, N.; Ohkubo, M.; Hadfield, R. H.
2018-06-01
Understanding and harnessing the physics of the dynamic current distribution in parallel superconducting strips holds the key to creating next generation sensors for single molecule and single photon detection. Non-uniformity in the current distribution in parallel superconducting strips leads to low detection efficiency and unstable operation, preventing the scale up to large area sensors. Recent studies indicate that non-uniform current distributions occurring in parallel strips can be understood and modeled in the framework of the generalized London model. Here we build on this important physical insight, investigating an innovative design with integrated superconducting-to-resistive Joule switches to break the superconducting loops between the strips and thus control the current dynamics. Employing precision low temperature nano-optical techniques, we map the uniformity of the current distribution before and after the resistive strip-switching event, confirming the effectiveness of our design. These results provide important insights for the development of next generation large area superconducting strip-based sensors.
Distributed Diagnosis and Home Healthcare Conference
2006-09-01
...and based on physics. The driving force behind the e-health industry, the third pillar of healthcare, will be electronics and mathematics... POCT is performed by nurses, perfusionists, and respiratory therapists; other POCT can be performed by certified nurses and medical assistants... in rural settings to connect patients to providers whom they may be unable to reach physically, though this problem is not only a rural one...
NASA Astrophysics Data System (ADS)
Le, Jia-Liang; Bažant, Zdeněk P.
2011-07-01
This paper extends the theoretical framework presented in the preceding Part I to the lifetime distribution of quasibrittle structures failing at the fracture of one representative volume element under constant-amplitude fatigue. The probability distribution of the critical stress amplitude is derived for a given number of cycles and a given minimum-to-maximum stress ratio. The physical mechanism underlying the Paris law for fatigue crack growth is explained under certain plausible assumptions about damage accumulation in the cyclic fracture process zone at the tip of the subcritical crack. This law is then used to relate the probability distribution of critical stress amplitude to the probability distribution of fatigue lifetime. The theory naturally yields a power-law relation for the stress-life curve (S-N curve), in agreement with Basquin's law. Furthermore, the theory indicates that, for quasibrittle structures, the S-N curve must be size dependent. Finally, a physical explanation is provided for the experimentally observed systematic deviations of lifetime histograms of various ceramics and bones from the Weibull distribution, and their close fits by the present theory are demonstrated.
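For reference, the two classical relations the abstract builds on can be written in their standard textbook forms (the paper's size-dependent generalization differs in detail):

```latex
% Paris law for subcritical fatigue crack growth,
% a = crack length, N = cycle count, \Delta K = stress-intensity amplitude:
\frac{\mathrm{d}a}{\mathrm{d}N} = C\,(\Delta K)^{m},
% Basquin's power-law S-N relation,
% \Delta\sigma = stress amplitude range, N_f = cycles to failure:
\frac{\Delta\sigma}{2} = \sigma'_f\,(2N_f)^{b},
```

with $C$, $m$, $\sigma'_f$ and $b$ (typically negative) empirical material constants.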
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
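The 3D-DFT convolution step amounts to convolving the cumulated-activity map with a dose point kernel in the Fourier domain. A minimal sketch, assuming a circular (wrap-around) convolution with the kernel centered at the origin voxel; the kernel values in the test are toy numbers, not an I-131 kernel:

```python
import numpy as np

def dose_by_fft_convolution(activity, kernel):
    """Absorbed-dose distribution as the circular 3D convolution of a
    cumulated-activity map with a dose point kernel, via FFTs.
    Both arrays must have the same shape; multiplication in Fourier
    space replaces the O(N^2) spatial convolution."""
    A = np.fft.fftn(activity)
    K = np.fft.fftn(kernel)
    return np.real(np.fft.ifftn(A * K))
```

Convolving a single-voxel (delta) activity with the kernel must return the kernel itself, which gives a direct correctness check of the Fourier conventions.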
Event parallelism: Distributed memory parallel computing for high energy physics experiments
NASA Astrophysics Data System (ADS)
Nash, Thomas
1989-12-01
This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.
NASA Astrophysics Data System (ADS)
Akasofu, S.-I.; Kamide, Y.
1998-07-01
A new approach is needed to advance magnetospheric physics in the future to achieve a much closer integration than in the past among satellite-based researchers, ground-based researchers, and theorists/modelers. Specifically, we must find efficient ways to combine two-dimensional ground-based data and single-point satellite data to infer three-dimensional aspects of magnetospheric disturbances. For this particular integration purpose, we propose a new project. It is designed to determine the currents on the magnetospheric equatorial plane from the ionospheric current distribution that has become available by inverting ground-based magnetic data from an extensive, systematic network of observations, combined with ground-based radar measurements of ionospheric parameters and satellite observations of auroras, electric fields, and currents. The inversion method is based on the KRM/AMIE algorithms. In the first part of the paper, we extensively review the reliability and accuracy of the KRM and AMIE algorithms and conclude that the ionospheric quantities thus obtained are accurate enough for the next step. In the second part, the ionospheric current distribution thus obtained is projected onto the equatorial plane. This process requires close cooperation with modelers in determining an accurate configuration of the magnetospheric field lines. If we succeed in this projection, we should be able to study the changing distribution of the currents in a vast region of the magnetospheric equatorial plane for extended periods with a time resolution of about 5 min. This process requires a model of the magnetosphere for the different phases of the magnetospheric substorm. Satellite-based observations are needed to calibrate the projection results. Agreements and disagreements thus obtained will be crucial for theoretical studies of magnetospheric plasma convection and dynamics, particularly in studying substorms. Nothing is easy in these procedures.
However, unless we can overcome the associated difficulties, we may not be able to make real progress. We believe that the proposed project is one way to draw the three groups closer together in advancing magnetospheric physics. It is important to note that the proposed project has become possible because ground-based space physics has made major advances during the last decade.
The in Silico Insight into Carbon Nanotube and Nucleic Acid Bases Interaction.
Karimi, Ali Asghar; Ghalandari, Behafarid; Tabatabaie, Seyed Saleh; Farhadi, Mohammad
2016-05-01
To explore practical applications of carbon nanotubes (CNTs) in biomedical fields, the properties of their interactions with biomolecules must be revealed. In recent years, the interaction of CNTs with biomolecules has been a subject of research interest for practical applications; previous research has shown that CNTs have structural properties complementary to single-stranded DNA (ssDNA). In this study, the interaction between nucleic acid bases and a (4, 4) single-walled carbon nanotube (SCNT) was investigated through ab initio quantum mechanics (QM) calculations at the Hartree-Fock (HF) level of theory using the 6-31G basis set. Physical properties of the interaction such as electronic energy, total dipole moment, charge distribution and binding energy were computed for each nucleic acid base with the SCNT. It was found that guanine binds more strongly to the outer surface of the nanotube than the other bases, consistent with recent theoretical studies; in other words, the guanine-SCNT complex has the most favorable electronic energy and is therefore the most stable. The calculations also showed that the bases interact with the SCNT noncovalently: the charge distribution creates an electrostatic region at the interaction site. Consequently, the noncovalent interaction of small-diameter SCNTs, in particular the (4, 4) tube, with nucleic acid bases can be useful in practical biomedical applications such as detection and drug delivery.
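The binding energy quoted in such supermolecular HF studies is conventionally the difference between the energy of the complex and the sum of the isolated fragments (standard definition; the paper's exact convention is not stated):

```latex
E_{\mathrm{bind}} \;=\; E_{\mathrm{base+SCNT}} \;-\; \bigl( E_{\mathrm{SCNT}} + E_{\mathrm{base}} \bigr),
```

where a more negative $E_{\mathrm{bind}}$ indicates a more stable noncovalent complex; at a small basis such as 6-31G, basis-set superposition error is often corrected with the counterpoise scheme.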
Capillary Discharge Thruster Experiments and Modeling (Briefing Charts)
2016-06-01
R. S. Martin, ERC Inc., In-Space Propulsion Branch, Air Force Research Laboratory, Edwards Air Force Base, CA, USA. DISTRIBUTION A: Approved for public release; distribution unlimited (PA# 16279). Topics include electric propulsion models and experiments; spacecraft-propulsion-relevant plasmas, from Hall thrusters to plumes and fluxes on components; complex reaction physics; and the FRC chamber environment.
Attraction of swimming microorganisms by solid surfaces
NASA Astrophysics Data System (ADS)
Lauga, Eric; Berke, Allison; Turner, Linda; Berg, Howard
2007-11-01
Swimming microorganisms such as spermatozoa or bacteria are usually observed to accumulate near surfaces. Here, we report on an experiment aiming to measure the distribution of smooth-swimming E. coli moving in a density-matched fluid between two glass plates. The bacterial concentration distribution is found to peak near the glass plates, in agreement with a simple physical model based on the far-field hydrodynamics of swimming cells.
Spin-Orbit Torque and Spin Pumping in YIG/Pt with Interfacial Insertion Layers (Postprint)
2018-05-03
Distribution Statement A: Approved for public release; distribution unlimited. © 2018 American Institute of Physics. Air Force Research Laboratory, Materials and Manufacturing Directorate, Wright-Patterson Air Force Base, OH. We observe a large enhancement of Gilbert damping with the insertion of Py that cannot be accounted for solely by spin pumping, revealing significant spin...
Principles and Foundations for Fractionated Networked Cyber-Physical Systems
2012-07-13
...spectrum between autonomy and cooperation. Our distributed computing model is based on distributed knowledge sharing, and makes very few assumptions but... over the computation without the need for explicit migration. Randomization techniques will make sure that enough diversity is maintained to allow... small UAV testbed consisting of 10 inexpensive quadcopters at SRI. Hardware-wise, we added heat sinks to mitigate the impact of additional heat that...
Retrieval of Atmospheric Particulate Matter Using Satellite Data Over Central and Eastern China
NASA Astrophysics Data System (ADS)
Chen, G. L.; Guang, J.; Li, Y.; Che, Y. H.; Gong, S. Q.
2018-04-01
Fine particulate matter (PM2.5) consists of particles with diameters less than or equal to 2.5 μm. Over the past few decades, regional air pollution dominated by PM2.5 has frequently occurred over Central and Eastern China. To estimate the concentration, distribution and other properties of PM2.5, retrieval models built by establishing a relationship between aerosol optical depth (AOD) and PM2.5 have been widely used, including empirical models based on statistical analysis and physical models with an explicit physical mechanism. Statistical empirical models cannot be extended to other areas or historical periods because of their dependence on ground-based observations and auxiliary data, which limits their application. In this paper, a physically based model is applied to estimate PM2.5 concentrations over Central and Eastern China from 2007 to 2016. Ground-based PM2.5 measurements were used as reference data to validate the retrieval results, and the annual variation and distribution of PM2.5 concentration over the region were then analysed. Results show that the annual average PM2.5 gradually increased and then decreased during 2007-2016, with the highest value in 2011.
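For context, the statistical (empirical) AOD-PM2.5 approach that the abstract contrasts with its physical model can be sketched in a few lines. The numbers below are invented for illustration; a real study would use collocated satellite AOD retrievals and station PM2.5 measurements:

```python
import numpy as np

# Hypothetical collocated observations: satellite AOD and station PM2.5
aod = np.array([0.2, 0.4, 0.5, 0.7, 0.9, 1.1])
pm25 = np.array([35.0, 55.0, 70.0, 95.0, 120.0, 150.0])  # ug/m^3

# Simple empirical model: PM2.5 = a * AOD + b, fitted by least squares
a, b = np.polyfit(aod, pm25, 1)
predicted = a * aod + b
```

The fitted coefficients are valid only for the region and period of the training data, which is exactly the limitation the abstract notes for statistical models.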
NASA Astrophysics Data System (ADS)
England, John F.; Julien, Pierre Y.; Velleux, Mark L.
2014-03-01
Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (≤10⁻⁴, i.e., return periods >10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km2 Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.
A TCP/IP framework for ethernet-based measurement, control and experiment data distribution
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Minny, J.
2010-11-01
A complete modular but scalable TCP/IP based scientific instrument control and data distribution system has been designed and realized. The system features an IEEE 802.3 compliant 10 Mbps Medium Access Controller (MAC) and Physical Layer Device that is suitable for the full-duplex monitoring and control of various physically widespread measurement transducers in the presence of a local network infrastructure. The cumbersomeness of exchanging and synchronizing data between the various transducer units using physical storage media led to the choice of TCP/IP as a logical alternative. The system and methods developed are scalable for broader usage over the Internet. The system comprises a PIC18f2620 and ENC28j60 based hardware and a software component written in C, Java/Javascript and Visual Basic.NET programming languages for event-level monitoring and browser user-interfaces respectively. The system exchanges data with the host network through IPv4 packets requested and received on a HTTP page. It also responds to ICMP echo, UDP and ARP requests through a user selectable integrated DHCP and static IPv4 address allocation scheme. The round-trip time, throughput and polling frequency are estimated and reported. A typical application to temperature monitoring and logging is also presented.
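A host-side poll of such an HTTP-served instrument page might look like the following sketch. The URL and the `T=23.5C` payload format are assumptions for illustration, not the paper's actual protocol:

```python
import re
import urllib.request

def poll_instrument(url: str, timeout: float = 2.0) -> str:
    """Fetch the instrument's HTTP data page and return the raw text payload."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("ascii", errors="replace")

def parse_temperature(page: str) -> float:
    """Extract a temperature reading from a page fragment like 'T=23.5C'
    (hypothetical format; a real device would define its own)."""
    match = re.search(r"T=([-+]?\d+(?:\.\d+)?)C", page)
    if match is None:
        raise ValueError("no temperature field in page")
    return float(match.group(1))
```

A logging loop would call `poll_instrument` at the chosen polling frequency and pass the payload to `parse_temperature`.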
Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp; Yamaguchi, Hajime; Kizaki, Hisao
2015-07-15
Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control probability (TCP) and normal tissue complication probability (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of the relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using the predicted and planned dose distributions under 2%/2 mm tolerance, and the physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different from those of PGI (P<.03 to P<.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different from those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. The radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI is proposed to integrate radiobiological effects into PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of the predicted dose distribution.
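For reference, the physical gamma index (PGI) that the radiobiological index extends can be sketched in one dimension. This is a simplified global-gamma version (dose difference normalized to the reference maximum); clinical analyses are 3-D and interpolate the reference distribution:

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dose_tol=0.02, dist_mm=2.0):
    """Gamma index for 1-D dose profiles: for each evaluated point, the minimum
    over reference points of sqrt((dose diff / tol)^2 + (distance / DTA)^2)."""
    x = np.arange(len(dose_ref)) * spacing_mm
    norm = dose_tol * dose_ref.max()  # global normalization (2% of max dose)
    gam = np.empty(len(dose_eval))
    for i in range(len(dose_eval)):
        dd = (dose_eval[i] - dose_ref) / norm
        dx = (x[i] - x) / dist_mm
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

# Identical profiles pass everywhere (gamma = 0); gamma <= 1 counts as passing
ref = np.array([0.1, 0.5, 1.0, 0.5, 0.1])
g = gamma_index_1d(ref, ref, spacing_mm=1.0)
```

The passing rate reported in such studies is the fraction of points with gamma ≤ 1.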
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, W; Zaghian, M; Lim, G
2015-06-15
Purpose: The current practice for considering the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is in fact a variable depending on the dose per fraction, the linear energy transfer (LET), tissue parameters, etc. In this study, we investigate the impact of variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. To calculate variable RBE, tissue-specific parameters were obtained from the literature and dose-averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and were utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared by dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets than the fixed RBE weighted dose distributions for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plan was optimized by fRBE-OPT. For organ-at-risk sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization rather than conventional physical dose based optimization in IMPT planning may improve tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients.
The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.
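The bookkeeping behind RBE weighting can be illustrated with a minimal sketch. The α, β and RBE values below are hypothetical, and this is not the study's optimization model, just the voxel-wise arithmetic it relies on:

```python
import numpy as np

def rbe_weighted_dose(physical_dose, rbe):
    """RBE-weighted (biological) dose per voxel: D_RBE = RBE * D_phys."""
    return rbe * physical_dose

def lq_effect(dose, alpha, beta):
    """Linear-quadratic biological effect per voxel: E = alpha*d + beta*d^2."""
    return alpha * dose + beta * dose**2

# Hypothetical voxel doses (Gy) and a variable RBE that rises with LET
d_phys = np.array([1.8, 2.0, 2.0, 1.9])
rbe_var = np.array([1.05, 1.10, 1.20, 1.35])

d_fixed = rbe_weighted_dose(d_phys, 1.1)      # generic RBE = 1.1
d_var = rbe_weighted_dose(d_phys, rbe_var)    # LET-dependent RBE
bio_effect = lq_effect(d_var, 0.12, 0.04)     # illustrative alpha, beta (Gy^-1, Gy^-2)
```

The deviation between `d_fixed` and `d_var` in high-LET voxels is the kind of discrepancy the abstract reports for plans optimized with a fixed RBE.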
Distant Comets in the Early Solar System
NASA Technical Reports Server (NTRS)
Meech, Karen J.
2000-01-01
The main goal of this project is to physically characterize the small outer solar system bodies. An understanding of the dynamics and physical properties of the outer solar system small bodies is currently one of planetary science's highest priorities. The measurement of the size distributions of these bodies will help constrain the early mass of the outer solar system as well as lead to an understanding of the collisional and accretional processes. A study of the physical properties of the small outer solar system bodies in comparison with comets in the inner solar system and in the Kuiper Belt will give us information about the nebular volatile distribution and small body surface processing. We will increase the database of comet nucleus sizes making it statistically meaningful (for both Short-Period and Centaur comets) to compare with those of the Trans-Neptunian Objects. In addition, we are proposing to do active ground-based observations in preparation for several upcoming space missions.
Damage and Loss Estimation for Natural Gas Networks: The Case of Istanbul
NASA Astrophysics Data System (ADS)
Çaktı, Eser; Hancılar, Ufuk; Şeşetyan, Karin; Bıyıkoǧlu, Hikmet; Şafak, Erdal
2017-04-01
Natural gas networks are among the major lifeline systems supporting human, urban and industrial activities. The continuity of gas supply is critical for almost all functions of modern life. Under natural phenomena such as earthquakes and landslides, damage to system elements may lead to explosions and fires, compromising human life and damaging the physical environment. Furthermore, disruption of the gas supply puts human activities at risk and results in economic losses. This study is concerned with the performance of one of the largest natural gas distribution systems in the world. Physical damage to Istanbul's natural gas network is estimated under the most recent probabilistic earthquake hazard models available, as well as under simulated ground motions from physics-based models. Several vulnerability functions are used in modelling damage to system elements. A first-order assessment of monetary losses to Istanbul's natural gas distribution network is also attempted.
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
Study of Parameters And Methods of LL-Ⅳ Distributed Hydrological Model in DMIP2
NASA Astrophysics Data System (ADS)
Li, L.; Wu, J.; Wang, X.; Yang, C.; Zhao, Y.; Zhou, H.
2008-05-01
The physics-based distributed hydrological model is considered an important development in the transition from traditional empirical hydrology to physical hydrology. The Hydrology Laboratory of the NOAA National Weather Service proposed the first and second phases of the Distributed Model Intercomparison Project (DMIP), an epoch-making effort. The LL distributed hydrological model has been developed through four generations since it was established in 1997 for the Fengman-I reservoir area (11,000 km2). The LL-I distributed hydrological model was born with applications to the flood-control system of Fengman-I in China. LL-II was developed with DMIP-I support and combined GIS, RS, GPS and radar rainfall measurement. LL-III was established for Applications of the LL Distributed Model to Water Resources, supported by the 973 projects of the Ministry of Science and Technology of the People's Republic of China. LL-Ⅳ was developed to address China's water problems. For the Blue River and Baron Fork River basins of DMIP-II, a convection-diffusion equation for unsaturated and saturated seepage was derived from soil-water dynamics and the continuity equation. The advantages of using the convection-diffusion equation to compute confluence include a longer predictable period, reduced memory use, fast computation and clear physical concepts. The determination of hydrological model parameters is key, including empirical coefficients and physical parameters. Parameters can be determined by empirical, inversion and optimization methods, each with advantages and disadvantages. This paper briefly introduces the LL-Ⅳ distributed hydrological model equations, and particularly introduces the parameter-determination methods and simulation results for the Blue River and Baron Fork River basins in DMIP-II.
The soil-moisture diffusion coefficient and the hydraulic conductivity appear throughout the LL-Ⅳ runoff-distribution and slope-convergence model and are determined mainly by empirical formulae. Optimization methods are used to calculate two evaporation-capacity parameters (coefficients for bare land and vegetated land), two interception parameters, and the wave velocities of overland flow, interflow and groundwater. The approach for determining the wave velocity and diffusion coefficient of river-network confluence is: (1) estimate roughness based mainly on digital information such as land use and soil texture; (2) establish the empirical formula. Another method is convection-diffusion numerical inversion.
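The convection-diffusion routing idea can be illustrated with a minimal 1-D explicit finite-difference sketch. The wave velocity and diffusion coefficient below are made up, and this is not the LL-Ⅳ formulation itself:

```python
import numpy as np

def route_wave(q0, c=1.0, d=0.1, dx=1.0, dt=0.1, steps=50):
    """Advance a discharge wave under the 1-D convection-diffusion equation
    dq/dt + c*dq/dx = d*d2q/dx2, with upwind advection (c > 0), explicit
    diffusion, and periodic boundaries for simplicity."""
    q = q0.astype(float).copy()
    for _ in range(steps):
        adv = -c * (q - np.roll(q, 1)) / dx
        dif = d * (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / dx**2
        q += dt * (adv + dif)
    return q

# An initial discharge pulse centered at cell 20 moves downstream at speed c
x = np.arange(100)
q_init = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)
q_final = route_wave(q_init)
```

With `c*dt/dx = 0.1` and `d*dt/dx**2 = 0.01` the explicit scheme is stable; after 50 steps the pulse peak has advected about `c*steps*dt = 5` cells downstream while the diffusion term attenuates and spreads it.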
Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo
2018-03-01
This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Physical energy flows through the physical layer, but all energy generation and flows are coordinated by distributed coordination algorithms on the basis of communicated information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interactive characteristics between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking account of the interactive characteristics. Third, to handle over- or under-generation, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that they are decided optimally using only relative information among neighboring nodes. Numerical simulations illustrate the validity of the proposed distributed coordination laws.
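The flavor of fully distributed coordination using only relative information among neighboring nodes can be sketched with a simple consensus iteration. This is illustrative only; the paper's actual coordination laws are not reproduced here:

```python
import numpy as np

def consensus_step(x, neighbors, eps=0.2):
    """One consensus update: each node adjusts its value using only the
    relative information x[j] - x[i] from its neighbors."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
    return x_new

# A ring of four generation nodes with unequal initial outputs
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([10.0, 4.0, 6.0, 8.0])
total = x.sum()  # symmetric updates conserve total generation
for _ in range(100):
    x = consensus_step(x, neighbors)
```

Because every exchange is pairwise and symmetric, the total is conserved while the node values converge to the network average, which is the basic mechanism behind many distributed resource-allocation laws.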
NASA Astrophysics Data System (ADS)
Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun
2017-09-01
The dose distributions from proton pencil beam scanning were calculated with FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100- and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. Comparison with measurement indicates that the inter-code differences may be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time is also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement between each code, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study demonstrates that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.
A Petri net controller for distributed hierarchical systems. Thesis
NASA Technical Reports Server (NTRS)
Peck, Joseph E.
1991-01-01
The solutions to a wide variety of problems are often best organized as a distributed hierarchical system. These systems can be graphically and mathematically modeled through the use of Petri nets, which can easily represent synchronous, asynchronous, and concurrent operations. This thesis presents a controller implementation based on Petri nets and a design methodology for the interconnection of distributed Petri nets. Two case studies are presented in which the controller operates a physical system, the Center for Intelligent Robotic Systems for Space Exploration Dual Arm Robotic Testbed.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has subsequently become possible as scene rendering technology and software have advanced. This in turn has raised questions about the validity of such complex models, with phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) potentially impacting results in the case of complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests used a distribution-comparison approach for select spectral statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. Initial comparison of the spectral distributions indicated that the shapes of the spectra of the virtual and real sites were closely matched.
On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2013-04-01
The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] Eliazar, I., Klafter, J., 2006.
Growth-collapse and decay-surge evolutions, and geometric Langevin equations, Physica A, 367, 106 - 128.
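The Kolmogorov-Smirnov procedure mentioned in the abstract can be sketched as follows, using synthetic interevent times and a reference Weibull CDF with known parameters; a catalog analysis would substitute real data and fitted parameters:

```python
import numpy as np

def weibull_cdf(t, shape, scale):
    """CDF of the Weibull distribution, F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - np.exp(-(t / scale) ** shape)

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical CDF of
    `sample` and a reference CDF."""
    x = np.sort(sample)
    n = len(x)
    f = cdf(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(float(np.max(ecdf_hi - f)), float(np.max(f - ecdf_lo)))

rng = np.random.default_rng(7)
# Synthetic interevent times from a Weibull with shape 0.8, scale 100
times = 100.0 * rng.weibull(0.8, size=2000)

d = ks_statistic(times, lambda t: weibull_cdf(t, 0.8, 100.0))
# Large-sample 5% critical value is approximately 1.36 / sqrt(n)
critical = 1.36 / np.sqrt(len(times))
```

A candidate distribution is "not rejected" when `d` falls below the critical value; trying several candidate CDFs this way reproduces the screening described in the abstract.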
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, S.
2015-12-01
In order to study energetic plasma phenomena using particle-in-cell (PIC) and Monte-Carlo simulations, we need to deal with relativistic velocity distributions in these simulations. However, numerical algorithms for dealing with relativistic distributions are not well known. In this contribution, we review basic algorithms for loading relativistic Maxwell distributions in PIC and Monte-Carlo simulations. For a stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are newly proposed in a physically transparent manner. Their acceptance efficiencies are 50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
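A sketch of this workflow is given below, combining the Sobol algorithm for the stationary Maxwell-Jüttner distribution with the 100%-efficient "flipping" step for the drift boost, as I read the published algorithms (temperatures in units of mc², velocities in units of c; drift along x):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_juttner_sobol(temp, n, rng):
    """Sample 4-velocity magnitudes u from a stationary Maxwell-Juttner
    distribution at temperature `temp` using Sobol's rejection algorithm:
    u = -T ln(x1 x2 x3), eta = u - T ln(x4), accept if eta^2 - u^2 > 1."""
    out = np.empty(n)
    count = 0
    while count < n:
        x = rng.random((n, 4))
        u = -temp * np.log(x[:, 0] * x[:, 1] * x[:, 2])
        eta = u - temp * np.log(x[:, 3])
        good = u[eta**2 - u**2 > 1.0]
        take = min(len(good), n - count)
        out[count:count + take] = good[:take]
        count += take
    return out

def boost_flipping(ux, uy, uz, beta, rng):
    """Flipping method: turn a stationary sample into a drifting one.
    Flip ux with probability -beta*vx (when positive), then Lorentz-boost."""
    gamma = np.sqrt(1.0 + ux**2 + uy**2 + uz**2)
    vx = ux / gamma
    flip = -beta * vx > rng.random(len(ux))
    ux = np.where(flip, -ux, ux)
    big_gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return big_gamma * (ux + beta * gamma), uy, uz

# Stationary sample at T = 1.0 mc^2 with isotropic directions, then boost
n = 20000
u = sample_juttner_sobol(1.0, n, rng)
phi = 2 * np.pi * rng.random(n)
cos_t = 2 * rng.random(n) - 1.0
sin_t = np.sqrt(1.0 - cos_t**2)
ux, uy, uz = u * cos_t, u * sin_t * np.cos(phi), u * sin_t * np.sin(phi)

ux_d, uy_d, uz_d = boost_flipping(ux, uy, uz, 0.5, rng)
```

A quick sanity check on the result is that the number-averaged x-velocity of the boosted sample equals the drift speed β, while the transverse velocities average to zero.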
Asymptotic formulae for likelihood-based tests of new physics
NASA Astrophysics Data System (ADS)
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
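One widely used result of this kind of analysis is the Asimov-based median discovery significance for a single counting experiment with expected signal s and background b, with no systematic uncertainties:

```python
import math

def asimov_discovery_significance(s, b):
    """Median discovery significance from the Asimov data set for a counting
    experiment: Z = sqrt(2*((s + b)*ln(1 + s/b) - s))."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

z = asimov_discovery_significance(10.0, 100.0)
```

For s much smaller than b this reduces to the familiar s/sqrt(b) approximation, which it always bounds from below.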
Guaranteeing Spoof-Resilient Multi-Robot Networks
2016-02-12
key-distribution. Our core contribution is a novel algorithm implemented on commercial Wi-Fi radios that can "sense" spoofers using the physics of... encrypted key exchange, but rather a commercial Wi-Fi card and software to implement our solution. Our virtual sensor leverages the rich physical... cheap commodity Wi-Fi radios, unlike hardware-based solutions [46, 48]. (3) It is robust to client mobility and power-scaling attacks. Finally, our
Phillips, Jeffrey
2014-01-01
A physical property inversion approach is described, based on the use of 3D (or 2D) Fourier transforms to calculate the potential field within a 3D (or 2D) volume from a known physical property distribution within the volume. Topographic surfaces and observations at arbitrary locations are easily accommodated. The limitations of the approach and applications to real data are considered.
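The core Fourier-domain idea can be illustrated for the simplest case, the vertical gravity of a thin horizontal mass layer continued upward to an observation height. This uses the standard wavenumber-domain kernel for a surface layer, not the paper's full volume algorithm:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def gravity_of_layer(sigma, dx, height):
    """Vertical gravity at `height` above a thin horizontal mass layer with
    surface density sigma(x, y) [kg/m^2], computed in the wavenumber domain:
    g(k) = 2*pi*G * sigma(k) * exp(-|k| * h)."""
    ny, nx = sigma.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kk = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    g_k = 2 * np.pi * G * np.fft.fft2(sigma) * np.exp(-kk * height)
    return np.real(np.fft.ifft2(g_k))

# Sanity check: a uniform layer gives the infinite-sheet value 2*pi*G*sigma
g_uniform = gravity_of_layer(np.full((8, 8), 100.0), dx=10.0, height=5.0)
```

The `exp(-|k|h)` factor is the upward-continuation operator; summing such layer responses over depth is one way Fourier-domain volume calculations of this kind are organized.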
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data gives a robust confirmation of our simple statistical model.
Upconversion-based receivers for quantum hacking-resistant quantum key distribution
NASA Astrophysics Data System (ADS)
Jain, Nitin; Kanter, Gregory S.
2016-07-01
We propose a novel upconversion (sum frequency generation)-based quantum-optical system design that can be employed as a receiver (Bob) in practical quantum key distribution systems. The pump governing the upconversion process is produced and utilized inside the physical receiver, making its access or control unrealistic for an external adversary (Eve). This pump facilitates several properties which permit Bob to define and control the modes that can participate in the quantum measurement. Furthermore, by manipulating and monitoring the characteristics of the pump pulses, Bob can detect a wide range of quantum hacking attacks launched by Eve.
A Study of ATLAS Grid Performance for Distributed Analysis
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Fine, Valery; Wenaus, Torre
2012-12-01
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of ground breaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc). These studies are based on mining of data archived by the PanDA workload management system.
Algorithms and Object-Oriented Software for Distributed Physics-Based Modeling
NASA Technical Reports Server (NTRS)
Kenton, Marc A.
2001-01-01
The project seeks to develop methods to more efficiently simulate aerospace vehicles. The goals are to reduce model development time, increase accuracy (e.g., by allowing the integration of multidisciplinary models), facilitate collaboration by geographically-distributed groups of engineers, support uncertainty analysis and optimization, reduce hardware costs, and increase execution speeds. These problems are the subject of considerable contemporary research (e.g., Biedron et al. 1999; Heath and Dick, 2000).
Integrating labview into a distributed computing environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasemir, K. U.; Pieck, M.; Dalesio, L. R.
2001-01-01
Because the National Instruments LabVIEW environment is easy to learn and well suited to a self-contained desktop laboratory setup, many casual programmers prefer to use it to develop their logic. An ActiveX interface is presented that allows integration into a plant-wide distributed environment based on the Experimental Physics and Industrial Control System (EPICS). This paper discusses the design decisions and provides performance information, especially considering requirements for the Spallation Neutron Source (SNS) diagnostics system.
Statistical Physics for Adaptive Distributed Control
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2005-01-01
A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.
Surficial geologic map of the Amboy 30' x 60' quadrangle, San Bernardino County, California
Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.
2010-01-01
The surficial geologic map of the Amboy 30' x 60' quadrangle presents characteristics of surficial materials for an area of approximately 5,000 km2 in the eastern Mojave Desert of southern California. This map consists of new surficial mapping conducted between 2000 and 2007, as well as compilations from previous surficial mapping. Surficial geologic units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects following deposition, and, where appropriate, the lithologic nature of the material. Many physical properties were noted and measured during the geologic mapping. This information was used to classify surficial deposits and to understand their ecological importance. We focus on physical properties that drive hydrologic, biologic, and physical processes, such as particle-size distribution (PSD) and bulk density. The database contains point data representing the locations of samples for both laboratory-determined physical properties and semiquantitative field-based information. We include the locations of all field observations and note the type of information collected in the field to assist in assessing the quality of the mapping. The publication is separated into three parts: documentation, spatial data, and printable map graphics of the database. Documentation includes this pamphlet, which provides a discussion of the surficial geology and units, and the map. Spatial data are distributed as an ArcGIS Geodatabase in Microsoft Access format and are accompanied by a readme file, which describes the database contents, and FGDC metadata for the spatial map information. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files that provide a view of the spatial database at the mapped scale.
Physical and Biological Controls of Copepod Aggregation and Baleen Whale Distribution
2011-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. The objectives of this study are to: • Elucidate the mechanisms of copepod aggregation in the Great South Channel, a major
Topping, David J.; Wright, Scott A.; Griffiths, Ronald; Dean, David
2014-01-01
As the result of a 12-year program of sediment-transport research and field testing on the Colorado River (6 stations in UT and AZ), Yampa River (2 stations in CO), Little Snake River (1 station in CO), Green River (1 station in CO and 2 stations in UT), and Rio Grande (2 stations in TX), we have developed a physically based method for measuring suspended-sediment concentration and grain size at 15-minute intervals using multifrequency arrays of acoustic-Doppler profilers. This multi-frequency method is able to achieve much higher accuracies than single-frequency acoustic methods because it allows removal of the influence of changes in grain size on acoustic backscatter. The method proceeds as follows. (1) Acoustic attenuation at each frequency is related to the concentration of silt and clay with a known grain-size distribution in a river cross section using physical samples and theory. (2) The combination of acoustic backscatter and attenuation at each frequency is uniquely related to the concentration of sand (with a known reference grain-size distribution) and the concentration of silt and clay (with a known reference grain-size distribution) in a river cross section using physical samples and theory. (3) Comparison of the suspended-sand concentrations measured at each frequency using this approach then allows theory-based calculation of the median grain size of the suspended sand and final correction of the suspended-sand concentration to compensate for the influence of changing grain size on backscatter. Although this method of measuring suspended-sediment concentration is somewhat less accurate than using conventional samplers in either the EDI or EWI methods, it is much more accurate than estimating suspended-sediment concentrations using calibrated pump measurements or single-frequency acoustics. 
Though the EDI and EWI methods provide the most accurate measurements of suspended-sediment concentration, these measurements are labor-intensive, expensive, and may be impossible to collect at time intervals shorter than those over which discharge-independent changes in suspended-sediment concentration can occur (< hours). Therefore, our physically based multi-frequency acoustic method shows promise as a cost-effective, valid approach for calculating suspended-sediment loads in rivers at a level of accuracy sufficient for many scientific and management purposes.
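The two-frequency separation at the heart of steps (1) and (2) can be illustrated with a deliberately linearized toy model (the coefficients `a`, `b`, `k` and the function name are hypothetical; the real acoustic relations are nonlinear in range and grain size):

```python
import numpy as np

def invert_two_frequency(backscatter, attenuation, a, b, k):
    """Toy two-frequency inversion: recover sand and silt-clay concentrations.

    Assumed (hypothetical) linear forward model per frequency i:
        attenuation[i] ~ k[i] * c_fines           # fines dominate attenuation
        backscatter[i] ~ a[i] * c_sand + b[i] * c_fines
    Attenuation alone fixes the fines; backscatter then yields the sand.
    """
    attenuation = np.asarray(attenuation, float)
    backscatter = np.asarray(backscatter, float)
    c_fines = float(np.mean(attenuation / np.asarray(k, float)))
    c_sand = float(np.mean((backscatter - np.asarray(b, float) * c_fines)
                           / np.asarray(a, float)))
    return c_sand, c_fines
```

The point of the sketch is structural: a single frequency gives one equation in two unknowns, while two frequencies with different sensitivities make the system solvable.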
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saïdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be adapted easily to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
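The inverse transform method the algorithm builds on can be shown in miniature: for a constant-rate jump process, inverting the exponential waiting-time CDF turns uniform random numbers into event times (this is only the textbook special case, not the paper's state-dependent construction):

```python
import math
import random

def inverse_transform_exponential(rate, u):
    # Invert the CDF F(t) = 1 - exp(-rate * t):  t = -ln(1 - u) / rate
    return -math.log(1.0 - u) / rate

def jump_times(rate, t_end, rng):
    """Event times of a constant-rate jump process on (0, t_end]."""
    t, times = 0.0, []
    while True:
        t += inverse_transform_exponential(rate, rng.random())
        if t > t_end:
            return times
        times.append(t)
```

In the fracture setting, the jump times would mark individual rupture events and the jump sizes the associated stress fluctuations; making the rate depend on the current stress state is what produces avalanche-like signatures.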
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
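The traditional SCS-CN runoff equation that the distributed method re-applies is compact enough to state directly (metric units; the 0.2 initial-abstraction ratio is the conventional default):

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """SCS-CN direct runoff depth Q (mm) for storm rainfall P (mm).

    S = 25400/CN - 254 is the potential maximum retention (mm) and
    Ia = ia_ratio * S the initial abstraction; then, for P > Ia,
        Q = (P - Ia)^2 / (P - Ia + S)
    and Q = 0 otherwise.
    """
    S = 25400.0 / CN - 254.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

In the CN-VSA approach, the runoff volume predicted by this equation is then distributed so that cells with a high topographic wetness index saturate, and generate runoff, first.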
CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals
NASA Astrophysics Data System (ADS)
Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen
A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room” and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES, and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and presenting it in a public display environment can be mitigated by the use of CLINICAL SURFACES.
Features of the organization of bread wheat chromosome 5BS based on physical mapping.
Salina, Elena A; Nesterov, Mikhail A; Frenkel, Zeev; Kiseleva, Antonina A; Timonova, Ekaterina M; Magni, Federica; Vrána, Jan; Šafář, Jan; Šimková, Hana; Doležel, Jaroslav; Korol, Abraham; Sergeeva, Ekaterina M
2018-02-09
The IWGSC strategy for construction of the reference sequence of the bread wheat genome is based on first obtaining physical maps of the individual chromosomes. Our aim is to develop and use the physical map for analysis of the organization of the short arm of wheat chromosome 5B (5BS), which bears a number of agronomically important genes, including genes conferring resistance to fungal diseases. A physical map of the 5BS arm (290 Mbp) was constructed using restriction fingerprinting and LTC software for contig assembly of 43,776 BAC clones. The resulting physical map covered ~99% of the 5BS chromosome arm (111 scaffolds, N50 = 3.078 Mb). SSR, ISBP and zipper markers were employed for anchoring the BAC clones, and from these 722 novel markers were developed based on previously obtained data from partial sequencing of 5BS. The markers were mapped using a set of Chinese Spring (CS) deletion lines, and F2 and RICL populations from a cross of CS and CS-5B dicoccoides. Three approaches were used for anchoring BAC contigs on the 5BS chromosome: clone-by-clone screening of BACs, GenomeZipper analysis, and comparison of BAC fingerprints with in silico fingerprinting of 5B pseudomolecules of T. dicoccoides. These approaches allowed us to reach a high level of BAC contig anchoring: 96% of 5BS BAC contigs were located on 5BS. An interesting pattern was revealed in the distribution of contigs along the chromosome. Short contigs (200-999 kb), containing markers for regions interrupted by tandem repeats, were mainly localized to the 5BS subtelomeric block, whereas the distribution of larger 1000-3500 kb contigs along the chromosome correlated better with the distribution of the regions syntenic to rice, Brachypodium, and sorghum, as detected by the Zipper approach.
The high fingerprinting quality, LTC software and large number of BAC clones selected by the informative markers in screening of the 43,776 clones allowed us to significantly increase the BAC scaffold length when compared with the published physical maps for other wheat chromosomes. The genetic and bioinformatics resources developed in this study provide new possibilities for exploring chromosome organization and for breeding applications.
NASA Astrophysics Data System (ADS)
Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.
2017-12-01
The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation where there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For highly temporally and spatially resolved models, this is strictly correct, but typically it is applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, which is particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially experimentally validated by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating local grain size distribution and particle entrainment and deposition rates to the long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Müller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007), such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.
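The Gaussian bed-height statistics reproduced here feed directly into probabilistic Exner formulations: the exceedance probability of the fluctuating bed surface weights how often sediment stored at a given elevation can exchange with the flow. A minimal sketch of that ingredient:

```python
import math

def prob_bed_above(y, mean_eta, sigma):
    """P(instantaneous bed elevation > y) for Gaussian bed-height fluctuations.

    mean_eta is the mean bed elevation and sigma the standard deviation of
    bed-height variations (which, per Wong et al. 2007, grows with shear
    stress). Uses the Gaussian complementary CDF via the error function.
    """
    z = (y - mean_eta) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
```

A particle resting at an elevation the bed rarely rises above is frequently exposed to the flow, while material at an elevation the bed is almost always above is rarely entrained; increasing sigma widens the band of elevations that participate in the exchange.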
NASA Astrophysics Data System (ADS)
Hillebrand, Malcolm; Paterson-Jones, Guy; Kalosakas, George; Skokos, Charalampos
2018-03-01
In modeling DNA chains, the number of alternations between Adenine-Thymine (AT) and Guanine-Cytosine (GC) base pairs can be considered as a measure of the heterogeneity of the chain, which in turn could affect its dynamics. A probability distribution function of the number of these alternations is derived for circular or periodic DNA. Since there are several symmetries to account for in the periodic chain, necklace counting methods are used. In particular, Polya's Enumeration Theorem is extended for the case of a group action that preserves partitioned necklaces. This, along with the treatment of generating functions as formal power series, allows for the direct calculation of the number of possible necklaces with a given number of AT base pairs, GC base pairs and alternations. The theoretically obtained probability distribution functions of the number of alternations are accurately reproduced by Monte Carlo simulations and fitted by Gaussians. The effect of the number of base pairs on the characteristics of these distributions is also discussed, as well as the effect of the ratios of the numbers of AT and GC base pairs.
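As a baseline for the necklace counting used in the derivation, the number of distinct circular AT/GC arrangements up to rotation follows from Burnside's lemma (this plain count ignores the alternation statistic that the paper's extension of Pólya's theorem tracks):

```python
from math import gcd

def count_necklaces(n, k=2):
    """Distinct length-n necklaces over k colors, up to rotation (Burnside).

    A rotation by r positions fixes exactly k**gcd(r, n) colorings, so
    averaging over all n rotations counts the rotation orbits (necklaces).
    """
    return sum(k ** gcd(r, n) for r in range(n)) // n
```

For n = 4 base pairs of two types this gives 6 necklaces; refining the count by the numbers of AT pairs, GC pairs, and alternations is what requires the partitioned-necklace extension described in the abstract.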
NASA Astrophysics Data System (ADS)
Havens, Scott; Marks, Danny; Kormos, Patrick; Hedrick, Andrew
2017-12-01
In the Western US and many mountainous regions of the world, critical water resources and climate conditions are difficult to monitor because the observation network is generally very sparse. The critical resource from the mountain snowpack is water flowing into streams and reservoirs that will provide for irrigation, flood control, power generation, and ecosystem services. Water supply forecasting in a rapidly changing climate has become increasingly difficult because of non-stationary conditions. In response, operational water supply managers have begun to move from statistical techniques towards the use of physically based models. As we begin to transition physically based models from research to operational use, we must address the most difficult and time-consuming aspect of model initialization: the need for robust methods to develop and distribute the input forcing data. In this paper, we present a new open source framework, the Spatial Modeling for Resources Framework (SMRF), which automates and simplifies the common forcing data distribution methods. It is computationally efficient and can be implemented for both research and operational applications. We present an example of how SMRF is able to generate all of the forcing data required to run a physically based snow model at 50-100 m resolution over regions of 1000-7000 km2. The approach has been successfully applied in real-time and historical applications for both the Boise River Basin in Idaho, USA and the Tuolumne River Basin in California, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input. SMRF has significantly streamlined the modeling workflow, decreased model set-up time from weeks to days, and made near real-time application of a physically based snow model possible.
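A forcing-data distribution step of the kind SMRF automates can be sketched with simple inverse-distance weighting from stations to grid cells (SMRF's actual interpolation methods and their options are not reproduced here; this is a generic illustration):

```python
import numpy as np

def idw_distribute(station_xy, station_vals, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of station data to grid cells."""
    station_xy = np.asarray(station_xy, float)
    station_vals = np.asarray(station_vals, float)
    out = np.empty(len(grid_xy))
    for i, p in enumerate(np.asarray(grid_xy, float)):
        d = np.linalg.norm(station_xy - p, axis=1)
        if d.min() < 1e-12:                # grid cell coincides with a station
            out[i] = station_vals[d.argmin()]
        else:
            w = d ** -power
            out[i] = float(np.sum(w * station_vals) / np.sum(w))
    return out
```

Real frameworks layer elevation adjustments (e.g. temperature lapse rates) and variable-specific methods on top of such spatial interpolation, which is exactly the bookkeeping that makes manual forcing preparation so time-consuming.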
Guo, Jianxin; Kumar, Sandeep; Chipley, Mark; Marcq, Olivier; Gupta, Devansh; Jin, Zhaowei; Tomar, Dheeraj S; Swabowski, Cecily; Smith, Jacquelynn; Starkey, Jason A; Singh, Satish K
2016-03-16
The impact of drug loading and distribution on higher order structure and physical stability of an interchain cysteine-based antibody drug conjugate (ADC) has been studied. An IgG1 mAb was conjugated with a cytotoxic auristatin payload following the reduction of interchain disulfides. The 2-D LC-MS analysis shows that there is a preference for certain isomers within the various drug to antibody ratios (DARs). The physical stability of the unconjugated monoclonal antibody, the ADC, and isolated conjugated species with specific DAR were compared using calorimetric, thermal, and chemical denaturation and molecular modeling techniques, as well as techniques to assess hydrophobicity. The DAR was determined to have a significant impact on the biophysical properties and stability of the ADC. The CH2 domain was significantly perturbed in the DAR6 species, which was attributable to quaternary structural changes as assessed by molecular modeling. At accelerated storage temperatures, the DAR6 rapidly formed higher molecular mass species, whereas the DAR2 and the unconjugated mAb were largely stable. The chemical denaturation study indicates that DAR6 may form multimers, while DAR2 and DAR4 primarily exist in monomeric forms in solution at ambient conditions. The physical state differences were correlated with a dramatic increase in the hydrophobicity and a reduction in the surface tension of the DAR6 compared to lower DAR species. Molecular modeling of the various DAR species and their conformers demonstrates that the auristatin-based linker payload directly contributes to the hydrophobicity of the ADC molecule. Higher order structural characterization provides insight into the impact of conjugation on the conformational and colloidal factors that determine the physical stability of cysteine-based ADCs, with implications for process and formulation development.
Shi, Hui-Sheng; Kan, Li-Li
2009-03-15
A study of the cementitious activity of chromium residue (CR) was carried out to formulate the properties of chromium residue-cement matrices (CRCM) prepared by blending CR with Ordinary Portland Cement (OPC). The particle size distribution and microstructure of CR were characterized instrumentally, and the physical properties and hexavalent chromium [Cr(VI)] leaching behavior of CRCM were determined experimentally. Three types of commonly used superplasticizers (sulphonated acetone formaldehyde superplasticizer (J1), polycarboxylate-based superplasticizer (J2), and naphthalene superplasticizer (J3)) were chosen to investigate their influence on the physical properties and the Cr(VI) immobilisation in the leachate of the CRCM hardened pastes. The results show that CR has a certain cementitious activity. The incorporation of CR improves the pore size distribution of CRCM. The Cr(VI) concentrations in the leachate of CRCM decrease significantly with the incorporation of J2. Among the three superplasticizers, J2 achieves the lowest Cr(VI) leaching ratio. Based on this study, it appears feasible to develop CR as a new additive for cement-based materials.
WATER QUALITY EARLY WARNING SYSTEMS FOR SOURCE WATER AND DISTRIBUTION SYSTEM MONITORING
A variety of probes for use in continuous monitoring of water quality exist. They range from single parameter chemical/physical probes to comprehensive screening systems based on whole organism responses. Originally developed for monitoring specific characteristics of water qua...
Modeling and Simulation in Support of Testing and Evaluation
1997-03-01
contains standardized automated test methodology, synthetic stimuli, and environments based on TECOM Ground Truth data and physics. The VPG is a distributed… Systems Acquisition Management (FSAM) coursebook, Defense Systems Management College, January 1994. Crocker, Charles M. “Application of the Simulation
A statistical physics perspective on criticality in financial markets
NASA Astrophysics Data System (ADS)
Bury, Thomas
2013-11-01
Stock markets are complex systems exhibiting collective phenomena and particular features such as synchronization, fluctuations distributed as power-laws, non-random structures, and similarity to neural networks. Such specific properties suggest that markets operate at a very special point. Financial markets are believed to be critical by analogy to physical systems, but little statistically founded evidence has been given. Through a data-based methodology and comparison to simulations inspired by the statistical physics of complex systems, we show that the Dow Jones and the index sets studied are not rigorously critical. However, financial systems are closer to criticality in the neighborhood of a crash.
A physically-based Mie–Grüneisen equation of state to determine hot spot temperature distributions
Kittell, David Erik; Yarrington, Cole Davis
2016-07-14
Here, a physically-based form of the Mie–Grüneisen equation of state (EOS) is derived for calculating 1d planar shock temperatures, as well as hot spot temperature distributions from heterogeneous impact simulations. This form utilises a multi-term Einstein oscillator model for specific heat, and is completely algebraic in terms of temperature, volume, an integrating factor, and the cold curve energy. Moreover, any empirical relation for the reference pressure and energy may be substituted into the equations via the use of a generalised reference function. The complete EOS is then applied to calculations of the Hugoniot temperature and simulation of hydrodynamic pore collapse using data for the secondary explosive, hexanitrostilbene (HNS). From these results, it is shown that the choice of EOS is even more significant for determining hot spot temperature distributions than planar shock states. The complete EOS is also compared to an alternative derivation assuming that specific heat is a function of temperature alone, i.e. cv(T). Temperature discrepancies on the order of 100–600 K were observed corresponding to the shock pressures required to initiate HNS (near 10 GPa). Overall, the results of this work will improve confidence in temperature predictions. By adopting this EOS, future work may be able to assign physical meaning to other thermally sensitive constitutive model parameters necessary to predict the shock initiation and detonation of heterogeneous explosives.
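The multi-term Einstein oscillator specific heat underlying the EOS has a simple closed form; each oscillator term vanishes at low temperature and saturates at its weight at high temperature (the weights and Einstein temperatures used below are placeholders, not fitted HNS values):

```python
import math

def einstein_cv(T, weights, thetas):
    """Specific heat from a multi-term Einstein oscillator model.

    cv(T) = sum_i w_i * x_i^2 * exp(x_i) / (exp(x_i) - 1)^2,  x_i = theta_i/T,
    where w_i are oscillator weights and theta_i Einstein temperatures.
    """
    cv = 0.0
    for w, theta in zip(weights, thetas):
        x = theta / T
        ex = math.exp(x)
        cv += w * x * x * ex / (ex - 1.0) ** 2
    return cv
```

Because cv varies strongly with temperature near and below the Einstein temperatures, using this model rather than a constant specific heat shifts the predicted hot spot temperatures, which is the kind of discrepancy the abstract quantifies at 100–600 K.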
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow problem for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: (a) dimensional analysis, and (b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing toward the establishment of design criteria based on large-scale geologic maps.
A physically-based retrieval of cloud liquid water from SSM/I measurements
NASA Technical Reports Server (NTRS)
Greenwald, Thomas J.; Stephens, Graeme L.; Vonder Haar, Thomas H.
1992-01-01
A simple physical scheme is proposed for retrieving cloud liquid water over the ice-free global oceans from Special Sensor Microwave/Imager (SSM/I) observations. Details of the microwave retrieval scheme are discussed, and the microwave-derived liquid water amounts are compared with the ground radiometer and AVHRR-derived liquid water for stratocumulus clouds off the coast of California. Global distributions of the liquid water path derived by the method proposed here are presented.
NASA Astrophysics Data System (ADS)
Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.
2017-10-01
In groundwater modelling, robust parameterisation of sub-surface properties is crucial to obtaining an agreeable model performance. Pilot points are an alternative in the parameterisation step for correctly configuring the distribution of parameters in a model. However, the methodologies given in current studies are considered impractical for real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of the hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution technique, the Head Zonation-based (HZB) technique, based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) using the HZB technique were constructed for an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers insights into the trade-off between restricting and maximising the number of pilot points, and a new methodology for selecting pilot point properties and the distribution method in the development of a physically-based groundwater model.
Spatio-temporal analysis of aftershock sequences in terms of Non Extensive Statistical Physics.
NASA Astrophysics Data System (ADS)
Chochlaki, Kalliopi; Vallianatos, Filippos
2017-04-01
Earth's seismicity is considered an extremely complicated process in which long-range interactions and fracturing exist (Vallianatos et al., 2016). For this reason, in order to analyze it, we use an innovative methodological approach, introduced by Tsallis (Tsallis, 1988; 2009), named Non Extensive Statistical Physics. This approach introduces a generalization of Boltzmann-Gibbs statistical mechanics and is based on the definition of the Tsallis entropy Sq, whose maximization leads to the so-called q-exponential function as the maximizing probability distribution. In the present work, we utilize the concepts of Non Extensive Statistical Physics in order to analyze the spatiotemporal properties of several aftershock sequences. Marekova (2014) suggested that the probability densities of the inter-event distances between successive aftershocks follow a beta distribution. Using the same data set, we analyze the inter-event distance distribution of several aftershock sequences in different geographic regions by calculating the non-extensive parameters that determine the behavior of the system and by fitting the q-exponential function, which expresses the degree of non-extensivity of the investigated system. Furthermore, the inter-event time distribution of the aftershocks, as well as the frequency-magnitude distribution, has been analyzed. The results support the applicability of Non Extensive Statistical Physics ideas to aftershock sequences, where a strong correlation exists along with memory effects. References: C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487. doi:10.1007/BF01016429. C. Tsallis, Introduction to nonextensive statistical mechanics: Approaching a complex world, 2009. doi:10.1007/978-0-387-85359-8. E. Marekova, Analysis of the spatial distribution between successive earthquakes in aftershock series, Annals of Geophysics, 57, 5, doi:10.4401/ag-6556, 2014. F. Vallianatos, G. 
Papadakis, G. Michas, Generalized statistical mechanics approaches to earthquakes and tectonics. Proc. R. Soc. A, 472, 20160497, 2016.
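The q-exponential that maximizes Sq has a simple closed form, reducing to the ordinary exponential as q approaches 1 and giving a power-law tail for q > 1 (a minimal scalar implementation; fitting it to inter-event data additionally requires a scale parameter):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]^(1/(1 - q)), with cutoff at 0.

    For q -> 1 this reduces to exp(x); for q > 1 and x < 0 it decays as a
    power law, the heavy-tailed shape fitted to inter-event distances/times.
    """
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0          # Tsallis cutoff
    return base ** (1.0 / (1.0 - q))
```

The fitted q quantifies the departure from Boltzmann-Gibbs behavior: q = 1 recovers the uncorrelated (exponential) case, while q > 1 signals the long-range correlations and memory effects the abstract reports.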
Evolution of Theoretical Perspectives in My Research
NASA Astrophysics Data System (ADS)
Otero, Valerie K.
2009-11-01
Over the past 10 years I have been using socio-cultural theoretical perspectives to understand how people learn physics in a highly interactive, inquiry-based physics course such as Physics and Everyday Thinking [1]. As a result of using various perspectives (e.g. Distributed Cognition and Vygotsky's Theory of Concept Formation), my understanding of how these perspectives can be useful for investigating students' learning processes has changed. In this paper, I illustrate changes in my thinking about the role of socio-cultural perspectives in understanding physics learning and describe elements of my thinking that have remained fairly stable. Finally, I will discuss pitfalls in the use of certain perspectives and discuss areas that need attention in theoretical development for PER.
A structurally based analytic model for estimation of biomass and fuel loads of woodland trees
Robin J. Tausch
2009-01-01
Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...
FleCSPH - a parallel and distributed SPH implementation based on the FleCSI framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Loiseau, Julien
2017-06-20
FleCSPH is a multi-physics compact application that exercises FleCSI parallel data structures for tree-based particle methods. In particular, FleCSPH implements a smoothed-particle hydrodynamics (SPH) solver for the solution of Lagrangian problems in astrophysics and cosmology. FleCSPH includes support for gravitational forces using the fast multipole method (FMM).
Distribution of verbal and physical violence for same and opposite genders among adolescents.
Winstok, Zeev; Enosh, Guy
2008-09-01
The present study was set up to test the perceived distribution of verbally and physically violent behaviors among same- and opposite-gender peers. More specifically, these perceived violent behaviors are examined as the outcome of adolescents' cost-risk goals. The study assumes two conflicting social goals: whereas the goal of risk reduction may motivate withdrawal from conflict and decrease the prevalence of violent events, the goal of pursuing social status may motivate initiation and/or retaliation, thus increasing the prevalence of violence. The study is based on a sample of 155 high-school students who recorded the frequency of observing violent events in their peer group over a one-week period. Findings demonstrate that for males, opponent gender had a primary effect on the distribution of violence: males exhibited violence against males more frequently than against females. This result is consistent with the assumption that males set a higher priority on pursuing social status. For females, verbal violence was more frequent than physical forms of aggression. This is consistent with the assumption that females set a higher priority on avoiding risk. These results are discussed from an evolutionary cost-risk perspective.
Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1995-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.
Confidence limits for Neyman type A-distributed events.
Morand, Josselin; Deperas-Standylo, Joanna; Urbanik, Witold; Moss, Raymond; Hachem, Sabet; Sauerwein, Wolfgang; Wojcik, Andrzej
2008-01-01
The Neyman type A distribution, a generalised, 'contagious' Poisson distribution, finds application in a number of disciplines such as biology, physics and economics. In radiation biology, it best describes the distribution of chromosomal aberrations in cells exposed to neutrons, alpha radiation or heavy ions. Intriguingly, no method had previously been developed for the calculation of confidence limits (CLs) of Neyman type A-distributed events. Here, an algorithm to calculate the 95% CL of Neyman type A-distributed events is presented. Although it was developed in response to the requirements of radiation biology, it can find application in other fields of research. The algorithm has been implemented in a PC-based computer program that can be downloaded, free of charge, from www.pu.kielce.pl/ibiol/neta.
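The distribution itself is easy to evaluate numerically: a Poisson(λ) number of clusters each contributes a Poisson(φ) number of events, giving a closed-form pmf whose cumulative sum can then be inverted for equal-tailed limits. A minimal Python sketch of that idea (the truncation bounds `j_max` and `k_max` and the simple CDF inversion are illustrative choices, not the cited algorithm):

```python
from math import exp, factorial

def neyman_a_pmf(k, lam, phi, j_max=100):
    """P(X = k) for a Neyman type A distribution: a Poisson(lam) number
    of clusters, each contributing a Poisson(phi) number of events.
    pmf: exp(-lam) * phi**k / k! * sum_j (lam*exp(-phi))**j * j**k / j!"""
    s = sum((lam * exp(-phi)) ** j * j ** k / factorial(j) for j in range(j_max))
    return exp(-lam) * phi ** k / factorial(k) * s

def neyman_a_interval(lam, phi, level=0.95, k_max=500):
    """Rough equal-tailed limits found by scanning the cumulative sum
    (a sketch; a production algorithm would treat the tail boundaries
    more carefully)."""
    alpha = (1.0 - level) / 2.0
    cum, lo, hi = 0.0, None, None
    for k in range(k_max):
        cum += neyman_a_pmf(k, lam, phi)
        if lo is None and cum >= alpha:
            lo = k
        if hi is None and cum >= 1.0 - alpha:
            hi = k
            break
    return lo, hi
```

For λ = 2 and φ = 1.5 the mean is λφ = 3, and the pmf sums to one over a modest range of k.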
NASA Astrophysics Data System (ADS)
Danilyan, G. V.; Klenke, J.; Kopach, Yu. N.; Krakhotin, V. A.; Novitsky, V. V.; Pavlov, V. S.; Shatalov, P. B.
2014-06-01
The results of an experiment devoted to searches for effects of rotation of fissioning nuclei in the angular distributions of prompt neutrons and gamma rays originating from the polarized-neutron-induced fission of 233U nuclei are presented. The effects discovered in these angular distributions are opposite in sign to their counterparts in the polarized-neutron-induced fission of 235U nuclei. This is at odds with data on the relative signs of respective effects in the angular distribution of alpha particles from the ternary fission of the same nuclei and may be indicative of problems in the model currently used to describe the effect in question. The report on which this article is based was presented at the seminar held at the Institute of Theoretical and Experimental Physics and dedicated to the 90th anniversary of the birth of Yu.G. Abov, corresponding member of Russian Academy of Sciences, Editor in Chief of the journal Physics of Atomic Nuclei.
Are X-rays the key to integrated computational materials engineering?
Ice, Gene E.
2015-11-01
The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.
Galaxy Distribution in Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Okamoto, T.; Yachi, S.; Habe, A.
A beta-discrepancy has been pointed out from comparison of optical and X-ray observations of clusters of galaxies. To examine the physical reason for the beta-discrepancy, we use an N-body simulation containing two components: dark particles and galaxies, which are identified using an adaptive-linking friends-of-friends technique at a certain redshift. The gas component is not included here, since the gas distribution follows the dark matter distribution in dark halos (Julio F. Navarro, Carlos S. Frenk and Simon D. M. White 1995). We find that the galaxy distribution follows the dark matter distribution, so the beta-discrepancy does not exist; this result is consistent with the interpretation of the beta-discrepancy by Bahcall and Lubin (1994), which was based on recent observations.
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases, which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
Realization of Metamaterial-Based Devices: Mathematical Theory and Physical Demonstration
2010-02-25
Sponsor/Monitor's Report Number(s): AFRL-SR-AR-TR-10-0097. Distribution/Availability Statement: Distribution A, Approved for Public Release.
An All-Solid Cryocooler to 100K Based on Optical Refrigeration in Yb:YLF Crystals
2014-05-06
Mansoor Sheik-Bahae, University of New Mexico, Department of Physics and Astronomy, 1919 Lomas Blvd. NE, Albuquerque, NM 87131. Final Report, 6 May 2014. Program Element Number: 62601F; Project Number: 8809. Approved for public release; distribution is unlimited.
R&D of Energetic Ionic Liquids
2011-11-01
AF-M315E is a US Air Force ionic-liquid-based monopropellant with significant physical-property and performance advantages. Vacuum specific impulse (lbf·s/lbm, ε = 50:1, Pc = 300 psi) for LMP-103S / AF-M315E / hydrazine: 252 (theor.) / 235 (del.) / 266. Toxicity assessment of AF-M315E: LD50 (rat) 550 mg/kg versus 60 mg/kg for hydrazine; dermal irritation (rabbit) also tested. Composition data (e.g. ammonia 3-6%, H2O balance) are tabulated in the report. Distribution A: Public Release, Distribution unlimited.
Variable Order and Distributed Order Fractional Operators
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
2002-01-01
Many physical processes appear to exhibit fractional order behavior that may vary with time or space. The continuum of order in the fractional calculus allows the order of the fractional operator to be considered as a variable. This paper develops the concept of variable and distributed order fractional operators. Definitions based on the Riemann-Liouville definitions are introduced and behavior of the operators is studied. Several time domain definitions that assign different arguments to the order q in the Riemann-Liouville definition are introduced. For each of these definitions various characteristics are determined. These include: time invariance of the operator, operator initialization, physical realization, linearity, operational transforms, and memory characteristics of the defining kernels. A measure (m2) for memory retentiveness of the order history is introduced. A generalized linear argument for the order q allows the concept of "tailored" variable order fractional operators whose memory may be chosen for a particular application. Memory retentiveness (m2) and order dynamic behavior are investigated and applications are shown. The concept of distributed order operators, where the order of the time based operator depends on an additional independent (spatial) variable, is also forwarded. Several definitions and their Laplace transforms are developed, analysis methods with these operators are demonstrated, and examples shown. Finally, operators of multivariable and distributed order are defined, and their various applications are outlined.
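As a concrete illustration of order-as-a-variable, the Grünwald-Letnikov series gives a simple numerical approximation of a fractional derivative in which the order q can be chosen anew at each evaluation time. The Python sketch below is a generic textbook discretization under that idea, not the specific operators defined in this paper:

```python
def gl_fractional_derivative(f, t, q, h=1e-3):
    """Grünwald-Letnikov approximation of the order-q fractional derivative
    of f at time t (lower terminal 0). Because q is supplied per call, a
    time-varying order q(t) is simply a different q at each evaluation time.
    D^q f(t) ~ h**(-q) * sum_j w_j f(t - j*h), with w_0 = 1 and the
    recurrence w_j = w_{j-1} * (1 - (q + 1)/j)."""
    n = round(t / h)
    w = 1.0
    acc = w * f(t)
    for j in range(1, n + 1):
        w *= 1.0 - (q + 1.0) / j
        acc += w * f(t - j * h)
    return acc / h ** q
```

For f(t) = t this recovers the classical results: order 1 gives 1, order 0 returns f itself, and order 1/2 approximates 2/sqrt(pi) at t = 1.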
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases, which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
COMPARISON OF LARGE RIVER SAMPLING METHOD USING DIATOM METRICS
We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principal Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...
COMPARISON OF LARGE RIVER SAMPLING METHODS ON ALGAL METRICS
We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principal Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...
NASA Astrophysics Data System (ADS)
Schuller, Ivan; Wargo, Rich
2014-03-01
We will present the first in a series of videos designed and produced specifically as a pilot for the YouTube audience to playfully explore interesting and unusual phenomena that physics reveals, and their uses in modern life. No talking heads, no pedants, no complicated theory: rather, a visually captivating and often kooky comical look at the exclusion principle, entanglement, tunneling and the retinue of exceedingly strange things that happen in classical and quantum physics, and how we understand and actually use this weirdness each and every day. Produced by the UC San Diego-based creative partnership between an active physicist and an established university-based science media producer responsible for the highly successful and comical nanoscience caper When Things Get Small, this will pilot an ongoing series with the specific goal of entertaining and engaging audiences of all ages. The series has planned distribution and marketing on YouTube through the unique programming and distribution capacities of University of California Television, to commence in 2013. Supported by APS, UCSD-Center for Advanced Nanoscience and UCTV.
Statistical homogeneity tests applied to large data sets from high energy physics experiments
NASA Astrophysics Data System (ADS)
Trusina, J.; Franc, J.; Kůs, V.
2017-12-01
Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, i.e. to check whether they have the same distribution as measured data from a particle detector. The Kolmogorov-Smirnov, χ², and Anderson-Darling tests are the most widely used techniques for assessing the samples' homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One way of testing homogeneity is through binning; if we do not want to lose any information, however, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on numerical analysis focusing on estimation of the type-I error and power of the tests. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
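To make the weighted-EDF idea concrete, here is a minimal sketch of a weighted two-sample Kolmogorov-Smirnov statistic in Python. It illustrates the binning-free approach in general; it is not the authors' code and omits the asymptotic critical values they derive:

```python
import numpy as np

def weighted_ks_statistic(x1, w1, x2, w2):
    """KS statistic between two weighted samples, computed as the maximum
    absolute difference of their weighted empirical distribution functions
    evaluated on the pooled sample points. Each sample's weights are
    normalized to sum to one."""
    x1, w1 = np.asarray(x1, float), np.asarray(w1, float)
    x2, w2 = np.asarray(x2, float), np.asarray(w2, float)
    grid = np.sort(np.concatenate([x1, x2]))

    def wecdf(x, w, pts):
        # Weighted ECDF: cumulative normalized weight of points <= pts.
        order = np.argsort(x)
        xs, ws = x[order], np.cumsum(w[order]) / w.sum()
        idx = np.searchsorted(xs, pts, side="right") - 1
        return np.where(idx >= 0, ws[np.clip(idx, 0, None)], 0.0)

    return np.max(np.abs(wecdf(x1, w1, grid) - wecdf(x2, w2, grid)))
```

Identical samples (or samples whose weighted ECDFs coincide) give a statistic of zero; fully separated samples give one.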
25 Years of Self-Organized Criticality: Solar and Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Crosby, Norma B.; Dimitropoulou, Michaila; Georgoulis, Manolis K.; Hergarten, Stefan; McAteer, James; Milovanov, Alexander V.; Mineshige, Shin; Morales, Laura; Nishizuka, Naoto; Pruessner, Gunnar; Sanchez, Raul; Sharma, A. Surja; Strugarek, Antoine; Uritsky, Vadim
2016-01-01
Shortly after the seminal paper "Self-Organized Criticality: An explanation of 1/f noise" by Bak et al. (1987), the idea was applied to solar physics, in "Avalanches and the Distribution of Solar Flares" by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. SOC concepts have been applied through numerical cellular automaton simulations, through analytical calculations of statistical (power-law-like) distributions based on physical scaling laws, and through observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamic (MHD) processes. The novel applications also stimulated vigorous debates about the discrimination between SOC models, SOC-like, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme.
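The prototypical cellular automaton behind much of this literature is the Bak-Tang-Wiesenfeld sandpile, which self-organizes to a critical state with power-law-distributed avalanche sizes. A toy Python sketch (grid size, grain count and toppling threshold of 4 are the standard illustrative choices):

```python
import numpy as np

def sandpile_avalanche_sizes(n=30, grains=20000, seed=0):
    """Minimal BTW sandpile: drop grains one at a time at random sites,
    topple any site holding >= 4 grains (one grain to each of its four
    neighbors; grains falling off the edge are lost), and record the
    number of topplings (avalanche size) triggered by each drop."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, size=2)
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if len(unstable) == 0:
                break
            for a, b in unstable:
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db
                    if 0 <= x < n and 0 <= y < n:
                        z[x, y] += 1
        sizes.append(size)
    return z, np.array(sizes)
```

After many drops the grid is everywhere below the toppling threshold (the critical state), and a histogram of `sizes` approximates the familiar power-law avalanche-size distribution.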
Simulating galactic dust grain evolution on a moving mesh
NASA Astrophysics Data System (ADS)
McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul
2018-05-01
Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
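For the drag coupling specifically, the stability issue in stiff regimes (stopping time much shorter than the hydrodynamic time-step) is commonly handled by treating the drag term implicitly. A deliberately simplified first-order sketch of such an update follows; the paper's actual integrator is second-order, so this only illustrates the principle:

```python
def drag_step(v_dust, v_gas, t_stop, dt):
    """One implicit (backward-Euler) update for the linear drag equation
    dv/dt = -(v_dust - v_gas) / t_stop. Solving the update equation for
    the new velocity gives a scheme that relaxes toward v_gas without
    overshoot even when dt >> t_stop."""
    return (v_dust + dt / t_stop * v_gas) / (1.0 + dt / t_stop)
```

Iterating `drag_step` drives the dust velocity monotonically toward the gas velocity, which an explicit update would not survive at this time-step-to-stopping-time ratio.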
2014-01-01
Background To assess physical behaviour, including physical activity and sedentary behaviour, of ambulatory adolescents and young adults with cerebral palsy (CP). We compared participant physical behaviour to that of able-bodied persons and assessed differences related to Gross Motor Functioning Classification System (GMFCS) level and CP distribution (unilateral/bilateral). Methods In 48 ambulatory persons aged 16 to 24 years with spastic CP and in 32 able-bodied controls, physical behaviour was objectively determined with an accelerometer-based activity monitor. Total duration, intensity and type of physical activity were assessed and sedentary time was determined (lying and sitting). Furthermore, distribution of walking bouts and sitting bouts was specified. Results Adolescents and young adults with CP spent 8.6% of 24 hours physically active and 79.5% sedentary, corresponding with respectively 123 minutes and 1147 minutes per 24 hours. Compared to able-bodied controls, persons with CP participated 48 minutes less in physical activities (p < 0.01) and spent 80 minutes more sedentary per 24 hours (p < 0.01). Physical behaviour was not different between persons with GMFCS level I and II and only number of short sitting bouts were significantly more prevalent in persons with bilateral CP compared to unilateral CP (p < 0.05). Conclusions Ambulatory adolescents and young adults with CP are less physically active and spend more time sedentary compared to able-bodied persons, suggesting that this group may be at increased risk for health problems related to less favourable physical behaviour. Trial registration Nederlands trial register: NTR1785 PMID:24708559
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
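The "peak at the nucleation point, zero at the border" requirement can be illustrated with a toy slip function: a Gaussian centered on the nucleation point, tapered by an elliptical window. This is a hypothetical stand-in; the paper's actual skewed-Gaussian parameterization differs:

```python
import numpy as np

def toy_slip(x, y, a, b, x0, y0, peak, sigma):
    """Illustrative slip over an elliptical patch with semi-axes a, b:
    a Gaussian centered on the nucleation point (x0, y0), multiplied by
    an elliptical taper that vanishes on and outside the patch border,
    and normalized so the slip at the nucleation point equals `peak`."""
    def shape(px, py):
        r2 = (px / a) ** 2 + (py / b) ** 2      # patch border at r2 = 1
        taper = np.clip(1.0 - r2, 0.0, None)    # zero on/outside the border
        g = np.exp(-((px - x0) ** 2 + (py - y0) ** 2) / (2.0 * sigma ** 2))
        return g * taper
    return peak * shape(x, y) / shape(x0, y0)
```

Slip equals the peak value at the nucleation point, decays smoothly away from it, and is identically zero on and beyond the elliptical border.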
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
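The general shape of such a model, an excess-velocity law scaled by an empirical erodibility factor, can be sketched as follows. The functional form and every coefficient below are illustrative placeholders, not the paper's calibrated parameterization:

```python
def bank_erosion_rate(u, u_crit, tree_cover, bank_height,
                      k0=1.0, c_tree=0.5, c_height=0.1):
    """Sketch of an excess-velocity erosion law with an empirical
    erodibility factor: rate = k * max(u - u_crit, 0), where the
    erodibility k decreases with tree cover (a proxy for root
    reinforcement) and increases with bank height. k0, c_tree and
    c_height are hypothetical coefficients for illustration only."""
    k = k0 * (1.0 - c_tree * tree_cover) * (1.0 + c_height * bank_height)
    return max(k, 0.0) * max(u - u_crit, 0.0)
```

Rates are zero below the velocity threshold and are damped where dense tree cover implies high root density, mirroring the qualitative behavior described above.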
Simulation study on the impact of air distribution on formaldehyde pollutant distribution in room
NASA Astrophysics Data System (ADS)
Wu, Jingtao; Wang, Jun; Cheng, Zhu
2017-01-01
In this paper, a physical and mathematical model of a room was established based on the Airpak software. The velocity distribution, air age distribution, formaldehyde concentration distribution, and Predicted Mean Vote (PMV) and Predicted Percentage Dissatisfied (PPD) distributions in a hospital ward were simulated. In addition, the supply air volume was doubled, and the resulting changes in the indoor pollutant concentration distribution and in the air age were simulated. The simulations can guide the placement of the air supply port, which is important for the comfort of the occupants. Finally, the simulated pollutant concentration distributions show that when the indoor pollutant concentration is high, the supply air flow rate should be increased appropriately so that indoor pollutants are exhausted as quickly as possible, which benefits occupant health.
NASA Astrophysics Data System (ADS)
Wang, Jun; Guo, Jin; Xie, Feng; Wang, Guosheng; Wu, Haoran; Song, Man; Yi, Yuanyuan
2016-10-01
This paper presents a comparative analysis of the influence of the doping level and doping profile of the active region on the zero-bias photoresponse characteristics of GaN-based p-i-n ultraviolet (UV) photodetectors under front and back illumination. A two-dimensional physically-based computer simulation of GaN-based p-i-n UV photodetectors is presented. We implemented GaN material properties and physical models taken from the literature. It is shown that the absorption layer doping profile has a notable impact on the photoresponse of the device. In particular, the effect of the doping concentration and distribution of the absorption layer on photoresponse is discussed in detail. In the case of front illumination, compared to uniform n-type doping, the device with an n-type Gaussian doping profile in the absorption layer has higher responsivity. Compared to front illumination, the back-illuminated detector with a p-type doping profile in the absorption layer has a higher maximum photoresponse, while the Gaussian doping profiles have a weaker ability to enhance the device responsivity. It is demonstrated that the electric field distribution, mobility degradation, and recombination are jointly responsible for the variance of the photoresponse. Our work enriches the understanding and utilization of GaN-based p-i-n UV photodetectors.
Zhu, JiangLing; Shi, Yue; Fang, LeQi; Liu, XingE; Ji, ChengJun
2015-06-01
The physical and mechanical properties of wood affect the growth and development of trees, and also act as the main criteria when determining wood usage. Understanding the patterns and controls of wood physical and mechanical properties can benefit forestry management and provide a basis for wood application and forest tree breeding. However, current studies on wood properties mainly focus on wood density and ignore other wood physical properties. In this study, we established a comprehensive database of wood physical properties across major tree species in China. Based on this database, we explored the spatial patterns and driving factors of wood properties across major tree species in China. Our results showed that (i) compared with wood density, air-dried density, the tangential shrinkage coefficient and resilience provide more accuracy and higher explanatory power when used as evaluation indices of wood physical properties; (ii) among life form, climatic and edaphic variables, life form is the dominant factor shaping spatial patterns of wood physical properties, climatic factors are next, and edaphic factors have the least effect, suggesting that the effects of climatic factors on spatial variations of wood properties are indirectly induced by their effects on species distribution.
Subatomic-scale force vector mapping above a Ge(001) dimer using bimodal atomic force microscopy
NASA Astrophysics Data System (ADS)
Naitoh, Yoshitaka; Turanský, Robert; Brndiar, Ján; Li, Yan Jun; Štich, Ivan; Sugawara, Yasuhiro
2017-07-01
Probing physical quantities on the nanoscale that have directionality, such as magnetic moments, electric dipoles, or the force response of a surface, is essential for characterizing functionalized materials for nanotechnological device applications. Currently, such physical quantities are usually experimentally obtained as scalars. To investigate the physical properties of a surface on the nanoscale in depth, these properties must be measured as vectors. Here we demonstrate a three-force-component detection method, based on multi-frequency atomic force microscopy on the subatomic scale and apply it to a Ge(001)-c(4 × 2) surface. We probed the surface-normal and surface-parallel force components above the surface and their direction-dependent anisotropy and expressed them as a three-dimensional force vector distribution. Access to the atomic-scale force distribution on the surface will enable better understanding of nanoscale surface morphologies, chemical composition and reactions, probing nanostructures via atomic or molecular manipulation, and provide insights into the behaviour of nano-machines on substrates.
NASA Astrophysics Data System (ADS)
Liu, Yi-Cheng; Byrnes, Tim
2016-11-01
We investigate alternative microcavity structures for exciton-polaritons consisting of photonic crystals instead of distributed Bragg reflectors. Finite-difference time-domain simulations and scattering transfer matrix methods are used to evaluate the cavity performance. The results are compared with conventional distributed Bragg reflectors. We find that in terms of the photon lifetime, the photonic crystal based microcavities are competitive, with typical lifetimes in the region of ∼20 ps being achieved. The photonic crystal microcavities have the advantage that they are compact and are frequency adjustable, showing that they are viable to investigate exciton-polariton condensation physics.
Wigner distributions for an electron
NASA Astrophysics Data System (ADS)
Kumar, Narinder; Mondal, Chandan
2018-06-01
We study the Wigner distributions for a physical electron, which reveal the multidimensional images of the electron. The physical electron is considered as a composite system of a bare electron and a photon. The Wigner distributions for the unpolarized, longitudinally polarized and transversely polarized electron are presented in the transverse-momentum plane as well as in the impact-parameter plane. The spin-spin correlations between the bare electron and the physical electron are discussed. We also evaluate all the leading-twist generalized transverse momentum distributions (GTMDs) for the electron.
TRIQS: A toolbox for research on interacting quantum systems
NASA Astrophysics Data System (ADS)
Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka
2015-11-01
We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.
Sound propagation and absorption in foam - A distributed parameter model.
NASA Technical Reports Server (NTRS)
Manson, L.; Lieberman, S.
1971-01-01
Liquid-based foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained, and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
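The transmission-line mapping invoked here is the standard one: for series impedance R + iωL and shunt admittance G + iωC per unit length, the characteristic impedance and propagation constant follow directly. A short Python sketch of that generic relationship (not the paper's foam-specific parameter values):

```python
import cmath

def line_parameters(R, L, G, C, omega):
    """Characteristic impedance Z0 and propagation constant gamma of a
    distributed-parameter transmission line with per-unit-length series
    resistance R, inductance L, shunt conductance G and capacitance C:
    Z0 = sqrt((R + i*w*L) / (G + i*w*C)),
    gamma = sqrt((R + i*w*L) * (G + i*w*C)) = alpha + i*beta."""
    zs = complex(R, omega * L)   # series impedance per unit length
    yp = complex(G, omega * C)   # shunt admittance per unit length
    z0 = cmath.sqrt(zs / yp)
    gamma = cmath.sqrt(zs * yp)  # real part: attenuation, imag part: phase
    return z0, gamma
```

In the lossless limit (R = G = 0) this reduces to the familiar Z0 = sqrt(L/C) with zero attenuation; in the foam analog, losses map onto the acoustic absorption.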
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail
This study presents the derivation of ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time stepping scheme is adopted for numerical solution of various benchmark problems with known solutions. It paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.
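In the peridynamic picture the discrete update is nonlocal but simple: each material point exchanges heat with all points inside its horizon, at a rate set by the temperature difference across the bond. A 1-D explicit-time-stepping toy follows; the constant micro-conductivity used here is a hypothetical simplification, not the ordinary state-based formulation derived in the study:

```python
import numpy as np

def peridynamic_heat_step(T, kappa, horizon, dx, dt):
    """One explicit time step of a 1-D bond-based peridynamic heat
    conduction sketch: point i exchanges heat with every point j within
    its horizon, with rate kappa * (T[j] - T[i]) / |xi| per unit length,
    where xi is the bond length. Pairwise antisymmetry conserves the
    total heat content."""
    n = len(T)
    m = int(horizon / dx)
    dT = np.zeros(n)
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j != i:
                xi = abs(j - i) * dx
                dT[i] += kappa * (T[j] - T[i]) / xi * dx
    return T + dt * dT
```

Each step conserves the total heat and smooths the temperature field, qualitatively reproducing classical diffusion as the horizon shrinks.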
Optimizing Power–Frequency Droop Characteristics of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guggilam, Swaroop S.; Zhao, Changhong; Dall'Anese, Emiliano
This paper outlines a procedure to design power-frequency droop slopes for distributed energy resources (DERs) installed in distribution networks to optimally participate in primary frequency response. In particular, the droop slopes are engineered such that DERs respond in proportion to their power ratings and are not unfairly penalized in power provisioning based on their location in the distribution network. The main contribution of our approach is that a specified level of frequency regulation can be guaranteed at the feeder head, while ensuring that the outputs of individual DERs conform to a well-defined notion of fairness. The approach we adopt leverages an optimization-based perspective and suitable linearizations of the power-flow equations to embed notions of fairness and information regarding the physics of the power flows within the distribution network into the droop slopes. Time-domain simulations from a differential algebraic equation model of the 39-bus New England test-case system augmented with three instances of the IEEE 37-node distribution network with frequency-sensitive DERs are provided to validate our approach.
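Stripped of the power-flow linearization, the fairness notion reduces to droop slopes proportional to DER ratings, with the proportionality constant fixed by the desired aggregate feeder response. A deliberately simplified sketch illustrating the sharing rule only, not the paper's network-aware optimization:

```python
def design_droop_slopes(ratings, feeder_mw_per_hz):
    """Assign each DER a droop slope (MW per Hz) proportional to its
    power rating, so the slopes sum to the desired aggregate response
    at the feeder head."""
    total = sum(ratings)
    return [feeder_mw_per_hz * r / total for r in ratings]

def droop_response(slopes, delta_f):
    """Power adjustment of each DER for a frequency deviation delta_f
    (Hz); an under-frequency event (delta_f < 0) increases output."""
    return [-m * delta_f for m in slopes]
```

With ratings 1, 2 and 3 MW and a 6 MW/Hz feeder-level target, a 0.1 Hz under-frequency event elicits 0.1, 0.2 and 0.3 MW, i.e. exact proportional sharing summing to the guaranteed aggregate.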
NASA Technical Reports Server (NTRS)
Su, Ching-Hua; Feth, S.; Hirschfeld, D.; Smith, T. M.; Wang, Ling Jun; Volz, M. P.; Lehoczky, S. L.
1999-01-01
ZnSe crystals were grown by the physical vapor transport technique under horizontal and vertical (stabilized and destabilized) configurations. Secondary ion mass spectroscopy and photoluminescence measurements were performed on the grown ZnSe samples to map the distributions of [Si], [Fe], [Cu], [Al] and [Li or Na] impurities as well as the Zn vacancy, [V (sub Zn)]. Annealing of ZnSe under controlled Zn pressures was studied to correlate the measured photoluminescence emission intensity with the equilibrium Zn partial pressure. In the horizontally grown crystals, segregation of [Si], [Fe], [Al] and [V (sub Zn)] was observed along the gravity vector direction, whereas in the vertically stabilized grown crystal the segregation of these point defects was radially symmetric. No apparent pattern was observed in the measured distributions of the vertically destabilized grown crystal. The observed segregations in the three growth configurations were interpreted in terms of possible buoyancy-driven convection in the vapor phase.
Spatial Inference for Distributed Remote Sensing Data
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Katzfuss, M.; Nguyen, H.
2014-12-01
Remote sensing data are inherently spatial, and a substantial portion of their value for scientific analyses derives from the information they can provide about spatially dependent processes. Geophysical variables such as atmospheric temperature, cloud properties, humidity, aerosols and carbon dioxide all exhibit spatial patterns, and satellite observations can help us learn about the physical mechanisms driving them. However, remote sensing observations are often noisy and incomplete, so inferring properties of true geophysical fields from them requires some care. These data can also be massive, which is both a blessing and a curse: using more data drives uncertainties down, but also drives costs up, particularly when data are stored on different computers or in different physical locations. In this talk I will discuss a methodology for spatial inference on massive, distributed data sets that does not require moving large volumes of data. The idea is based on a combination of techniques, including modeling spatial covariance structures with low-rank covariance matrices, and distributed estimation in sensor or wireless networks.
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
Briassoulis, Demetres; Babou, Epifania; Hiskakis, Miltiadis; Scarascia, Giacomo; Picuno, Pietro; Guarde, Dorleta; Dejean, Cyril
2013-12-01
A review of agricultural plastic waste generation and consolidation in Europe is presented. A detailed geographical mapping of the agricultural plastic use and waste generation in Europe was conducted focusing on areas of high concentration of agricultural plastics. Quantitative data and analysis of the agricultural plastic waste generation by category, geographical distribution and compositional range, and physical characteristics of the agricultural plastic waste per use and the temporal distribution of the waste generation are presented. Data were collected and cross-checked from a variety of sources, including European, national and regional services and organizations, local agronomists, retailers and farmers, importers and converters. Missing data were estimated indirectly based on the recorded cultivated areas and the characteristics of the agricultural plastics commonly used in the particular regions. The temporal distribution, the composition and physical characteristics of the agricultural plastic waste streams were mapped by category and by application. This study represents the first systematic effort to map and analyse agricultural plastic waste generation and consolidation in Europe.
Ji, Chen-Chen; Xu, Mao-Wen; Bao, Shu-Juan; Cai, Chang-Jun; Lu, Zheng-Jiang; Chai, Hui; Yang, Fan; Wei, Hua
2013-10-01
Homogeneously distributed self-assembling hybrid graphene-based aerogels with 3D interconnected pores, employing three types of carbohydrates (glucose, β-cyclodextrin, and chitosan), have been fabricated by a simple hydrothermal route. Using three types of carbohydrates as morphology-orienting agents and reductants can effectively tailor the microstructures, physical properties, and electrochemical performances of the products. The effects of different carbohydrates on graphene oxide reduction to form graphene-based aerogels with different microcosmic morphologies and physical properties were also systematically discussed. The electrochemical behaviors of all graphene-based aerogel samples showed remarkably strong and stable performances, which indicated that all the 3D interpenetrating microstructure graphene-based aerogel samples with well-developed porous nanostructures and interconnected conductive networks could provide fast ionic channels for electrochemical energy storage. These results demonstrate that this strategy would offer an easy and effective way to fabricate graphene-based materials. Copyright © 2013 Elsevier Inc. All rights reserved.
Fractal Analysis of Permeability of Unsaturated Fractured Rocks
Jiang, Guoping; Shi, Wei; Huang, Lili
2013-01-01
A physical conceptual model for water retention in fractured rocks is derived while taking into account the effect of pore size distribution and tortuosity of capillaries. The formula of calculating relative hydraulic conductivity of fractured rock is given based on fractal theory. It is an issue to choose an appropriate capillary pressure-saturation curve in the research of unsaturated fractured mass. The geometric pattern of the fracture bulk is described based on the fractal distribution of tortuosity. The resulting water content expression is then used to estimate the unsaturated hydraulic conductivity of the fractured medium based on the well-known model of Burdine. It is found that for large enough ranges of fracture apertures the new constitutive model converges to the empirical Brooks-Corey model. PMID:23690746
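For reference, the empirical Brooks-Corey retention curve and the Burdine relative permeability it implies, which the fractal model above converges to for wide aperture ranges, can be sketched as follows. This is a minimal illustration, not the paper's fractal formulation; the entry pressure `pd` and pore-size index `lam` values are hypothetical.

```python
# Minimal sketch of the Brooks-Corey / Burdine constitutive pair
# (illustration only; parameter values are made up).

def effective_saturation(pc, pd, lam):
    """Brooks-Corey retention curve: Se = (pd/pc)**lam for pc > pd, else 1."""
    return 1.0 if pc <= pd else (pd / pc) ** lam

def burdine_kr(se, lam):
    """Relative hydraulic conductivity via Burdine: kr = Se**((2 + 3*lam)/lam)."""
    return se ** ((2.0 + 3.0 * lam) / lam)

se = effective_saturation(2.0, 1.0, 2.0)   # Se = (1/2)**2 = 0.25
kr = burdine_kr(0.5, 2.0)                  # 0.5**4 = 0.0625
```

Below the entry pressure the medium stays fully saturated (Se = 1, kr = 1); above it, relative conductivity drops off steeply as a power of saturation.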
Web Information Systems for Monitoring and Control of Indoor Air Quality at Subway Stations
NASA Astrophysics Data System (ADS)
Choi, Gi Heung; Choi, Gi Sang; Jang, Joo Hyoung
In crowded subway stations, indoor air quality (IAQ) is a key factor for ensuring the safety, health and comfort of passengers. In this study, a framework for a web-based information system in a VDN environment for monitoring and control of IAQ in subway stations is suggested. Since physical variables describing IAQ need to be closely monitored and controlled at multiple locations in subway stations, the concept of a distributed monitoring and control network using wireless media needs to be implemented. Connecting remote wireless sensor networks and device (LonWorks) networks to the IP network based on the concept of VDN can provide powerful, integrated, distributed monitoring and control performance, making a web-based information system possible.
Quantum key distribution with passive decoy state selection
NASA Astrophysics Data System (ADS)
Mauerer, Wolfgang; Silberhorn, Christine
2007-05-01
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
2016-01-01
A mere hyperbolic law, like the Zipf power law, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed based on theoretical physics arguments, starting from the Yule-Simon distribution. A modeling approach is proposed leading to a universal form. A theoretical suggestion for the “best (or optimal) distribution” is provided through an entropy argument. The ranking of areas by the number of cities in various countries, and some sport competition rankings, serve as the present illustrations. PMID:27812192
Supporting Collective Inquiry: A Technology Framework for Distributed Learning
NASA Astrophysics Data System (ADS)
Tissenbaum, Michael
This design-based study describes the implementation and evaluation of a technology framework to support smart classrooms and Distributed Technology Enhanced Learning (DTEL) called SAIL Smart Space (S3). S3 is an open-source technology framework designed to support students engaged in inquiry investigations as a knowledge community. To evaluate the effectiveness of S3 as a generalizable technology framework, a curriculum named PLACE (Physics Learning Across Contexts and Environments) was developed to support two grade-11 physics classes (n = 22; n = 23) engaged in a multi-context inquiry curriculum based on the Knowledge Community and Inquiry (KCI) pedagogical model. This dissertation outlines three initial design studies that established a set of design principles for DTEL curricula and related technology infrastructures. These principles guided the development of PLACE, a twelve-week inquiry curriculum in which students drew upon their community-generated knowledge base as a source of evidence for solving ill-structured physics problems based on the physics of Hollywood movies. During the culminating smart classroom activity, the S3 framework played a central role in orchestrating student activities, including managing the flow of materials and students using real-time data mining and intelligent agents that responded to emergent class patterns. S3 supported students' construction of knowledge through the use of individual, collective, and collaborative scripts and technologies, including tablets and interactive large-format displays. Aggregate and real-time ambient visualizations helped the teacher act as a wondering facilitator, supporting students in their inquiry where needed. A teacher orchestration tablet gave the teacher some control over the flow of the scripted activities, and alerted him to critical moments for intervention.
Analysis focuses on S3's effectiveness in supporting students' inquiry across multiple learning contexts and scales of time, and in making timely and effective use of the community's knowledge base, towards producing solutions to sophisticated, ill-defined problems in the domain of physics. Video analysis examined whether S3 supported teacher orchestration, freeing him to focus less on classroom management and more on students' inquiry. Three important outcomes of this research are a set of design principles for DTEL environments, a specific technology infrastructure (S3), and a DTEL research framework.
Direct access: factors that affect physical therapist practice in the state of Ohio.
McCallum, Christine A; DiAngelis, Tom
2012-05-01
Direct access to physical therapist services is permitted by law in the majority of states and across all practice settings. Ohio enacted such legislation in 2004; however, it was unknown how direct access had affected actual clinical practice. The purpose of this study was to describe physical therapist and physical therapist practice environment factors that affect direct access practice. A 2-phase, mixed-method descriptive study was conducted. In the first phase, focus group interviews with 32 purposively selected physical therapists were completed, which resulted in 8 themes for an electronically distributed questionnaire. In the second phase, survey questionnaires were distributed to physical therapists with an e-mail address on file with the Ohio licensing board. An adjusted return rate of 23% was achieved. Data were analyzed for descriptive statistics. A constant comparative method assessed open-ended questions for common themes and patterns. Thirty-one percent of the respondents reported using direct access in physical therapist practice; however, 80% reported they would practice direct access if provided the opportunity. Physical therapists who practiced direct access were more likely to be in practice 6 years or more and hold advanced degrees beyond the entry level, were American Physical Therapy Association members, and had supportive management and organizational practice policies. The direct access physical therapist practice was generally a locally owned suburban private practice or a school-based clinic that saw approximately 6% to 10% of its patients by direct access. The majority of patients treated were adults with musculoskeletal or neuromuscular impairments. Nonresponse from e-mail may be associated with sample frame bias. Implementation of a direct access physical therapist practice model is evident in Ohio. Factors related to reimbursement and organizational policy appear to impede the process.
Three-Dimensional Electron Beam Dose Calculations.
NASA Astrophysics Data System (ADS)
Shiu, Almon Sowchee
The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations for radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm; one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport by redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made.
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site specific treatment planning problems.
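The core idea of any pencil-beam algorithm, composing a broad-beam dose from many laterally spread pencil kernels, can be sketched with a simple 1-D Gaussian model. This is an illustration only; clinical algorithms like those discussed above use measured beam data, depth-dependent spreads, and inhomogeneity corrections, and the field size and sigma below are made-up values.

```python
import math

# Illustrative sketch: a broad beam modeled as a superposition of narrow
# pencil kernels with Gaussian lateral spread. All parameters hypothetical.

def pencil_kernel(x, sigma):
    """Lateral dose profile of one pencil beam: a normalized Gaussian."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def broad_beam_dose(x, half_width=5.0, sigma=0.5, n=201):
    """Dose at lateral position x from pencils spaced uniformly across
    the field [-half_width, half_width]."""
    step = 2.0 * half_width / (n - 1)
    return step * sum(pencil_kernel(x - (-half_width + k * step), sigma)
                      for k in range(n))

central = broad_beam_dose(0.0)   # well inside the field: full kernel coverage
edge = broad_beam_dose(5.0)      # at the geometric field edge: ~half coverage
```

The familiar result drops out: the central-axis dose approaches the full integral of the kernel, while the dose at the geometric field edge falls to roughly half, and decays rapidly outside the field.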
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
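The black-sky (directional-hemispherical) albedo discussed above is the integral of the BRDF times the projected solid angle over the outgoing hemisphere. The paper's closed-form cosine-lobe expressions are not reproduced here; instead, a minimal numerical sketch of that integral is checked against a Lambertian BRDF, whose albedo is known exactly.

```python
import math

# Numerical sketch of a black-sky albedo: integrate brdf * cos(theta)
# over the outgoing hemisphere by midpoint-rule quadrature.

def black_sky_albedo(brdf, n_theta=200, n_phi=200):
    """Quadrature of the albedo integral over (theta, phi) on the hemisphere."""
    dth = (math.pi / 2.0) / n_theta
    dph = (2.0 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        w = math.cos(th) * math.sin(th) * dth * dph  # projected solid angle element
        for j in range(n_phi):
            total += brdf(th, (j + 0.5) * dph) * w
    return total

rho = 0.3
albedo = black_sky_albedo(lambda th, ph: rho / math.pi)  # Lambertian: albedo == rho
```

Analytic expressions like those derived in the paper replace exactly this kind of numerical integration, which is why they matter for high-fidelity ray-tracing budgets.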
Parallel structure among environmental gradients and three trophic levels in a subarctic estuary
Speckman, Suzann G.; Piatt, John F.; Minte-Vera, C. V.; Parrish, Julia K.
2005-01-01
We assessed spatial and temporal variability in the physical environment of a subarctic estuary, and examined concurrent patterns of chlorophyll α abundance (fluorescence), and zooplankton and forage fish community structure. Surveys were conducted in lower Cook Inlet, Alaska, during late July and early August from 1997 through 1999. Principal components analysis (PCA) revealed that spatial heterogeneity in the physical oceanographic environment of lower Cook Inlet could be modeled as three marine-estuarine gradients characterized by temperature, salinity, bottom depth, and turbidity. The gradients persisted from 1997 through 1999, and PCA explained 68% to 92% of the variance in physical oceanography for each gradient-year combination. Correlations between chlorophyll α abundance and distribution and the PCA axes were weak. Chlorophyll was reduced by turbidity, and low levels occurred in areas with high levels of suspended sediments. Detrended correspondence analysis (DCA) was used to order the sample sites based on species composition and to order the zooplankton and forage fish taxa based on similarities among sample sites for each gradient-year. Correlations between the structure of the physical environment (PCA axis 1) and zooplankton community structure (DCA axis 1) were strong (r = 0.43-0.86) in all years for the three marine-estuarine gradients, suggesting that zooplankton community composition was structured by the physical environment. The physical environment (PCA) and forage fish community structure (DCA) were weakly correlated in all years along Gradient 2, defined by halocline intensity and surface temperature and salinity, even though these physical variables were more important for defining zooplankton habitats. However, the physical environment (PCA) and forage fish community structure (DCA) were strongly correlated along the primary marine-estuarine gradient (#1) in 1997 (r = 0.87) and 1998 (r = 0.82).
The correlation was poor (r = 0.32) in 1999, when fish community structure changed markedly in lower Cook Inlet. Capelin (Mallotus villosus), walleye pollock (Theragra chalcogramma), and arrowtooth flounder (Atheresthes stomias) were caught farther north than in previous years. Waters were significantly colder and more saline in 1999, a La Nina year, than in other years of the study. Interannual fluctuations in environmental conditions in lower Cook Inlet did not have substantial effects on zooplankton community structure, although abundance of individual taxa varied significantly. The abundance and distribution of chlorophyll α, zooplankton and forage fish were affected much more by spatial variability in physical oceanography than by interannual variability. Our examination of physical-biological linkages in lower Cook Inlet supports the concept of "bottom-up control," i.e., that variability in the physical environment structures higher trophic-level communities by influencing their distribution and abundance across space.
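The "variance explained" figures reported for the PCA axes above can be illustrated on synthetic data. The data below are made up: four correlated "oceanographic" variables are generated from one latent gradient, purely to show how the share of variance captured by PCA axis 1 is computed.

```python
import numpy as np

# Synthetic illustration only: variables and loadings are hypothetical.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))            # one underlying physical gradient
weights = (1.0, 0.8, 0.6, 0.4)                # loading of each variable on it
data = np.hstack([latent * w for w in weights])
data += 0.1 * rng.normal(size=data.shape)     # measurement noise

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]       # eigenvalues, descending
explained = eigvals[0] / eigvals.sum()        # variance share of PCA axis 1
```

With one dominant gradient driving all four variables, the first axis captures most of the variance, which is the situation the 68%-92% figures in the study describe.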
Pareto versus lognormal: A maximum entropy test
NASA Astrophysics Data System (ADS)
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
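As a sketch of tail-exponent estimation (not the maximum entropy test proposed above), the classical Hill estimator recovers a Pareto exponent from the largest order statistics of a sample:

```python
import math
import random

# Illustration only: the Hill estimator for the Pareto tail exponent,
# computed from the k largest values of a sample.

def hill_tail_index(sample, k):
    """Hill estimator: k divided by the sum of log excesses over the
    (k+1)-th largest value."""
    xs = sorted(sample, reverse=True)
    return k / sum(math.log(xs[i] / xs[k]) for i in range(k))

random.seed(0)
draws = [random.paretovariate(2.5) for _ in range(20000)]  # true exponent 2.5
alpha_hat = hill_tail_index(draws, k=500)                  # estimate near 2.5
```

Estimators of this kind underlie the "last few percentiles" diagnosis: if the tail is genuinely Pareto, the estimate stabilizes as k varies, whereas a lognormal tail makes it drift.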
Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2015-10-01
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating the ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially for geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks the deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height resulting from the comparison of International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model-generated STEC and GPS-STEC. The suggested tomography algorithm is applied successfully for the reconstruction of electron density profiles over Turkey, during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.
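The STEC estimate mentioned above comes from the standard geometry-free dual-frequency combination. A minimal sketch follows; it is bias- and noise-free by construction, whereas real GPS processing must also remove interfrequency biases and handle cycle slips.

```python
# Sketch of the standard geometry-free dual-frequency STEC estimate
# (illustration only; biases and noise are ignored).

def slant_tec(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """STEC in TECU from L1/L2 pseudoranges (metres): the differential
    ionospheric delay scaled by f1^2 * f2^2 / (40.3 * (f1^2 - f2^2))."""
    k = 40.3  # ionospheric refraction constant, m^3 / s^2
    stec = (f1_hz**2 * f2_hz**2) / (k * (f1_hz**2 - f2_hz**2)) * (p2_m - p1_m)
    return stec / 1e16  # electrons/m^2 -> TECU

one_metre = slant_tec(0.0, 1.0)  # differential delay of 1 m, in TECU
```

For the GPS L1/L2 pair, one metre of differential code delay corresponds to roughly 9.5 TECU, which sets the scale of the measurements the IONOLAB-CIT framework assimilates.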
IR characteristic simulation of city scenes based on radiosity model
NASA Astrophysics Data System (ADS)
Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu
2013-09-01
Reliable modeling of thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. However, in reality, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on the radiosity model describes these complex effects and has been developed to enable accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristic of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, the finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering effect of radiation between surface elements in the scene. By synthesizing the objects' radiance distribution in the infrared range, we obtain the IR characteristic of the scene. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristic of city scenes. It effectively displays the infrared shadow effects and the radiative interactions between objects in city scenes.
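The radiosity model referenced above solves a linear system of the form B = E + ρ (F B) for the surface radiosities. A minimal sketch with a hypothetical three-surface scene (all values made up for illustration):

```python
import numpy as np

# Hypothetical scene: emission E, reflectance rho, and form factors
# F[i][j] = fraction of energy leaving surface i that reaches surface j.
E = np.array([1.0, 0.0, 0.0])        # only surface 0 emits
rho = np.array([0.5, 0.5, 0.5])
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.3],
              [0.2, 0.3, 0.0]])

def solve_radiosity(E, rho, F, iters=200):
    """Jacobi-style iteration of the radiosity system B = E + rho * (F @ B)."""
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

B = solve_radiosity(E, rho, F)
```

Because reflectances are below one and the form-factor rows sum to less than one, the iteration is a contraction and converges to the same radiosities as a direct solve of (I - diag(ρ) F) B = E.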
Sensing network for electromagnetic fields generated by seismic activities
NASA Astrophysics Data System (ADS)
Gershenzon, Naum I.; Bambakidis, Gust; Ternovskiy, Igor V.
2014-06-01
Sensor networks are becoming prolific and now play an increasingly important role in acquiring and processing information. Cyber-physical systems research focuses on integrated systems that include sensing, networking, and computation. The physics of seismic and electromagnetic field measurement requires special consideration of how to design electromagnetic field measurement networks, alongside seismic measurement networks, for both research on and detection of earthquakes and explosions. In addition, although the electromagnetic sensor network itself can be designed and deployed as a research tool with a great deal of flexibility, the placement of the measuring nodes must be designed based on systematic analysis of the seismic-electromagnetic interaction. In this article, we review observations of the co-seismic electromagnetic field generated by earthquakes and man-made sources such as vibrations and explosions. The theoretical investigation allows the distribution of sensor nodes to be optimized and could be used to support existing geological networks. The placement of sensor nodes has to be determined based on the physics of the electromagnetic field distribution above ground level. The results of theoretical investigations of seismo-electromagnetic phenomena are considered in Section I. First, we compare the relative contribution of various types of mechano-electromagnetic mechanisms and then analyze in detail the calculation of electromagnetic fields generated by piezomagnetic and electrokinetic effects.
Forest health monitoring and other environmental assessments require information on the spatial distribution of basic soil physical and chemical properties. Traditional soil surveys are not available for large areas of forestland in the western US but there are some soil resour...
SWAT ungauged: Hydrological budget and crop yield predictions in the Upper Mississippi River Basin
USDA-ARS?s Scientific Manuscript database
Physically based, distributed hydrologic models are increasingly used in assessments of water resources, best management practices, and climate and land use changes. Model performance evaluation in ungauged basins is an important research topic. In this study, we propose a framework for developing S...
Code of Federal Regulations, 2010 CFR
2010-04-01
... temperature to partially or completely inactivate the naturally occurring enzymes and to effect other physical..., storage, and distribution. The maximum safe moisture level for a food is based on its water activity (aw... procedures or identify recommended equipment. (r) Water activity (aw) is a measure of the free moisture in a...
Code of Federal Regulations, 2011 CFR
2011-04-01
... temperature to partially or completely inactivate the naturally occurring enzymes and to effect other physical..., storage, and distribution. The maximum safe moisture level for a food is based on its water activity (aw... procedures or identify recommended equipment. (r) Water activity (aw) is a measure of the free moisture in a...
Coupling fine-scale root and canopy structure using ground-based remote sensing
Brady Hardiman; Christopher Gough; John Butnor; Gil Bohrer; Matteo Detto; Peter Curtis
2017-01-01
Ecosystem physical structure, defined by the quantity and spatial distribution of biomass, influences a range of ecosystem functions. Remote sensing tools permit the non-destructive characterization of canopy and root features, potentially providing opportunities to link above- and belowground structure at fine spatial resolution in...
Physical-Mechanisms Based Reliability Analysis For Emerging Technologies
2017-05-05
irradiation is greatly enhanced by biasing the...devices during irradiation and/or applying high field stress before irradiation. The resulting defect energy distributions were evaluated after...irradiation and/or high field stress via low-frequency noise measurements. Significant increases were observed in acceptor densities for defects with
NASA Astrophysics Data System (ADS)
Kim, Jongho; Dwelle, M. Chase; Kampf, Stephanie K.; Fatichi, Simone; Ivanov, Valeriy Y.
2016-06-01
This study advances mechanistic interpretation of predictability challenges in hydro-geomorphology related to the role of soil moisture spatial variability. Using model formulations describing the physics of overland flow, variably saturated subsurface flow, and erosion and sediment transport, this study explores (1) why a basin with the same mean soil moisture can exhibit distinctly different spatial moisture distributions, (2) whether these varying distributions lead to non-unique hydro-geomorphic responses, and (3) what controls non-uniqueness in relation to the response type. Two sets of numerical experiments are carried out with two physically-based models, HYDRUS and tRIBS+VEGGIE+FEaST, and their outputs are analyzed with respect to pre-storm moisture state. The results demonstrate that distinct spatial moisture distributions for the same mean wetness arise because near-surface soil moisture dynamics exhibit different degrees of coupling with deeper-soil moisture and the process of subsurface drainage. The consequences of such variations differ depending on the type of hydrological response. Specifically, if the predominant runoff response is of infiltration excess type, the degree of non-uniqueness is related to the spatial distribution of near-surface moisture. If runoff is governed by subsurface stormflow, the extent of the deep moisture contributing area and its "readiness to drain" determine the response characteristics. Because the processes of erosion and sediment transport superimpose additional controls over the factors governing runoff generation and overland flow, non-uniqueness of the geomorphic response can be highly dampened or enhanced. The explanation is that sediment composed of multi-size particles can alternate between states of mobilization and surface shielding, and this transient behavior is inherently intertwined with the availability of mobile particles.
We conclude that complex nonlinear dynamics of hydro-geomorphic processes are inherent expressions of physical interactions. As complete knowledge of watershed properties, states, or forcings will always present the ultimate, if ever resolvable, challenge, deterministic predictability will remain handicapped. Coupling of uncertainty quantification methods and space-time physics-based approaches will need to evolve to facilitate mechanistic interpretations and informed practical applications.
A multipurpose computing center with distributed resources
NASA Astrophysics Data System (ADS)
Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.
2017-10-01
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other user groups use the local batch system directly. Storage capacity is distributed across several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated at the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any MetaCentrum user. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare its performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.
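The compose-from-local-solutions idea can be illustrated with a minimal sketch (not the paper's actual algorithm): each hypothetical local submodel predicts its own end of life in parallel, and a global prediction is composed here as the minimum, assuming the system fails when its first subsystem does. All names and decay parameters below are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def local_eol(submodel):
    """Hypothetical local prediction: number of steps until the submodel's
    health state decays below its failure threshold."""
    state, decay, threshold = submodel
    steps = 0
    while state > threshold:
        state -= decay
        steps += 1
    return steps

# Each tuple is an independent local submodel: (health, decay rate, threshold).
submodels = [(1.0, 0.125, 0.25), (1.0, 0.25, 0.25), (1.0, 0.0625, 0.25)]

# Local subproblems are independent, so they can be solved in parallel.
with ThreadPoolExecutor() as pool:
    local_predictions = list(pool.map(local_eol, submodels))

# Compose a global end-of-life from the local solutions.
global_eol = min(local_predictions)
print(global_eol)  # → 3
```

The composition rule (minimum over subsystems) is one simple choice; the paper's structural decomposition guarantees that local solutions can be merged without re-solving the global model.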
NASA Astrophysics Data System (ADS)
Wang, Zhiqiang; Jiang, Jingyi; Ma, Qing
2016-12-01
Climate change is affecting every aspect of human activities, especially agriculture. In China, extreme drought events caused by climate change have posed a great threat to food security. In this work we aimed to study the drought risk of maize in the farming-pastoral ecotone in Northern China based on a physical vulnerability assessment. The physical vulnerability curve was constructed from the relationship between the drought hazard intensity index and the yield loss rate. The risk assessment of agricultural drought was conducted from the drought hazard intensity index and the physical vulnerability curve. The probability distribution of the drought hazard intensity index decreased from south-west to north-east and increased from south-east to north-west along the rainfall isoline. The physical vulnerability curve had a reduction effect in three parts of the farming-pastoral ecotone in Northern China, which helped to reduce the drought hazard vulnerability of spring maize. The risk of yield loss ratio calculated from the physical vulnerability curve was lower than that from the drought hazard intensity index alone, which suggests that the capacity of spring maize to resist and adapt to drought is increasing. In conclusion, the farming-pastoral ecotone in Northern China is highly sensitive to climate change and has a high probability of severe drought hazard. Risk assessment based on physical vulnerability can help better understand the physical vulnerability to agricultural drought and can also promote measures to adapt to climate change.
Double density dynamics: realizing a joint distribution of a physical system and a parameter system
NASA Astrophysics Data System (ADS)
Fukuda, Ikuo; Moritsugu, Kei
2015-11-01
To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops under nonequilibrium environment-describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example representing a 1D harmonic oscillator is provided to validate the method being applied to the temperature parameters.
NASA Astrophysics Data System (ADS)
Gautam, Nitin
The main objectives of this thesis are to develop a robust statistical method for the classification of ocean precipitation based on physical properties to which the SSM/I is sensitive and to examine how these properties vary globally and seasonally. A two-step approach is adopted for the classification of oceanic precipitation classes from multispectral SSM/I data: (1) we subjectively define precipitation classes using a priori information about the precipitating system and its possible distinct signature on SSM/I data, such as scattering by ice particles aloft in the precipitating cloud, emission by liquid rain water below the freezing level, and the difference of polarization at 19 GHz, an indirect measure of optical depth; (2) we then develop an objective classification scheme which is found to reproduce the subjective classification with high accuracy. This hybrid strategy allows us to use the characteristics of the data to define and encode classes and helps retain the physical interpretation of the classes. Classification methods based on k-nearest neighbor and neural network approaches are developed to objectively classify six precipitation classes. It is found that the neural-network-based classification method yields high accuracy for all precipitation classes. An inversion method based on a minimum variance approach was used to retrieve gross microphysical properties of these precipitation classes, such as the column-integrated liquid water path, ice water path, and rain water path. This classification method is then applied to 2 years (1991-92) of SSM/I data to examine and document the seasonal and global distribution of precipitation frequency corresponding to each of these objectively defined six classes. The characteristics of the distribution are found to be consistent with the assumptions used in defining these six precipitation classes and also with well-known climatological patterns of precipitation regions.
The seasonal and global distribution of these six classes is also compared with the earlier results obtained from Comprehensive Ocean Atmosphere Data Sets (COADS). It is found that the gross pattern of the distributions obtained from SSM/I and COADS data match remarkably well with each other.
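As an illustration of the k-nearest-neighbor step, classification by majority vote among the k closest training samples can be sketched as below; the 2-D features and class labels are invented toy data, not the thesis's actual SSM/I channels.

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Classify one feature vector by majority vote of its k nearest
    training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy 2-D features for two synthetic precipitation classes.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_classify(train_X, train_y, np.array([0.05, 0.1])))  # → 0
```

In practice the feature space would be the multispectral brightness temperatures, and k would be tuned against the subjective labels.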
Optimal Interpolation scheme to generate reference crop evapotranspiration
NASA Astrophysics Data System (ADS)
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
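The OI analysis step itself combines a model background with observations weighted by their error covariances; the following is a generic textbook sketch (not AEMET's operational configuration, and the covariance values are invented):

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """Optimal Interpolation update: xa = xb + K (y - H xb),
    with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B   # analysis error covariance
    return xa, A

# Two grid points, one observation of the first point.
xb = np.array([10.0, 12.0])              # model background (prior)
B  = np.array([[1.0, 0.5], [0.5, 1.0]])  # background error covariance
H  = np.array([[1.0, 0.0]])              # observation operator
R  = np.array([[0.25]])                  # observation error covariance
y  = np.array([11.0])

xa, A = oi_analysis(xb, B, y, H, R)
```

Note how the correlated background errors in B spread the single observation's influence to the unobserved second grid point, which is exactly why a physically-based background field helps in under-observed regions.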
Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J
2017-06-01
For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long-ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes, that are at least 15 decibels above background levels in the horizontal-looking sonar data, are classified spectrally by the resonance features as due to swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.
Phase space effects on fast ion distribution function modeling in tokamaks
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.
2016-05-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
Phase space effects on fast ion distribution function modeling in tokamaks
White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)
2016-06-01
Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.
The effectiveness of flipped classroom learning model in secondary physics classroom setting
NASA Astrophysics Data System (ADS)
Prasetyo, B. D.; Suprapto, N.; Pudyastomo, R. N.
2018-03-01
The research aimed to describe the effectiveness of the flipped classroom learning model in a secondary physics classroom setting during the Fall semester of 2017. The research object was the Secondary 3 Physics group of Singapore School Kelapa Gading. The research began with a pre-test, followed by treatment under the flipped classroom learning model. At the end of the learning process, the pupils were given a post-test and a questionnaire to gauge their response to the flipped classroom learning model. Based on the data analysis, 89% of pupils passed the minimum standard criteria. The improvement in the students' marks was analysed with the normalized n-gain formula, yielding a normalized n-gain score of 0.4, which falls in the medium category. The questionnaire distributed to the students showed that 93% of them became more motivated to study physics and 89% were very happy to carry out hands-on activities based on the flipped classroom learning model. These three aspects lead to the conclusion that applying the flipped classroom learning model in a secondary physics classroom setting is effective.
Wen, Wei; Capolungo, Laurent; Patra, Anirban; ...
2017-02-23
In this work, a physics-based thermal creep model is developed based on an understanding of the microstructure in Fe-Cr alloys. The model is associated with a transition-state-theory-based framework that considers the distribution of internal stresses at the sub-material-point level. The thermally activated dislocation glide and climb mechanisms are coupled in the obstacle-bypass processes for both dislocation- and precipitate-type barriers. A kinetic law is proposed to track the evolution of dislocation densities in the subgrain interior and in the cell wall. The predicted results show that this model, embedded in the visco-plastic self-consistent (VPSC) framework, captures well the creep behavior during the primary and steady-state stages under various loading conditions. We also discuss the roles of the mechanisms involved.
KODAMA and VPC based Framework for Ubiquitous Systems and its Experiment
NASA Astrophysics Data System (ADS)
Takahashi, Kenichi; Amamiya, Satoshi; Iwao, Tadashige; Zhong, Guoqiang; Kainuma, Tatsuya; Amamiya, Makoto
Recently, agent technologies have attracted a lot of interest as an emerging programming paradigm. With such agent technologies, services are provided through collaboration among agents. At the same time, the spread of mobile technologies and communication infrastructures has made it possible to access the network anytime and from anywhere. Using agents and mobile technologies to realize ubiquitous computing systems, we propose a new framework based on KODAMA and VPC. KODAMA provides distributed management mechanisms by using the concept of community and communication infrastructure to deliver messages among agents without agents being aware of the physical network. VPC provides a method of defining peer-to-peer services based on agent communication with policy packages. By merging the characteristics of both KODAMA and VPC functions, we propose a new framework for ubiquitous computing environments. It provides distributed management functions according to the concept of agent communities, agent communications which are abstracted from the physical environment, and agent collaboration with policy packages. Using our new framework, we conducted a large-scale experiment in shopping malls in Nagoya, which sent advertisement e-mails to users' cellular phones according to user location and attributes. The empirical results showed that our new framework worked effectively for sales in shopping malls.
Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit
2017-02-01
To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by comparing the dose volume histogram from CERR with that of the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not statistically significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
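The EQD2 conversion at the heart of such tools follows, in its standard linear-quadratic form, a one-line formula; this minimal sketch omits the LQL model's linear correction in the high-dose-per-fraction regime, and the α/β value is an assumed example, not from the paper.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions (plain LQ model):
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 4 fractions of 7 Gy (brachytherapy-like), assumed tumour alpha/beta = 10 Gy:
print(round(eqd2(28.0, 7.0, 10.0), 2))  # → 39.67
```

Applied voxel-by-voxel to a physical dose grid, this yields the biological dose distribution from which a biological dose volume histogram can be accumulated.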
Escalante, Agustín; Haas, Roy W; del Rincón, Inmaculada
2004-01-01
Outcome assessment in patients with rheumatoid arthritis (RA) includes measurement of physical function. We derived a scale to quantify global physical function in RA, using three performance-based rheumatology function tests (RFTs). We measured grip strength, walking velocity, and shirt button speed in consecutive RA patients attending scheduled appointments at six rheumatology clinics, repeating these measurements after a median interval of 1 year. We extracted the underlying latent variable using principal component factor analysis. We used the Bayesian information criterion to assess the global physical function scale's cross-sectional fit to criterion standards. The criteria were joint tenderness, swelling, and deformity, pain, physical disability, current work status, and vital status at 6 years after study enrolment. We computed Guyatt's responsiveness statistic for improvement according to the American College of Rheumatology (ACR) definition. Baseline functional performance data were available for 777 patients, and follow-up data were available for 681. Mean ± standard deviation for each RFT at baseline were: grip strength, 14 ± 10 kg; walking velocity, 194 ± 82 ft/min; and shirt button speed, 7.1 ± 3.8 buttons/min. Grip strength and walking velocity departed significantly from normality. The three RFTs loaded strongly on a single factor that explained ≥70% of their combined variance. We rescaled the factor to vary from 0 to 100. Its mean ± standard deviation was 41 ± 20, with a normal distribution. The new global scale had a stronger fit than the primary RFT to most of the criterion standards. It correlated more strongly with physical disability at follow-up and was more responsive to improvement defined according to the ACR20 and ACR50 definitions. We conclude that a performance-based physical function scale extracted from three RFTs has acceptable distributional and measurement properties and is responsive to clinically meaningful change. 
It provides a parsimonious scale to measure global physical function in RA. PMID:15225367
Coarsening of physics for biogeochemical model in NEMO
NASA Astrophysics Data System (ADS)
Bricaud, Clement; Le Sommer, Julien; Madec, Gurvan; Deshayes, Julie; Chanut, Jerome; Perruche, Coralie
2017-04-01
Ocean mesoscale and submesoscale turbulence contribute to ocean tracer transport and to shaping the distribution of ocean biogeochemical tracers. Representing tracer transport adequately in ocean models therefore requires increasing model resolution so that the impact of ocean turbulence is properly accounted for. But due to supercomputer power and storage limitations, global biogeochemical models are not yet run routinely at eddying resolution. Still, because the "effective resolution" of eddying ocean models is much coarser than the physical model grid resolution, tracer transport can be reconstructed to a large extent by computing tracer transport and diffusion at a grid resolution close to the effective resolution of the physical model. This observation has motivated the implementation of a new capability in the NEMO ocean model (http://www.nemo-ocean.eu/) that allows the physical model and the tracer transport model to be run at different grid resolutions. First, we present results obtained with this new capability applied to a synthetic age tracer in a global eddying model configuration. In this configuration, ocean dynamics are computed at ¼° resolution but tracer transport is computed at 3/4° resolution. The solution is compared to two reference setups: one at ¼° resolution for both the physics and passive tracer models, and one at 3/4° resolution for both. We discuss possible options for defining the vertical diffusivity coefficient for the tracer transport model based on information from the high-resolution grid, and describe the impact of this choice on the distribution and penetration of the age tracer. Second, we present results obtained by coupling the physics with the biogeochemical model PISCES, and look at the impact of this methodology on tracer distributions and dynamics.
The method described here can find applications in ocean forecasting, such as the Copernicus Marine service operated by Mercator-Ocean, and in Earth System Models for climate applications.
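The core idea of running tracers on a coarser grid can be sketched with a simple block-averaging operator; this is an illustrative stand-in only, since NEMO's actual coarsening operator handles scale factors, land masks, and fluxes.

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2-D field by an integer factor (e.g. 1/4 deg -> 3/4 deg
    with factor 3). The grid size must be divisible by the factor."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(36.0).reshape(6, 6)   # stand-in for a 1/4-degree tracer field
coarse = coarsen(fine, 3)              # 2 x 2 field at 3/4-degree resolution
```

Block averaging conserves the domain mean of the tracer, which is a minimal requirement for any transport-grid coarsening scheme.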
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
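The sequential likelihood ratio test described here is, in essence, Wald's SPRT; the following is a minimal generic sketch with illustrative thresholds and log-likelihood-ratio inputs, not the patent's physics-based processing chain.

```python
import math

def sprt(log_lr_samples, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate log-likelihood
    ratios until one of two decision thresholds is crossed."""
    upper = math.log((1 - beta) / alpha)   # declare "target radionuclide"
    lower = math.log(beta / (1 - alpha))   # declare "not the target"
    s = 0.0
    for n, llr in enumerate(log_lr_samples, start=1):
        s += llr
        if s >= upper:
            return "target", n
        if s <= lower:
            return "not target", n
    return "undecided", len(log_lr_samples)

print(sprt([1.0] * 10))  # → ('target', 5)
```

Each photon event would contribute one log-likelihood ratio computed from its energy amplitude and interarrival time, and the test stops as soon as the evidence is decisive.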
Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping
2013-01-01
Because classical music has greatly affected our life and culture throughout its long history, it has attracted extensive attention from researchers seeking to understand the laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions of five representative classical composers across 164 years, from Bach, to Mozart, to Beethoven, to Mendelssohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to Mendelssohn/Chopin's time. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only as power laws (with the scale-free property), but also symmetrically (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we also calculate the autocorrelation function of the pitch fluctuations. The autocorrelation function shows a power-law distribution for each composer. Notably, the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from the viewpoint of statistical physics, but also enriches the realm of traditional statistical physics by analyzing music.
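A minimal sketch of the quantities analyzed: pitch fluctuations as differences between successive pitches, plus a normalized autocorrelation. The toy pitch sequence is invented, not taken from the corpus studied.

```python
import numpy as np

def pitch_fluctuations(pitches):
    """Fluctuations = differences between successive (e.g. MIDI) pitch values."""
    return np.diff(np.asarray(pitches, dtype=float))

def autocorr(x, lag):
    """Normalized autocorrelation of a series at a given positive lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Toy melody (MIDI note numbers); a real analysis would ingest a full composition.
pitches = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]
fluct = pitch_fluctuations(pitches)
```

From `fluct`, the positive- and negative-tail CDFs and the autocorrelation at increasing lags would then be fitted against power laws to extract the exponents discussed above.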
High-capacity quantum key distribution via hyperentangled degrees of freedom
NASA Astrophysics Data System (ADS)
Simon, David S.; Sergienko, Alexander V.
2014-06-01
Quantum key distribution (QKD) has long been a promising area for the application of quantum effects in solving real-world problems. However, two major obstacles have stood in the way of its widespread application: low secure key generation rates and short achievable operating distances. In this paper, a new physical mechanism for dealing with the first of these problems is proposed: the interplay between different degrees of freedom in a hyperentangled system (parametric down-conversion) is used to increase the Hilbert space dimension available for key generation while maintaining security. Polarization-based Bell tests provide security checking, while orbital angular momentum (OAM) and total angular momentum (TAM) provide a higher key generation rate. Whether to measure TAM or OAM is decided randomly in each trial. The concurrent noncommutativity of TAM with OAM and polarization provides the physical basis for quantum security. TAM measurements link polarization to OAM, so that if the legitimate participants measure OAM while the eavesdropper measures TAM (or vice-versa), then polarization entanglement is lost, revealing the eavesdropper. In contrast to other OAM-based QKD methods, complex active switching between OAM bases is not required; instead, passive switching by beam splitters combined with much simpler active switching between polarization bases makes implementation at high OAM more practical.
Li, Yue-Song; Chen, Xin-Jun; Yang, Hong
2012-06-01
By adopting the FVCOM-simulated 3-D physical field and the biological processes of chub mackerel (Scomber japonicus) in its early life history from an individual-based biological model, an individual-based ecological model for S. japonicus at its early growth stages in the East China Sea was constructed by coupling the March-July physical field with the biological model through Lagrangian particle tracking. The constructed model could well simulate the transport process and abundance distribution of S. japonicus eggs and larvae. The Taiwan Warm Current, Kuroshio, and Tsushima Strait Warm Current directly affected the transport process and distribution of the eggs and larvae, and indirectly affected their growth and survival through transport to nursery grounds with different water temperatures and food availability. The spawning grounds in the southern East China Sea contributed more to the recruitment to the fishing grounds in the northeastern East China Sea, and less to the Yangtze estuary and Zhoushan Island. The northwestern and southwestern parts of the spawning grounds had strong connectivity with the nursery grounds of Cheju and the Tsushima Straits, whereas the northeastern and southeastern parts had strong connectivity with the nursery grounds of Kyushu and the Pacific Ocean.
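Lagrangian particle tracking of eggs and larvae reduces, at its core, to integrating particle positions through a velocity field. A minimal sketch with a forward-Euler step and a hypothetical uniform current (the actual model interpolates FVCOM's 3-D gridded, time-varying currents):

```python
def advect(particles, velocity, dt, steps):
    """Advance Lagrangian particles through a velocity field with forward
    Euler steps. `velocity(x, y)` returns the local (u, v) components."""
    for _ in range(steps):
        particles = [(x + velocity(x, y)[0] * dt,
                      y + velocity(x, y)[1] * dt) for (x, y) in particles]
    return particles

# Hypothetical uniform current of (0.5, 0.25) distance units per time unit:
print(advect([(0.0, 0.0)], lambda x, y: (0.5, 0.25), dt=1.0, steps=10))
# -> [(5.0, 2.5)]
```

Production trackers use higher-order schemes (e.g. fourth-order Runge-Kutta) and spatial interpolation on the unstructured FVCOM grid, but the control flow is the same.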
BESIII Physics Analysis on the Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a data-to-computing structure have to read large volumes of data over the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit us in dealing with Big Data. This paper brings the idea of using the MapReduce model to BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAG) database to optimize the data analysis procedure.
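The MapReduce idea applied to event pre-selection can be sketched as a map step that emits only events passing a physics cut and a reduce step that groups them by key; the TAG fields and the cut below are hypothetical, not the BESIII schema:

```python
from collections import defaultdict

def map_phase(events, predicate):
    """Map step: emit (tag, event_id) pairs only for events passing the
    pre-selection predicate (a TAG-level cut)."""
    for ev in events:
        if predicate(ev):
            yield ev["tag"], ev["id"]

def reduce_phase(pairs):
    """Reduce step: group selected event ids by TAG key."""
    groups = defaultdict(list)
    for tag, ev_id in pairs:
        groups[tag].append(ev_id)
    return dict(groups)

# Hypothetical TAG records: keep events with at least two charged tracks.
events = [{"id": 1, "tag": "runA", "ntracks": 3},
          {"id": 2, "tag": "runA", "ntracks": 1},
          {"id": 3, "tag": "runB", "ntracks": 2}]
print(reduce_phase(map_phase(events, lambda e: e["ntracks"] >= 2)))
# -> {'runA': [1], 'runB': [3]}
```

On Hadoop the two functions run as distributed mapper and reducer tasks with data locality, which is precisely what removes the Data-to-Computing I/O bottleneck described above.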
Nondeducibility-Based Analysis of Cyber-Physical Systems
NASA Astrophysics Data System (ADS)
Gamage, Thoshitha; McMillin, Bruce
Controlling information flow in a cyber-physical system (CPS) is challenging because cyber domain decisions and actions manifest themselves as visible changes in the physical domain. This paper presents a nondeducibility-based observability analysis for CPSs. In many CPSs, the capacity of a low-level (LL) observer to deduce high-level (HL) actions ranges from limited to none. However, a collaborative set of observers strategically located in a network may be able to deduce all the HL actions. This paper models a distributed power electronics control device network using a simple DC circuit in order to understand the effect of multiple observers in a CPS. The analysis reveals that the number of observers required to deduce all the HL actions in a system increases linearly with the number of configurable units. A simple definition of nondeducibility based on the uniqueness of low-level projections is also presented. This definition is used to show that a system with two security domain levels could be considered “nondeducibility secure” if no unique LL projections exist.
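The paper's simple definition of nondeducibility based on unique low-level projections can be illustrated directly: a system is secure in this sense when every low-level observation is compatible with more than one high-level action. The trace encoding below is hypothetical:

```python
from collections import defaultdict

def nondeducible(traces):
    """Simplified nondeducibility check: given (HL action, LL observation)
    pairs, the system is 'nondeducibility secure' if no LL projection is
    unique, i.e. every LL observation maps to at least two HL actions."""
    by_ll = defaultdict(set)
    for hl, ll in traces:
        by_ll[ll].add(hl)
    return all(len(hl_actions) >= 2 for hl_actions in by_ll.values())

# Hypothetical traces: in the secure system, each LL reading "v1"/"v2"
# is consistent with both HL actions; in the leaky one, each is unique.
secure = [("open", "v1"), ("close", "v1"), ("open", "v2"), ("close", "v2")]
leaky  = [("open", "v1"), ("close", "v2")]
print(nondeducible(secure), nondeducible(leaky))  # -> True False
```

This matches the intuition in the abstract: a single LL observer may learn nothing, yet combining observers (refining the LL projection) can make every projection unique and so deducible.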
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
NASA Astrophysics Data System (ADS)
Kandler, K.; Lieke, K.
2009-04-01
The Saharan Mineral Dust Experiment (SAMUM) is dedicated to understanding the radiative effects of mineral dust. Two major field experiments were performed: a first joint field campaign took place at Ouarzazate and near Zagora, southern Morocco, from May 13 to June 7, 2006. Aircraft and ground-based measurements of aerosol physical and chemical properties were carried out to collect a data set of surface and atmospheric columnar information within a major dust source. This data set, combined with satellite data, provides the basis for the first thorough columnar radiative closure tests in Saharan dust. A second field experiment was conducted during January-February 2008 in the Cape Verde Islands region, where about 300 Tg of mineral dust are transported annually from Western Africa across the Atlantic towards the Caribbean Sea and the Amazon basin. Along its transport path, the mineral dust is expected to significantly influence the radiation budget of the subtropical North Atlantic through direct and indirect effects. A radiative closure in the Saharan air plume is still lacking. One focus of the investigation within the trade wind region is the spatial distribution of mixed dust/biomass/sea salt aerosol and its physical and chemical properties, especially with regard to radiative effects. We report on measurements of size distributions, mass concentrations and mineralogical composition conducted at the Zagora (Morocco) and Praia (Cape Verde Islands) ground stations. The aerosol size distribution was measured from 20 nm to 500
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Program summary
Program title: LambertW
Catalogue identifier: AENC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 1335
No. of bytes in distributed program, including test data, etc.: 25 283
Distribution format: tar.gz
Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc.
Computer: All systems with a C++ compiler.
Operating system: All Unix flavors, Windows. It might work with others.
RAM: Small memory footprint, less than 1 MB
Classification: 1.1, 4.7, 11.3, 11.9.
Nature of problem: Find a fast and accurate numerical implementation of the Lambert W function.
Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Additional comments: The distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile.
Running time: The tests provided take only a few seconds to run.
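Halley's iteration for the principal branch W0 can be sketched in a few lines. The following Python version is an illustration, not the distributed C++ code: it uses a simple log-based starting guess for x >= 0 rather than the paper's piecewise initial approximations:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0(x) for x >= 0 via Halley's iteration on
    f(w) = w*exp(w) - x, starting from w0 = log(1 + x)."""
    w = math.log1p(x)  # crude but adequate initial approximation
    for _ in range(50):
        e = math.exp(w)
        f = w * e - x
        # Halley step for f, using f' = e*(w+1) and f'' = e*(w+2).
        step = f / (e * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w(1.0)
print(round(w, 10))  # -> 0.5671432904 (the omega constant)
assert abs(w * math.exp(w) - 1.0) < 1e-12  # defining identity W(x)*e^W(x) = x
```

Halley's method converges cubically, so a handful of iterations suffices once the initial guess is reasonable; the distributed library adds Fritsch's iteration and carefully chosen starting approximations to cover both real branches.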
Increasing precision of turbidity-based suspended sediment concentration and load estimates.
Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E
2010-01-01
Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing the opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC estimates, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically at 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance, and hydrologic variables that explained the variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
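At its core, the proposed estimation model is a multivariate regression of SSC on turbidity plus continuously monitored hydrologic variables. A sketch with ordinary least squares via the normal equations, on hypothetical noise-free data (the study's actual variables and model form differ):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = (X'y),
    solved by Gauss-Jordan elimination. Rows of X start with 1 (intercept)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for i in range(k):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]           # normalize pivot row
        for j in range(k):
            if j != i:                         # eliminate column i elsewhere
                A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]
    return [row[-1] for row in A]

# Hypothetical data: SSC driven by turbidity plus a stage term, noise-free.
turb  = [10.0, 20.0, 30.0, 40.0, 50.0]
stage = [1.0, 1.5, 1.0, 2.0, 1.5]
ssc   = [2.0 * t + 30.0 * s + 5.0 for t, s in zip(turb, stage)]
X = [[1.0, t, s] for t, s in zip(turb, stage)]
print([round(b, 6) for b in ols(X, ssc)])  # -> [5.0, 2.0, 30.0]
```

With noisy field data the coefficients are of course estimated rather than recovered exactly, and the study's point is precisely that adding the hydrologic predictors shrinks the residual variance relative to the turbidity-only model.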
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Model-based approach for cyber-physical attack detection in water distribution systems.
Housh, Mashor; Ohar, Ziv
2018-08-01
Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net).
Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack
2016-01-01
For several decades the physical mechanism underlying the discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to 'internal photons' inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350-700 nm. We show here that the retinal discrete dark noise has a rate similar to that of UPE, and therefore dark noise is most likely due to spontaneous cellular UPE and not due to thermal activation.
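The contrast between the two thermal-activation models can be made concrete: the Boltzmann factor exp(-Ea/kT) versus the Hinshelwood form, which pools energy over m classical modes and multiplies the same exponential by a partial sum. A sketch, where the value Ea/kT = 45 is purely illustrative:

```python
import math

def boltzmann_factor(ea_over_kt):
    """Single-mode thermal activation probability (Arrhenius/Boltzmann)."""
    return math.exp(-ea_over_kt)

def hinshelwood_factor(ea_over_kt, m):
    """Hinshelwood form: probability that energy pooled over m classical
    modes exceeds the barrier. Reduces to the Boltzmann factor at m = 1."""
    return math.exp(-ea_over_kt) * sum(ea_over_kt ** i / math.factorial(i)
                                       for i in range(m))

# With one mode the two coincide; extra modes boost the rate enormously,
# which is why the Hinshelwood form can match a larger apparent barrier.
print(hinshelwood_factor(45.0, 1) == boltzmann_factor(45.0))        # -> True
print(hinshelwood_factor(45.0, 10) / boltzmann_factor(45.0) > 1e6)  # -> True
```

This illustrates the quantitative lever the Hinshelwood proposal pulls, and why the abstract's argument turns on whether any plausible m can reconcile the observed dark-noise rate with thermal activation.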
Study on temporal variation and spatial distribution for rural poverty in China based on GIS
NASA Astrophysics Data System (ADS)
Feng, Xianfeng; Xu, Xiuli; Wang, Yingjie; Cui, Jing; Mo, Hongyuan; Liu, Ling; Yan, Hong; Zhang, Yan; Han, Jiafu
2009-07-01
Poverty is one of the most serious challenges worldwide and an obstacle to economic and agricultural development in affected areas, so research on poverty alleviation in China is both useful and important. In this paper, we explore the comprehensive characteristics of poverty in China, analyze the current poverty status, spatial distribution and temporal variation of rural poverty, and categorize the different poverty types and their spatial distributions. First, we gathered and processed the relevant data, which include survey data, research reports, statistical yearbooks, censuses, socio-economic data, and physical and human geographical data. After in-depth analysis of these data, we derive the distribution of poverty areas with a spatio-temporal data model, according to the poverty standards applied at different stages in China, to examine poverty variation and regional differences at the county level. We then investigate the current poverty status and spatial pattern of poverty areas at the village level, and examine the relationships among poverty, the environment (including physical and human geographical factors) and economic development. We hope this research will enhance understanding of poverty in China and contribute to its alleviation.
[Evaluation of ecosystem provisioning service and its economic value].
Wu, Nan; Gao, Ji-Xi; Sudebilige; Ricketts, Taylor H; Olwero, Nasser; Luo, Zun-Lan
2010-02-01
Aiming at the fact that current approaches to evaluating ecosystem provisioning services lack spatial information and do not take the accessibility of products into account, this paper establishes an evaluation model that simulates the spatial distribution of ecosystem provisioning service and its economic value, based on ArcGIS 9.2 and taking the supply and demand factors of ecosystem products into account. The provision of timber in Laojunshan in 2000 was analyzed with the model. In 2000, the total physical quantity of the timber provisioning service in Laojunshan was 11.12 x 10(4) m3 x a(-1), occupying 3.2% of the total increment of timber stock volume. The total provisioning service value of timber was 6669.27 x 10(4) yuan, of which coniferous forest contributed the most (90.41%). Owing to the denser distribution of population and roads in the eastern area of Laojunshan, with some parts of the area located outside the conservancy district and forests scattered in distribution, the physical quantity of the timber provisioning service was higher in the eastern than in the western area.
NASA Astrophysics Data System (ADS)
Sadi, Toufik; Mehonic, Adnan; Montesi, Luca; Buckwell, Mark; Kenyon, Anthony; Asenov, Asen
2018-02-01
We employ an advanced three-dimensional (3D) electro-thermal simulator to explore the physics and potential of oxide-based resistive random-access memory (RRAM) cells. The physical simulation model has been developed recently, and couples a kinetic Monte Carlo study of electron and ionic transport to the self-heating phenomenon while accounting carefully for the physics of vacancy generation and recombination, and trapping mechanisms. The simulation framework successfully captures resistance switching, including the electroforming, set and reset processes, by modeling the dynamics of conductive filaments in the 3D space. This work focuses on the promising yet less studied RRAM structures based on silicon-rich silica (SiOx) RRAMs. We explain the intrinsic nature of resistance switching of the SiOx layer, analyze the effect of self-heating on device performance, highlight the role of the initial vacancy distributions acting as precursors for switching, and also stress the importance of using 3D physics-based models to capture accurately the switching processes. The simulation work is backed by experimental studies. The simulator is useful for improving our understanding of the little-known physics of SiOx resistive memory devices, as well as other oxide-based RRAM systems (e.g. transition metal oxide RRAMs), offering design and optimization capabilities with regard to the reliability and variability of memory cells.
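A kinetic Monte Carlo engine of the kind described rests on one elementary step: select an event with probability proportional to its rate and draw an exponential waiting time. A generic Gillespie-style sketch with hypothetical vacancy-process rates (not the simulator's actual 3D electro-thermal model):

```python
import math, random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (Gillespie) step: choose an event index with
    probability proportional to its rate, then draw an exponentially
    distributed waiting time from the total rate."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
    return chosen, dt

# Hypothetical rates: [vacancy generation, recombination, electron trapping].
rng = random.Random(42)
steps = [kmc_step([5.0, 1.0, 1.0], rng) for _ in range(7000)]
frac_generation = sum(1 for c, _ in steps if c == 0) / len(steps)
print(round(frac_generation, 2))  # generation should dominate, near 5/7
```

In the full simulator the rates themselves depend on the local field and temperature, so each step also updates the 3D vacancy configuration and the self-consistent thermal state before the next selection.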
An approach for modelling snowcover ablation and snowmelt runoff in cold region environments
NASA Astrophysics Data System (ADS)
Dornes, Pablo Fernando
Reliable hydrological model simulations are the result of numerous complex interactions among hydrological inputs, landscape properties, and initial conditions. Determination of the effects of these factors is one of the main challenges in hydrological modelling. This situation becomes even more difficult in cold regions due to the ungauged nature of subarctic and arctic environments. This research work is an attempt to apply a new approach for modelling snowcover ablation and snowmelt runoff in complex subarctic environments with limited data while retaining integrity in the process representations. The modelling strategy is based on the incorporation of both detailed process understanding and inputs along with information gained from observations of basin-wide streamflow phenomenon; essentially a combination of deductive and inductive approaches. The study was conducted in the Wolf Creek Research Basin, Yukon Territory, using three models, a small-scale physically based hydrological model, a land surface scheme, and a land surface hydrological model. The spatial representation was based on previous research studies and observations, and was accomplished by incorporating landscape units, defined according to topography and vegetation, as the spatial model elements. Comparisons between distributed and aggregated modelling approaches showed that simulations incorporating distributed initial snowcover and corrected solar radiation were able to properly simulate snowcover ablation and snowmelt runoff whereas the aggregated modelling approaches were unable to represent the differential snowmelt rates and complex snowmelt runoff dynamics. Similarly, the inclusion of spatially distributed information in a land surface scheme clearly improved simulations of snowcover ablation. 
Application of the same modelling approach at a larger scale using the same landscape based parameterisation showed satisfactory results in simulating snowcover ablation and snowmelt runoff with minimal calibration. Verification of this approach in an arctic basin illustrated that landscape based parameters are a feasible regionalisation framework for distributed and physically based models. In summary, the proposed modelling philosophy, based on the combination of an inductive and deductive reasoning, is a suitable strategy for reliable predictions of snowcover ablation and snowmelt runoff in cold regions and complex environments.
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small-magnitude events. Events nucleating on one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model to which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm reproduces typical features of the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes.
Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation.
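The Gutenberg-Richter magnitude distribution that such synthetic catalogs depart from in the upper magnitude range can itself be sampled by inversion of its survival function. A sketch with illustrative parameters (b = 1 and a lower cutoff of M 4.0, matching the catalog's threshold):

```python
import math, random

def gr_magnitudes(n, b, m_min, rng):
    """Inverse-CDF sampling from the Gutenberg-Richter law, whose survival
    function is N(>M)/N = 10^(-b * (M - m_min)) for M >= m_min."""
    return [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]

rng = random.Random(1)
mags = gr_magnitudes(50000, b=1.0, m_min=4.0, rng=rng)
frac_ge_5 = sum(m >= 5.0 for m in mags) / len(mags)
print(round(frac_ge_5, 2))  # should be close to 10**-1 = 0.1
```

Comparing the empirical magnitude-frequency curve of a simulated catalog against such an ideal Gutenberg-Richter sample is one simple way to quantify the departure in the moderate-to-high magnitude range noted above.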
Comparing NEO Search Telescopes
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2016-04-01
Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments—Sentinel, NEOCam, and a CubeSat constellation. The performance is measured against two different NEO distributions, the Bottke et al. distribution of general NEOs, and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison show simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render them inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
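A basic ingredient of such performance estimates is the standard conversion between an asteroid's absolute magnitude H, geometric albedo and effective diameter. A sketch, with an illustrative albedo value:

```python
import math

def neo_diameter_km(h_mag, albedo):
    """Standard absolute-magnitude-to-size conversion for asteroids:
    D = (1329 km / sqrt(p_V)) * 10^(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-h_mag / 5.0)

# An H = 22 object at an assumed geometric albedo p_V = 0.14:
print(round(neo_diameter_km(22.0, 0.14), 3))  # -> 0.141 (about 140 m)
```

Because diameter scales as 10^(-H/5), a 5-magnitude decrease in H corresponds to a tenfold larger object, which is why survey completeness is usually quoted at fixed H thresholds such as H = 22.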
Gao, Changwei; Liu, Xiaoming; Chen, Hai
2017-08-22
This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during transitions. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. An adjustable parameter of the compensation section can be tuned to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG during transients. To verify the effectiveness of the proposed control strategy for distributed energy resource inverters, a simulation model was set up on the MATLAB/Simulink platform and a physical experimental platform was established. Simulation and experimental results demonstrate the effectiveness of the proposed IVSG control strategy.
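The rotor dynamics a VSG emulates can be sketched with a forward-Euler integration of a common linearized form of the swing equation; the per-unit parameters below are hypothetical, and the paper's feed-forward compensation is not modeled:

```python
def vsg_swing(p_m, p_e, J, D, omega0, dt, steps):
    """Forward-Euler integration of a linearized swing equation:
    J * d(omega)/dt = (P_m - P_e) / omega0 - D * (omega - omega0)."""
    omega = omega0
    trace = []
    for _ in range(steps):
        domega = ((p_m - p_e) / omega0 - D * (omega - omega0)) / J
        omega += domega * dt
        trace.append(omega)
    return trace

# Hypothetical per-unit load step (P_e > P_m): frequency settles below
# omega0 at the droop-determined value omega0 + (P_m - P_e) / (D * omega0).
trace = vsg_swing(p_m=1.0, p_e=1.2, J=2.0, D=10.0,
                  omega0=1.0, dt=0.01, steps=2000)
print(round(trace[-1], 3))  # -> 0.98
```

The virtual inertia J sets how slowly frequency swings after a disturbance while the damping D sets the steady-state droop, which is exactly the trade-off the IVSG's feed-forward term is designed to reshape.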
NASA Astrophysics Data System (ADS)
Zhang, S.; Wang, Y.; Ju, H.
2017-12-01
Interprovincial terrestrial physical geographical entities are key areas for regional integrated management. Based on toponymy dictionaries and different thematic maps, the attributes and spatial extents of the interprovincial terrestrial physical geographical names (ITPGN), including terrain ITPGN and water ITPGN, were extracted. The coefficient of variation and Moran's I were combined to measure the spatial variation and spatial association of ITPGN. The factors influencing the distribution of ITPGN and the implications for regional management were further discussed. The results showed that 11325 ITPGN were extracted, including 7082 terrain ITPGN and 4243 water ITPGN. Hunan Province had the largest number of ITPGN in China, and Shanghai had the smallest. The spatial variance of the terrain ITPGN was larger than that of the water ITPGN, and the ITPGN showed a significant agglomeration phenomenon in the southern part of China. Further analysis showed that the number of ITPGN was positively related to the relative elevation and the population where the relative elevation was lower than 2000 m and the population was less than 50 million, but negatively related to the two factors when their values became larger, indicating that a large number of unnamed entities exist in complex terrain areas and that the number of terrestrial physical geographical entities decreases in densely populated areas. Based on this analysis, we suggest the government take the ITPGN as management units to realize balanced development among the different parts of the entities, and strengthen the census of geographical names and the naming of unnamed interprovincial physical geographical entities. This study also demonstrated that literature survey, the coefficient of variation and Moran's I can be combined to enhance understanding of the spatial pattern of ITPGN.
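Global Moran's I, used here to measure the spatial association of ITPGN, can be computed directly from its definition; the toy adjacency matrix and values below are illustrative, not the study's provincial data:

```python
def morans_i(values, weights):
    """Global Moran's I:
    I = (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where S0 is the sum of all spatial weights w_ij."""
    n = len(values)
    m = sum(values) / n
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * (values[i] - m) * (values[j] - m)
              for i in range(n) for j in range(n))
    den = sum((v - m) ** 2 for v in values)
    return (n / s0) * num / den

# Four cells on a line with rook adjacency; a smooth gradient clusters.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i([1.0, 2.0, 3.0, 4.0], w), 3))  # -> 0.333 (clustered)
```

Values near +1 indicate spatial clustering (the significant agglomeration reported for southern China), values near -1 indicate a checkerboard pattern, and values near the small negative expectation -1/(n-1) indicate spatial randomness.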
NASA Astrophysics Data System (ADS)
Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.
2015-12-01
Most physically based hydrological models simulate, to various extents, the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method), as well as a variety of approximations for representing the physical processes. Although several models have been developed so far, very few inter-comparison studies have checked, beyond streamflows, whether different modeling approaches simulate the other watershed-scale processes in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while representing other processes equally well, such as evapotranspiration, snow accumulation/melt, or infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow), and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model.
These differences should potentially be the object of further analyses to definitively confirm or reject modeling hypotheses.
NASA Astrophysics Data System (ADS)
Le Pichon, C.; Belliard, J.; Talès, E.; Gorges, G.; Clément, F.
2009-12-01
Most of the rivers of the Ile de France region, intimately linked with the megalopolis of Paris, are severely altered, and freshwater fishes are exposed to habitat alteration, reduced connectivity, and pollution. Several species thus present fragmented distributions and decreasing densities. In this context, the European Water Framework Directive (2000) sets goals of hydrosystem rehabilitation and no further damage. In particular, the preservation and restoration of the ecological connectivity of river networks is a key element for fish populations. These goals require the identification of the natural and anthropogenic factors that influence the spatial distribution of species. We have proposed a riverscape approach, based on landscape ecology concepts, combined with a set of spatial analysis methods to assess the multiscale relationships between the spatial pattern of fish habitats and processes depending on fish movements. In particular, we used this approach to test the relative roles of the spatial arrangement of fish habitats and the presence of physical barriers in explaining fish spatial distributions in a small rural watershed (106 km2). We performed a spatially continuous analysis of fish-habitat relationships. Fish habitats and physical barriers were mapped along the river network (33 km) with a GPS and imported into a GIS. In parallel, a longitudinal electrofishing survey of the distribution and abundance of fishes was made using a point abundance sampling scheme. The longitudinal arrangement of fish habitats was evaluated using spatial analysis methods: patch/distance metrics and moving-window analysis. Explanatory models were developed to test the relative contribution of local environmental variables and spatial context in explaining fish presence. We recorded about 100 physical barriers, on average one every 330 meters; most artificial barriers were road pipe culverts, falls associated with ponds, and sluice gates.
Contrasting fish communities and densities were observed in the different areas of the watershed, related to the various land uses (riparian forest or agriculture). The first results of the fish-habitat association analysis on a 5 km stream show that the longitudinal distribution of fish species was mainly impacted by falls associated with ponds. The impact was due both to the barrier effect and to the modification of aquatic habitats. The abundance distributions of Salmo trutta and Cottus gobio were particularly affected. Spatially continuous analysis of fish-habitat relationships allowed us to identify the relative impacts of habitat alteration and of physical barriers to fish movements. These techniques could help prioritize preservation and restoration policies in human-impacted watersheds, in particular by identifying the key physical barriers to remove.
Application of sunlight and lamps for plant irradiation in space bases
NASA Astrophysics Data System (ADS)
Sager, J. C.; Wheeler, R. M.
The radiation sources used for plant growth on a space base must meet the biological requirements for photosynthesis and photomorphogenesis. In addition, the sources must be energy- and volume-efficient while maintaining the required irradiance levels and spectral, spatial, and temporal distributions. These requirements are not easily met, but as the biological and mission requirements are better defined, specific facility designs can begin to accommodate both the biological requirements and the physical limitations of a space-based plant growth system.
Application of sunlight and lamps for plant irradiation in space bases
NASA Technical Reports Server (NTRS)
Sager, J. C.; Wheeler, R. M.
1992-01-01
The radiation sources used for plant growth on a space base must meet the biological requirements for photosynthesis and photomorphogenesis. In addition, the sources must be energy- and volume-efficient while maintaining the required irradiance levels and spectral, spatial, and temporal distributions. These requirements are not easily met, but as the biological and mission requirements are better defined, specific facility designs can begin to accommodate both the biological requirements and the physical limitations of a space-based plant growth system.
Morse, Tim F; Warren, Nicholas; Dillon, Charles; Diva, Ulysses
2007-05-01
Risk factors for upper-extremity musculoskeletal disorders (MSDs) include biomechanical factors (force, repetition, posture) and psychosocial factors (job stress). A population-based telephone survey of workers in Connecticut characterized these risk factors by industry, occupation, gender, and age. Risk factors were highly prevalent in the Connecticut workplace but varied considerably by industry, occupation, gender, and age. Risk factors clustered based on (a) physically active occupations/industries (pushing/pulling, reaching, bent wrists, and tool use), (b) physically passive occupations/industries (static postures, stress, and computer use), and (c) repetitive motion exposures. Physically active patterns had the highest prevalence in construction/agriculture/mining, followed by (in order) wholesale/retail trade, utilities, manufacturing, services, government, and finance/insurance. Physically passive patterns tended to reverse this order, and repetitive motion followed a third pattern. Physically active risk factors were typically higher for males, though this varied by industry and occupation. All risk factors except stress showed a steady decrease with age. Almost 1,000,000 Connecticut workers are estimated to be exposed to repetitive work, bent wrists, and job stress. Workers in high-exposure industries and occupations should be closely evaluated for risks, with outreach to industries promoting preventive ergonomic interventions as preferable to treating conditions after they arise.
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low-probability, high-consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating the risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail-extrapolation risk by integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
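The tail-extrapolation risk described above can be illustrated with a small, deterministic calculation (a sketch, not the report's method): even a normal fit with the *correct* mean and standard deviation badly underestimates the extreme quantile of a heavier-tailed truth, here a Student-t distribution with 3 degrees of freedom chosen purely for illustration.

```python
import math
import statistics

# Stand-in "truth": Student-t with 3 degrees of freedom (heavy tails).
# Its standard deviation is sqrt(df / (df - 2)) = sqrt(3).
true_sd = math.sqrt(3)

# An analyst who fits a normal with the correct mean and standard
# deviation extrapolates the 99.9% quantile as:
z_999 = statistics.NormalDist(0, true_sd).inv_cdf(0.999)

# Tabulated 99.9% quantile of the t distribution with 3 dof:
t_999 = 10.215

print(f"normal extrapolation: {z_999:.2f}")  # about 5.35
print(f"true tail quantile:   {t_999:.2f}")  # about 10.21 -- nearly 2x larger
```

The fitted model matches the bulk of the data perfectly yet misses the 1-in-1000 event by almost a factor of two, which is exactly the risk the project quantifies.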
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high-performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D™). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Cortes Arevalo, Juliette; Alfonso, Leonardo; Wehn, Uta; Norbiato, Daniele; Monego, Martina; Ferri, Michele; Solomatine, Dimitri
2017-04-01
In past years, a number of methods have been proposed to reduce uncertainty in flood prediction by means of model updating techniques. Traditional physical observations are usually integrated into hydrological and hydraulic models to improve model performance and the resulting flood predictions. Nowadays, low-cost sensors can be used for crowdsourced observations. Different types of social sensors can measure, in a more distributed way, physical variables such as precipitation and water level. However, these crowdsourced observations are not integrated in real time into water-system models because of their varying accuracy and random spatial-temporal coverage. We assess the effect on model performance of assimilating crowdsourced water level observations. Our method consists of (1) implementing a Kalman filter in a cascade of hydrological and hydraulic models; (2) defining observation errors depending on the type of sensor, either physical or social, with randomly distributed errors based on accuracy ranges that improve slightly with the citizens' expertise level; and (3) using a simplified social model to realistically represent citizen engagement levels based on population density and citizens' motivation scenarios. To test our method, we synthetically derive crowdsourced observations for different citizen engagement levels from a distributed network of physical and social sensors. The observations are assimilated during a particular flood event that occurred in the Bacchiglione catchment, Italy. The results of this study demonstrate that sharing crowdsourced water level observations (often motivated by a feeling of belonging to a community of friends) can help improve flood prediction. Likewise, a growing participation of individual citizens or weather enthusiasts sharing hydrological observations in cities can help improve model performance.
This study is a first step toward assessing the effects of crowdsourced observations on flood model predictions. Effective communication and feedback about the quality of observations, from water authorities to engaged citizens, are further required to compensate for the intrinsically variable accuracy of such observations.
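The sensor-dependent assimilation idea in step (2) above reduces, in its simplest scalar form, to a Kalman update whose observation-error variance depends on the sensor type. The sketch below is a minimal illustration; the error variances and water levels are hypothetical, not the paper's calibrated values.

```python
def kalman_update(x_prior, p_prior, z, r):
    """One scalar Kalman update: state x (e.g. water level), its variance p,
    observation z with error variance r."""
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior
    return x_post, p_post

# Hypothetical error variances: physical gauges are accurate; crowdsourced
# readings less so, improving with the citizen's expertise level.
R = {"physical": 0.01, "citizen_expert": 0.09, "citizen_novice": 0.25}

x, p = 2.0, 0.5                          # forecast water level (m) and variance
for sensor, z in [("physical", 2.3), ("citizen_novice", 2.6)]:
    x, p = kalman_update(x, p, z, R[sensor])
print(round(x, 3), round(p, 4))
```

The accurate physical gauge pulls the state strongly toward its reading, while the noisy crowdsourced reading nudges it only slightly, which is precisely how varying accuracy is handled in the assimilation.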
Quasar Spectral Energy Distributions As A Function Of Physical Property
NASA Astrophysics Data System (ADS)
Townsend, Shonda; Ganguly, R.; Stark, M. A.; Derseweh, J. A.; Richmond, J. M.
2012-05-01
Galaxy evolution models have shown that quasars are a crucial ingredient in the evolution of massive galaxies. Outflows play a key role in the story of quasars and their host galaxies by helping regulate the accretion process, the star-formation rate, and the mass of the host galaxy (i.e., feedback). The prescription for modeling outflows as a contributor to feedback requires knowledge of the outflow velocity, geometry, and column density. In particular, we need to understand how these depend on physical parameters and how much is determined stochastically (and with what distribution). In turn, models of outflows have shown particular sensitivity to the shape of the spectral energy distribution (SED), depending on the UV luminosity to transfer momentum to the gas, the X-ray luminosity to regulate how efficient that transfer can be, etc. To investigate how the SED changes with physical properties, we follow up on Richards et al. (2006), who constructed SEDs with varying luminosity. Here, we construct SEDs as a function of redshift and physical properties (black hole mass, bolometric luminosity, Eddington ratio) for volume-limited samples drawn from the Sloan Digital Sky Survey, with photometry supplemented from 2MASS, WISE, GALEX, ROSAT, and Chandra. To estimate black hole masses, we adopt the scaling relations from Greene & Ho (2005) based on the H-alpha emission line FWHM, which requires redshifts less than 0.4. To construct volume-limited subsamples, we begin by adopting g=19.8 as a nominal limiting magnitude down to which we are guaranteed to detect z<0.4 quasars. At redshift 0.4, we are complete down to Mg=-21.8, which yields 3300 objects from Data Release 7. At z=0.1, we are complete down to Mg=-18.5. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. 09-ADP09-0016 issued through the Astrophysics Data Analysis Program.
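The completeness limits quoted above follow from a distance-modulus calculation. The sketch below assumes a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 and ignores K-corrections (which a real analysis would include), so it only approximately reproduces the quoted Mg = -21.8 at z = 0.4.

```python
import math

H0, OM, C = 70.0, 0.3, 299792.458  # km/s/Mpc, matter density, c in km/s

def lum_distance_mpc(z, steps=1000):
    """Luminosity distance in flat LCDM via a simple trapezoid integral."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OM * (1 + zi) ** 3 + (1 - OM))
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / e * dz
    return (1 + z) * (C / H0) * integral

def abs_mag(m_apparent, z):
    """Absolute magnitude from apparent magnitude via the distance modulus."""
    dl_pc = lum_distance_mpc(z) * 1e6
    return m_apparent - 5 * math.log10(dl_pc / 10.0)

# Survey limit g = 19.8 at z = 0.4 -> completeness in absolute magnitude.
print(round(abs_mag(19.8, 0.4), 2))  # close to the quoted Mg = -21.8
```

Running the same calculation at z = 0.1 gives a much fainter limit (near the quoted -18.5), which is why the lower-redshift subsample reaches less luminous quasars.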
Second Law based definition of passivity/activity of devices
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle M.; Ferry, David K.; Kish, Laszlo B.
2017-10-01
Recently, our efforts to clarify the old question of whether a memristor is a passive or an active device [1] triggered debates between engineers, who have long held advanced definitions of the passivity/activity of devices, and physicists, who hold significantly different views on this seemingly simple question. This debate prompted us to test the well-known engineering concepts of passivity/activity more deeply, challenging them with statistical physics. It is shown that the advanced engineering definition of passivity/activity of devices is self-contradictory when a thermodynamical system exhibiting Johnson-Nyquist noise is present. A new, statistical-physical, self-consistent definition based on the Second Law of Thermodynamics is introduced. It is also shown that, in a system with a uniform temperature distribution, any rectifier circuitry that can rectify thermal noise must contain an active circuit element, according to both the engineering and the statistical-physical definitions.
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn has been widely used in geometrical-optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and scalar ray theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random-surface microfacet theory is presented by combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized, and unpolarized geometrical attenuation functions are given as separate expressions and are validated against experimental data from two samples. The results show that the modified polarized geometrical attenuation function is more physically reasonable, improves the precision of the BRDF model, and widens its applicability to different polarization states.
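For reference, the symmetric V-groove attenuation term that Blinn introduced (the scalar model the paper modifies) has the following well-known textbook form; this sketch is the standard unpolarized expression, not the authors' polarized variant.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    m = math.sqrt(dot(a, a))
    return [x / m for x in a]

def blinn_geometric_attenuation(n, l, v):
    """Blinn's V-groove masking/shadowing term G for unit vectors:
    n = surface normal, l = direction to light, v = direction to viewer."""
    h = normalize([l[i] + v[i] for i in range(3)])   # half vector
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)

# Grazing viewing geometry: masking kicks in and G drops below 1.
n = (0.0, 0.0, 1.0)
l = normalize([0.3, 0.0, 1.0])
v = normalize([0.9, 0.0, 0.2])
print(blinn_geometric_attenuation(n, l, v))
```

At normal incidence (light and viewer along the normal) the term evaluates to exactly 1, i.e. no masking or shadowing; it is this scalar min() construction that the polarized model generalizes.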
A "total parameter estimation" method in the verification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated based on discharge measurements at the catchment outlet. With the advancement of hydrological sciences and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. Because a distributed hydrological model simulates the physical processes within a catchment, it can provide a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August each year; during other months, many of the perennial rivers within the basin dry up. Thus, runoff simulation alone does not fully exploit the distributed hydrological model in arid and semi-arid regions.
This paper proposes a "total parameter estimation" method to verify distributed hydrological models against various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in the hydrological sciences.
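One simple way to operationalize verification against several water-cycle variables at once is a weighted multi-variable Nash-Sutcliffe efficiency. This is a generic sketch in the spirit of the method, not the paper's specific formulation; the variable names, weights, and series are illustrative.

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def total_score(series, weights):
    """Weighted aggregate over several processes (runoff, ET, soil water...)."""
    return sum(weights[k] * nse(sim, obs) for k, (sim, obs) in series.items())

# Illustrative simulated-vs-observed series for three processes.
series = {
    "runoff":     ([3.1, 5.0, 9.8, 4.2], [3.0, 5.2, 10.0, 4.0]),
    "et":         ([1.1, 1.4, 1.9, 1.6], [1.0, 1.5, 2.0, 1.7]),
    "soil_water": ([0.22, 0.25, 0.31, 0.27], [0.20, 0.26, 0.30, 0.28]),
}
weights = {"runoff": 0.5, "et": 0.25, "soil_water": 0.25}
print(round(total_score(series, weights), 3))
```

A model tuned to maximize such a combined score is constrained by all the measured processes simultaneously, rather than by runoff alone.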
NASA Astrophysics Data System (ADS)
Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.
2018-04-01
Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.
Distributed generation of shared RSA keys in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Liu, Yi-Liang; Huang, Qin; Shen, Ying
2005-12-01
Mobile ad hoc networks are a totally new concept in which mobile nodes are able to communicate over wireless links in an independent manner, without fixed physical infrastructure or centralized administrative infrastructure. However, the nature of ad hoc networks makes them very vulnerable to security threats. The generation and distribution of shared keys for a CA (Certification Authority) is challenging for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. Solutions proposed in the literature and some related issues are discussed in this paper, and a scheme for the distributed generation of shared threshold RSA keys for the CA is proposed. During the process of creating an RSA private key share, each CA node holds only its own private share. Distributed arithmetic is used to create the CA's private share locally, eliminating the need for a centralized management institution. By fully considering the self-organizing character of mobile ad hoc networks, the scheme avoids the security risk of any single node holding the complete CA private key, which enhances the security and robustness of the system.
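Threshold schemes of the kind described rest on secret sharing: any t of n shares reconstruct the key, while fewer reveal nothing. The classic Shamir (t, n) construction below illustrates the idea over a small prime field; it is a generic sketch of the underlying primitive, whereas the paper's distributed RSA key generation additionally avoids any dealer ever holding the full key.

```python
import random

P = 2**61 - 1  # a Mersenne prime field, large enough for a demo secret

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial via Horner's rule
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

In the distributed-CA setting, each node signs with its share and partial results are combined, so the complete private key never exists at any single node.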
An educational distributed Cosmic Ray detector network based on ArduSiPM
NASA Astrophysics Data System (ADS)
Bocci, V.; Chiodi, G.; Fresch, P.; Iacoangeli, F.; Recchia, L.
2017-10-01
The advent of high-performance microcontrollers equipped with analog and digital peripherals makes it possible to design a complete particle detector, with its acquisition system, on a single microcontroller chip. The existence of a worldwide data infrastructure such as the Internet allows for the conception of a distributed network of cheap detectors able to elaborate and send data as well as to respond to setting commands. The Internet infrastructure enables the distribution of absolute time, with a precision of a few milliseconds, to all devices independently of their physical location; where a sky view is accessible, a GPS module can be used to reach synchronization of tens of nanoseconds. These devices can be far apart from each other, with relative distances ranging from a few meters to thousands of kilometers. This allows for the design of a crowdsourced citizen-science experiment based on the use of many small scintillation-based particle detectors to monitor high-energy cosmic rays and the radiation environment.
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1975-01-01
The instrument is based on an inverse solution of the equations for light scattered by a transparent medium. Measurements are taken over several angles of incidence rather than over several frequencies, and can be used to simultaneously determine the chemical and physical properties of particles in a mixed gas or liquid.
Digital Talking Books: Planning for the Future.
ERIC Educational Resources Information Center
Cookson, John; Cylke, Frank Kurt; Dixon, Judith; Fistick, Robert E.; Fitzpatrick, Vicki; Kormann, Wells B.; Moodie, Michael M.; Redmond, Linda; Thuronyi, George
This report describes the plans of the National Library Service for the Blind and Physically Handicapped (NLS) to convert their talking books service to a digitally based audio system. The NLS program selects and produces full-length books and magazines in braille and on recorded disc and cassettes and distributes them to a cooperating network of…
Memristive Properties of Thin Film Cuprous Oxide
2011-03-01
Memristive Properties of Thin Film Cuprous Oxide. Thesis, presented to the Faculty, Department of Engineering Physics, Graduate School of ... Brett C. ... Force Base, Ohio. Approved for public release; distribution unlimited. The views expressed in this thesis are those of the ...
Optical disk processing of solar images.
NASA Astrophysics Data System (ADS)
Title, A.; Tarbell, T.
The current generation of space and ground-based experiments in solar physics produces many megabyte-sized image data arrays. Optical disk technology is the leading candidate for convenient analysis, distribution, and archiving of these data. The authors have been developing data analysis procedures which use both analog and digital optical disks for the study of solar phenomena.
E-Business in Education. What You Need To Know: Building Competencies for Tomorrow's Opportunities.
ERIC Educational Resources Information Center
Norris, Donald M.; Olson, Mark A.
This guidebook is based on the belief that e-business applications will transform academia and academic support experiences, with learners participating in distributed learning environments that mix physical and virtual learning resources in many combinations, and it offers insights into the strategies and planning needed to develop a college…
Cyber-Physical Trade-Offs in Distributed Detection Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.
2010-01-01
We consider a network of sensors that measure the scalar intensity due to the background, or to a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on the sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. We show that, at increased communication and computation cost, a more complex fusion algorithm based on the raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios, and a minimum packing number for the state space. We show that these conditions for trade-offs between cyber costs and physical detection performance apply to two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
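The two fusion strategies being contrasted can be sketched side by side. This is a toy Gaussian mean-shift model with hypothetical thresholds and parameters, not the paper's radiation model: decision fusion thresholds each reading locally and takes a majority vote, while measurement fusion forms the joint log-likelihood ratio from the raw readings.

```python
import math
import random

random.seed(1)
SIGMA = 1.0          # measurement noise standard deviation
SOURCE_MEAN = 0.8    # mean shift when a source is present (illustrative)

def majority_fusion(readings, threshold=0.4):
    """Decision fusion: each sensor decides locally; the center takes a majority."""
    votes = sum(1 for r in readings if r > threshold)
    return votes > len(readings) / 2

def measurement_fusion(readings, prior_ratio=1.0):
    """Measurement fusion: the center forms the joint log-likelihood ratio of
    source-present vs. background-only from the raw readings."""
    llr = sum((SOURCE_MEAN * r - SOURCE_MEAN**2 / 2) / SIGMA**2 for r in readings)
    return llr + math.log(prior_ratio) > 0

# One simulated sensor snapshot with a source present.
readings = [random.gauss(SOURCE_MEAN, SIGMA) for _ in range(9)]
print(majority_fusion(readings), measurement_fusion(readings))
```

Measurement fusion needs every raw reading shipped to the center (higher cyber cost), but it weights strong readings more than a binary vote can, which is the performance side of the trade-off the paper analyzes.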
Hydrometeorological Analysis of Flooding Events in San Antonio, TX
NASA Astrophysics Data System (ADS)
Chintalapudi, S.; Sharif, H.; Elhassan, A.
2008-12-01
South Central Texas is particularly vulnerable to floods due to its proximity to a moist air source (the Gulf of Mexico); the Balcones Escarpment, which concentrates rainfall runoff; a tendency for synoptic-scale features to become cut off and stall over the area; and decaying tropical cyclones stalling over the area. The San Antonio metropolitan area includes the 7th largest city in the nation, lies in one of the most flash-flood prone regions in North America, and has experienced a number of flooding events in the last decade (1998, 2002, 2004, and 2007). Research is being conducted to characterize the meteorological conditions that lead to these events and to apply rainfall and watershed characteristics data to recreate the runoff events using a two-dimensional, physically based, distributed-parameter hydrologic model. The physically based, distributed-parameter Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model was used to simulate the watershed response to these storm events. Finally, observed discharges were compared to GSSHA-simulated discharges. Analysis of some of these events will be presented.
Soltanian, Mohamad Reza; Ritzi, Robert W; Dai, Zhenxue; Huang, Chao Cheng
2015-03-01
Physical and chemical heterogeneities have a large impact on reactive transport in porous media. Examples of heterogeneous attributes affecting reactive mass transport are the hydraulic conductivity (K), and the equilibrium sorption distribution coefficient (Kd). This paper uses the Deng et al. (2013) conceptual model for multimodal reactive mineral facies and a Lagrangian-based stochastic theory in order to analyze the reactive solute dispersion in three-dimensional anisotropic heterogeneous porous media with hierarchical organization of reactive minerals. An example based on real field data is used to illustrate the time evolution trends of reactive solute dispersion. The results show that the correlation between the hydraulic conductivity and the equilibrium sorption distribution coefficient does have a significant effect on reactive solute dispersion. The anisotropy ratio does not have a significant effect on reactive solute dispersion. Furthermore, through a sensitivity analysis we investigate the impact of changing the mean, variance, and integral scale of K and Kd on reactive solute dispersion. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Wigner distribution and 2D classical maps
NASA Astrophysics Data System (ADS)
Sakhr, Jamal
2017-07-01
The Wigner spacing distribution has a long and illustrious history in nuclear physics and in the quantum mechanics of classically chaotic systems. In this paper, a novel connection between the Wigner distribution and 2D classical mechanics is introduced. Based on a well-known correspondence between the Wigner distribution and the 2D Poisson point process, the hypothesis that typical pseudo-trajectories of a 2D ergodic map have a Wignerian nearest-neighbor spacing distribution (NNSD) is put forward and numerically tested. The standard Euclidean metric is used to compute the interpoint spacings. In all test cases, the hypothesis is upheld, and the range of validity of the hypothesis appears to be robust in the sense that it is not affected by the presence or absence of: (i) mixing; (ii) time-reversal symmetry; and/or (iii) dissipation.
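The numerical test described above can be reproduced in miniature: iterate a chaotic 2D map, compute the Euclidean nearest-neighbor spacing of each point of the pseudo-trajectory, normalize the spacings to unit mean, and compare against the Wigner surmise P(s) = (π/2) s exp(-π s²/4). The sketch below uses the Chirikov standard map in a strongly chaotic regime as one illustrative, approximately ergodic choice (the paper examines several maps), and compares medians rather than full histograms.

```python
import math

def standard_map_trajectory(n, k=7.0, x=0.1234, y=0.3456):
    """Chirikov standard map on the unit torus, strongly chaotic for large k."""
    pts = []
    for _ in range(n):
        y = (y + k / (2 * math.pi) * math.sin(2 * math.pi * x)) % 1.0
        x = (x + y) % 1.0
        pts.append((x, y))
    return pts

def nn_spacings(pts):
    """Euclidean nearest-neighbor distance of each point (brute force)."""
    out = []
    for i, (xi, yi) in enumerate(pts):
        best = min(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(pts) if j != i)
        out.append(best)
    return out

pts = standard_map_trajectory(400)
raw = nn_spacings(pts)
mean = sum(raw) / len(raw)
s = [v / mean for v in raw]             # unit-mean spacings

# The Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4) has median
# sqrt(4 ln 2 / pi) ~ 0.94; compare it with the empirical median.
wigner_median = math.sqrt(4 * math.log(2) / math.pi)
empirical_median = sorted(s)[len(s) // 2]
print(round(wigner_median, 3), round(empirical_median, 3))
```

If the hypothesis holds for the chosen map, the two medians (and, with more points, the full histograms) should agree up to sampling noise; boundary effects of using the plain Euclidean metric on the unit square add a small bias for short trajectories.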
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-10-01
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
Physical Activity and Body Mass Index
Nelson, Candace C.; Wagner, Gregory R.; Caban-Martinez, Alberto J.; Buxton, Orfeu M.; Kenwood, Christopher T.; Sabbath, Erika L.; Hashimoto, Dean M.; Hopcia, Karen; Allen, Jennifer; Sorensen, Glorian
2014-01-01
Background: The workplace is an important domain for adults, and many effective interventions targeting physical activity and weight reduction have been implemented in the workplace. However, the U.S. workforce is aging, and few studies have examined the relationships among BMI, physical activity, age, and workplace characteristics. Purpose: This paper reports on the distribution of physical activity and BMI by age in a population of hospital-based healthcare workers and investigates the relationships among workplace characteristics, physical activity, and BMI. Methods: Data from a survey of patient care workers in two large academic hospitals in the Boston area were collected in late 2009 and analyzed in early 2013. Results: In multivariate models, workers reporting greater decision latitude (OR=1.02; 95% CI=1.01, 1.03) and job flexibility (OR=1.05; 95% CI=1.01, 1.10) reported greater physical activity. Overweight and obesity increased with age (p<0.01), even after adjusting for workplace characteristics. Sleep deficiency (OR=1.56; 95% CI=1.15, 2.12) and workplace harassment (OR=1.62; 95% CI=1.20, 2.18) were also associated with obesity. Conclusions: These findings underscore the persistent impact of the work environment for workers of all ages. Based on these results, programs or policies aimed at improving the work environment, especially decision latitude, job flexibility, and workplace harassment, should be included in the design of worksite-based health promotion interventions targeting physical activity or obesity. PMID:24512930
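The odds ratios and confidence intervals reported above come from multivariate logistic models. A minimal sketch of the standard conversion from a fitted log-odds coefficient and its standard error to an odds ratio with a Wald 95% confidence interval (the coefficient value below is hypothetical, not taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (a log odds ratio) and its
    standard error into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a 1-unit increase on a decision-latitude scale.
or_, lo, hi = odds_ratio_ci(0.0198, 0.005)
```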
Between disorder and order: A case study of power law
NASA Astrophysics Data System (ADS)
Cao, Yong; Zhao, Youjie; Yue, Xiaoguang; Xiong, Fei; Sun, Yongke; He, Xin; Wang, Lichao
2016-08-01
Power laws are an important signature of long-memory behavior. Zipf famously found a power law in the distribution of word frequencies. In physics, the terms order and disorder originate in thermodynamics and statistical physics, and much research has focused on the self-organization of the disordered ingredients of simple physical systems. What drives the disorder-order transition remains an interesting question. We devise an experiment-based method using random symbolic sequences to search for regular patterns between disorder and order. The experimental results reveal that the power law is indeed an important regularity in the transition from disorder to order, and a preliminary analysis of the reasons behind these results is presented.
2016-01-01
The muriqui (Brachyteles spp.), endemic to the Atlantic Forest of Brazil, is the largest primate in South America and is endangered, mainly due to habitat loss. Its distribution limits are still uncertain and need to be resolved in order to determine its true conservation status. Species distribution modeling (SDM) has been used to estimate potential species distributions, even when information is incomplete. Here, we developed an environmental suitability model for the two endangered species of muriqui (Brachyteles hypoxanthus and B. arachnoides) using Maxent software. Areas with predicted high habitat suitability that were historically never occupied by muriquis were excluded from the predicted historical distribution. Combining that information with the model, it is evident that rivers are potential dispersal barriers for the muriquis. Moreover, although the two species are environmentally separated over a large part of their distributions, there is a potential contact zone where the species apparently do not overlap. This separation might be due to either a physical barrier (i.e., the Serra da Mantiqueira mountains) or a biotic barrier (the species exclude one another). Therefore, in addition to environmental characteristics, physical and biotic barriers potentially shaped the limits of the muriqui historical range. Based on these considerations, we propose an adjustment of their historical distributional limits. Currently, only 7.6% of the predicted historical distribution of B. hypoxanthus and 12.9% of that of B. arachnoides remains forested and able to sustain viable muriqui populations. In addition to measuring habitat loss, we also identified areas of conservation concern where new muriqui populations might be found. PMID:26943910
A pilot study of physical activity and sedentary behavior distribution patterns in older women.
Fortune, Emma; Mundell, Benjamin; Amin, Shreyasee; Kaufman, Kenton
2017-09-01
The study aims were to investigate free-living physical activity and sedentary behavior distribution patterns in a group of older women, and assess the cross-sectional associations with body mass index (BMI). Eleven older women (mean (SD) age: 77 (9) yrs) wore custom-built activity monitors, each containing a tri-axial accelerometer (±16g, 100Hz), on the waist and ankle for lab-based walking trials and 4 days in free-living. Daily active time, step counts, cadence, and sedentary break number were estimated from acceleration data. The sedentary bout length distribution and sedentary time accumulation pattern, using the Gini index, were investigated. Associations of the parameters' total daily values and coefficients of variation (CVs) of their hourly values with BMI were assessed using linear regression. The algorithm demonstrated median sensitivity, positive predictive value, and agreement values >98% and <1% mean error in cadence calculations with video identification during lab trials. Participants' sedentary bouts were found to be power law distributed with 56% of their sedentary time occurring in 20min bouts or longer. Meaningful associations were detectable in the relationships of total active time, step count, sedentary break number and their CVs with BMI. Active time and step counts had moderate negative associations with BMI while sedentary break number had a strong negative association. Active time, step count and sedentary break number CVs also had strong positive associations with BMI. The results highlight the importance of measuring sedentary behavior and suggest a more even distribution of physical activity throughout the day is associated with lower BMI. Copyright © 2017 Elsevier B.V. All rights reserved.
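The Gini index used above to characterize sedentary-time accumulation can be computed directly from a list of bout lengths. A minimal sketch with hypothetical bout data (not the study's recordings): 0 means time is accumulated perfectly evenly across bouts, while values approaching 1 mean a few long bouts dominate.

```python
def gini(values):
    """Gini index of positive bout lengths: mean absolute difference
    between all pairs, normalised by twice the mean."""
    n = len(values)
    mean = sum(values) / n
    total = sum(abs(x - y) for x in values for y in values)
    return total / (2 * n * n * mean)

even_bouts = [20, 20, 20, 20, 20]   # minutes, hypothetical even accumulation
skewed_bouts = [1, 1, 1, 2, 95]     # one long bout dominates the total
```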
37 CFR 385.3 - Royalty rates for making and distributing phonorecords.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR USE OF MUSICAL WORKS UNDER COMPULSORY LICENSE FOR MAKING AND DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads and Ringtones § 385.3 Royalty rates for making and distributing...
Bian, Xun; Shi, Fu-Ming
2015-11-12
One new species of Acosmetura (Orthoptera: Tettigoniidae: Meconematinae) is described from China, namely Acosmetura listrica Bian & Shi, sp. nov. and distinctive characters are illustrated. In addition, a key to the known species with morphological photographs of Acosmetura longicercata Liu, Zhou & Bi, 2008 from Tianmushan, Zhejiang is provided in this paper. Based on the comprehensive physical geographical regionalization, Acosmetura listrica sp. nov. is distributed in Huinan and the middle and lower reaches of Changjiang River, which belongs to the Northern Subtropical Humid Climate Zone.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].
Continuous-variable quantum key distribution protocols over noisy channels.
García-Patrón, Raúl; Cerf, Nicolas J
2009-04-03
A continuous-variable quantum key distribution protocol based on squeezed states and heterodyne detection is introduced and shown to attain higher secret key rates over a noisy line than any other one-way Gaussian protocol. This increased resistance to channel noise can be understood as resulting from purposely adding noise to the signal that is converted into the secret key. This notion of noise-enhanced tolerance to noise also provides a better physical insight into the poorly understood discrepancies between the previously defined families of Gaussian protocols.
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several soil loss models and empirical equations have been developed for the size distribution estimation of the sediment leaving the field, including the physically-based models and empirical equations. Usually, physically-based models require a large amount of data, sometimes exceeding the amount of available data in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD. These variables were the particle size distribution in the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r 2 =0.98, 0.97, and 0.86; for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r 2 =0.74, 0.85, and 0.75; for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models. The network presented better performance in predicting sediment PSD and differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting PSD in eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
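The study's calibrated network is not reproduced here; as an illustration of the general approach, the sketch below trains a toy one-hidden-layer network on synthetic "events". A softmax output layer guarantees the predicted sand/silt/clay fractions are non-negative and sum to one. The input variables and the fines-enrichment rule generating the targets are invented for the example:

```python
import math, random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class TinyPSDNet:
    """One-hidden-layer network; softmax output keeps the three predicted
    sediment fractions on the probability simplex."""
    def __init__(self, n_in=4, n_hid=6, n_out=3, lr=0.2):
        rnd = lambda: random.uniform(-0.5, 0.5)
        self.w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [[rnd() for _ in range(n_hid)] for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        self.h = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in self.w1]
        return softmax([sum(w * v for w, v in zip(row, self.h)) for row in self.w2])

    def train_step(self, x, t):
        y = self.forward(x)
        dz = [yi - ti for yi, ti in zip(y, t)]   # softmax + cross-entropy gradient
        dh = [sum(dz[o] * self.w2[o][j] for o in range(len(dz)))
              * (1 - self.h[j] ** 2) for j in range(len(self.h))]
        for o in range(len(dz)):
            for j in range(len(self.h)):
                self.w2[o][j] -= self.lr * dz[o] * self.h[j]
        for j in range(len(self.h)):
            for i in range(len(x)):
                self.w1[j][i] -= self.lr * dh[j] * x[i]
        return -sum(ti * math.log(yi + 1e-12) for ti, yi in zip(t, y))

def make_event():
    """Synthetic event: inputs = soil-matrix fractions + moisture; target =
    sediment fractions enriched in fines (an invented enrichment rule)."""
    sand = random.uniform(0.2, 0.7)
    clay = random.uniform(0.05, min(0.4, 0.95 - sand))
    silt = 1 - sand - clay
    moist = random.uniform(0, 1)
    t = softmax([math.log(sand * 0.6), math.log(silt * 1.2), math.log(clay * 1.4)])
    return [sand, silt, clay, moist], t

data = [make_event() for _ in range(40)]
net = TinyPSDNet()
losses = []
for epoch in range(300):
    losses.append(sum(net.train_step(x, t) for x, t in data) / len(data))
pred = net.forward(data[0][0])
```

The simplex-constrained output is one design choice worth noting: unconstrained regression of three fractions can produce negative values or totals far from 100%.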
NASA Astrophysics Data System (ADS)
Rafkin, Scot C. R.; Soto, Alejandro; Michaels, Timothy I.
2016-10-01
A newly developed general circulation model (GCM) for Pluto is used to investigate the impact of a heterogeneous distribution of nitrogen surface ice and large-scale topography on Pluto's atmospheric circulation. The GCM is based on the GFDL Flexible Modeling System (FMS). The physics package includes a gray-model radiative-conductive scheme, subsurface conduction, and a nitrogen volatile cycle. The radiative-conductive model takes into account the 2.3, 3.3 and 7.8 μm bands of CH4 and CO, including non-local thermodynamic equilibrium effects. The nitrogen volatile cycle is based on an assumption of vapor pressure equilibrium between the atmosphere and surface. Prior to the arrival of the New Horizons spacecraft, the expectation was that the volatile ice distribution on the surface of Pluto would be strongly controlled by the latitudinal temperature gradient. If this were the case, then Pluto would have broad latitudinal bands of both ice-covered and ice-free surface, as dictated by the season. Further, the circulation, and thus the transport of volatiles, was thought to be driven almost exclusively by sublimation and deposition flows associated with the volatile cycle. In contrast to expectations, images from New Horizons showed an extremely complex, heterogeneous distribution of surface ices draped over substantial and variable topography. To produce such an ice distribution, the atmospheric circulation and volatile transport must be more complex than previously envisioned. Simulations in which topography, surface ice distributions, and volatile-cycle physics are added individually and in various combinations are used to quantify the importance of the general circulation, topography, surface ice distributions, and condensation flows.
It is shown that even regional patches of ice or large craters can have global impacts on the atmospheric circulation, the volatile cycle, and hence, the distribution of surface ices. The work demonstrates that explaining Pluto's volatile cycle and the expression of that cycle in the surface ice distributions requires consideration of atmospheric processes beyond simple vapor pressure equilibrium arguments.
A journey into medical physics as viewed by a physicist
NASA Astrophysics Data System (ADS)
Gueye, Paul
2007-03-01
The world of physics is usually linked to a large variety of subjects spanning astrophysics, nuclear and high-energy physics, materials and optical sciences, plasma physics, etc. Less is known about the exciting world of medical physics, which includes radiation therapy physics, medical diagnostic and imaging physics, nuclear medicine physics, and medical radiation safety. These physicists are typically based in hospital departments of radiation oncology or radiology and provide technical support for patient diagnosis and treatment in a clinical environment. This talk will focus on providing a bridge between selected areas of physics and their medical applications. The journey will start from our understanding of high-energy beam production and transport beamlines for external-beam treatment of diseases (e.g., electron, gamma, X-ray and proton machines) as they relate to accelerator physics. We will then embrace the world of nuclear and high-energy physics, where detector development provides a unique tool for understanding the low-energy beam distributions emitted from radioactive sources used in the brachytherapy treatment modality. Because the ultimate goal of radiation-based therapy is its killing power on tumor cells, the next topic will be microdosimetry, where the responses of biological systems can be studied via electromagnetic systems. Finally, the imaging world will be embraced using tools heavily used in plasma physics, fluid mechanics, and Monte Carlo simulations. These various scientific areas provide unique opportunities for faculty and students at universities, as well as for staff from research centers and laboratories, to contribute to this field. We will conclude with the educational training related to medical physics programs.
The Practicality of Statistical Physics Handout Based on KKNI and the Constructivist Approach
NASA Astrophysics Data System (ADS)
Sari, S. Y.; Afrizon, R.
2018-04-01
Statistical physics lectures show that: 1) the performance of lecturers, the social climate, and students' competence and workplace soft skills are in the adequate category; 2) students find statistical physics lectures difficult to follow because the material is abstract; 3) 40.72% of students need reinforcement in the form of repetition, practice questions, and structured tasks; and 4) the depth of the statistical physics material needs to be improved gradually and in a structured way. This indicates that learning materials aligned with the Indonesian National Qualification Framework (Kerangka Kualifikasi Nasional Indonesia, KKNI) and an appropriate learning approach are needed to help lecturers and students. The authors have designed statistical physics handouts that meet the "very valid" criterion (90.89%) according to expert judgment. In addition, the practicality of the handouts must also be considered so that they are easy to use, interesting, and efficient in lectures. The purpose of this research is to determine the practicality of a statistical physics handout based on KKNI and a constructivist approach. This research is part of a research-and-development effort using the 4-D model developed by Thiagarajan and has reached the development-testing portion of the Development stage. Data were collected with a questionnaire distributed to lecturers and students and analyzed with descriptive techniques in the form of percentages. The analysis of the questionnaire shows that the statistical physics handout meets the "very practical" criterion. The conclusion of this study is that statistical physics handouts based on KKNI and a constructivist approach are practical for use in lectures.
Qin, Yuan; Michalowski, Andreas; Weber, Rudolf; Yang, Sen; Graf, Thomas; Ni, Xiaowu
2012-11-19
Ray-tracing is the commonly used technique to calculate the absorption of light in laser deep-penetration welding or drilling. Since new lasers with high brilliance enable small capillaries with high aspect ratios, diffraction might become important. To examine the applicability of the ray-tracing method, we studied the total absorptance and the absorbed intensity of polarized beams in several capillary geometries. The ray-tracing results are compared with more sophisticated simulations based on physical optics. The comparison shows that simple ray-tracing is applicable for calculating the total absorptance in triangular grooves and in conical capillaries, but not in rectangular grooves. For the distribution of the absorbed intensity, ray-tracing fails because it neglects interference, diffraction, and the effects of beam propagation in capillaries with sub-wavelength diameter. If diffraction is avoided, e.g., with beams smaller than the entrance pupil of the capillary or with very shallow capillaries, the distribution of the absorbed intensity calculated by ray-tracing corresponds to the local average of the interference pattern found by physical optics.
Interseismic Coupling-Based Earthquake and Tsunami Scenarios for the Nankai Trough
NASA Astrophysics Data System (ADS)
Baranes, H.; Woodruff, J. D.; Loveless, J. P.; Hyodo, M.
2018-04-01
Theoretical modeling and investigations of recent subduction zone earthquakes show that geodetic estimates of interseismic coupling and the spatial distribution of coseismic rupture are correlated. However, the utility of contemporary coupling in guiding construction of rupture scenarios has not been evaluated on the world's most hazardous faults. Here we demonstrate methods for scaling coupling to slip to create rupture models for southwestern Japan's Nankai Trough. Results show that coupling-based models produce distributions of ground surface deformation and tsunami inundation that are similar to historical and geologic records of the largest known Nankai earthquake in CE 1707 and to an independent, quasi-dynamic rupture model. Notably, these models and records all support focused subsidence around western Shikoku that makes the region particularly vulnerable to flooding. Results imply that contemporary coupling mirrors the slip distribution of a full-margin, 1707-type rupture, and Global Positioning System measurements of surface motion are connected with the trough's physical characteristics.
Practical private database queries based on a quantum-key-distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobi, Markus; Humboldt-Universitaet zu Berlin, D-10117 Berlin; Simon, Christoph
2011-02-15
Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.
Riding the Right Wavelet: Quantifying Scale Transitions in Fractured Rocks
NASA Astrophysics Data System (ADS)
Rizzo, Roberto E.; Healy, David; Farrell, Natalie J.; Heap, Michael J.
2017-12-01
The mechanics of brittle failure is a well-described multiscale process that involves a rapid transition from distributed microcracks to localization along a single macroscopic rupture plane. However, considerable uncertainty exists regarding both the length scale at which this transition occurs and the underlying causes that prompt this shift from a distributed to a localized assemblage of cracks or fractures. For the first time, we used an image analysis tool, based on a two-dimensional continuous wavelet analysis, developed to investigate orientation changes at different scales in images of fracture patterns in faulted materials. We detected the abrupt change in the fracture pattern from distributed tensile microcracks to localized shear failure in a fracture network produced by triaxial deformation of a sandstone core plug. The presented method will contribute to our ability to unravel the physical processes underlying catastrophic rock failure, including the nucleation of earthquakes, landslides, and volcanic eruptions.
BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
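Bayesian model comparison via evidence, as performed here for stellar population models, can be illustrated on a toy problem. The sketch below uses entirely hypothetical data and models, far simpler than BayeSED's SED fits: it compares the evidence for a one-parameter sloped model against a zero-parameter flat model by direct grid marginalization over a flat prior, where the prior-width factor plays the role of the Occam penalty.

```python
import math, random

random.seed(1)

# Mock data drawn from a sloped model (slope and noise level are invented).
xs = [0.1 * i for i in range(10)]
true_slope, sigma = 1.5, 0.2
ys = [true_slope * x + random.gauss(0, sigma) for x in xs]

def log_likelihood(slope):
    """Gaussian log-likelihood of the data under the model y = slope * x."""
    return sum(-0.5 * ((y - slope * x) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in zip(xs, ys))

def log_evidence_sloped(lo=-3.0, hi=3.0, n=600):
    """Marginalise the slope over a flat prior on [lo, hi] by grid summation;
    the 1/(hi - lo) factor is the Occam penalty for the extra parameter."""
    step = (hi - lo) / n
    lls = [log_likelihood(lo + (i + 0.5) * step) for i in range(n)]
    m = max(lls)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(ll - m) for ll in lls) * step / (hi - lo))

log_z_flat = log_likelihood(0.0)   # zero-parameter model: y = 0
log_bayes_factor = log_evidence_sloped() - log_z_flat
```

A positive log Bayes factor favors the sloped model despite its Occam penalty, which is the same logic by which one stellar population model can accrue greater evidence than another.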
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Kuykendoll, K.; Rhew, R.; Jones, S.
2004-01-01
This paper describes the avian wing geometry (Seagull, Merganser, Teal and Owl) extracted from non-contact surface measurements using a three-dimensional laser scanner. The geometric quantities, including the camber line and thickness distribution of airfoil, wing planform, chord distribution, and twist distribution, are given in convenient analytical expressions. Thus, the avian wing surfaces can be generated and the wing kinematics can be simulated. The aerodynamic characteristics of avian airfoils in steady inviscid flows are briefly discussed. The avian wing kinematics is recovered from videos of three level-flying birds (Crane, Seagull and Goose) based on a two-jointed arm model. A flapping seagull wing in the 3D physical space is re-constructed from the extracted wing geometry and kinematics.
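The abstract notes that the camber line and thickness distribution are given in convenient analytical expressions, from which wing surfaces can be generated. A hedged sketch of that composition step, using a generic parabolic camber line and the NACA four-digit thickness law as stand-ins for the paper's avian expressions:

```python
import math

def camber(x, m=0.04):
    """Toy parabolic camber line with maximum camber m at mid-chord
    (a stand-in, not the paper's avian camber expressions)."""
    return 4 * m * x * (1 - x)

def thickness(x, t=0.12):
    """NACA four-digit thickness law, scaled to full thickness (chord units)."""
    return 10 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x ** 2
                     + 0.2843 * x ** 3 - 0.1015 * x ** 4)

def surfaces(n=50):
    """Thin-airfoil composition: surface = camber line +/- half thickness
    (the offset normal to the camber line is neglected for simplicity)."""
    xs = [i / (n - 1) for i in range(n)]
    upper = [camber(x) + 0.5 * thickness(x) for x in xs]
    lower = [camber(x) - 0.5 * thickness(x) for x in xs]
    return xs, upper, lower

xs, upper, lower = surfaces()
```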
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography, founded on the laws of physics, could revolutionize the way communication is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication available in real-world conditions. However, the alternative approach implemented with continuous variables has not yet reached secure distances beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the road to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.
Open star clusters and Galactic structure
NASA Astrophysics Data System (ADS)
Joshi, Yogesh C.
2018-04-01
In order to understand Galactic structure, we perform a statistical analysis of the distribution of various cluster parameters based on the most complete sample of Galactic open clusters available to date. The geometrical and physical characteristics of the large number of open clusters in the MWSC catalogue are used to study the spatial distribution of clusters in the Galaxy and to determine the scale height, solar offset, local mass density, and distribution of reddening material in the solar neighbourhood. We also explore the mass-radius and mass-age relations of Galactic open star clusters. We find that the estimated parameters of the Galactic disk are strongly influenced by the choice of cluster sample.
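As an illustration of the kind of disk-structure estimate mentioned above: for a double-exponential vertical profile n(z) ~ exp(-|z - z0| / h), the maximum-likelihood scale height given the midplane offset z0 is simply the mean absolute vertical distance. A sketch on mock data (the adopted h = 60 pc and z0 = -7 pc are invented for the example, not the paper's results):

```python
import random

random.seed(2)

def sample_exponential_disk(n, h=60.0, z0=-7.0):
    """Mock vertical positions (pc) drawn from n(z) ~ exp(-|z - z0| / h)."""
    out = []
    for _ in range(n):
        sign = 1 if random.random() < 0.5 else -1
        out.append(z0 + sign * random.expovariate(1.0 / h))
    return out

def scale_height_mle(zs, z0=-7.0):
    """For a double-exponential profile with known midplane offset z0,
    the maximum-likelihood scale height is the mean of |z - z0|."""
    return sum(abs(z - z0) for z in zs) / len(zs)

zs = sample_exponential_disk(3000)
h_hat = scale_height_mle(zs)
```

In practice the offset z0 (the solar offset, after shifting to heliocentric coordinates) is fitted jointly with h, and sample selection changes the answer, which is the sensitivity the abstract emphasizes.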
Research and proposal on selective catalytic reduction reactor optimization for industrial boiler.
Yang, Yiming; Li, Jian; He, Hong
2017-08-24
The computational fluid dynamics (CFD) software STAR-CCM+ was used to simulate a denitrification (De-NOx) project for a boiler, and the simulation results were verified against a physical model. Two selective catalytic reduction (SCR) reactors were developed: reactor 1 was optimized, and reactor 2 was developed based on reactor 1. Various indicators, including the gas flow field, ammonia concentration distribution, temperature distribution, gas incident angle, and system pressure drop, were analyzed. The analysis indicated that reactor 2 performed outstandingly and could greatly simplify development. The ammonia injection grid (AIG), the core component of the reactor, was studied; three AIGs were developed and their performances compared and analyzed. The results indicated that AIG 3 performed best. Technical indicators were proposed for the SCR reactor based on this study. The flow field distribution, gas incident angle, and temperature distribution depend to a great extent on the SCR reactor shape, and reactor 2 proposed in this paper performed outstandingly; the ammonia concentration distribution depends on the AIG shape, and AIG 3 could meet the technical indicator for ammonia concentration without mounting an ammonia mixer. The developments above on the reactor and the AIG are both of great application value and social benefit.
Energy and enthalpy distribution functions for a few physical systems.
Wu, K L; Wei, J H; Lai, S K; Okabe, Y
2007-08-02
The present work is devoted to extracting the energy or enthalpy distribution function of a physical system from the moments of the distribution using the maximum entropy method. A salient trait of this distribution theory is that it utilizes only experimental thermodynamic data. The calculated distribution functions provide invaluable insight into the state or phase behavior of the physical systems under study. As concrete evidence, we demonstrate the elegance of the distribution theory by studying first a test case of a two-dimensional six-state Potts model, for which simulation results are available for comparison; then the biphasic behavior of the binary alloy Na-K, whose excess heat capacity, experimentally observed to fall in a narrow temperature range, has yet to be clarified theoretically; and finally, the thermally induced state behavior of a collection of 16 proteins.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers, or only at high expense; this seriously restricts research and application. Current parallel methods mostly parallelize in the space and time dimensions, computing the natural features of the distributed hydrological model in order, grid by grid (unit by unit, basin by basin), from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and high parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible: it makes full use of the available computing and storage resources under limited-resource conditions, and computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
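The upstream-to-downstream ordering described above is the key constraint any parallel scheme must respect: sub-basins at the same "level" of the drainage network have no mutual dependencies and can be computed concurrently. A minimal sketch of that idea follows; the toy topology and runoff numbers are assumptions, and threads stand in for what a production system would do with distributed processes:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical drainage topology: each sub-basin lists its upstream neighbours.
UPSTREAM = {
    "A": [], "B": [],     # headwater sub-basins
    "C": ["A", "B"],      # C receives routed flow from A and B
    "D": ["C"],           # basin outlet
}

# Toy local runoff generated inside each sub-basin (illustrative numbers).
LOCAL = {"A": 2.0, "B": 1.5, "C": 0.5, "D": 0.25}

def topological_levels(upstream):
    """Group sub-basins into levels; each level depends only on earlier ones."""
    done, levels = set(), []
    while len(done) < len(upstream):
        level = sorted(b for b in upstream
                       if b not in done and all(u in done for u in upstream[b]))
        levels.append(level)
        done.update(level)
    return levels

def simulate(upstream):
    """Compute outflow level by level; basins within a level run in parallel.
    Threads keep the sketch simple; a real system would use processes or MPI."""
    outflow = {}
    for level in topological_levels(upstream):
        def run(basin):
            inflow = sum(outflow[u] for u in upstream[basin])
            return LOCAL[basin] + inflow
        with ThreadPoolExecutor() as pool:
            for basin, q in zip(level, pool.map(run, level)):
                outflow[basin] = q
    return outflow
```

Here A and B run concurrently in the first level, then C, then D, which is exactly the space-dimension parallelism the abstract contrasts with purely sequential upstream-to-downstream computation.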
Energy reconstruction in the long-baseline neutrino experiment.
Mosel, U; Lalakulich, O; Gallmeister, K
2014-04-18
The Long-Baseline Neutrino Experiment aims at measuring fundamental physical parameters to high precision and exploring physics beyond the standard model. Nuclear targets introduce complications towards that aim. We investigate the uncertainties in the energy reconstruction, based on quasielastic scattering relations, due to nuclear effects. The reconstructed event distributions as a function of energy tend to be smeared out and shifted by several hundred MeV in their oscillatory structure if standard event selection is used. We show that a more restrictive experimental event selection offers the possibility of reaching the accuracy needed for a determination of the mass ordering and the CP-violating phase. Quasielastic-based energy reconstruction could thus be a viable alternative to calorimetric reconstruction, also at higher energies.
Low energy physical activity recognition system on smartphones.
Soria Morillo, Luis Miguel; Gonzalez-Abril, Luis; Ortega Ramirez, Juan Antonio; de la Concepcion, Miguel Angel Alvarez
2015-03-03
An innovative approach to physical activity recognition based on the use of discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of activities performed by users using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ2 distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system presented reduce energy consumption caused by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy.
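The energy argument above rests on replacing continuous accelerometer streams with discrete symbols as early as possible, so the on-phone classifier works on cheap integer features. The following sketch shows that general idea with fixed equal-width bins; the paper's own cut points come from its χ2-based discretization, which is not reproduced here, so the bin edges below are assumptions:

```python
import math

def magnitude(ax, ay, az):
    """Resultant acceleration, orientation-independent."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def discretize(value, lo=0.0, hi=3.0, bins=8):
    """Map a continuous reading to one of `bins` integer symbols.
    Working on symbols instead of floats is what keeps the phone-side
    classifier cheap; these equal-width edges are illustrative, not the
    paper's chi-squared-derived cut points."""
    if value <= lo:
        return 0
    if value >= hi:
        return bins - 1
    return int((value - lo) / (hi - lo) * bins)

def symbol_histogram(samples, bins=8):
    """A window of symbols becomes a small histogram-like feature vector."""
    hist = [0] * bins
    for ax, ay, az in samples:
        hist[discretize(magnitude(ax, ay, az))] += 1
    return hist
```

A window of raw triaxial samples is thus reduced to a fixed-size integer vector before any classification happens, which is where the energy saving comes from.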
[Advance in researches on the effect of forest on hydrological process].
Zhang, Zhiqiang; Yu, Xinxiao; Zhao, Yutao; Qin, Yongsheng
2003-01-01
According to the effects of forest on hydrological processes, forest hydrology can be divided into three related aspects: experimental research on the effects of forest change on water yield and water quality; mechanistic study of the effects of forest change on the hydrological cycle; and the establishment and exploitation of physically based distributed forest hydrological models for resource management and engineering construction. Long-term field experiments not only supply first-hand data for forest hydrological models, but also clarify precipitation-runoff mechanisms. Research on runoff mechanisms is valuable for the development and improvement of physically based hydrological models; in turn, such models can guide experimental and runoff-mechanism research. A review of these three aspects is presented in this paper.
Vucinić-Milanković, Nada; Savić, Snezana; Vuleta, Gordana; Vucinić, Slavica
2007-03-01
Two sugar-based emulsifiers, cetearyl alcohol & cetearyl glycoside and sorbitan stearate & sucrose cocoate, known as potential promoters of lamellar liquid crystal/gel phases, were investigated in order to formulate an optimal vehicle for an amphiphilic drug, diclofenac diethylamine (DDA). Physico-chemical characterization and physical stability studies of the vehicles were performed. Then, the in vitro DDA liberation profile, dependent on the mode of drug incorporation into the system, and the in vivo short-term effects of chosen samples on skin parameters were examined. Droplet size distribution and rheological behavior indicated satisfactory physical stability of both types of vehicles. Unexpectedly, the manner of DDA incorporation into the system had no significant influence on DDA release. The in vivo study pointed to the emulsions' favorable potential for skin hydration and barrier improvement, particularly for the cetearyl glycoside-based vehicle.
Origin and transport of high energy particles in the galaxy
NASA Technical Reports Server (NTRS)
Wefel, John P.
1987-01-01
The origin, confinement, and transport of cosmic ray nuclei in the galaxy were studied. The work involves interpretations of the existing cosmic ray physics database derived from both balloon and satellite measurements, combined with an effort directed towards defining the next generation of instruments for the study of cosmic radiation. The shape and energy dependence of the cosmic ray pathlength distribution in the galaxy were studied, demonstrating that the leaky box model is not a good representation of the detailed particle transport over the energy range covered by the database. Alternative confinement models were investigated, analyzing the confinement lifetime in these models based upon the available data for radioactive secondary isotopes. The source abundances of several isotopes were studied using compiled nuclear physics data and the detailed transport calculations. The effects of distributed particle acceleration on the secondary-to-primary ratios were investigated.
Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Stevens, Andrew J.; Lian, Jianming
For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.
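A toy version of the calibration loop helps make the idea concrete: treat the mean floor area as the unknown distribution parameter, simulate the aggregate demand it implies, and random-walk the parameter toward agreement with the observation. The linear demand model, house count, and all numbers below are assumptions for illustration, not the house models or procedure used in the paper:

```python
import random

random.seed(7)

# Toy forward model (assumption): aggregate AC power scales with the total
# floor area of a simulated population, so the observable constrains the
# *distribution* mean rather than any individual house.
N_HOUSES, WATTS_PER_SQFT = 500, 4.0

def aggregate_demand(mean_area, sd_area=300.0):
    areas = (random.gauss(mean_area, sd_area) for _ in range(N_HOUSES))
    return WATTS_PER_SQFT * sum(areas)

def reverse_mc(observed, n_iter=2000, step=50.0, p_escape=0.02):
    """Random-walk search over the distribution parameter (mean floor area),
    accepting any proposal that matches the observed aggregate better, plus
    occasional uphill moves to avoid getting stuck."""
    mu = 1000.0                                   # deliberately poor start
    err = abs(aggregate_demand(mu) - observed)
    for _ in range(n_iter):
        cand = mu + random.uniform(-step, step)
        cand_err = abs(aggregate_demand(cand) - observed)
        if cand_err < err or random.random() < p_escape:
            mu, err = cand, cand_err
    return mu

observed = aggregate_demand(2000.0)               # hidden "true" mean
estimate = reverse_mc(observed)
```

Even with sampling noise in the forward model, the search recovers a mean floor area close to the hidden value, which is the essence of fitting distribution parameters to an aggregate observable.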
High-performance super capacitors based on activated anthracite with controlled porosity
NASA Astrophysics Data System (ADS)
Lee, Hyun-Chul; Byamba-Ochir, Narandalai; Shim, Wang-Geun; Balathanigaimani, M. S.; Moon, Hee
2015-02-01
Mongolian anthracite is chemically activated using potassium hydroxide as the activation agent to make activated carbon materials. Prior to chemical activation, the chemical agent is introduced by two different methods: (1) simple physical mixing; (2) impregnation. Physical properties such as specific surface area, pore volume, pore size distribution, and adsorption energy distribution are measured to assess the materials as carbon electrodes for electric double-layer capacitors (EDLC). The surface functional groups and morphology are also characterized by X-ray photoelectron spectroscopy (XPS) and transmission electron microscopy (TEM) analyses, respectively. The electrochemical results for the activated carbon electrodes in 3 M sulfuric acid electrolyte indicate that the activated Mongolian anthracite has relatively large specific capacitances in the range of 120-238 F g-1 and very high electrochemical stability, retaining more than 98% of the initial capacitance over 1000 charge/discharge cycles.
2013-01-01
Background As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning. Results Using a combination of novel high throughput marker assays and an assembly program, we developed a high quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome results from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome. Conclusions Here, we demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011
NASA Astrophysics Data System (ADS)
Vallianatos, F.; Tzanis, A.; Michas, G.; Papadakis, G.
2012-04-01
Since the middle of summer 2011, an increase in the seismicity rates of the volcanic complex system of Santorini Island, Greece, has been observed. In the present work, the temporal distribution of seismicity, as well as the magnitude distribution of earthquakes, has been studied using the concept of Non-Extensive Statistical Physics (NESP; Tsallis, 2009) along with the evolution of the Shannon entropy H (also called information entropy). The analysis is based on the earthquake catalogue of the Geodynamic Institute of the National Observatory of Athens for the period July 2011-January 2012 (http://www.gein.noa.gr/). Non-Extensive Statistical Physics, which is a generalization of Boltzmann-Gibbs statistical physics, seems a suitable framework for studying complex systems. The observed distributions of seismicity rates at Santorini can be fitted exceptionally well with NESP models. This implies the inherent complexity of the Santorini volcanic seismicity, the applicability of NESP concepts to volcanic earthquake activity, and the usefulness of NESP in investigating phenomena exhibiting multifractality and long-range coupling effects. Acknowledgments. This work was supported in part by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project entitled "Integrated understanding of Seismicity, using innovative Methodologies of Fracture mechanics along with Earthquake and non extensive statistical physics - Application to the geodynamic system of the Hellenic Arc. SEISMO FEAR HELLARC". GM and GP wish to acknowledge the partial support of the Greek State Scholarships Foundation (ΙΚΥ).
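The NESP distributions used in such fits are built from the Tsallis q-exponential, which generalizes the ordinary exponential and, for q > 1, produces the fat tails characteristic of clustered seismicity. A small sketch of the function and of a q-exponential survival model for inter-event times follows; the parameter values would have to be estimated from the catalogue, and none are taken from the study:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; recovers the ordinary exponential as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def survival(t, tau, q):
    """NESP-style survival function for inter-event (waiting) times:
    P(T > t) = q_exp(-t / tau, q).  For q > 1 this has a power-law tail,
    unlike the Poisson (pure exponential) case q = 1."""
    return q_exp(-t / tau, q)
```

Fitting q and tau to the observed waiting-time distribution, and checking whether q is significantly above 1, is the kind of test that distinguishes NESP behavior from simple Poisson seismicity.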
Geophysical Parameter Estimation of Near Surface Materials Using Nuclear Magnetic Resonance
NASA Astrophysics Data System (ADS)
Keating, K.
2017-12-01
Proton nuclear magnetic resonance (NMR), a mature geophysical technology used in petroleum applications, has recently emerged as a promising tool for hydrogeophysicists. The NMR measurement, which can be made in the laboratory, in boreholes, and with a surface-based instrument, is unique in that it is directly sensitive to water, via the initial signal magnitude, and thus provides a robust estimate of water content. In the petroleum industry, rock physics models have been established that relate NMR relaxation times to pore size distributions and permeability. These models are often applied directly in hydrogeophysical applications, despite differences in the materials in the two environments (e.g., unconsolidated versus consolidated, and mineral content). Furthermore, the rock physics models linking NMR relaxation times to pore size distributions do not account for the partially saturated systems that are important for understanding flow in the vadose zone. In our research, we are developing and refining quantitative rock physics models that relate NMR parameters to hydrogeological parameters. Here we highlight the limitations of directly applying established rock physics models to estimate hydrogeological parameters from NMR measurements, and show some of the successes we have had in model improvement. Using examples drawn from both laboratory and field measurements, we focus on the use of NMR in partially saturated systems to estimate water content, pore-size distributions, and the water retention curve. Despite the challenges in interpreting the measurements, valuable information about hydrogeological parameters can be obtained from NMR relaxation data, and we conclude by outlining pathways for improving the interpretation of NMR data for hydrogeophysical investigations.
Phase space effects on fast ion distribution function modeling in tokamaks
Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.; ...
2016-04-14
Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining the correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as the internal consistency of the simulations and mode stability, through the analysis of the power exchanged between energetic particles and the instabilities.
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2011-10-01
This paper develops alternative strategies for European call options for water purchase under hydrological uncertainties that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow-path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.
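The weighting method mentioned above folds multiple objectives into a single composite by sweeping convex weights, so each weight pair yields one candidate strategy on the trade-off curve a manager would inspect. A deliberately simplified sketch follows; the benefit and cost curves are made up for illustration, and the paper's actual formulation is a mixed integer linear program:

```python
# Illustrative objectives for a water-purchase option volume x (assumptions):
def reliability_benefit(x):
    """Grows with purchased option volume, with diminishing returns."""
    return x / (x + 10.0)

def purchase_cost(x):
    """Linear cost of the option volume."""
    return 0.02 * x

def best_volume(w_benefit, w_cost, volumes):
    """Weighting method: maximize one weighted composite objective."""
    composite = lambda x: w_benefit * reliability_benefit(x) - w_cost * purchase_cost(x)
    return max(volumes, key=composite)

volumes = [v * 0.5 for v in range(0, 101)]          # candidate volumes 0..50
# Sweep the weights to trace the trade-off curve of candidate strategies.
tradeoff = [(w / 10.0, best_volume(w / 10.0, 1.0 - w / 10.0, volumes))
            for w in range(0, 11)]
```

At the extremes the behavior is intuitive: pure cost weighting buys nothing, pure reliability weighting buys the maximum, and intermediate weights interpolate between the two.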
NASA Astrophysics Data System (ADS)
Haneda, K.
2016-04-01
The purpose of this study was to estimate the radical effect in proton beams using a combined approach with physical data and gel data. The study used two dosimeters: ionization chambers and polymer gel dosimeters. Polymer gel dosimeters have specific advantages compared with other dosimeters: they measure chemical reaction and are at the same time a phantom that can map dose in three dimensions continuously and easily. First, a depth-dose curve for a 210 MeV proton beam was measured using an ionization chamber and a gel dosimeter. Second, the spatial distribution of the physical dose was calculated with the Monte Carlo code system PHITS; to verify the accuracy of the Monte Carlo calculation, the results were compared with the experimental data from the ionization chamber. Last, the rate of the radical effect relative to the physical dose was evaluated. The simulation results agreed well with the measured depth-dose distribution. The spatial distribution of the gel dose with a threshold LET value for the proton beam was calculated with the same simulation code. The relative distribution of the radical effect was then calculated at each depth as the quotient of the relative doses obtained from the physical dose and the gel dose. The agreement between the relative distributions of the gel dosimeter and the radical effect was good for the proton beams.
A precise clock distribution network for MRPC-based experiments
NASA Astrophysics Data System (ADS)
Wang, S.; Cao, P.; Shang, L.; An, Q.
2016-06-01
In high energy physics experiments, MRPC (Multi-gap Resistive Plate Chamber) detectors, which provide higher-resolution measurements for particle identification, have recently come into wide use. However, the application of MRPC detectors leads to a series of challenges in electronics design, given the large number of front-end electronic channels, especially for distributing the clock precisely. To deal with these challenges, this paper presents a universal clock transmission network for MRPC-based experiments with the advantages of both precise clock distribution and global command synchronization. For precise clock distribution, the network is designed as a tree with two stages: the first is a point-to-multipoint long-range bidirectional distribution over optical channels, and the second a fan-out structure over copper links inside the readout crates. To guarantee the precision of the clock frequency and phase, the r-PTP (reduced Precision Time Protocol) and DDMTD (digital Dual Mixer Time Difference) methods are used for frequency synthesis, phase measurement and adjustment, implemented in real time in an FPGA (Field Programmable Gate Array). In addition, to synchronize global command execution, synchronous signals are encoded with the clock for transmission over this distribution network. With encoding/decoding and clock data recovery, signals such as global triggers or system control commands can be distributed to all front-end channels synchronously, which greatly simplifies the system design. Experimental results show that both the clock jitter (RMS) and the clock skew can be kept below 100 ps.
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. Object detection of similar events is accomplished based on the Bayes probabilities, and the results are sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target-object readings of similar events from different sensors; these are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701
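The polynomial-regression step above can be pictured as a cluster head compressing a window of correlated readings into a few coefficients before forwarding them to the sink. A generic least-squares polynomial fit in pure Python follows; the degree and the normal-equation approach are illustrative choices, not taken from the BNEPD paper:

```python
def polyfit(xs, ys, degree=2):
    """Least-squares polynomial fit via the normal equations.
    In the aggregation picture, a node would forward only the returned
    coefficients (plus window min/max) instead of every raw sample."""
    n = degree + 1
    # Build the normal-equation matrix A and right-hand side b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef            # [c0, c1, c2] for c0 + c1*x + c2*x^2
```

For a window of, say, 100 samples, transmitting three coefficients instead of 100 values is where the claimed reduction in communication overhead and energy drain would come from.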
NASA Astrophysics Data System (ADS)
Watson, James R.; Stock, Charles A.; Sarmiento, Jorge L.
2015-11-01
Modeling the dynamics of marine populations at a global scale - from phytoplankton to fish - is necessary if we are to quantify how climate change and other broad-scale anthropogenic actions affect the supply of marine-based food. Here, we estimate the abundance and distribution of fish biomass using a simple size-based food web model coupled to simulations of global ocean physics and biogeochemistry. We focus on the spatial distribution of biomass, identifying highly productive regions - shelf seas, western boundary currents and major upwelling zones. In the absence of fishing, we estimate the total ocean fish biomass to be ∼ 2.84 × 10^9 tonnes, similar to previous estimates. However, this value is sensitive to the choice of parameters; further, allowing fish to move had a profound impact on the spatial distribution of fish biomass and the structure of marine communities. In particular, when movement is implemented, the viable range of large predators is greatly increased, and the stunted biomass spectra characterizing large ocean regions in simulations without movement are replaced with expanded spectra that include large predators. These results highlight the importance of considering movement in global-scale ecological models.
NASA Astrophysics Data System (ADS)
Tasdighi, A.; Arabi, M.
2014-12-01
Calibration of physically based distributed hydrologic models has always been a challenging task and a subject of controversy in the literature. This study investigates how different physiographic characteristics of watersheds call for adapting the methods used, in order to obtain more robust and internally justifiable simulations. The Haw watershed (1300 sq. mi.) is located in the piedmont region of North Carolina, draining into B. Everett Jordan Lake west of Raleigh. The major land covers in this watershed are forest (50%), urban/suburban (21%) and agriculture (25%), of which a large portion is pasture. Different hydrologic behaviors are observed in this watershed depending on the land use composition and the size of the sub-watersheds. Highly urbanized sub-watersheds show flashier hydrographs and near-instantaneous hydrologic responses; this is also the case for smaller sub-watersheds with a relatively lower percentage of urban area. The Soil and Water Assessment Tool (SWAT) has been widely used for hydrologic simulation on a daily basis using the Soil Conservation Service Curve Number method (SCS CN), but it has been used less frequently with the sub-daily routines. A number of studies have used coarse-time-scale (daily) precipitation with methods like SCS CN to calibrate SWAT for watersheds containing different types of land uses and soils, reporting satisfying results at the watershed outlet. For physically based distributed models, however, the more important concern is to check and analyze the internal processes leading to those results. In this study, the watershed is divided into several sub-watersheds to compare the performance of the SCS CN and Green & Ampt (GA) methods for different land uses at different spatial scales.
The results suggest better performance of GA compared with SCS CN for smaller and highly urbanized sub-watersheds, although GA's advantage is not very significant for the latter. The better performance of GA in simulating peak flows and the flashy behavior of the hydrographs is also notable. GA did not show a significant improvement over SCS CN in simulating excess rainfall for larger sub-watersheds.
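The two runoff/infiltration formulations being compared are compact enough to state directly. A sketch of both in their textbook forms follows, assuming the classic Ia = 0.2S abstraction ratio for the Curve Number method; units and parameter values are illustrative, and this is not the SWAT implementation itself:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS Curve Number direct runoff (mm) for event rainfall p_mm.
    S is the potential retention and Ia the initial abstraction
    (0.2*S in the classic formulation)."""
    s = 25400.0 / cn - 254.0            # retention in mm
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def green_ampt_rate(k_sat, psi, d_theta, f_cum):
    """Green-Ampt infiltration capacity at cumulative infiltration f_cum
    (consistent units): k_sat * (1 + psi * d_theta / f_cum).  Because it
    responds to within-storm intensity, it can resolve the flashy,
    sub-daily responses the daily CN method smooths over."""
    return k_sat * (1.0 + psi * d_theta / f_cum)
```

The contrast is visible in the structure: the CN equation maps an event rainfall total to a runoff total, while Green-Ampt yields an instantaneous capacity that declines as cumulative infiltration grows, which is why it pairs with sub-daily forcing.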
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.
2011-12-01
Hydrologic modeling has benefited from operational production of high resolution satellite rainfall products. Global coverage, near-real-time availability, and fine spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models for a wide range of environmental decision making processes. Despite these successes, uncertainties due to the indirect nature of satellite rainfall estimates and to the hydrologic models themselves remain a challenge in making meaningful predictions. This study comprises breaking down the total satellite rainfall error into three independent components (hit bias, missed precipitation and false alarm), characterizing them as functions of land use and land cover (LULC), and tracing back the source of simulated soil moisture and runoff error in a physically based distributed hydrologic model. Here we asked: in what way do the three independent components of the total bias, hit bias, missed precipitation and false precipitation, affect the estimation of soil moisture and runoff in physically based hydrologic models? To obtain a clear picture of this question, we implemented a systematic approach by characterizing and decomposing the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps us understand the major sources of soil moisture and runoff error in hydrologic model simulation and trace the information back to algorithm development and sensor type, which ultimately helps improve the algorithms and will improve application and data assimilation for GPM in the future. For forest and woodland and human land use systems, the soil moisture error was mainly dictated by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. On the other hand, the runoff error was dominated more by the hit bias than by the total bias.
This difference occurs because missed precipitation is a major contributor to the total bias in both the summer and winter seasons. Missed precipitation, most likely light rain and rain over snow cover, has a significant effect on soil moisture but is less capable of producing runoff, which leaves the runoff error dependent on the hit bias alone.
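The three-way error decomposition described above can be sketched numerically. The function below splits paired satellite and reference rainfall series into hit bias, missed precipitation, and false alarms; the rain/no-rain threshold and all names are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np

def decompose_rainfall_error(sat, ref, thresh=0.1):
    """Split total satellite rainfall bias into hit bias, missed
    precipitation, and false alarms (threshold/names hypothetical)."""
    sat = np.asarray(sat, dtype=float)
    ref = np.asarray(ref, dtype=float)
    hit = (sat >= thresh) & (ref >= thresh)    # both detect rain
    missed = (sat < thresh) & (ref >= thresh)  # satellite misses rain
    false = (sat >= thresh) & (ref < thresh)   # satellite invents rain
    hit_bias = np.sum(sat[hit] - ref[hit])     # error where both rain
    missed_precip = -np.sum(ref[missed])       # rain the satellite lost
    false_alarm = np.sum(sat[false])           # rain that never fell
    total = hit_bias + missed_precip + false_alarm
    return {"hit_bias": hit_bias, "missed": missed_precip,
            "false_alarm": false_alarm, "total": total}
```

By construction the three components sum exactly to the total bias, which is what lets the error be traced back to its source.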
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic collection of bioinformatic resources (e.g., data and tools) available on the Internet today represents a major challenge for biologists, concerning their management and visualization, and for bioinformaticians, concerning the ability to rapidly create and execute in-silico experiments that involve resources and activities spread over the WWW hyperspace. Any framework that aims to integrate such resources as in a physical laboratory must tackle, and ideally handle in a transparent and uniform way, aspects of physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been designed and developed as a prototype with exactly this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment execution.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity into the formulation and solution of the problem. The general tool most often used for characterizing the Earth system is inversion. Traditionally, inverse problems are solved by least-squares inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution provides a more realistic solution. We present a new method that can draw samples from the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model, which is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
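The sampling idea, drawing an initial model with power-law (fractal) correlation from a mean, variance and Hurst coefficient, can be sketched by 1-D spectral synthesis; the spectral exponent and scaling below are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

def fractal_model_sample(n, mean, std, hurst, rng=None):
    """Draw one model realization with power-law correlation, rescaled to
    a target mean/std (1-D spectral synthesis sketch, not the paper's code)."""
    rng = np.random.default_rng(rng)
    k = np.fft.rfftfreq(n)
    k[0] = k[1]                          # avoid division by zero at k = 0
    amp = k ** (-(2 * hurst + 1) / 2)    # spectrum S(k) ~ k^-(2H+1)
    phase = rng.uniform(0, 2 * np.pi, k.size)
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    x = (x - x.mean()) / x.std()         # standardize, then rescale
    return mean + std * x
```

Each draw is a candidate initial model for the global optimization stage; repeated draws explore the model space with the prescribed roughness.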
Linkage of a Physically Based Distributed Watershed Model and a Dynamic Plant Growth Model
2006-12-01
i.e., Universal Soil Loss Equation (USLE) factors K, C, and P). The K, C, and P factors are empirical coefficients with the same conceptual...with general ecosystem models designed to make long-term projections of ecosystem dynamics. This development effort investigated the linkage of soil ...20 EDYS soil module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raizer, Yu. P.
A new voluminous book on gas-discharge physics is reviewed. It is emphasized that the authors consistently follow a nontraditional approach based on the electron distribution function, with allowance for its nonlocal character. This opens new, sometimes unexpected, perspectives on well-known phenomena, which the reviewer illustrates using the positive column of a low-pressure glow discharge as an example.
Eric Rowell; E. Louise Loudermilk; Carl Seielstad; Joseph O' Brien
2016-01-01
Understanding fine-scale variability in understory fuels is increasingly important as physics-based fire behavior models drive needs for higher-resolution data. Describing fuelbeds in three dimensions is critical for determining the vertical and horizontal distributions of fuel elements and their mass, especially in frequently burned pine ecosystems where fine-scale...
ERIC Educational Resources Information Center
Cangelosi, Angelo
2007-01-01
In this paper we present the "grounded adaptive agent" computational framework for studying the emergence of communication and language. This modeling framework is based on simulations of population of cognitive agents that evolve linguistic capabilities by interacting with their social and physical environment (internal and external symbol…
USSR Report, Physics and Mathematics.
1987-01-14
polarization distribution in these crystals at a temperature above the 70°C phase transition point corresponding to maximum dielectric permittivity ...are derived theoretically and matched with experimental data. The theory is based on the relation between complex dielectric permittivity and...Kramers-Heisenberg relation for polarizability. Both real and imaginary parts of dielectric permittivity are evaluated, assuming a valence band fully
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
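A depletion-curve parameterization of the kind described maps element-average snow water equivalent, normalized by its seasonal maximum, to fractional snow-covered area; the power-law curve shape below is a hypothetical illustration, not the authors' fitted curves.

```python
def snow_covered_fraction(swe_mean, swe_max, shape=1.0):
    """Depletion curve: fractional snow-covered area as a function of
    element-average SWE / seasonal-maximum SWE (curve shape hypothetical)."""
    if swe_max <= 0:
        return 0.0
    w = min(max(swe_mean / swe_max, 0.0), 1.0)  # normalized SWE in [0, 1]
    return w ** shape       # shape < 1: cover persists as SWE depletes

def element_melt(swe_mean, swe_max, potential_melt, shape=1.0):
    """Element-average melt = point melt rate x snow-covered fraction."""
    return potential_melt * snow_covered_fraction(swe_mean, swe_max, shape)
```

Scaling the point melt rate by the covered fraction is what lets a single grid element represent a patchy, partially snow-free landscape.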
Aerostructural interaction in a collaborative MDO environment
NASA Astrophysics Data System (ADS)
Ciampa, Pier Davide; Nagel, Björn
2014-10-01
The work presents an approach for aircraft design and optimization, developed to account for fluid-structure interactions in MDO applications. The approach makes use of a collaborative distributed design environment, and focuses on the influence of multiple physics based aerostructural models, on the overall aircraft synthesis and optimization. The approach is tested for the design of large transportation aircraft.
Classroom Facilities Planning Aids. March 1964. College and University Physical Facilities Series.
ERIC Educational Resources Information Center
Higgins, E. Eugene; And Others
Based on a survey of higher education institutions in the United States, academic classroom data are presented in tabular form. Tables 1 and 3 (for public and private institutions respectively) show state and regional distribution of classroom data by--(1) number of classrooms, (2) total assignable area, (3) classroom size, (4) number of student…
Practical application of noise diffusion in U-70 synchrotron
NASA Astrophysics Data System (ADS)
Ivanov, S. V.; Lebedev, O. P.
2016-12-01
This paper briefly outlines the physical substantiation and the engineering implementation of technological systems in the U-70 synchrotron based on controllable noise diffusion of the beam. They include two systems of stochastic slow beam extraction (for high and intermediate energy) and the system of longitudinal noise RF gymnastics designated for flattening the bunch distribution over the azimuth.
Cosmic Ray Measurements by Scintillators with Metal Resistor Semiconductor Avalanche Photo Diodes
ERIC Educational Resources Information Center
Blanco, Francesco; La Rocca, Paola; Riggi, Francesco; Akindinov, Alexandre; Mal'kevich, Dmitry
2008-01-01
An educational set-up for cosmic ray physics experiments is described. The detector is based on scintillator tiles with a readout through metal resistor semiconductor (MRS) avalanche photo diode (APD) arrays. Typical measurements of the cosmic angular distribution at sea level and a study of the East-West asymmetry obtained by such a device are…
A Self-Critique of Self-Organized Criticality in Astrophysics
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2015-08-01
The concept of ``self-organized criticality'' (SOC) was originally proposed as an explanation of 1/f-noise by Bak, Tang, and Wiesenfeld (1987), but turned out to have a far broader significance for scale-free nonlinear energy dissipation processes occurring in the entire universe. Over the last 30 years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (powerlaw-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into numerical SOC toy models. The novel applications stimulated also vigorous debates about the discrimination between SOC-related and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale, and other complexity phenomena. We review SOC models applied to astrophysical observations, attempt to describe what physics can be captured by SOC models, and offer a critique of weaknesses and strengths in existing SOC models.
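The numerical cellular-automaton simulations mentioned above are typified by the Bak-Tang-Wiesenfeld sandpile. This toy implementation drops grains on a grid, topples unstable cells, and records avalanche sizes, whose distribution approaches a power law; grid size and grain count are arbitrary choices.

```python
import numpy as np

def sandpile_avalanche_sizes(n=16, grains=2000, rng=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid: cells holding
    4 or more grains topple, shedding one grain to each neighbor; grains
    falling off the edge are lost. Returns one avalanche size per grain."""
    rng = np.random.default_rng(rng)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, 2)       # drop a grain at a random site
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for a, b in unstable:          # topple every unstable cell
                z[a, b] -= 4
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    p, q = a + da, b + db
                    if 0 <= p < n and 0 <= q < n:
                        z[p, q] += 1       # off-grid grains are dissipated
        sizes.append(size)
    return sizes
```

Histogramming the returned sizes on log-log axes is the standard way to check for the scale-free statistics that define SOC.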
A new data format for the commissioning phase of the ATLAS detector
NASA Astrophysics Data System (ADS)
Köneke, Karsten; ATLAS Collaboration
2010-04-01
In the commissioning phase of the ATLAS experiment, low-level Event Summary Data (ESD) are analyzed to evaluate the performance of the individual subdetectors, the performance of the reconstruction and particle identification algorithms, and to obtain calibration coefficients. In the grid model of distributed analysis, these data must be transferred to Tier-1 and Tier-2 sites before they can be analyzed. However, the large size of ESD (≈1 MByte/event) constrains the amount of data that can be distributed on the grid and held on disk. To overcome this constraint and make the data fully available, new data sets — collectively known as Derived Physics Data (DPD) — have been designed. Each DPD set contains a subset of the ESD data, tailored to the specific needs of the subdetector and object reconstruction and identification performance groups. Filtering algorithms perform a selection based on physics content and trigger response, further reducing the data volume. Thanks to these techniques, the total volume of DPD to be distributed on the grid amounts to 20% of the initial ESD data. An evolution of the tools developed in this context serves to produce another set of DPDs specifically tailored for physics analysis. All selection criteria and other relevant information are stored inside these DPDs as meta-data, and a connection to external databases is also established.
NASA Astrophysics Data System (ADS)
Liu, Weiqi; Huang, Peng; Peng, Jinye; Fan, Jianping; Zeng, Guihua
2018-02-01
For supporting practical quantum key distribution (QKD), it is critical to stabilize the physical parameters of signals, e.g., the intensity, phase, and polarization of the laser signals, so that such QKD systems can achieve better performance and practical security. In this paper, an approach is developed by integrating a support vector regression (SVR) model to optimize the performance and practical security of the QKD system. First, a SVR model is learned to precisely predict the time-along evolutions of the physical parameters of signals. Second, such predicted time-along evolutions are employed as feedback to control the QKD system for achieving the optimal performance and practical security. Finally, our proposed approach is exemplified by using the intensity evolution of laser light and a local oscillator pulse in the Gaussian modulated coherent state QKD system. Our experimental results have demonstrated three significant benefits of our SVR-based approach: (1) it can allow the QKD system to achieve optimal performance and practical security, (2) it does not require any additional resources and any real-time monitoring module to support automatic prediction of the time-along evolutions of the physical parameters of signals, and (3) it is applicable to any measurable physical parameter of signals in the practical QKD system.
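The feedback idea, learn a signal parameter's time evolution with SVR and predict ahead so the controller can pre-correct, can be sketched with scikit-learn. The drifting intensity signal, window length and hyperparameters below are illustrative assumptions, not the paper's experimental values.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical drifting parameter (e.g. local-oscillator pulse intensity).
t = np.linspace(0, 10, 200)
intensity = 1.0 + 0.05 * np.sin(0.8 * t) + 0.01 * t

# Autoregressive framing: predict the next sample from the previous 5.
window = 5
X = np.array([intensity[i:i + window] for i in range(len(t) - window)])
y = intensity[window:]

model = SVR(kernel="rbf", C=10.0, epsilon=1e-3).fit(X[:-20], y[:-20])
pred = model.predict(X[-20:])   # one-step predictions on held-out samples
correction = y[-20:] - pred     # residual the feedback loop would null out
```

In a running system the predicted value, rather than a delayed measurement, would drive the parameter-stabilization feedback.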
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been informed by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments. However, the predictive capabilities for new coals and processes are limited. This work presents a Discrete Element Method based computational framework to predict particle size distribution resulting from the breakage of coal particles characterized by the coal's physical properties. The effect of certain operating parameters on the breakage behavior of coal particles is also examined.
Hydroacoustic basis for detection and characterization of eelgrass (Zostera marina)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabol, B.; McCarthy, E.; Rocha, K.
1997-06-01
Understanding the distribution and density of seagrasses is important for a variety of environmental applications. Physical techniques for detection and characterization are labor- and cost-intensive and provide little insight into spatial distribution. Optical-based techniques are limited by water clarity, frequently resulting in systematic underestimation of the extent of seagrasses. Active hydroacoustic techniques have shown the ability to detect seagrasses, but the phenomenology behind detection is poorly understood. Laboratory and in-situ hydroacoustic measurements are presented for eelgrass (Zostera marina), a common seagrass in the United States. Based on these data, hydroacoustic approaches for wide-area detection and mapping are discussed and several are demonstrated within areas of established eelgrass beds in Narragansett Bay, Rhode Island.
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code was developed recently for use on multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed that balances the amount of computation across all processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds in the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
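The binary-tree load-balancing step can be illustrated by recursive bisection of per-subcatchment workloads. This sketch splits on cumulative cell counts only, an assumption for illustration; the actual TOPKAPI-IMMS decomposition must also honor drainage-network topology when assigning work to MPI ranks.

```python
def bisect_workload(cells, parts):
    """Recursive binary decomposition of per-subcatchment cell counts into
    `parts` groups of roughly equal total work (topology ignored here)."""
    if parts == 1:
        return [cells]
    left_parts = parts // 2
    target = sum(cells) * left_parts / parts   # work the left subtree should get
    acc, split = 0, len(cells) - 1
    for i, w in enumerate(cells):
        if acc + w / 2 >= target:              # cut where cumulative work
            split = i                          # crosses the target
            break
        acc += w
    return (bisect_workload(cells[:split], left_parts) +
            bisect_workload(cells[split:], parts - left_parts))
```

Each leaf of the recursion becomes the workload of one processor/core, which is what keeps all ranks roughly equally busy.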
Sahan, Muhammet Ikbal; Verguts, Tom; Boehler, Carsten Nicolas; Pourtois, Gilles; Fias, Wim
2016-08-01
Selective attention is not limited to information that is physically present in the external world, but can also operate on mental representations in the internal world. However, it is not known whether the mechanisms of attentional selection operate in similar fashions in physical and mental space. We studied the spatial distributions of attention for items in physical and mental space by comparing how successfully distractors were rejected at varying distances from the attended location. The results indicated very similar distribution characteristics of spatial attention in physical and mental space. Specifically, we found that performance monotonically improved with increasing distractor distance relative to the attended location, suggesting that distractor confusability is particularly pronounced for nearby distractors, relative to distractors farther away. The present findings suggest that mental representations preserve their spatial configuration in working memory, and that similar mechanistic principles underlie selective attention in physical and in mental space.
NASA Astrophysics Data System (ADS)
Pino, Cristian; Herrera, Paulo; Therrien, René
2017-04-01
In many arid regions around the world, groundwater recharge occurs during flash floods. This transient, spatially and temporally concentrated flood-recharge process takes place through the variably saturated zone between the surface and the usually deep groundwater table. These flood events are characterized by rapid and extreme changes in surface flow depth and velocity and in soil moisture conditions. Infiltration rates change over time, controlled by the hydraulic gradients and the unsaturated hydraulic conductivity at the surface-subsurface interface. Assessing the spatial and temporal distribution of groundwater recharge from flash flood events under real field conditions at different scales in arid areas remains a challenge today. We apply an integrated surface-subsurface, variably saturated, physically based flow model at the watershed scale to assess the recharge process during and after a flash flood event registered in an arid fluvial valley in northern Chile. With a calibrated model, we are able to reproduce reasonably well the observed groundwater levels and surface flow discharges during and after the flood. We also investigate the magnitude and spatio-temporal distribution of recharge and the response of the system to variations in surface and subsurface parameters, initial soil moisture content, groundwater table depths, and surface flow conditions. We demonstrate how an integrated physically based model allows the exploration of different spatial and temporal system states, and how analysis of the simulation results helps improve our understanding of recharge processes in similar systems common to many arid areas around the world.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf's law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf's law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf's law is one out of four pillars.
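The harmonic rank-size relation noted above is easy to make concrete: under Zipf's law the size of rank r falls off as r to a power, and with exponent one the product of rank and size is constant. This small sketch is illustrative only.

```python
def zipf_sizes(n, exponent=1.0):
    """Decreasing power-law rank distribution: size of rank r is r**-exponent.
    exponent = 1 gives the harmonic (Zipf) relation seen in city sizes."""
    return [r ** -exponent for r in range(1, n + 1)]

def rank_size_products(sizes):
    """For the harmonic case, rank x size should equal the rank-1 size."""
    return [round((r + 1) * s, 10) for r, s in enumerate(sizes)]
```

For settlement demography this means the second-largest city is about half the largest, the third-largest about a third, and so on.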
NASA Astrophysics Data System (ADS)
He, Xiao Dong
This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically based, depend only on physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on the Kirchhoff theory, and the subsurface scattering model is based on energy transport theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realistic image generation can be achieved thanks to the physically correct treatment of the scattering processes by the reflectance model.
Coherent optical monolithic phased-array antenna steering system
Hietala, Vincent M.; Kravitz, Stanley H.; Vawter, Gregory A.
1994-01-01
An optical-based RF beam steering system for phased-array antennas comprising a photonic integrated circuit (PIC). The system is based on optical heterodyning employed to produce microwave phase shifting by a monolithic PIC constructed entirely of passive components. Microwave power and control signal distribution to the antenna is accomplished by optical fiber, permitting physical separation of the PIC and its control functions from the antenna. The system reduces size, weight, complexity, and cost of phased-array antenna systems.
A Quantum Proxy Signature Scheme Based on Genuine Five-qubit Entangled State
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Huang, Jun; Yu, Yao-Feng; Jiang, Xiu-Li
2014-09-01
In this paper a very efficient and secure proxy signature scheme is proposed. It is based on controlled quantum teleportation, with a genuine five-qubit entangled state serving as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. Quantum key distribution and the one-time pad are adopted in our scheme, which guarantee not only the unconditional security of the scheme but also the anonymity of the message owner.
Cyber-physical geographical information service-enabled control of diverse in-situ sensors.
Chen, Nengcheng; Xiao, Changjiang; Pu, Fangling; Wang, Xiaolei; Wang, Chao; Wang, Zhili; Gong, Jianya
2015-01-23
Realization of open online control of diverse in-situ sensors is a challenge. This paper proposes a Cyber-Physical Geographical Information Service-enabled method for controlling diverse in-situ sensors, based on location-based instant sensing of sensors, which provides closed-loop feedback. The method adopts the concepts and technologies of newly developed cyber-physical systems (CPSs) to combine control with sensing, communication, and computation; takes advantage of geographical information services, such as those provided by Tianditu, a basic geographic information service platform in China, and of Sensor Web services to establish geo-sensor applications; and builds well-designed human-machine interfaces (HMIs) to support online and open interactions between human beings and physical sensors through cyberspace. The method was tested in experiments carried out in two geographically distributed scientific experimental fields, the Baoxie Sensor Web Experimental Field in Wuhan city and the Yemaomian Landslide Monitoring Station in the Three Gorges, with three typical sensors chosen as representatives, using the prototype system Geospatial Sensor Web Common Service Platform. The results show that the proposed method is an open, online, closed-loop means of control.
Non-equilibrium Transport in Carbon based Adsorbate Systems
NASA Astrophysics Data System (ADS)
Fürst, Joachim; Brandbyge, Mads; Stokbro, Kurt; Jauho, Antti-Pekka
2007-03-01
We have used the Atomistix Tool Kit (ATK) and TranSIESTA[1] packages to investigate the adsorption of iron atoms on a graphene sheet. Both codes are based on density functional theory with local basis sets[2] and use non-equilibrium Green's functions (NEGF) to calculate the charge distribution under external bias. Spin-dependent electronic structure calculations are performed for different iron coverages. These reveal adsorption-site-dependent charge transfer from iron to graphene, leading to screening effects. Transport calculations show spin-dependent scattering in the transmission, which is analysed by obtaining the transmission eigenchannels for each spin type. The phenomenon of electromigration of iron in these systems at finite bias will be discussed, with the so-called wind force estimated from the reflection[3]. [1] M. Brandbyge, J.-L. Mozos, P. Ordejon, J. Taylor, and K. Stokbro. Physical Review B (Condensed Matter and Materials Physics), 65(16):165401/11-7, 2002. [2] Jose M. Soler, Emilio Artacho, Julian D. Gale, Alberto Garcia, Javier Junquera, Pablo Ordejon, and Daniel Sanchez-Portal. Journal of Physics Condensed Matter, 14(11):2745-2779, 2002. [3] Sorbello. Theory of electromigration. Solid State Physics, 1997.
Gaikwad, Ravi M.; Dokukin, Maxim E.; Iyer, K. Swaminathan; Woodworth, Craig D.; Volkov, Dmytro O.; Sokolov, Igor
2012-01-01
Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish, based on the physical interaction between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to the surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of concept that physical interactions can be used to accurately discriminate between normal and cancer cells. PMID:21305062
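The force balance the method exploits, bead-cell adhesion versus the centripetal requirement m·ω²·r that grows with radius in the spinning dish, implies a critical radius beyond which beads shed. The formula and all numerical values below are purely illustrative, not the study's measured parameters.

```python
import math

def detachment_radius(adhesion_force, bead_mass, rpm):
    """Radius (m) beyond which the centripetal force m * omega^2 * r needed
    to keep a bead co-rotating exceeds its adhesion force, so it detaches.
    Inputs: adhesion force (N), bead mass (kg), dish speed (rev/min)."""
    omega = 2 * math.pi * rpm / 60.0   # angular speed in rad/s
    return adhesion_force / (bead_mass * omega ** 2)
```

Because normal cells bind beads more strongly, their critical radius is larger, which is what produces the distinct radial bead distributions for the two cell types.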
An enhanced lumped element electrical model of a double barrier memristive device
NASA Astrophysics Data System (ADS)
Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz
2017-05-01
The massively parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems: nonlinear resistors with memory. They are fabricated using nanotechnology, and hence parameter spread during fabrication may hamper reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and convenient neither for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky contact. The introduced model enables very fast simulations using standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model.
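As an illustration of the lumped (concentrated) modeling idea, not of the actual DBMD equations, a single-state memristor with sinh-type switching kinetics can be simulated in a few lines; all parameter values are placeholders:

```python
import math

def simulate_memristor(v_of_t, dt, n_steps, r_on=1e3, r_off=1e5,
                       tau=1e-3, v0=0.2):
    """Minimal lumped memristive model: a single internal state w in
    [0, 1] driven by a nonlinear, polarity-dependent function of the
    applied voltage, with the device resistance interpolated between
    an ON and an OFF value.  This is the generic structure a circuit
    simulator can integrate cheaply, unlike a kinetic Monte Carlo run."""
    w, history = 0.5, []
    for k in range(n_steps):
        v = v_of_t(k * dt)
        dw = dt / tau * math.sinh(v / v0)   # sinh-type switching kinetics
        w = min(1.0, max(0.0, w + dw))      # state stays in [0, 1]
        r = r_on * w + r_off * (1.0 - w)    # concentrated resistance
        history.append((v, v / r))          # (voltage, current) pairs
    return w, history

# A positive half-sine pulse drives the device toward its low-resistive state.
w_final, iv = simulate_memristor(lambda t: 0.5 * math.sin(1e3 * t), 1e-6, 3000)
```

The same state-variable structure is what lets a concentrated model run inside usual circuit simulation tools in real time.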
NASA Astrophysics Data System (ADS)
Li, Xiongyan; Qin, Ruibao; Gao, Yunfeng; Fan, Hongjun
2017-03-01
In the marine sandstone reservoirs of the M oilfield, the water cut is up to 98%, while the recovery factor is only 35%. Additionally, the distribution of the remaining oil is very scattered. In order to effectively assess the potential of the remaining oil, the logging evaluation of the water-flooded layers and the distribution rule of the remaining oil are studied. Based on the log response characteristics, the water-flooded layers can be qualitatively identified. On the basis of the mercury injection experimental data of the evaluation wells, the calculation model of the initial oil saturation is built. Based on conventional logging data, the evaluation model of oil saturation is established. The difference between the initial oil saturation and the residual oil saturation can be used to quantitatively evaluate the water-flooded layers. The evaluation result of the water-flooded layers is combined with the ratio of the water-flooded wells in the marine sandstone reservoirs. As a result, the degree of water flooding in the marine sandstone reservoirs can be assessed. On the basis of structural characteristics and sedimentary environments, the horizontal and vertical water-flooding rules of the different types of reservoirs are elaborated upon, and the distribution rule of the remaining oil is disclosed. The remaining oil is mainly distributed in the high parts of the structure. In reservoirs with good physical properties, the remaining oil occurs at the top, with a thickness ranging from 2 to 5 m; in reservoirs with poor physical properties, the thickness of the remaining oil ranges from 5 to 8 m. The high production of some of the drilled horizontal wells shows that the above distribution rule of the remaining oil is accurate.
In the marine sandstone reservoirs of the M oilfield, the research on the well logging evaluation of the water-flooded layers and the distribution rule of the remaining oil has great practical significance to the prediction of the distribution of the remaining oil and the optimization of well locations.
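The quantitative evaluation step can be sketched as a simple saturation-difference classification; the threshold values below are illustrative placeholders, not taken from the paper:

```python
def flooding_degree(so_initial, so_current):
    """Classify water flooding from the drop in oil saturation.

    The study quantifies flooding from the difference between the
    initial oil saturation (from the mercury-injection-based model)
    and the oil saturation evaluated from conventional logs; the
    class boundaries used here are purely illustrative."""
    delta = so_initial - so_current
    if delta < 0.05:
        return "unflooded"
    elif delta < 0.20:
        return "weakly flooded"
    elif delta < 0.35:
        return "moderately flooded"
    return "strongly flooded"

label = flooding_degree(0.70, 0.40)   # saturation dropped by 0.30
```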
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated using calculations of hydrogen distribution from models informed by data from hydrogen experiments and post-irradiation examination (PIE) data.
Device-based monitoring in physical activity and public health research.
Bassett, David R
2012-11-01
Measurement of physical activity is important, given the vital role of this behavior in physical and mental health. Over the past quarter of a century, the use of small, non-invasive, wearable monitors to assess physical activity has become commonplace. This review is divided into three sections. In the first section, a brief history of physical activity monitoring is provided, along with a discussion of the strengths and weaknesses of different devices. In the second section, recent applications of physical activity monitoring in physical activity and public health research are discussed. Wearable monitors are being used to conduct surveillance, and to determine the extent and distribution of physical activity and sedentary behaviors in populations around the world. They have been used to help clarify the dose-response relation between physical activity and health. Wearable monitors that provide feedback to users have also been used in longitudinal interventions to motivate research participants and to assess their compliance with program goals. In the third section, future directions for research in physical activity monitoring are discussed. It is likely that new developments in wearable monitors will lead to greater accuracy and improved ease-of-use.
A non-extensive statistical physics analysis of the Hellenic subduction zone seismicity
NASA Astrophysics Data System (ADS)
Vallianatos, F.; Papadakis, G.; Michas, G.; Sammonds, P.
2012-04-01
The Hellenic subduction zone is the most seismically active region in Europe [Becker & Meier, 2010]. The spatial and temporal distribution of seismicity, as well as the magnitude distribution of earthquakes in the Hellenic subduction zone, has been studied using the concept of Non-Extensive Statistical Physics (NESP) [Tsallis, 1988; Tsallis, 2009]. Non-Extensive Statistical Physics, a generalization of Boltzmann-Gibbs statistical physics, appears to be a suitable framework for studying complex systems [Vallianatos, 2011]. Using this concept, Abe & Suzuki (2003; 2005) investigated the spatial and temporal properties of seismicity in California and Japan, and more recently Darooneh & Dadashinia (2008) did so in Iran. Furthermore, Telesca (2011) calculated the thermodynamic parameter q of the magnitude distribution of earthquakes in the southern California earthquake catalogue. Using the external seismic zones of 36 seismic sources of shallow earthquakes in the Aegean and the surrounding area [Papazachos, 1990], we formed a dataset of the shallow-earthquake seismicity (focal depth ≤ 60 km) of the subduction zone, based on the instrumental data of the Geodynamic Institute of the National Observatory of Athens (http://www.gein.noa.gr/, period 1990-2011). The catalogue consists of 12800 seismic events corresponding to 15 polygons of the aforementioned external seismic zones. These polygons define the subduction zone, as they are associated with the compressional stress field that characterizes a subducting regime. For each event, the moment magnitude was calculated from ML according to the suggestions of Papazachos et al. (1997). The cumulative distribution functions of the inter-event times and inter-event distances, as well as the magnitude distribution for each seismic zone, have been estimated, revealing a variation of the q-triplet along the Hellenic subduction zone.
The models used fit the observed distributions rather well, reflecting the complexity of the spatiotemporal properties of seismicity and the usefulness of NESP in investigating such phenomena, which exhibit a scale-free nature and long-range memory effects. Acknowledgments. This work was supported in part by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project entitled "Integrated understanding of Seismicity, using innovative Methodologies of Fracture mechanics along with Earthquake and non extensive statistical physics - Application to the geodynamic system of the Hellenic Arc. SEISMO FEAR HELLARC". GM and GP wish to acknowledge the partial support of the Greek State Scholarships Foundation (ΙΚΥ).
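In NESP analyses of this kind, inter-event time (and distance) distributions are typically described by the Tsallis q-exponential. A minimal sketch follows; the characteristic scale τ0 and the parameter q are placeholders that would be fitted separately for each seismic zone:

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential, exp_q(x) = [1 + (1 - q) x]^(1 / (1 - q)),
    with the convention exp_q(x) = 0 when the base is non-positive.
    It reduces to the ordinary exponential as q -> 1."""
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

def survival(tau, tau0, q):
    """NESP-style cumulative distribution P(> tau) for inter-event
    times: a q-exponential decay with characteristic scale tau0."""
    return q_exponential(-tau / tau0, q)

# q > 1 produces the fat, power-law-like tail typical of seismicity:
# at tau = 10 * tau0 the q-exponential retains far more probability
# than the ordinary exponential exp(-10).
p_qe = survival(10.0, 1.0, 1.3)
```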
Modeling of luminance distribution in CAVE-type virtual reality systems
NASA Astrophysics Data System (ADS)
Meironke, Michał; Mazikowski, Adam
2017-08-01
At present, among the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five or six projection screens; in the case of six screens, these are arranged in the form of a cube. Providing the user with a high level of immersion in such systems depends largely on the optical properties of the system. The modeling of physical phenomena nowadays plays a major role in most fields of science and technology, as it allows the operation of a device to be simulated without making any changes to the physical construction. In this paper, the distribution of luminance in CAVE-type virtual reality systems was modelled. Calculations were performed for a model of a 6-walled CAVE-type installation, based on the Immersive 3D Visualization Laboratory situated at the Faculty of Electronics, Telecommunications and Informatics of the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material in order to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modelled CAVE-type installation are presented together with the results, along with a brief discussion of the results and of the usefulness of the developed model.
Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack
2016-01-01
For several decades the physical mechanism underlying the discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of the noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to ‘internal photons’ inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350–700 nm. We show here that the retinal discrete dark noise has a rate similar to that of UPE, and therefore the dark noise is most likely due to spontaneous cellular UPE and not to thermal activation. PMID:26950936
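The difference between the two thermal-activation models can be sketched directly: the Boltzmann factor counts thermal energy in a single mode, while the Hinshelwood distribution pools the activation energy over m classical vibrational modes, raising the fraction of molecules above the barrier. The barrier height and mode count below are illustrative orders of magnitude, not fitted pigment values:

```python
import math

def boltzmann_fraction(e_a, k_t):
    """Fraction of molecules with energy above e_a in a single mode:
    the classic Boltzmann/Arrhenius factor exp(-Ea / kT)."""
    return math.exp(-e_a / k_t)

def hinshelwood_fraction(e_a, k_t, m):
    """Hinshelwood generalisation: with energy pooled over m classical
    modes, the above-threshold fraction becomes
    exp(-x) * sum_{i=0}^{m-1} x^i / i!, with x = Ea / kT."""
    x = e_a / k_t
    return math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(m))

# With a barrier of ~45 kT, pooling over 10 modes boosts the predicted
# thermal rate by many orders of magnitude -- yet, per the abstract,
# still not enough to account for the observed dark noise.
b = boltzmann_fraction(45.0, 1.0)
h = hinshelwood_fraction(45.0, 1.0, 10)
```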
Network-based Modeling of Mesoscale Catchments - The Hydrology Perspective of GLOWA-Danube
NASA Astrophysics Data System (ADS)
Ludwig, R.; Escher-Vetter, H.; Hennicker, R.; Mauser, W.; Niemeyer, S.; Reichstein, M.; Tenhunen, J.
Within the GLOWA initiative of the German Ministry for Research and Education (BMBF), the project GLOWA-Danube is funded to establish a transdisciplinary network-based decision support tool for water-related issues in the Upper Danube watershed. It aims to develop and validate integration techniques, integrated models and integrated monitoring procedures and to implement them in the network-based Decision Support System DANUBIA. An accurate description of the processes involved in energy, water and matter fluxes and turnovers requires an intense collaboration and exchange of water-related expertise across different scientific disciplines. DANUBIA is conceived as a distributed expert network and is developed on the basis of re-usable, refinable, and documented sub-models. In order to synthesize a common understanding between the project partners, a standardized notation of parameters and functions and a platform-independent structure of computational methods and interfaces have been established using the Unified Modeling Language (UML). DANUBIA is object-oriented, spatially distributed and raster-based at its core. It applies the concept of "proxels" (process pixels) as its basic object, which have different dimensions depending on the viewing scale and connect to their environment through fluxes. The presented study highlights the hydrological viewpoint of GLOWA-Danube, its approach to model coupling and network-based communication (using Remote Method Invocation, RMI), the object-oriented technology used to simulate physical processes and interactions at the land surface, and the methodology used to treat the issue of spatial and temporal scaling in large, heterogeneous catchments.
The mechanisms applied to communicate data and model parameters across typical discipline borders will be demonstrated from the perspective of a land-surface object, which comprises the capabilities of interdependent expert models for snowmelt, soil water movement, runoff formation, plant growth and radiation balance in a distributed Java-based modeling environment. The coupling to the adjacent physical objects of atmosphere, groundwater and river network will also be addressed.
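The proxel concept can be sketched as an object holding local state and coupled to its environment only through fluxes. DANUBIA itself is Java-based; the Python class below and its state variables are illustrative only, not part of the actual system:

```python
class Proxel:
    """Sketch of the GLOWA-Danube 'proxel' (process pixel) idea:
    a raster cell object with local storage that exchanges water with
    its environment exclusively through fluxes, so expert models can
    be coupled without sharing internal state."""

    def __init__(self, area_m2, storage_mm=0.0):
        self.area_m2 = area_m2
        self.storage_mm = storage_mm   # e.g. soil water storage

    def step(self, precip_mm, et_mm, runoff_coeff=0.3):
        """One time step: partition incoming water into runoff and a
        storage change, and return the runoff flux handed on to the
        neighbouring (e.g. river network) object."""
        effective = max(0.0, precip_mm - et_mm)   # net water input
        runoff = runoff_coeff * effective
        self.storage_mm += effective - runoff
        return runoff

p = Proxel(area_m2=1.0e6)                        # a 1 km^2 raster cell
q = p.step(precip_mm=10.0, et_mm=2.0)            # runoff flux, mm
```

The flux-only interface is what makes the sub-models independently re-usable and refinable, as the abstract describes.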
CLINICAL APPLICATIONS OF CRYOTHERAPY AMONG SPORTS PHYSICAL THERAPISTS.
Hawkins, Shawn W; Hawkins, Jeremy R
2016-02-01
Therapeutic modalities (TM) are used by sports physical therapists (SPT), but how they are used is unknown. To identify current clinical use patterns for cryotherapy among SPT. Cross-sectional survey. All members (7283) of the Sports Physical Therapy Section of the APTA were recruited. A scenario-based survey using pre-participation management of an acute or sub-acute ankle sprain was developed. A Select Survey link was distributed via email to participants. Respondents selected a treatment approach based upon options provided, and follow-up questions were asked. The survey was available for two weeks, with a follow-up email sent after one week. Question answers were the main outcome measures. Reliability: Cronbach's alpha > 0.9. The SPT response rate was 6.9% (503); responses came from 48 states. Survey results indicated great variability in respondents' approaches to the treatment of an acute and sub-acute ankle sprain. SPT applied cryotherapy with great variability and not always in accordance with the limited research on the TM. Continuing education, application of current research, and additional outcomes-based research need to remain a focus for clinicians. Level of evidence: 3.
Modelling the Interior Structure of Enceladus Based on the 2014's Cassini Gravity Data.
Taubner, R-S; Leitner, J J; Firneis, M G; Hitzenberger, R
2016-06-01
We present a model for the internal structure of Saturn's moon Enceladus. This model allows us to estimate the physical conditions at the bottom of the satellite's potential subsurface water reservoir and to determine the radial distribution of pressure and gravity. This leads to a better understanding of the physical and chemical conditions at the water/rock boundary. This boundary is the most promising area on icy moons for astrobiological studies as it could serve as a potential habitat for extraterrestrial life similar to terrestrial microbes that inhabit rocky mounds on Earth's sea floors.
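The radial distribution of gravity and pressure in a layered body of this kind can be sketched by integrating the enclosed mass outward and the hydrostatic pressure inward from the surface. The two-layer radii and densities below are rough, Enceladus-like placeholder values, not the paper's fitted model:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def radial_profiles(layers, n=2000):
    """Integrate g(r) and hydrostatic pressure P(r) for a spherically
    symmetric, layered body.  `layers` is a list of
    (outer_radius_m, density_kg_m3) tuples, ordered from the centre
    outward."""
    r_max = layers[-1][0]
    dr = r_max / n
    radii = [dr * (i + 1) for i in range(n)]

    def rho(r):
        for r_out, d in layers:
            if r <= r_out:
                return d
        return layers[-1][1]

    # g(r) = G * M(<r) / r^2, accumulating shell masses outward
    mass, g = 0.0, []
    for r in radii:
        mass += 4.0 * math.pi * r * r * rho(r) * dr
        g.append(G * mass / (r * r))

    # hydrostatic pressure: integrate rho * g inward from P(surface) = 0
    p = [0.0] * n
    for i in range(n - 2, -1, -1):
        p[i] = p[i + 1] + rho(radii[i]) * g[i] * dr
    return radii, g, p

# Illustrative two-layer body: rocky core to ~190 km (2400 kg/m^3)
# under a water/ice shell out to ~252 km (1000 kg/m^3).
radii, g, p = radial_profiles([(1.9e5, 2400.0), (2.52e5, 1000.0)])
surface_g, central_p = g[-1], p[0]
```

With these placeholder layers the surface gravity comes out near Enceladus' observed ~0.11 m/s², and the profile gives the pressure at the base of a putative water layer directly.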
A quantum proxy group signature scheme based on an entangled five-qubit state
NASA Astrophysics Data System (ADS)
Wang, Meiling; Ma, Wenping; Wang, Lili; Yin, Xunru
2015-09-01
A quantum proxy group signature (QPGS) scheme based on controlled teleportation is presented, using entangled five-qubit quantum states as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. The security of the scheme is guaranteed by the entanglement correlations of the entangled five-qubit state and by secret keys based on quantum key distribution (QKD) and the one-time pad algorithm, both of which have been proven to be unconditionally secure, as well as by the anonymity of the signature.
A Novel Quantum Solution to Privacy-Preserving Nearest Neighbor Query in Location-Based Services
NASA Astrophysics Data System (ADS)
Luo, Zhen-yu; Shi, Run-hua; Xu, Min; Zhang, Shun
2018-04-01
We present a cheating-sensitive quantum protocol for Privacy-Preserving Nearest Neighbor Query based on Oblivious Quantum Key Distribution and Quantum Encryption. Compared with related classical protocols, our proposed protocol has higher security, because its security rests on basic physical principles of quantum mechanics rather than on computational difficulty assumptions. In particular, our protocol takes single photons as quantum resources and only needs to perform single-photon projective measurements. Therefore, it is feasible to implement this protocol with present technologies.
NASA Astrophysics Data System (ADS)
Nazarov, Anton
2012-11-01
In this paper we present Affine.m, a program for computations in representation theory of finite-dimensional and affine Lie algebras, and describe the implemented algorithms. The algorithms are based on the properties of weights and Weyl symmetry. Computation of weight multiplicities in irreducible and Verma modules, branching of representations and tensor product decomposition are the most important problems for us. These problems have numerous applications in physics and we provide some examples of these applications. The program is implemented in the popular computer algebra system Mathematica and works with finite-dimensional and affine Lie algebras. Catalogue identifier: AENA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENB_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24 844 No. of bytes in distributed program, including test data, etc.: 1 045 908 Distribution format: tar.gz Programming language: Mathematica. Computer: i386-i686, x86_64. Operating system: Linux, Windows, Mac OS, Solaris. RAM: 5-500 Mb Classification: 4.2, 5. Nature of problem: Representation theory of finite-dimensional Lie algebras has many applications in different branches of physics, including elementary particle physics, molecular physics, nuclear physics. Representations of affine Lie algebras appear in string theories and in two-dimensional conformal field theory used for the description of critical phenomena in two-dimensional systems. Also, Lie symmetries play a major role in the study of quantum integrable systems. Solution method: We work with weights and roots of finite-dimensional and affine Lie algebras and use Weyl symmetry extensively.
The central problems, namely the computation of weight multiplicities and of branching and fusion coefficients, are solved using one general recurrent algorithm based on a generalization of the Weyl character formula. We also offer an alternative implementation based on the Freudenthal multiplicity formula, which can be faster in some cases. Restrictions: Computational complexity grows fast with the rank of an algebra, so computations for algebras of rank greater than 8 are not practical. Unusual features: We offer the possibility of using traditional mathematical notation for the objects of the representation theory of Lie algebras in computations if Affine.m is used in the Mathematica notebook interface. Running time: From seconds to days, depending on the rank of the algebra and the complexity of the representation.
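For reference, the Freudenthal multiplicity formula mentioned above determines the multiplicity m_λ of a weight λ in the irreducible module of highest weight Λ recursively, from multiplicities of weights higher in the partial order:

```latex
\left( \|\Lambda+\rho\|^{2} - \|\lambda+\rho\|^{2} \right) m_{\lambda}
  \;=\; 2 \sum_{\alpha>0} \sum_{k\geq 1}
        \left( \lambda + k\alpha,\, \alpha \right)\, m_{\lambda+k\alpha}
```

Here ρ is the Weyl vector, the outer sum runs over the positive roots α, and the recursion starts from m_Λ = 1; its cost grows quickly with rank, consistent with the rank-8 restriction stated above.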
Church, Timothy S
2016-11-01
The analysis plan and article in this issue of the Journal by Evenson et al. (Am J Epidemiol 2016;184(9):621-632) is well-conceived, thoughtfully conducted, and tightly written. The authors utilized the National Health and Nutrition Examination Survey data set to examine the association between accelerometer-measured physical activity level and mortality and found that meeting the 2013 federal Physical Activity Guidelines resulted in a 35% reduction in risk of mortality. The timing of these findings could not be better, given the ubiquitous nature of personal accelerometer devices. The masses are already equipped to routinely quantify their activity, and now we have the opportunity and responsibility to provide evidence-based, tailored physical activity goals. We have evidence-based physical activity guidelines, mass distribution of devices to track activity, and now scientific support indicating that meeting the physical activity goal, as assessed by these devices, has substantial health benefits. All of the pieces are in place to make physical inactivity a national priority, and we now have the opportunity to positively affect the health of millions of Americans.
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size, and attenuation mechanisms from source to site, constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
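The maximum-likelihood estimation step can be illustrated on a synthetic pure-Pareto catalog. The sketch below uses only the simple (Hill) estimator for the scaling exponent; the paper's joint two-parameter fits additionally estimate the corner size of the tapered Pareto distribution, which is omitted here:

```python
import math
import random

def pareto_sample(beta, x_min, n, seed=42):
    """Draw n synthetic event sizes from a pure Pareto distribution
    with survival function P(X > x) = (x_min / x)^beta, via
    inverse-transform sampling."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / beta) for _ in range(n)]

def mle_exponent(xs, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto exponent:
    beta_hat = n / sum(log(x / x_min)).  Re-running this on short
    synthetic catalogs shows the sampling scatter the abstract
    attributes to limited catalog length."""
    return len(xs) / sum(math.log(x / x_min) for x in xs)

sizes = pareto_sample(beta=1.0, x_min=1.0, n=5000)
beta_hat = mle_exponent(sizes, 1.0)   # close to the true beta = 1.0
```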
Code of Federal Regulations, 2013 CFR
2013-07-01
... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR... DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads... for making and distributing phonorecords, including by means of digital phonorecord deliveries, in...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR... DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads... for making and distributing phonorecords, including by means of digital phonorecord deliveries, in...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR... DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads... for making and distributing phonorecords, including by means of digital phonorecord deliveries, in...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR... DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads... for making and distributing phonorecords, including by means of digital phonorecord deliveries, in...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR... DISTRIBUTING OF PHYSICAL AND DIGITAL PHONORECORDS Physical Phonorecord Deliveries, Permanent Digital Downloads... for making and distributing phonorecords, including by means of digital phonorecord deliveries, in...
Bayesian ionospheric multi-instrument 3D tomography
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Vierinen, Juha; Roininen, Lassi
2017-04-01
The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. In particular, the vertical electron density profile is determined predominantly by the regularisation. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics, other available ionospheric measurements and their statistics. Updating the prior with measurements yields the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides understanding of how strong the given regularisation is, what information is gained with the measurements and how reliable the final result is. In addition, the combination of different measurements and temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires inversion of large covariance matrices, resulting in computational infeasibility. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. The approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe.
Multi-instrument measurements are utilised from the TomoScand receiver network for low Earth orbit beacon satellite signals, from GNSS receiver networks, as well as from EISCAT ionosondes and incoherent scatter radars. The performance is demonstrated in the three-dimensional spatial domain, with temporal development also taken into account.
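A minimal 1-D analogue of this approach can be written down directly, assuming a linear ray-sum forward model and a second-difference (smoothness) GMRF prior; the grid size, noise level, and prior weights are all illustrative. In the full 3-D problem the prior precision matrix is sparse, which is exactly what keeps the inversion computationally feasible:

```python
import numpy as np

# Toy 1-D analogue of GMRF-regularised Bayesian tomography.
n = 100
x_true = np.exp(-((np.arange(n) - 55.0) / 12.0) ** 2)   # "electron density"

# Forward model: each measurement sums densities along a random ray segment.
rng = np.random.default_rng(0)
m = 40
A = np.zeros((m, n))
for i in range(m):
    a, b = sorted(rng.integers(0, n, size=2))
    A[i, a:b + 1] = 1.0
y = A @ x_true + 0.1 * rng.standard_normal(m)            # noisy data

# GMRF prior: precision matrix built from a second-difference operator,
# i.e. an explicit, quantifiable smoothness assumption.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]
Q_prior = 10.0 * D.T @ D + 1e-4 * np.eye(n)
Q_noise = np.eye(m) / 0.1 ** 2                           # measurement precision

# Posterior precision and mean (the MAP estimate of the profile).
Q_post = A.T @ Q_noise @ A + Q_prior
x_map = np.linalg.solve(Q_post, A.T @ Q_noise @ y)
```

The posterior precision Q_post also yields the probability intervals mentioned in the abstract, via its inverse's diagonal.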
A hybrid deep neural network and physically based distributed model for river stage prediction
NASA Astrophysics Data System (ADS)
Hitokoto, Masayuki; Sakuraba, Masaaki
2016-04-01
We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. As the basic model, a 4-layer feed-forward artificial neural network (ANN) was used. As the network training method, the deep learning technique was applied. To optimize the network weights, the stochastic gradient descent method based on back propagation was used. As a pre-training method, the denoising autoencoder was used. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level of the downstream station. In general, the desirable input of an ANN has a strong correlation with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by the catchment storage. Therefore, the change of the catchment storage, i.e. rainfall minus downstream discharge, can be a potent input candidate for the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; finally, the ANN model is run. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to one catchment of the Ooyodo River, one of the first-class rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water level gauging stations and 14 rain-gauge stations in the catchment were used. The training floods, the 24 largest events, were selected from the period 2005-2014. Predictions were made up to 6 hours ahead, and 6 models were developed, one for each prediction lead time.
To set proper learning parameters and the network architecture of the ANN model, a sensitivity analysis was done using a case study approach. The prediction results were evaluated on the 4 largest flood events by leave-one-out cross validation. The prediction result of the basic 4-layer ANN was better than that of a conventional 3-layer ANN model. However, the result did not reproduce the biggest flood event well, presumably because of the lack of sufficient high water-level flood events in the training data. The hybrid model outperforms both the basic ANN model and the distributed model, and especially improves on the performance of the basic ANN model in the biggest flood event.
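The storage-change input that links the distributed model to the ANN can be sketched as follows. The unit conversion is standard hydrology (m³/s to mm/h over the catchment area), but the placeholder rainfall and discharge series are illustrative, not data from the study:

```python
def storage_change_series(rain_mm_h, discharge_m3_s, area_km2):
    """Hourly catchment-storage change used as the ANN input in the
    hybrid model: rainfall minus downstream discharge, with the
    discharge converted from m^3/s to an areal depth in mm/h."""
    out = []
    for p, q in zip(rain_mm_h, discharge_m3_s):
        # m^3/s -> m^3/h -> m/h over the catchment -> mm/h
        q_mm_h = q * 3600.0 / (area_km2 * 1e6) * 1000.0
        out.append(p - q_mm_h)
    return out

# 695 km^2 catchment (as in the study); the rainfall series and the
# distributed model's simulated discharge here are placeholders.
ds = storage_change_series([5.0, 10.0, 8.0], [50.0, 120.0, 200.0], 695.0)
```

In training, observed discharge replaces the simulated discharge, exactly as the abstract describes for the training phase.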
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment.
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, relative to the likely costs that would be incurred in obtaining the data sets themselves.
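The generalised likelihood uncertainty estimation (GLUE) step, sampling parameter sets and retaining the top 100 behavioural ones, can be sketched as follows; the one-parameter recession "model" and the Nash-Sutcliffe likelihood measure are illustrative assumptions, not the CRUM3 model:

```python
# Toy GLUE-style selection of behavioural parameter sets. The one-parameter
# recession "model" and the NSE likelihood are illustrative assumptions.
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def toy_model(k, n=10):
    # discharge recession controlled by a single parameter k
    return [10.0 * (1.0 - k) ** t for t in range(n)]

random.seed(0)
obs = toy_model(0.3)                      # synthetic "observations"
samples = [random.random() for _ in range(1000)]
scored = sorted(((nse(toy_model(k), obs), k) for k in samples), reverse=True)
behavioural = scored[:100]                # retain the top 100 parameter sets
```

Predictions (here, transit-time statistics in the study) would then be computed once per behavioural set, yielding an ensemble rather than a single answer.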
Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.
Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun
2014-01-01
In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures are operated in a distributed manner. A frame format is designed considering the characteristic of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol with respect to the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it can easily be placed on top of other resource allocation methods that focus on the physical-layer issues of the resource management problem and be interworked with them.
Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bratton, Robert L.
2011-01-01
The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
NASA Astrophysics Data System (ADS)
Meli, Kalliopi; Lavidas, Konstantinos; Koliopoulos, Dimitrios
2018-04-01
Low enrolment in undergraduate level physics programmes has drawn the attention of the relevant disciplines, education policy-makers, and researchers worldwide. Many reports released during the previous decades attempt to identify the factors that attract young people to study science, but only few of them focus explicitly on physics. In Greece, in contrast to many other countries, physics departments are overflowing with young students. However, there are two categories of students: those for whom physics was the optimal choice of a programme ("choosers") and those for whom physics was an alternative choice that they had to settle for. We suggest that the latter category be called "nearly-choosers," in order to be differentiated from choosers as well as from "non-choosers," namely those candidates that did not apply to a physics programme at all. We are interested in the factors that attract high school students to study physics and the differences (if any) between choosers and nearly-choosers. A newly formed questionnaire was distributed within a Greek physics department (University of Patras), and the students' responses (n = 105) were analysed with exploratory factor analysis and specifically principal component analysis so as to extract broad factors. Three broad factors have arisen: school-based, career, and informal learning. The first two factors proved to be motivating for pursuing a degree in physics, while the third factor appeared to have a rather indifferent association. t tests and Pearson correlations indicated mild differentiations between choosers and nearly-choosers that pertain to school-based influences and informal learning.
First moments of nucleon generalized parton distributions
Wang, P.; Thomas, A. W.
2010-06-01
We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is based on the one loop level with the finite range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.
Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.
Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo
2018-05-10
The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
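A minimal sketch of a marginal-contribution best-response loop follows, under a much simpler utility model than the physical interference model analyzed in the paper (unit capacity per occupied subchannel, shared equally among its users):

```python
# Illustrative marginal-contribution best-response (MCBR) loop for
# subchannel selection. The welfare model is a stand-in: each occupied
# subchannel contributes unit capacity, split equally among its users.

def welfare(choices, n_channels):
    total = 0.0
    for c in range(n_channels):
        users = sum(1 for ch in choices.values() if ch == c)
        if users:
            total += 1.0  # users * (1/users) per occupied subchannel
    return total

def marginal_contribution(player, channel, choices, n_channels):
    with_p = dict(choices)
    with_p[player] = channel
    without = {k: v for k, v in choices.items() if k != player}
    return welfare(with_p, n_channels) - welfare(without, n_channels)

def mcbr(players, n_channels, rounds=10):
    choices = {p: 0 for p in players}          # all start on subchannel 0
    for _ in range(rounds):
        changed = False
        for p in players:
            best = max(range(n_channels),
                       key=lambda c: marginal_contribution(p, c, choices,
                                                           n_channels))
            if (marginal_contribution(p, best, choices, n_channels) >
                    marginal_contribution(p, choices[p], choices, n_channels)):
                choices[p] = best
                changed = True
        if not changed:
            break                              # pure-strategy NE reached
    return choices

alloc = mcbr(players=["a", "b", "c"], n_channels=3)
```

With three players and three subchannels, the loop settles on a pure strategy Nash equilibrium in which every player occupies a distinct subchannel, maximizing the toy welfare.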
Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model
NASA Astrophysics Data System (ADS)
Anderson, K. R.
2016-12-01
Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. 
Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.
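The mixed deterministic-stochastic forecasting idea, a deterministic rate model combined with event durations drawn from past observations, can be caricatured in a few lines; every number below is invented:

```python
# Caricature of a mixed deterministic-stochastic eruption forecast. The
# rate model, durations and magnitudes are invented numbers, not Kilauea
# observations.
import random

random.seed(1)
past_durations = [12, 18, 24, 30]        # hours; hypothetical event history

def eruption_rate(t, duration, q0=5.0):
    # rate (m^3/s) is suppressed while the event lasts, then recovers
    return 0.2 * q0 if t < duration else q0

def forecast_volumes(horizon=48, n_samples=2000):
    volumes = []
    for _ in range(n_samples):
        duration = random.choice(past_durations)   # stochastic duration
        v = sum(eruption_rate(t, duration) * 3600  # hourly time steps
                for t in range(horizon))
        volumes.append(v)
    return sorted(volumes)

vols = forecast_volumes()
p50 = vols[len(vols) // 2]   # median cumulative erupted volume (m^3)
```

The sorted sample of cumulative volumes is exactly the kind of probability distribution the abstract describes as the forecast product; quantiles other than the median are read off the same list.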
pyCTQW: A continuous-time quantum walk simulator on distributed memory computers
NASA Astrophysics Data System (ADS)
Izaac, Josh A.; Wang, Jingbo B.
2015-01-01
In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
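The propagation step |psi(t)> = exp(-iHt)|psi(0)> that pyCTQW accelerates with Chebyshev and Krylov-subspace methods can be illustrated with a much cruder truncated Taylor series on a two-vertex graph; this is a standalone sketch, not the pyCTQW API:

```python
# Standalone illustration of CTQW propagation |psi(t)> = exp(-iHt)|psi(0)>
# via a truncated Taylor series; pyCTQW itself uses Chebyshev or
# Krylov-subspace methods for this step, and its API is not shown here.
import math

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v)))
            for i in range(len(v))]

def ctqw_propagate(H, psi0, t, terms=40):
    # psi(t) = sum_k ((-i t)^k / k!) H^k psi0, built term by term
    psi, term = list(psi0), list(psi0)
    for k in range(1, terms):
        term = [(-1j * t / k) * x for x in matvec(H, term)]
        psi = [p + x for p, x in zip(psi, term)]
    return psi

# adjacency matrix of the two-vertex graph with a single edge
H = [[0.0, 1.0], [1.0, 0.0]]
psi_t = ctqw_propagate(H, [1.0 + 0j, 0j], t=math.pi / 2)
# at t = pi/2 the walker has fully transferred to the second vertex
```

The Taylor series is adequate for this toy case but numerically poor for large ||H||t, which is precisely why polynomial (Chebyshev) and Krylov methods are preferred in practice.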
Spectroscopic observations of the extended corona during the SOHO whole sun month
NASA Technical Reports Server (NTRS)
Strachan, L.; Raymond, J. C.; Panasyuk, A. V.; Fineschi, S.; Gardner, L. D.; Antonucci, E.; Giordano, S.; Romoli, M.; Noci, G.; Kohl, J. L.
1997-01-01
The spatial distribution of plasma parameters in the extended corona, derived from the ultraviolet coronagraph spectrometer (UVCS) onboard the Solar and Heliospheric Observatory (SOHO), was investigated. The observations were carried out during the SOHO whole month campaign. Daily coronal scans in the H I Lyman alpha and O VI lambda-lambda 1032 A and 1037 A were used. Maps of outflow velocities of O(5+), based on Doppler dimming of the O VI lines, are discussed. The velocity distribution widths of O(5+) are shown to be a clear signature of coronal holes while the velocity distributions for H(0) show a much smaller effect. The possible physical explanations for some of the observed features are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian; Malard, Joel M.
2004-08-01
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today’s most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies for them. In particular, we review and discuss applications of some analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot otherwise be understood.
A charge-based model of Junction Barrier Schottky rectifiers
NASA Astrophysics Data System (ADS)
Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali
2018-06-01
A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem which has a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though other fully 2-D models already available provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.
Füzéki, Eszter; Vogt, Lutz; Banzer, Winfried
2017-03-01
National physical activity recommendations are regarded as crucial elements of comprehensive physical activity promotion strategies. To date, Germany has no such national physical activity recommendations. The aim of this study was to provide physical activity recommendations based on a comprehensive summary of scientific evidence on the relationships between physical activity and a range of health outcomes in adults and older adults. The recommendations were developed in a 3-phase process (systematic literature review, development and use of quality criteria, synthesis of content) based on already existing high-quality guidelines. Based on the analysis of documents included in this study, the following recommendations were formulated. To gain wide-ranging health benefits, adults and older adults should be physically active regularly and avoid inactivity. Adults and older adults should carry out at least 150 min/week of moderate-intensity or 75 min/week of high-intensity aerobic activity. Adults and older adults can also reach the recommended amount of physical activity by performing activities in an appropriate combination of both intensity ranges. Optimally, physical activity should be distributed over the week, and it can be accumulated in bouts of at least 10 min. Physical activity beyond 150 min/week yields further health benefits. At the same time, even physical activity below 150 min/week is associated with meaningful health gains. Accordingly, all adults and older adults should be encouraged to be physically active whenever possible. Adults and older adults should also perform muscle-strengthening activities at least twice a week. Regular balance exercises (3 times a week) can reduce the risk of falls in older adults. Adults and older adults should avoid long periods of sitting and should break up sitting time with physical activity. Physical activity can lead to adverse events, such as musculoskeletal injuries, which can be mitigated through appropriate measures.
All in all, the benefits of regular physical activity far outweigh the risks in both adults and older adults. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Liu, Gang; Zhao, Rong; Liu, Jiping; Zhang, Qingpu
2007-06-01
The Lancang River Basin is narrow, and its hydrological and meteorological conditions are highly variable. Rainfall, evaporation, glacial meltwater and groundwater all affect the runoff, whose dominant source of replenishment changes markedly with the season in different areas of the basin. The characteristics of different kinds of distributed models and conceptual hydrological models are analyzed. A semi-distributed hydrological model relating monthly runoff to rainfall, temperature and soil type has been built for Changdu County based on Visual Basic and ArcObjects. The model uses the discretization approach of distributed hydrological models while taking the principles of conceptual models into account. The Changdu sub-catchment is divided into regular cells, and all kinds of hydrological and meteorological information, together with land-use classes and slope extracted from 1:250,000 digital elevation models, are distributed to each cell. The model does not represent the rainfall-runoff hydro-physical process explicitly but uses the conceptual model to simulate each cell's contribution to the runoff of the area, while the effects of evapotranspiration losses and groundwater are taken into account at the same time. The spatial distribution characteristics of the monthly runoff in the area are simulated and analyzed with a few parameters.
Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs
NASA Astrophysics Data System (ADS)
Zhu, Yanfeng; Niu, Zhisheng
Much research has shown that a carefully designed auto rate medium access control can utilize the underlying physical multi-rate capability to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes called FARM and FARM+ from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed by the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the distribution of SNR varies with stations. Extensive simulation results show that the proposed schemes outperform existing throughput/time-share fair auto rate schemes in time-varying channel conditions.
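The receiver-side rate selection described above, mapping the SNR of the received RTS to the highest feasible rate returned in the CTS, can be sketched as follows; the thresholds and rates are hypothetical, not taken from the paper or the 802.11 standard:

```python
# Sketch of receiver-side rate selection: map the SNR measured on the
# received RTS frame to the highest feasible rate for the returned CTS.
# Thresholds and rates are illustrative placeholders.

RATE_TABLE = [       # (minimum SNR in dB, rate in Mbit/s) - hypothetical
    (25.0, 54.0),
    (18.0, 36.0),
    (10.0, 18.0),
    (4.0, 6.0),
]

def select_rate(rts_snr_db):
    """Return the highest rate whose SNR threshold is met, else None."""
    for snr_min, rate in RATE_TABLE:
        if rts_snr_db >= snr_min:
            return rate
    return None  # channel too poor; withhold the CTS
```

A fairness-aware variant such as FARM+ would additionally weight this choice by each station's accumulated throughput or airtime, which the plain table lookup above does not capture.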
Fractal analysis of the spatial distribution of earthquakes along the Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter
2014-05-01
The Hellenic Subduction Zone (HSZ) is the most seismically active region in Europe. Many destructive earthquakes have taken place along the HSZ in the past. The evolution of such active regions is expressed through seismicity and is characterized by complex phenomenology. Understanding the tectonic evolution process and the physical state of subducting regimes is crucial in earthquake prediction. In recent years, there has been growing interest in an approach to seismicity based on the science of complex systems (Papadakis et al., 2013; Vallianatos et al., 2012). In this study we calculate the fractal dimension of the spatial distribution of earthquakes along the HSZ and aim to understand the significance of the obtained values for the tectonic and geodynamic evolution of this area. We use the external seismic sources provided by Papaioannou and Papazachos (2000) to create a dataset regarding the subduction zone. Following these authors, we define five seismic zones. We then construct an earthquake dataset based on the updated and extended earthquake catalogue for Greece and adjacent areas by Makropoulos et al. (2012), covering the period 1976-2009. The fractal dimension of the spatial distribution of earthquakes is calculated for each seismic zone and for the HSZ as a unified system using the box-counting method (Turcotte, 1997; Robertson et al., 1995; Caneva and Smirnov, 2004). Moreover, the variation of the fractal dimension is demonstrated in different time windows. These spatiotemporal variations could be used as an additional index informing us about the physical state of each seismic zone. The use of the fractal dimension as a precursor in earthquake forecasting appears to be a very interesting direction for future work. Acknowledgements Giorgos Papadakis wishes to acknowledge the Greek State Scholarships Foundation (IKY). References Caneva, A., Smirnov, V., 2004.
Using the fractal dimension of earthquake distributions and the slope of the recurrence curve to forecast earthquakes in Colombia. Earth Sci. Res. J., 8, 3-9. Makropoulos, K., Kaviris, G., Kouskouna, V., 2012. An updated and extended earthquake catalogue for Greece and adjacent areas since 1900. Nat. Hazards Earth Syst. Sci., 12, 1425-1430. Papadakis, G., Vallianatos, F., Sammonds, P., 2013. Evidence of non extensive statistical physics behavior of the Hellenic Subduction Zone seismicity. Tectonophysics, 608, 1037-1048. Papaioannou, C.A., Papazachos, B.C., 2000. Time-independent and time-dependent seismic hazard in Greece based on seismogenic sources. Bull. Seismol. Soc. Am., 90, 22-33. Robertson, M.C., Sammis, C.G., Sahimi, M., Martin, A.J., 1995. Fractal analysis of three-dimensional spatial distributions of earthquakes with a percolation interpretation. J. Geophys. Res., 100, 609-620. Turcotte, D.L., 1997. Fractals and chaos in geology and geophysics. Second Edition, Cambridge University Press. Vallianatos, F., Michas, G., Papadakis, G., Sammonds, P., 2012. A non-extensive statistical physics view to the spatiotemporal properties of the June 1995, Aigion earthquake (M6.2) aftershock sequence (West Corinth rift, Greece). Acta Geophys., 60, 758-768.
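The box-counting method cited above can be sketched in a few lines; the 2-D point cloud is synthetic, standing in for an earthquake catalogue:

```python
# Minimal box-counting estimate of fractal dimension for a 2-D point set.
# The uniform synthetic cloud stands in for epicentre coordinates; a
# uniform planar set should recover a dimension close to 2.
import math
import random

def box_count(points, box_size):
    """Number of grid boxes of side `box_size` containing >= 1 point."""
    boxes = {(math.floor(x / box_size), math.floor(y / box_size))
             for x, y in points}
    return len(boxes)

def fractal_dimension(points, sizes):
    # least-squares slope of log N(s) against log(1/s)
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

random.seed(2)
cloud = [(random.random(), random.random()) for _ in range(20000)]
d = fractal_dimension(cloud, sizes=[0.2, 0.1, 0.05, 0.025])
```

Applied to real epicentres, a value well below the embedding dimension indicates clustering, which is the kind of signal the study tracks across seismic zones and time windows.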
NASA Astrophysics Data System (ADS)
Hapca, Simona
2015-04-01
Many soil properties and functions emerge from interactions of physical, chemical and biological processes at microscopic scales, which can be understood only by integrating techniques that traditionally are developed within separate disciplines. While recent advances in imaging techniques, such as X-ray computed tomography (X-ray CT), offer the possibility to reconstruct the 3D physical structure at fine resolutions, for the distribution of chemicals in soil, existing methods, based on scanning electron microscopy (SEM) and energy dispersive X-ray detection (EDX), allow characterization of the chemical composition only on 2D surfaces. At present, direct 3D measurement techniques are still lacking; sequential sectioning of soils, followed by 2D mapping of chemical elements and interpolation to 3D, is an alternative which is explored in this study. Specifically, we develop an integrated experimental and theoretical framework which combines the 3D X-ray CT imaging technique with 2D SEM-EDX and uses spatial statistics methods to map the chemical composition of soil in 3D. The procedure involves three stages: 1) scanning a resin-impregnated soil cube by X-ray CT, followed by precision cutting to produce parallel thin slices, the surfaces of which are scanned by SEM-EDX, 2) alignment of the 2D chemical maps within the internal 3D structure of the soil cube, and 3) development of spatial statistics methods to predict the chemical composition of 3D soil based on the observed 2D chemical and 3D physical data. Specifically, three statistical models consisting of a regression tree, a regression tree kriging and a cokriging model were used to predict the 3D spatial distribution of carbon, silicon, iron and oxygen in soil, these chemical elements showing a good spatial agreement between the X-ray grayscale intensities and the corresponding 2D SEM-EDX data.
Due to the spatial correlation between the physical and chemical data, the regression-tree model showed great potential in predicting chemical composition, in particular for iron, which is generally sparsely distributed in soil. For carbon, silicon and oxygen, which are more densely distributed, the additional kriging of the regression tree residuals significantly improved the prediction, whereas prediction based on co-kriging was less consistent across replicates, underperforming regression-tree kriging. The present study shows great potential in integrating geostatistical methods with imaging techniques to unveil the 3D chemical structure of soil at very fine scales, the framework being suitable for further application to other types of imaging data, such as images of biological thin sections for characterization of microbial distribution. Key words: X-ray CT, SEM-EDX, segmentation techniques, spatial correlation, 3D soil images, 2D chemical maps.
2017-05-23
Final report, dated 24-05-2017, covering 01 Dec 2015 to 30 Nov 2016. Title: Investigation on the Physics ... physics and microfluidics of the decomposition of H2O2 in MEMS µ-thrusters. Funding institution: USAF AFOSR EOARD; grant number FA9550-16-1-0081. Abbreviations: PVD, Physical Vapor Deposition; UniBO, University of Bologna. DISTRIBUTION A: approved for public release, distribution unlimited.
Time-frequency analysis of acoustic scattering from elastic objects
NASA Astrophysics Data System (ADS)
Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.
1990-06-01
A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
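The discrete Wigner distribution underlying such a time-frequency representation can be sketched as follows; this is the plain (pseudo-)Wigner distribution evaluated on a pure tone, not the modified WDF algorithm of the paper:

```python
# Toy discrete Wigner distribution W[n][k] for a length-N complex signal,
# using periodic indexing. The paper's modified WDF and vein-diagram
# reduction are not reproduced; this only illustrates the basic
# time-frequency idea on a pure tone.
import cmath

def wigner(x):
    N = len(x)
    W = [[0.0] * N for _ in range(N)]
    for n in range(N):          # time index
        for k in range(N):      # frequency bin
            acc = 0j
            for m in range(N):  # lag variable
                acc += (x[(n + m) % N] * x[(n - m) % N].conjugate()
                        * cmath.exp(-4j * cmath.pi * k * m / N))
            W[n][k] = acc.real  # the WDF of a signal is real-valued
    return W

N, f0 = 16, 3
tone = [cmath.exp(2j * cmath.pi * f0 * n / N) for n in range(N)]
W = wigner(tone)
# for a pure tone, energy concentrates at frequency bin f0 at every time
```

For scattering returns, ridges in such a distribution trace the dispersive features that the vein diagram then skeletonizes.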
A statistical physics perspective on alignment-independent protein sequence comparison.
Chattopadhyay, Amit K; Nasiev, Diar; Flower, Darren R
2015-08-01
Within bioinformatics, the textual alignment of amino acid sequences has long dominated the determination of similarity between proteins, with all that implies for shared structure, function and evolutionary descent. Despite the relative success of modern-day sequence alignment algorithms, so-called alignment-free approaches offer a complementary means of determining and expressing similarity, with potential benefits in certain key applications, such as regression analysis of protein structure-function studies, where alignment-based similarity has performed poorly. Here, we offer a fresh, statistical physics-based perspective focusing on the question of alignment-free comparison, in the process adapting results from 'first passage probability distribution' to summarize statistics of ensemble-averaged amino acid propensity values. In this article, we introduce and elaborate this approach. © The Author 2015. Published by Oxford University Press.
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
Processor-Based Strong Physical Unclonable Functions with Aging-Based Response Tuning (Preprint)
2013-01-01
Responsible person: Garret S. Rose. Surviving fragments: delay variations are generated by a quad-tree process variation model [1]; B. Delay model: A and B are technology-dependent constants, and as shown in Equation 2, the Vth shift depends heavily on temperature (T) and stress time (t).
The kinetics and dynamics of the coma of Halley's comet
NASA Technical Reports Server (NTRS)
Combi, Michael R.
1994-01-01
This grant to the University of Michigan supported the efforts of Michael R. Combi to serve as a co-investigator in collaboration with a larger effort by the principal investigator, William Smyth of Atmospheric and Environmental Research, Inc. The overall objective of this project was to analyze in a self-consistent manner unique optical O((sup 1)D) and NH2 ultra-high resolution line profile data of excellent quality and other supporting lower-resolution spectral data for the coma of comet P/Halley by using highly developed and physically-based cometary coma models in order to determine and explain in terms of physical processes the actual dynamics and photochemical kinetics that occur in the coma. The justification for this work is that it provides a valuable and underlying physical base from which to interpret significantly different types of coma observations in a self-consistent manner and hence to bring into agreement (or avoid) apparent inconsistencies that arise from non-physically based interpretations. The level of effort for the Michigan component amounted to less than three person-months over a planned period of three years. The period had been extended at no extra cost to four years because the Michigan grant and the AER contract did not have coincident time periods. An effort of somewhat larger scope was undertaken by the PI. The importance of the O((sup 1)D) profiles is that they provide a direct trace of the water distribution in comets. The line profile shape is produced by the convolution of the outflow velocity and thermal dispersion of the parent water molecules with the photokinetic ejection of the oxygen atoms upon photodissociation of the parent water molecules. Our understanding of NH2 and its precursor ammonia is important for comet-to-comet composition variations as they relate to the cosmo-chemistry of the early solar nebula.
Modeling of the distribution of NH2 is necessary in order to infer the ammonia production rates from NH2 observations.
Atomic- and Device-Scale Physics of Ion-Transport Memristors
2017-02-02
Assigned distribution statement, signed by Arthur Edwards (Program Manager) and David Cardimona (Technical Advisor, Space Based Advanced...). Published in the interest of scientific and technical information exchange; publication does not constitute the Government's approval or disapproval. Available to the general public, including foreign nationals; copies may be obtained from the Defense Technical Information Center (DTIC) (http://www.dtic.mil).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elders, W.A.; Williams, A.E.; Hoagland, J.
1981-01-01
Studies of cuttings and cores at Cerro Prieto have now been extended to more than 50 boreholes. The aims of this petrological and isotopic work are to determine the shape of the reservoir, its physical properties, and its temperature distribution and flow regime before the steam field was produced.
Physical characteristics of the Bahia Blanca estuary (Argentina)
NASA Astrophysics Data System (ADS)
Piccolo, Maria Cintia; Perillo, Gerardo M. E.
1990-09-01
Based on temperature, salinity and current velocity and direction data, the physical characteristics of the Bahia Blanca estuary are described. Data were gathered in vertical profiles made in longitudinal as well as hourly surveys. Freshwater runoff averages 2 m³ s⁻¹; however, peak floods may reach 10-50 m³ s⁻¹. The temperature distribution is quite homogeneous in the estuary. Based on the salinity distribution, the estuary can be divided into two sectors: an inner one showing partially mixed characteristics with a strong tendency to become sectionally homogeneous during runoff conditions similar to the historical averages, and an outer sector which is sectionally homogeneous. Salinity values in the inner sector may be larger than those observed in the inner continental shelf. This results from the restricted circulation in the inner estuary, augmented by the tidal washing of back-estuary salt flats and by evaporation processes. Analysis of the residual circulation shows a marked difference in the direction of mass transport. In the deeper regions of the sections (northern flank) the flow reverses with depth, being headward near the bottom. However, net transport is landward in the shallower parts.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, and these overlap to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
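Joint state-parameter estimation with a particle filter can be sketched in a few lines. This is a minimal toy, not the paper's pump model: a scalar wear state grows at an unknown rate, each particle carries its own rate estimate, and a small random walk on the parameter (a common stand-in for the variance control described above) keeps particle diversity. All model forms and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar wear model: damage x grows linearly at unknown rate w.
# The filter estimates x and w jointly by augmenting each particle with w.
dt, n_steps, n_particles = 1.0, 50, 500
true_w = 0.05
truth = np.cumsum(np.full(n_steps, true_w * dt))
obs = truth + rng.normal(0.0, 0.05, n_steps)      # noisy damage measurements

x = np.zeros(n_particles)                          # damage-state particles
w = rng.uniform(0.0, 0.2, n_particles)             # wear-rate particles
obs_std, walk_std = 0.05, 0.002

for z in obs:
    w += rng.normal(0.0, walk_std, n_particles)    # artificial parameter evolution
    x += w * dt + rng.normal(0.0, 0.01, n_particles)
    weights = np.exp(-0.5 * ((z - x) / obs_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)  # resample
    x, w = x[idx], w[idx]
```

After assimilating the measurements, the particle cloud over w concentrates near the true wear rate, and extrapolating each particle's wear model forward yields a distribution over remaining useful life.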
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT, the empirical information needed for POD estimation can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
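The OLS variant of NIPC mentioned above can be sketched compactly: sample the standard-normal input, fit Hermite-polynomial coefficients by least squares, and read statistics off the coefficients via orthogonality. The analytic `model` below is a stand-in for the expensive measurement simulation, and the degree and sample count are illustrative choices.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def model(xi):                      # stand-in for an expensive NDT simulation
    return np.exp(0.3 * xi) + 0.1 * xi**2

xi = rng.standard_normal(200)       # samples of the standard-normal input
V = hermevander(xi, 6)              # design matrix of He_0(xi)..He_6(xi)
coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

# For probabilists' Hermite polynomials, E[He_n] = 0 for n >= 1 and
# E[He_m He_n] = n! * delta_mn, so the output mean and variance follow
# directly from the fitted coefficients without further sampling.
mean_pc = coef[0]
var_pc = sum(c**2 * factorial(n) for n, c in enumerate(coef[1:], start=1))
```

Once the coefficients are in hand, evaluating the surrogate is just a polynomial evaluation, which is the source of the large speedups reported for surrogate-based MAPOD.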
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H
2014-01-01
To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA.
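The gEUD used in the objective functions above has the standard closed form gEUD = (mean over voxels of d_i^a)^(1/a); large positive a emphasizes hot spots (serial organs), while a = 1 reduces to the mean dose. A minimal sketch with made-up voxel doses:

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose: (mean(d_i**a))**(1/a)."""
    dose = np.asarray(dose, dtype=float)
    return np.mean(dose ** a) ** (1.0 / a)

# Hypothetical voxel doses (Gy) for illustration only.
doses = np.array([40.0, 45.0, 50.0, 55.0, 70.0])

mean_dose = geud(doses, 1)     # a = 1: ordinary mean dose
serial_geud = geud(doses, 8)   # large a: pulled toward the 70 Gy hot spot
```

An objective built on gEUD therefore penalizes a single hot voxel far more than a physical-dose mean would, which is consistent with the homogeneity and overdose-index differences reported in the record.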
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
NASA Astrophysics Data System (ADS)
Alvarez-Garreton, C. D.; Mendoza, P. A.; Zambrano-Bigiarini, M.; Galleguillos, M. H.; Boisier, J. P.; Lara, A.; Cortés, G.; Garreaud, R.; McPhee, J. P.; Addor, N.; Puelma, C.
2017-12-01
We provide the first catchment-based hydrometeorological, vegetation and physical data set over 531 catchments in Chile (17.8°S–55.0°S). We compiled publicly available streamflow records at daily time steps for the period 1980-2015, and generated basin-averaged time series of the following hydrometeorological variables: 1) daily precipitation from three different gridded sources (re-analysis and satellite-based); 2) daily maximum and minimum temperature; 3) 8-day potential evapotranspiration (PET) based on MODIS imagery and daily PET based on the Hargreaves formula; and 4) daily snow water equivalent. Additionally, catchments are characterized by their main physical (area, mean elevation, mean slope) and land cover characteristics. We synthesized these datasets with several indices characterizing the spatial distribution of climatic, hydrological, topographic and vegetation attributes. The new catchment-based dataset is unprecedented in the region and provides information that can be used in a myriad of applications, including catchment classification and regionalization studies, impacts of different land cover types on catchment response, characterization of drought history and projections, and climate change impacts on hydrological processes. Derived practical applications include water management and allocation strategies, decision making and adaptation planning to climate change. This data set will be publicly available and we encourage the community to use it.
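The core dataset-construction step, basin-averaging a gridded field over a catchment, can be sketched with a boolean mask. The grid, mask, and precipitation values below are synthetic stand-ins for the real gridded products named in the record.

```python
import numpy as np

# Hedged sketch: compute a catchment-averaged daily precipitation series from
# a gridded (day, lat, lon) field using a boolean basin mask. All values are
# synthetic; real products would also need area weighting on a lat/lon grid.
rng = np.random.default_rng(2)
n_days, ny, nx = 10, 4, 5
precip = rng.gamma(2.0, 3.0, (n_days, ny, nx))   # mm/day, synthetic field

mask = np.zeros((ny, nx), dtype=bool)
mask[1:3, 1:4] = True                            # cells inside the catchment

basin_mean = precip[:, mask].mean(axis=1)        # one value per day
```

Repeating this over every catchment polygon and every gridded variable yields the basin-averaged time series described above.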
NASA Astrophysics Data System (ADS)
Saber, M.; Sefelnasr, A.; Yilmaz, K. K.
2015-12-01
Flash floods are natural hydrological phenomena that affect many regions of the world. Their behavior and effects differ from region to region depending on factors such as climatological, hydrological and topographical conditions. Wadi Assiut, Egypt (an arid environment) and the Gumara catchment, Lake Tana, Ethiopia (a humid environment) were selected for application. The main target of this work is to simulate flash floods at both catchments, considering how flash flood behavior differs between the two settings. To simulate the flash floods, remote sensing data and a physically based distributed hydrological model, Hydro-BEAM-WaS (Hydrological River Basin Environmental Assessment Model incorporating Wadi System), were used in an integrated manner. Based on the simulation results in these regions, the time to peak is very short and consequently the warning time is very short as well. Flash floods start from zero flow in the arid environment, whereas in the humid environment they start from a base flow that varies among the simulated events. Distribution maps of flash floods showing the vulnerable regions of the selected areas have been developed, and some mitigation strategies relying on this study have been introduced. The proposed methodology can be applied effectively for flash flood forecasting in different climate regions, although the paucity of observational data remains a limitation.
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C(sub L) cases; forming a high quality, detailed archival data base for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing LDV study of an optimum C(sub L) case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C(sub L) cases; obtaining skin friction distributions for both optimum and non-optimum C(sub L) cases for scaling purposes; data analysis and inner and outer variable scaling; setting-up and performing relaminarization experiments; and a final report establishing the role of leading edge confluent boundary layer flow physics on high-lift performance.
NASA Astrophysics Data System (ADS)
Hautmann, F.; Jung, H.; Krämer, M.; Mulders, P. J.; Nocera, E. R.; Rogers, T. C.; Signori, A.
2014-12-01
Transverse-momentum-dependent distributions (TMDs) are extensions of collinear parton distributions and are important in high-energy physics from both theoretical and phenomenological points of view. In this manual we introduce the library, a tool to collect transverse-momentum-dependent parton distribution functions (TMD PDFs) and fragmentation functions (TMD FFs), together with an online plotting tool, TMDplotter. We provide a description of the program components and of the different physical frameworks the user can access via the available parameterisations.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
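The two-stage idea above can be sketched on synthetic data. This is a loose analogue, not the study's implementation: piecewise-constant bin means over elevation stand in for the binary decision tree, and inverse-distance weighting stands in for kriging; all data and parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 1.0, (200, 2))           # survey point coordinates
elev = 2000.0 + 1000.0 * xy[:, 0]              # synthetic elevation (m)
depth = 0.001 * elev + 0.3 * np.sin(6.0 * xy[:, 1]) + rng.normal(0, 0.02, 200)

# Stage 1: large-scale trend from terrain (piecewise-constant in elevation,
# a stand-in for the decision-tree model).
bins = np.linspace(2000.0, 3000.0, 9)
idx = np.clip(np.digitize(elev, bins) - 1, 0, len(bins) - 2)
bin_mean = np.array([depth[idx == b].mean() for b in range(len(bins) - 1)])
trend = bin_mean[idx]
resid = depth - trend                          # spatially correlated residuals

# Stage 2: distribute the residuals spatially (inverse-distance weighting,
# a simple stand-in for kriging interpolation).
def idw(p, pts, vals, eps=1e-9):
    w = 1.0 / (np.linalg.norm(pts - p, axis=1) ** 2 + eps)
    return np.sum(w * vals) / np.sum(w)

combined = trend + np.array([idw(p, xy, resid) for p in xy])
```

As in the study, the combined estimate recovers more of the depth variance than the trend model alone, because the residual interpolation captures small-scale spatial structure the terrain model misses.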
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between both force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
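The maximum-entropy re-weighting step can be sketched with a one-observable toy. The exponential form w_i ∝ exp(-λ f_i) is the generic maxent solution for matching one ensemble average; the observable values and target below are synthetic, not RDC data, and the authors' actual code handles multiple RDC sets.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
f = rng.normal(0.0, 1.0, 1000)        # predicted observable per structure
target = 0.4                          # "experimental" value (illustrative)

def weights(lam):
    # Maxent solution for one constraint: exponential re-weighting of the
    # initially uniform ensemble.
    w = np.exp(-lam * f)
    return w / w.sum()

def gap(lam):
    # Squared mismatch between the re-weighted average and the target.
    return (weights(lam[0]) @ f - target) ** 2

lam = minimize(gap, x0=[0.0]).x[0]
w = weights(lam)                      # refined ensemble weights
```

The weights reproduce the target average while perturbing the original ensemble as little as possible in the relative-entropy sense, which is why, as the abstract notes, the underlying force field still pervades the refined ensemble.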
NASA Astrophysics Data System (ADS)
Yang, Xin; Zhong, Shiquan; Sun, Han; Tan, Zongkun; Li, Zheng; Ding, Meihua
Cloud is a climatic resource as important as temperature, precipitation and solar radiation, given its physical characteristics and its importance in agricultural production and the national economy, and it plays a very important role in agro-climatic division. This paper reviews methods of cloud detection based on MODIS data in China and abroad. The results suggest that the Quanjun He method is suitable for detecting cloud over Guangxi, and a chart of cloud cover in Guangxi was imaged using this method. We derive an approach for calculating the cloud-cover rate using frequency-spectrum analysis, and the cloud-cover distribution of Guangxi is obtained. Taking Rongxian County, Guangxi as an example, this article analyzes a preliminary application of the cloud-cover rate to the distribution of Rong shaddock pomelo. The analysis indicates that the cloud-cover rate is closely related to the quality of Rong shaddock pomelo.
Chip-based quantum key distribution
NASA Astrophysics Data System (ADS)
Sibson, P.; Erven, C.; Godfrey, M.; Miki, S.; Yamashita, T.; Fujiwara, M.; Sasaki, M.; Terai, H.; Tanner, M. G.; Natarajan, C. M.; Hadfield, R. H.; O'Brien, J. L.; Thompson, M. G.
2017-02-01
Improvement in secure transmission of information is an urgent need for governments, corporations and individuals. Quantum key distribution (QKD) promises security based on the laws of physics and has rapidly grown from proof-of-concept to robust demonstrations and deployment of commercial systems. Despite these advances, QKD has not been widely adopted, and large-scale deployment will likely require chip-based devices for improved performance, miniaturization and enhanced functionality. Here we report low error rate, GHz clocked QKD operation of an indium phosphide transmitter chip and a silicon oxynitride receiver chip--monolithically integrated devices using components and manufacturing processes from the telecommunications industry. We use the reconfigurability of these devices to demonstrate three prominent QKD protocols--BB84, Coherent One Way and Differential Phase Shift--with performance comparable to state-of-the-art. These devices, when combined with integrated single photon detectors, pave the way for successfully integrating QKD into future telecommunications networks.
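The sifting stage common to BB84 (one of the protocols demonstrated above) can be illustrated with a toy simulation. This idealized sketch assumes a perfect channel and no eavesdropper; real QKD additionally requires error estimation, reconciliation, and privacy amplification.

```python
import numpy as np

# Toy BB84 sifting: Alice encodes random bits in random bases, Bob measures in
# random bases. When bases disagree Bob's outcome is random; when they agree
# he reads Alice's bit. Publicly comparing bases (not bits) and discarding
# mismatches leaves both parties with identical sifted keys.
rng = np.random.default_rng(5)
n = 1000
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)     # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

bob_bits = np.where(alice_bases == bob_bases, alice_bits, rng.integers(0, 2, n))

keep = alice_bases == bob_bases         # bases announced publicly
alice_key = alice_bits[keep]
bob_key = bob_bits[keep]                # identical to alice_key here
```

On average half the positions survive sifting; on a real channel, any eavesdropping shows up as an elevated error rate in a sacrificed subset of the sifted key.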
Zigler, S.J.; Newton, T.J.; Steuer, J.J.; Bartsch, M.R.; Sauer, J.S.
2008-01-01
Interest in understanding physical and hydraulic factors that might drive the distribution and abundance of freshwater mussels has been increasing due to their decline throughout North America. We assessed whether the spatial distribution of unionid mussels could be predicted from physical and hydraulic variables in a reach of the Upper Mississippi River. Classification and regression tree (CART) models were constructed using mussel data compiled from various sources and explanatory variables derived from GIS coverages. Prediction success of CART models for presence-absence of mussels ranged from 71 to 76% across three gears (brail, sled-dredge, and dive-quadrat), and models explained 51% of the deviance in abundance. Models were largely driven by shear stress and substrate stability variables, but interactions with simple physical variables, especially slope, were also important. Geospatial models, which were based on tree model results, predicted few mussels in poorly connected backwater areas (e.g., floodplain lakes) and the navigation channel, whereas main channel border areas with high geomorphic complexity (e.g., river bends, islands, side channel entrances) and small side channels were typically favorable to mussels. Moreover, bootstrap aggregation of discharge-specific regression tree models of dive-quadrat data indicated that variables measured at low discharge were about 25% more predictive (PMSE = 14.8) than variables measured at median discharge (PMSE = 20.4), with high-discharge variables (PMSE = 17.1) intermediate. This result suggests that episodic events such as droughts and floods were important in structuring mussel distributions. Although the substantial mussel and ancillary data in our study reach is unusual, our approach of developing exploratory statistical and geospatial models should be useful even when data are more limited.
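A CART presence-absence model of the kind described can be sketched with scikit-learn on synthetic data. The covariates, the occurrence rule, and the 10% label noise below are fabricated for illustration, not the study's river data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n = 500
shear = rng.uniform(0.0, 10.0, n)      # synthetic shear stress covariate
slope = rng.uniform(0.0, 5.0, n)       # synthetic slope covariate

# Synthetic "truth": mussels favor low shear stress and gentle slopes,
# with 10% of labels flipped as noise.
present = ((shear < 4.0) & (slope < 3.0)).astype(int)
flip = rng.random(n) < 0.1
present[flip] = 1 - present[flip]

X = np.column_stack([shear, slope])
cart = DecisionTreeClassifier(max_depth=3).fit(X, present)
accuracy = cart.score(X, present)      # training accuracy of the tree
```

The fitted tree's first splits recover the shear-stress and slope thresholds, mirroring how the study's models were "largely driven by shear stress and substrate stability."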
NASA Astrophysics Data System (ADS)
Pasang, T.; Ranganathaiah, C.
2015-06-01
The technique of imprinting molecules of various sizes in a stable polymer matrix has found a multitude of applications. Once the template molecule is extracted from the polymer matrix, it leaves behind a cavity that is physically (size and shape) and chemically (functional binding site) compatible with the particular template molecule. Positron Annihilation Lifetime Spectroscopy (PALS) is a well-known technique for measuring cavity sizes precisely at the nanoscale, but it has not been used effectively in the field of MIPs. Being capable of measuring nanopores, it is well suited to understanding the physical selectivity of MIPs. With this idea in mind, we prepared molecularly imprinted polymers (MIPs) with methacrylic acid (MAA) as monomer and EGDMA as cross-linker in different molar ratios for three template molecules of different sizes, viz. 4-Chlorophenol (4CP) (2.29 Å), 2-Naphthol (2NP) (3.36 Å) and Phenolphthalein (PP) (4.47 Å). FTIR and dye chemical reactions were used to confirm the complete extraction of the template molecules from the polymer matrix. The free volume size and its distribution were derived from the measured o-Ps lifetime spectra. Based on the free volume distribution analysis, the percentage of functional cavities for the three template molecules was determined: 70.2% for 4CP (the rest being native cavities), 81.5% for 2NP and nearly 100% for PP. PALS therefore proves to be a precise and accurate method for determining the physical selectivity of MIPs.
NASA Astrophysics Data System (ADS)
Aouaini, F.; Knani, S.; Ben Yahia, M.; Ben Lamine, A.
2015-08-01
Water sorption isotherms of foodstuffs are very important in different areas of food science and engineering, such as the design, modeling and optimization of many processes. The equilibrium moisture content is an important parameter in models used to predict changes in the moisture content of a product during storage. A multilayer model with two energy levels was formulated on the basis of statistical physics and theoretical considerations, using the grand canonical ensemble; some physicochemical parameters related to the adsorption process were thereby introduced into the analytical model expression. Literature data for water adsorption at different temperatures on chickpea seeds, lentil seeds, potato and green peppers were described with the most popular models applied in food science, and the study was extended to the newly proposed model. It is concluded that, among the studied models, the proposed model seems best for describing the data over the whole range of relative humidity. Using our model, we were able to determine the thermodynamic functions. The measurement of desorption isotherms, in particular of a gas over a porous solid, also gives access to the pore size distribution (PSD).
NASA Astrophysics Data System (ADS)
Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf
2018-04-01
A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
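A minimal sketch of the superstatistical picture described here: each particle receives a diffusivity drawn from a generalised Gamma distribution, and its displacement is Gaussian conditional on that diffusivity. For shape parameters ν = η = 1 the diffusivity is exponentially distributed and the displacement PDF takes the Laplace shape mentioned above (all parameter names are our own):

```python
import random, statistics

def sample_generalized_gamma(nu, eta, d_star, rng):
    """Draw D with pdf proportional to D**(nu-1) * exp(-(D/d_star)**eta),
    via the transformation D = d_star * G**(1/eta) with G ~ Gamma(nu/eta, 1)."""
    return d_star * rng.gammavariate(nu / eta, 1.0) ** (1.0 / eta)

def displacement_sample(n, t, nu, eta, d_star, seed=1):
    """Displacements of n independent particles, each with its own random
    diffusivity (the 'superstatistical' limit of diffusing diffusivities)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        d = sample_generalized_gamma(nu, eta, d_star, rng)
        out.append(rng.gauss(0.0, (2.0 * d * t) ** 0.5))
    return out

def excess_kurtosis(xs):
    """Excess kurtosis: 0 for a Gaussian, 3 for a Laplace distribution."""
    m = statistics.fmean(xs)
    m2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3.0
```

The positive excess kurtosis of the sampled displacements is a simple quantitative signature of the Brownian yet non-Gaussian behaviour.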
Key issues and technical route of cyber physical distribution system
NASA Astrophysics Data System (ADS)
Zheng, P. X.; Chen, B.; Zheng, L. J.; Zhang, G. L.; Fan, Y. L.; Pei, T.
2017-01-01
Relying on the National High Technology Research and Development Program, this paper introduces the key issues in the Cyber Physical Distribution System (CPDS), which mainly include: composite modelling methods and interaction mechanisms, system planning methods, security defence technology, and distributed control theory. On this basis, the corresponding technical route is proposed, and a more detailed research framework along with the main schemes to be adopted is also presented.
Curved trajectories of actin-based motility in two dimensions
NASA Astrophysics Data System (ADS)
Wen, Fu-Lai; Leung, Kwan-tai; Chen, Hsuan-Yi
2012-05-01
Recent experiments have reported fascinating geometrical trajectories for the actin-based motility of the bacterium Listeria monocytogenes and of functionalized beads. To understand the physical mechanism behind these trajectories, we constructed a phenomenological model to study the motion of an actin-propelled disk in two dimensions. In our model, the force and actin density on the surface of the disk are influenced by the translation and rotation of the disk, which in turn are induced by the asymmetric distributions of those densities. We show that this feedback can destabilize a straight trajectory, leading to circular, S-shaped and other geometrical trajectories observed in the experiments through bifurcations in the distributions of the force and actin density. The relation between our model and models for self-propelled deformable particles is emphasized and discussed.
Charge fluctuations in nanoscale capacitors.
Limmer, David T; Merlet, Céline; Salanne, Mathieu; Chandler, David; Madden, Paul A; van Roij, René; Rotenberg, Benjamin
2013-09-06
The fluctuations of the charge on an electrode contain information on the microscopic correlations within the adjacent fluid and their effect on the electronic properties of the interface. We investigate these fluctuations using molecular dynamics simulations in a constant-potential ensemble with histogram reweighting techniques. This approach offers, in particular, an efficient, accurate, and physically insightful route to the differential capacitance that is broadly applicable. We demonstrate these methods with three different capacitors: pure water between platinum electrodes and a pure as well as a solvent-based organic electrolyte each between graphite electrodes. The total charge distributions with the pure solvent and solvent-based electrolytes are remarkably Gaussian, while in the pure ionic liquid the total charge distribution displays distinct non-Gaussian features, suggesting significant potential-driven changes in the organization of the interfacial fluid.
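In such a constant-potential ensemble, the differential capacitance follows directly from the equilibrium charge fluctuations, schematically C_diff = ⟨δQ²⟩/(k_B T). A minimal sketch of that estimator (the paper's histogram-reweighting machinery is not reproduced):

```python
import statistics

K_B = 1.380649e-23  # Boltzmann constant, J/K

def differential_capacitance(charges_coulomb, temperature_kelvin):
    """Estimate C_diff = <(Q - <Q>)**2> / (k_B T) from samples of the total
    electrode charge recorded in a constant-potential simulation."""
    var_q = statistics.pvariance(charges_coulomb)
    return var_q / (K_B * temperature_kelvin)
```

A Gaussian total-charge distribution, as found here for the pure solvent and solvent-based electrolytes, means this single variance fully characterizes the fluctuations; the non-Gaussian features seen for the pure ionic liquid signal potential-dependent structure beyond this quadratic picture.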
Physics-based simulation of EM and SM in TSV-based 3D IC structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kteyan, Armen; Sukharev, Valeriy; Zschech, Ehrenfried
2014-06-19
The evolution of stresses in through-silicon vias (TSVs) and in the TSV landing pad due to the stress migration (SM) and electromigration (EM) phenomena is considered. It is shown that the initial stress distribution existing in a TSV depends on its architecture and copper fill technology. We demonstrate that, in the case of proper copper annealing, the SM-induced redistribution of atoms results in uniform distributions of the hydrostatic stress and of the vacancy concentration along each segment. In this case, applied EM stressing generates atom migration whose kinetics depend on the preexisting equilibrium concentration of vacancies. Stress-induced voiding in the TSV is considered, and EM-induced voiding in the TSV landing pad is analyzed in detail.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for the application of distributed hydrological models, and it poses challenges including the effect of the model's spatial resolution on model performance and accuracy. To address the resolution effect, the distributed hydrological model (the Liuxihe model) was built at several resolutions: 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m and 200 m × 200 m. The purpose is to find the best resolution for the Liuxihe model in large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in southern China. Terrain data, including the digital elevation model (DEM), soil type and land use type, are freely downloaded from public websites. The model parameters are optimized by using an improved Particle Swarm Optimization (PSO) algorithm; this parameter optimization reduces the parameter uncertainty that exists when model parameters are derived physically. Models at the different resolutions (200 m × 200 m to 1000 m × 1000 m) are used to simulate Liujiang River basin floods with the Liuxihe model. The best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and model performance and accuracy worsen as the spatial resolution is coarsened. At a resolution of 1000 m × 1000 m the flood simulation and forecasting results are the worst, and the river channel delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling Liujiang River basin floods is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
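A minimal sketch of an improved PSO of the kind referred to here, with a linearly decreasing inertia weight; the arccosine-type schedule for the acceleration coefficients is one plausible form and the exact formula used for the Liuxihe model may differ. The swarm size (20) and iteration count (30) match the values recommended in the related study:

```python
import math, random

def pso_minimize(f, bounds, n_particles=20, n_iter=30, seed=7):
    """Particle swarm minimization with linearly decreasing inertia weight
    and arccosine-scheduled acceleration coefficients (assumed form)."""
    rng = random.Random(seed)
    dim = len(bounds)
    w_max, w_min = 0.9, 0.4
    c_max, c_min = 2.5, 0.5
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    history = []
    for t in range(n_iter):
        frac = t / max(1, n_iter - 1)
        w = w_max - (w_max - w_min) * frac              # linear decrease
        theta = math.acos(2.0 * frac - 1.0) / math.pi   # 1 -> 0 over the run
        c1 = c_min + (c_max - c_min) * theta            # cognitive: large -> small
        c2 = c_min + (c_max - c_min) * (1.0 - theta)    # social: small -> large
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        history.append(gbest_val)
    return gbest, gbest_val, history
```

In the hydrological setting, f would wrap a run of the distributed model and return an error measure between simulated and observed hydrographs; here any objective function can be plugged in.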
NASA Astrophysics Data System (ADS)
Zhang, X.-J.; Li, W.; Thorne, R. M.; Angelopoulos, V.; Ma, Q.; Li, J.; Bortnik, J.; Nishimura, Y.; Chen, L.; Baker, D. N.; Reeves, G. D.; Spence, H. E.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Blake, J. B.; Fennell, J. F.
2016-09-01
Three mechanisms have been proposed to explain relativistic electron flux depletions (dropouts) in the Earth's outer radiation belt during storm times: adiabatic expansion of electron drift shells due to a decrease in magnetic field strength, magnetopause shadowing and subsequent outward radial diffusion, and precipitation into the atmosphere (driven by EMIC wave scattering). Which mechanism predominates in causing electron dropouts commonly observed in the outer radiation belt is still debatable. In the present study, we evaluate the physical mechanism that may be primarily responsible for causing the sudden change in relativistic electron pitch angle distributions during a dropout event observed by Van Allen Probes during the main phase of the 27 February 2014 storm. During this event, the phase space density of ultrarelativistic (>1 MeV) electrons was depleted by more than 1 order of magnitude over the entire radial extent of the outer radiation belt (3 < L* < 5) in less than 6 h after the passage of an interplanetary shock. We model the electron pitch angle distribution under a compressed magnetic field topology based on actual solar wind conditions. Although these ultrarelativistic electrons exhibit highly anisotropic (peaked in 90°), energy-dependent pitch angle distributions, which appear to be associated with the typical EMIC wave scattering, comparison of the modeled electron distribution to electron measurements indicates that drift shell splitting is responsible for this rapid change in electron pitch angle distributions. This further indicates that magnetopause loss is the predominant cause of the electron dropout right after the shock arrival.
NASA Technical Reports Server (NTRS)
Trimble, Jay
2017-01-01
For NASA's Resource Prospector (RP) Lunar Rover Mission, we are moving away from a control center concept, to a fully distributed operation utilizing control nodes, with decision support from anywhere via mobile devices. This operations concept will utilize distributed information systems, notifications, mobile data access, and optimized mobile data display for off-console decision support. We see this concept of operations as a step in the evolution of mission operations from a central control center concept to a mission operations anywhere concept. The RP example is part of a trend, in which mission expertise for design, development and operations is distributed across countries and across the globe. Future spacecraft operations will be most cost efficient and flexible by following this distributed expertise, enabling operations from anywhere. For the RP mission we arrived at the decision to utilize a fully distributed operations team, where everyone operates from their home institution, based on evaluating the following factors: the requirement for physical proximity for near-real time command and control decisions; the cost of distributed control nodes vs. a centralized control center; the impact on training and mission preparation of flying the team to a central location. Physical proximity for operational decisions is seldom required, though certain categories of decisions, such as launch abort, or close coordination for mission or safety-critical near-real-time command and control decisions may benefit from co-location. The cost of facilities and operational infrastructure has not been found to be a driving factor for location in our studies. Mission training and preparation benefit from having all operators train and operate from home institutions.
Derived distribution of floods based on the concept of partial area coverage with a climatic appeal
NASA Astrophysics Data System (ADS)
Iacobellis, Vito; Fiorentino, Mauro
2000-02-01
A new rationale for deriving the probability distribution of floods, which helps in understanding the physical processes underlying the distribution itself, is presented. On this basis, a model incorporating a number of new assumptions is developed. The basic ideas are as follows: (1) the peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density function of a times the density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency, as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge, and suggested a new key to understanding the climatic control of the probability distribution of floods.
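The derived-distribution idea can be illustrated with a Monte Carlo sketch: event peaks Q = ua·a with a gamma-distributed and ua Weibull-distributed (drawn independently here for simplicity, whereas the paper conditions ua on a), combined with Poisson event occurrence to obtain annual maxima. All parameter values below are hypothetical:

```python
import math, random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for a Poisson variate (fine for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate_annual_max_flood(n_years, lam, a_shape, a_scale,
                              u_shape, u_scale, seed=42):
    """Monte Carlo sketch of the derived flood distribution: per year, draw a
    Poisson number of events, each with peak Q = a * ua, and keep the maximum."""
    rng = random.Random(seed)
    annual_max = []
    for _ in range(n_years):
        n_events = poisson_draw(lam, rng)
        peaks = [rng.gammavariate(a_shape, a_scale)
                 * rng.weibullvariate(u_scale, u_shape)
                 for _ in range(n_events)]
        annual_max.append(max(peaks) if peaks else 0.0)
    return annual_max
```

A year with no flood events contributes an atom at zero with probability exp(-lam), exactly the Poissonian occurrence hypothesis used to pass from the event-peak distribution to the annual flood distribution.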
Measuring experimental cyclohexane-water distribution coefficients for the SAMPL5 challenge
NASA Astrophysics Data System (ADS)
Rustenburg, Ariën S.; Dancer, Justin; Lin, Baiwei; Feng, Jianwen A.; Ortwine, Daniel F.; Mobley, David L.; Chodera, John D.
2016-11-01
Small molecule distribution coefficients between immiscible nonaqueous and aqueous phases—such as cyclohexane and water—measure the degree to which small molecules prefer one phase over another at a given pH. As distribution coefficients capture both thermodynamic effects (the free energy of transfer between phases) and chemical effects (protonation state and tautomer effects in aqueous solution), they provide an exacting test of the thermodynamic and chemical accuracy of physical models without the long correlation times inherent to the prediction of more complex properties of relevance to drug discovery, such as protein-ligand binding affinities. For the SAMPL5 challenge, we carried out a blind prediction exercise in which participants were tasked with the prediction of distribution coefficients to assess its potential as a new route for the evaluation and systematic improvement of predictive physical models. These measurements are typically performed for octanol-water, but we opted to utilize cyclohexane for the nonpolar phase. Cyclohexane was suggested to avoid issues with the high water content and persistent heterogeneous structure of water-saturated octanol phases, since it has greatly reduced water content and a homogeneous liquid structure. Using a modified shake-flask LC-MS/MS protocol, we collected cyclohexane/water distribution coefficients for a set of 53 druglike compounds at pH 7.4. These measurements were used as the basis for the SAMPL5 Distribution Coefficient Challenge, where 18 research groups predicted these measurements before the experimental values reported here were released. In this work, we describe the experimental protocol we utilized for measurement of cyclohexane-water distribution coefficients, report the measured data, propose a new bootstrap-based data analysis procedure to incorporate multiple sources of experimental error, and provide insights to help guide future iterations of this valuable exercise in predictive modeling.
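A bootstrap of the general kind proposed can be sketched as follows: resample replicate phase-concentration measurements with replacement and recompute log10 D each time, yielding an empirical distribution for the distribution coefficient. This is a simplified, hypothetical protocol; the paper's procedure incorporates additional error sources beyond replicate scatter:

```python
import math, random, statistics

def bootstrap_logd(cyclohexane_conc, water_conc, n_boot=5000, seed=3):
    """Point estimate and 95% bootstrap interval for log10 D from replicate
    concentration measurements in each phase (same units in both lists)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_boot):
        c = [rng.choice(cyclohexane_conc) for _ in cyclohexane_conc]
        w = [rng.choice(water_conc) for _ in water_conc]
        samples.append(math.log10(statistics.fmean(c) / statistics.fmean(w)))
    samples.sort()
    point = math.log10(statistics.fmean(cyclohexane_conc)
                       / statistics.fmean(water_conc))
    return point, (samples[int(0.025 * n_boot)], samples[int(0.975 * n_boot)])
```

Because log D is a nonlinear function of the measured concentrations, the bootstrap interval need not be symmetric about the point estimate, which is one motivation for resampling over error-propagation formulas.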
Role of particle radiotherapy in the management of head and neck cancer.
Laramore, George E
2009-05-01
Modern imaging techniques and powerful computers allow a radiation oncologist to design treatments delivering higher doses of radiation than previously possible. Dose distributions imposed by the physics of 'standard' photon and electron beams limit further dose escalation. Hadron radiotherapy offers advantages in either dose distribution and/or improved radiobiology that may significantly improve the treatment of certain head and neck malignancies. Clinical studies support the effectiveness of fast-neutron radiotherapy in the treatment of major and minor salivary gland tumors. Data show highly favorable outcomes with proton radiotherapy for skull-base malignancies and tumors near highly critical normal tissues compared with that expected with standard radiotherapy. Heavy-ion radiotherapy clinical studies are mainly being conducted with fully stripped carbon ions, and limited data seem to indicate a possible improvement over proton radiotherapy for the same subset of radioresistant tumors where neutrons show a benefit over photons. Fast-neutron radiotherapy has different radiobiological properties compared with standard radiotherapy but similar depth dose distributions. Its role in the treatment of head and neck cancer is currently limited to salivary gland malignancies and certain radioresistant tumors such as sarcomas. Protons have the same radiobiological properties as standard radiotherapy beams but more optimal depth dose distributions, making it particularly advantageous when treating tumors adjacent to highly critical structures. Heavy ions combine the radiobiological properties of fast neutrons with the physical dose distributions of protons, and preliminary data indicate their utility for radioresistant tumors adjacent to highly critical structures.
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum modeling uniquely. For these two studies, we use the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed over recent years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
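A toy version of model comparison via the Bayesian evidence: compare a model with a fixed Gaussian mean against one with a free mean under a uniform prior, marginalizing by simple trapezoidal quadrature. This illustrates the evidence concept only, not the nested sampling algorithm used by Nested_fit:

```python
import math

def gaussian_loglike(data, mu, sigma=1.0):
    """Log-likelihood of i.i.d. Gaussian data with mean mu and known sigma."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi)) for x in data)

def evidence_fixed_mean(data, mu0=0.0):
    """Evidence of M0: mean fixed at mu0, no free parameter."""
    return math.exp(gaussian_loglike(data, mu0))

def evidence_free_mean(data, lo=-5.0, hi=5.0, n=2001):
    """Evidence of M1: mean unknown with a uniform prior on [lo, hi],
    marginalized by trapezoidal quadrature over the prior."""
    step = (hi - lo) / (n - 1)
    vals = [math.exp(gaussian_loglike(data, lo + i * step)) / (hi - lo)
            for i in range(n)]
    return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def posterior_odds(data):
    """P(M0 | data) / P(M1 | data), assuming equal prior model probabilities."""
    return evidence_fixed_mean(data) / evidence_free_mean(data)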
Statistical physics of media processes: Mediaphysics
NASA Astrophysics Data System (ADS)
Kuznetsov, Dmitri V.; Mandel, Igor
2007-04-01
The processes of mass communications in complicated social or sociobiological systems such as marketing, economics, politics, animal populations, etc., as a subject for a special scientific subbranch, "mediaphysics", are considered in relation to sociophysics. A new statistical physics approach to analyzing these phenomena is proposed. A keystone of the approach is an analysis of the population distribution between two or many alternatives: brands, political affiliations, or opinions. Relative distances between the state of a "person's mind" and the alternatives are measures of the propensity to buy (to affiliate, or to hold a certain opinion). The distribution of the population by those relative distances is time dependent and affected by external (economic, social, marketing, natural) and internal (influential propagation of opinions, "word of mouth", etc.) factors, considered as fields. Specifically, the interaction and opinion-influence field can be generalized to incorporate important elements of Ising-spin-based sociophysical models and of kinetic-equation ones. The distributions are described by a Schrödinger-type equation in terms of Green's functions. The developed approach has been applied to a real mass-media efficiency problem for a large company and generally demonstrated very good results despite low initial correlations between the factors and the target variable.
Pressure potential and stability analysis in an acoustical noncontact transportation
NASA Astrophysics Data System (ADS)
Li, J.; Liu, C. J.; Zhang, W. J.
2017-01-01
Near-field acoustic traveling waves provide one of the most popular principles for noncontact manipulation and transportation. Stability behavior is a key factor in the industrial application of acoustical noncontact transportation. We present here an in-depth analysis of the transportation stability of a planar object levitated in near-field acoustic traveling waves. To describe the pressure distribution on the radiation surface more accurately, a 3D nonlinear traveling wave model is presented. A closed-form solution is derived based on the pressure potential to quantitatively calculate the restoring forces and moments under small disturbances. Physical explanations of the effects of fluid inertia and of non-uniform pressure distributions are provided in detail. It is found that a vibration rail with a tapered cross section provides more stable transportation than a rail with a rectangular cross section. The present study sheds light on the quantitative evaluation of stability in acoustic traveling waves and proposes three main factors that influence stability: (a) vibration shape, (b) pressure distribution and (c) restoring force/moment. It helps to provide a better understanding of the physics behind near-field acoustic transportation and provides useful design and optimization tools for industrial applications.
Gas hydrate saturation and distribution in the Kumano Forearc Basin of the Nankai Trough
NASA Astrophysics Data System (ADS)
Jia, Jihui; Tsuji, Takeshi; Matsuoka, Toshifumi
2017-02-01
The Kumano Forearc Basin is located to the south-east of the Kii Peninsula, Japan, overlying the accretionary prism in the Nankai Trough. The presence of gas hydrate in submarine sediments of the forearc basin has resulted in the widespread occurrence of bottom simulating reflectors (BSRs) on seismic profiles, and has caused distinct anomalies in logging data in the region. We estimated the in situ gas hydrate saturation from logging data by using three methods: effective rock physics models, Archie's equation, and empirical relationships between acoustic impedance (AI) and water-filled porosity. The results derived from rock physics models demonstrate that gas hydrates are attached to the grain surfaces of the rock matrix and are not floating in pore space. By applying the empirical relationships to the AI distribution derived from model-based AI inversion of the three-dimensional (3D) seismic data, we mapped the spatial distribution of hydrate saturation within the Kumano Basin and characterised locally concentrated gas hydrates. Based on the results, we propose two different mechanisms of free gas supply to explain the process of gas hydrate formation in the basin: (1) migration along inclined strata that dip landwards, and (2) migration through the faults or cracks generated by intensive tectonic movements of the accretionary prism. The dipping strata with relatively low AI in the forearc basin could indicate the presence of hydrate formation due to gas migration along the dipping strata. However, high hydrate concentration is observed at fault zones with high pore pressures, thus the second mechanism likely plays an important role in the genesis of gas hydrates in the Kumano Basin. Therefore, the tectonic activities in the accretionary wedge significantly influence the hydrate saturation and distribution in the Kumano Forearc Basin.
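Archie's equation, one of the three estimation methods mentioned, inverts resistivity logs for water saturation; the gas-hydrate saturation is its complement. A minimal sketch with typical (assumed) Archie constants; the site-specific constants used for the Kumano Basin are not given in the abstract:

```python
def hydrate_saturation_archie(rt, rw, porosity, a=1.0, m=2.0, n=2.0):
    """Gas-hydrate saturation from resistivity via Archie's equation:
    S_w = (a * R_w / (phi**m * R_t))**(1/n), and S_h = 1 - S_w.
    rt: measured formation resistivity; rw: pore-water resistivity;
    a, m, n: empirical Archie constants (typical defaults shown)."""
    sw = (a * rw / (porosity ** m * rt)) ** (1.0 / n)
    sw = min(sw, 1.0)  # cap at fully water-saturated
    return 1.0 - sw
```

When the measured resistivity equals the fully water-saturated value a·Rw/phi**m, the inferred hydrate saturation is zero; resistivity elevated above that baseline maps to increasing hydrate content, which is why BSR intervals with anomalously high resistivity flag hydrate-bearing sediments.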
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been designed and developed as a prototype with the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
NASA Astrophysics Data System (ADS)
Green, Daniel; Pattison, Ian; Yu, Dapeng
2016-04-01
Surface water (pluvial) flooding occurs when rainwater from intense precipitation events is unable to infiltrate into the subsurface or drain via natural or artificial drainage channels. Surface water flooding poses a serious hazard to urban areas across the world, and the UK's perceived risk appears to have increased in recent years as surface water flood events have become seemingly more severe and frequent. Surface water flood risk currently accounts for one third of all UK flood risk, with approximately two million people living in urban areas at risk of a 1-in-200-year flood event. Research often focuses upon using numerical modelling techniques to understand the extent, depth and severity of actual or hypothetical flood scenarios. Although much research has been conducted using numerical modelling, the field data available for model calibration and validation are limited due to the complexities associated with data collection in surface water flood conditions. Ultimately, the data upon which numerical models are based are often erroneous and inconclusive. Physical models offer a novel, alternative and innovative environment for data collection, creating a controlled, closed system in which independent variables can be altered individually to investigate cause-and-effect relationships. A physical modelling environment provides a suitable platform to investigate the rainfall-runoff processes occurring within an urban catchment. Despite this, physical modelling approaches are seldom used in surface water flooding research.
Scaled laboratory experiments using a 9 m², two-tiered 1:100 physical model consisting of (i) a low-cost rainfall simulator component able to simulate consistent, uniformly distributed (>75% CUC) rainfall events of varying intensity, and (ii) a fully interchangeable, modular plot surface have been conducted to investigate and quantify the influence of a number of terrestrial and meteorological factors on overland flow and rainfall-runoff patterns within a modelled urban setting. Terrestrial factors investigated include altering the physical model's catchment slope (0° to 20°), as well as simulating a number of spatially varied impermeability and building density/configuration scenarios. Additionally, the influence of different storm dynamics and intensities was investigated. Preliminary results demonstrate that rainfall-runoff responses in the physical modelling environment are highly sensitive to slight increases in catchment gradient and rainfall intensity, and that more densely distributed building layouts significantly increase peak flows recorded at the physical model outflow when compared to sparsely distributed building layouts under comparable simulated rainfall conditions.
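The rainfall-uniformity figure quoted above (>75% CUC) refers to Christiansen's uniformity coefficient, computed from catch-can depths as CUC = 100(1 − Σ|d_i − mean| / (n · mean)). A minimal sketch:

```python
def christiansen_uniformity(depths):
    """Christiansen's uniformity coefficient (CUC, %) for catch-can rainfall
    depths measured under a simulator; 100% means perfectly uniform."""
    mean = sum(depths) / len(depths)
    abs_dev = sum(abs(d - mean) for d in depths)
    return 100.0 * (1.0 - abs_dev / (len(depths) * mean))
```

For example, four catch cans reading 4, 6, 4 and 6 mm give a mean of 5 mm and a total absolute deviation of 4 mm, hence a CUC of 80%, just above the threshold reported for the simulator.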
Nursing home resident smoking policies.
Stefanacci, Richard G; Lester, Paula E; Kohen, Izchak
2008-01-01
To identify nursing home standards related to resident smoking, a nationwide survey of directors of nursing was conducted. The survey was distributed online and was completed by 248 directors of nursing, who answered questions concerning resident smoking, including the criteria used to determine an unsafe resident smoker. For those residents identified as unsafe, the questions related specifically to monitoring, staff involvement, safety precautions and policy. The results of the survey demonstrated a consistent policy practiced among facilities across the United States. The monitoring of nursing home residents is based on a resident's mental acuity, physical restrictions and equipment requirements. Once a resident was identified as a smoker at risk of harm to self or others, staff involvement ranged from distributing cigarettes to direct supervision. In addition, the majority of facilities required residents to wear fire-resistant aprons and provided a fire extinguisher in smoking areas. Monitoring of nursing home residents who smoke starts with identifying those residents at risk based on an assessment of mental acuity, physical restrictions and equipment requirements. Those identified as at-risk smokers have their cigarettes controlled and distributed by nursing staff and are supervised by facility staff when smoking. This policy is implemented through written policy as well as staff education. Despite some discrepancies in the actual implementation of policies to supervise residents who smoke, the policy of assessment for at-risk smokers requiring monitoring is consistent on a national basis.
NASA Astrophysics Data System (ADS)
Chang, Longfei; Asaka, Kinji; Zhu, Zicai; Wang, Yanjie; Chen, Hualing; Li, Dichen
2014-06-01
Ionic Polymer-Metal Composite (IPMC) has been well documented as a promising functional material with extensive applications. In its most popular and traditional manufacturing technique, roughening is a key process for ensuring satisfactory performance. In this paper, based on a recently established multi-physical model, the effects of the roughening process on the inner mass transport and the electro-active output of IPMC were investigated. In the model, the electro-chemical field is governed by the Poisson equation and a properly simplified Nernst-Planck equation set, while the mechanical field is evaluated on the basis of the volume strain effect. Furthermore, with the Ramo-Shockley theorem, the out-circuit current and the charge accumulated on the electrode are linked to the inner cation distribution. In addition, the nominal current and charge density, as well as the curvature of the deformation, were evaluated to characterize the performance of IPMC. The simulation was implemented by the Finite Element Method with Comsol Multiphysics, based on two groups of geometrical models: those with various rough interfaces and those with different thicknesses. The results of how roughening influences the performance of IPMC are discussed progressively in three aspects: the steady-state distribution of local potential and mass concentration, the current response and charge accumulation, and the curvature of deformation. Detailed explanations of the performance improvement resulting from surface roughening are provided from the micro-distribution point of view, which can be further explored for process optimization of IPMC.
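A heavily simplified, dimensionless toy version of the coupled Poisson/Nernst-Planck transport described here: a single mobile cation species over a uniform fixed background charge, blocking electrodes, and explicit time stepping on a 1D grid. This is not the authors' finite-element Comsol model; it only illustrates the governing equations:

```python
def poisson_interior(rho, dx, phi_l, phi_r):
    """Solve phi'' = -rho on the interior nodes with Dirichlet wall values
    (Thomas algorithm; off-diagonals are 1, diagonal is -2)."""
    n = len(rho)
    b = [-2.0] * n
    d = [-rho[i] * dx * dx for i in range(n)]
    d[0] -= phi_l
    d[-1] -= phi_r
    for i in range(1, n):
        w = 1.0 / b[i - 1]
        b[i] -= w
        d[i] -= w * d[i - 1]
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - phi[i + 1]) / b[i]
    return phi

def run_pnp(n_nodes=41, v_applied=1.0, n_steps=2000):
    """Explicit 1D Nernst-Planck stepping of the cation concentration c(x)
    on [0, 1], re-solving Poisson each step against a fixed unit background."""
    dx = 1.0 / (n_nodes - 1)
    dt = 0.2 * dx * dx                 # inside the explicit stability limit
    c = [1.0] * n_nodes                # initially uniform and charge-neutral
    for _ in range(n_steps):
        rho = [c[i] - 1.0 for i in range(1, n_nodes - 1)]
        phi = [v_applied] + poisson_interior(rho, dx, v_applied, 0.0) + [0.0]
        # cation flux F = -(dc/dx + c dphi/dx) at cell faces; walls block flux
        flux = [0.0] * (n_nodes + 1)
        for i in range(n_nodes - 1):
            cavg = 0.5 * (c[i] + c[i + 1])
            flux[i + 1] = -((c[i + 1] - c[i]) / dx
                            + cavg * (phi[i + 1] - phi[i]) / dx)
        new_c = c[:]
        new_c[0] = c[0] + dt * (flux[0] - flux[1]) / (0.5 * dx)
        new_c[-1] = c[-1] + dt * (flux[-2] - flux[-1]) / (0.5 * dx)
        for i in range(1, n_nodes - 1):
            new_c[i] = c[i] + dt * (flux[i] - flux[i + 1]) / dx
        c = new_c
    return c, dx
```

Under an applied potential the cations migrate away from the positive electrode and pile up at the grounded one, which is the same charge redistribution that, in the full IPMC model, produces the out-circuit current and bending curvature via the Ramo-Shockley theorem and the volume strain effect.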
Nondestructive optical testing of the materials surface structure based on liquid crystals
NASA Astrophysics Data System (ADS)
Tomilin, M. G.; Stafeev, S. K.
2011-08-01
Thin layers of nematic liquid crystals (NLCs) may be used as recording media for visualizing structural and microrelief defects, distributions of low-power physical fields, and modifications of the surface. NLCs are more sensitive than cholesteric and smectic LCs, which have supermolecular structures. The detecting properties of NLCs are based on local layer deformations induced by surface fields and observed in a polarizing microscope. Structural surface defects or physical field distributions dramatically change the distribution of surface tension. Recording of surface defects becomes possible when the deformed NLC structure is illuminated in transmission or reflection mode, observed in an optical polarizing microscope, and the resulting image is compared with the background structure. In this case one observes not the real defect but the local deformation in the NLC. A theory was developed to find the real size of defects. The resolution of an NLC layer is more than 2000 lines/mm. NLC applications include detecting symmetry in solid crystals; structural inhomogeneities in minerals, metals, semiconductors, polymers, and glasses; and defects in optical coatings. The efficiency of the NLC method in biophotonics is illustrated by the objective detection of the character of cancer tissues and by visualizing the traces of interaction of influenza viruses with antibodies. NLCs may also detect the structure of solvent components in tea, wine, and perfume, giving unique information about their composition. This diagnostic information is an alternative to dye and fluorescence methods. For the first time, the structures of some juices and beverages are visualized to illustrate the unique possibilities of NLCs.
A narrow-band k-distribution model with single mixture gas assumption for radiative flows
NASA Astrophysics Data System (ADS)
Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon
2018-06-01
In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.
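The k-distribution idea behind the NBK model can be summarized in one line: within a narrow band, the rapidly varying spectral absorption coefficient is replaced by its distribution, so the band-averaged transmissivity of a homogeneous path $X$ becomes a smooth integral (standard textbook form, not the paper's exact notation):

```latex
\bar{\tau}_{\Delta\eta}(X)
  = \frac{1}{\Delta\eta}\int_{\Delta\eta} e^{-\kappa_{\eta} X}\, d\eta
  = \int_{0}^{\infty} f(k)\, e^{-kX}\, dk
  = \int_{0}^{1} e^{-k(g)X}\, dg
```

Here $f(k)$ is the probability density of the absorption coefficient within the band and $g$ its cumulative distribution; the last integral is smooth in $g$ and cheap to evaluate by quadrature, which is what makes the approach far less expensive than LBL integration.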
Dynamic Involvement of Real World Objects in the IoT: A Consensus-Based Cooperation Approach
Pilloni, Virginia; Atzori, Luigi; Mallus, Matteo
2017-01-01
A significant role in the Internet of Things (IoT) will be taken by mobile and low-cost unstable devices, which autonomously self-organize and introduce highly dynamic and heterogeneous scenarios for the deployment of distributed applications. This requires the devices to cooperate to dynamically find a suitable combination of their involvement so as to improve system reliability while following the changes in their status. Focusing on the above scenario, we propose a distributed algorithm for resource allocation that is run by devices that can perform the same task required by the applications, allowing for a flexible and dynamic binding of the requested services with the physical IoT devices. It is based on a consensus approach, which maximizes the lifetime of the groups of nodes involved and ensures the fulfillment of the requested Quality of Information (QoI) requirements. Experiments have been conducted with real devices, showing an improvement in device lifetime of more than 20% with respect to a uniform distribution of tasks. PMID:28257030
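The paper's allocation scheme builds on consensus; the abstract does not give its details, but the classic average-consensus iteration that such approaches extend can be sketched as follows (the topology, step size, and initial values are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def average_consensus(x0, neighbors, eps=0.2, iters=200):
    """Classic distributed average-consensus iteration: each node
    repeatedly nudges its local value toward its neighbors' values,
    so all nodes converge to the network-wide average."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# a ring of 4 nodes, each holding a different local measurement
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = average_consensus([1.0, 2.0, 3.0, 4.0], nbrs)  # converges toward 2.5
```

Each node uses only its neighbors' values, which is what makes such schemes suitable for self-organizing IoT deployments with no central coordinator.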
NASA Astrophysics Data System (ADS)
Hughes, Richard
2004-05-01
Quantum key distribution (QKD) uses single-photon communications to generate the shared, secret random number sequences that are used to encrypt and decrypt secret communications. The unconditional security of QKD is based on the interplay between fundamental principles of quantum physics and information theory. An adversary can neither successfully tap the transmissions nor evade detection (eavesdropping raises the key error rate above a threshold value). QKD could be particularly attractive for free-space optical communications, both ground-based and for satellites. I will describe a QKD experiment performed over multi-kilometer line-of-sight paths, which serves as a model for a satellite-to-ground key distribution system. The system uses single-photon polarization states, without active polarization switching, and for the first time implements the complete BB84 QKD protocol, including reconciliation, privacy amplification, and the all-important authentication stage. It is capable of continuous operation throughout the day and night, achieving the self-sustaining production of error-free, shared, secret bits. I will also report on the results of satellite-to-ground QKD modeling.
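The BB84 protocol mentioned above can be illustrated with a toy simulation of its quantum-transmission and sifting stages (an ideal, noise-free channel with no eavesdropper is assumed; reconciliation, privacy amplification, and authentication are omitted):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 1000

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each photon in an independently chosen random basis.
# When bases match, his outcome equals Alice's bit; when they differ,
# the measurement outcome is random (ideal channel, no noise).
bob_bases = rng.integers(0, 2, n)
random_outcomes = rng.integers(0, 2, n)
bob_bits = np.where(bob_bases == alice_bases, alice_bits, random_outcomes)

# Sifting: publicly compare bases and keep only matching positions
# (about half of the transmissions survive on average)
keep = alice_bases == bob_bases
sifted_key_alice = alice_bits[keep]
sifted_key_bob = bob_bits[keep]
```

On a noise-free channel the sifted keys agree exactly; in practice, the residual error rate after sifting is what the protocol monitors to detect eavesdropping.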
NASA Astrophysics Data System (ADS)
Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.
2014-12-01
Physics-based watershed models are useful tools for hydrologic studies, water resources management, and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, a joint effort between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management, and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel runtime system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model, such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
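The Sobol' method applied here estimates sensitivity indices by Monte Carlo; a minimal Saltelli-style sketch of first-order indices on a toy additive model (not the Utah Energy Balance model; the function and sample size are illustrative assumptions) looks like this:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, rng=None):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices:
    S_i = V(E[Y|X_i]) / V(Y), estimated from paired sample matrices A, B
    and hybrid matrices AB_i (column i of A swapped with B's)."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy model Y = X0 + 0.5*X1 with X2 inert:
# analytic first-order indices are 0.8, 0.2, 0.0
f = lambda X: X[:, 0] + 0.5 * X[:, 1]
S = sobol_first_order(f, 3)
```

The same machinery generalizes to "parameters" that are forcing-error characteristics, which is the adaptation the abstract describes.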
Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.
Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana
2014-06-01
Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
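The computational efficiency of the Gaussian Markov random field approach rests on sparse precision matrices; a tiny 1-D illustration (a random-walk-type precision with a nugget, not the paper's SPDE-derived model) of drawing a GMRF sample via Cholesky factorization:

```python
import numpy as np

def gmrf_sample(n=200, kappa=1.0, rng=None):
    """Sample a 1-D Gaussian Markov random field from its sparse
    (tridiagonal) precision matrix Q = L L^T by solving L^T x = z,
    so that Cov(x) = Q^{-1}."""
    rng = rng or np.random.default_rng(1)
    # second-difference precision plus kappa on the diagonal,
    # which keeps Q positive definite for kappa > 0
    Q = (np.diag(np.full(n, 2.0 + kappa))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    L = np.linalg.cholesky(Q)
    z = rng.standard_normal(n)
    return np.linalg.solve(L.T, z)

x = gmrf_sample()
```

Because Q is sparse, large fields can be sampled and conditioned far more cheaply than with a dense covariance matrix, which is the point of the GMRF representation.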
NASA Technical Reports Server (NTRS)
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal-weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble", that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted-average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
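The difference between "one model, one vote" and skill-based weighting can be shown in a few lines (the projection values and process-fidelity scores below are made-up illustrations, not CMIP5 results):

```python
import numpy as np

# hypothetical projections of some climate variable from 4 models
projections = np.array([2.1, 3.0, 2.6, 4.2])
# hypothetical process-fidelity scores (higher = closer to observations)
skill = np.array([0.9, 0.4, 0.8, 0.2])

equal_weight = projections.mean()           # "one model, one vote"
weights = skill / skill.sum()               # normalize skill to weights
intelligent = np.dot(weights, projections)  # skill-weighted projection
```

In this toy case the low-skill models happen to project the largest changes, so the skill-weighted estimate falls below the equal-weighted one; the actual metrics in the paper are built from satellite and surface observations.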
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
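A minimal discrete analogue of the PC idea, updating a probability distribution p over candidate solutions rather than a population of solutions, can be sketched with a Boltzmann-style reweighting (a stand-in for the paper's exact functional; the costs and temperature are illustrative assumptions):

```python
import numpy as np

def pc_update(p, costs, T=0.5, steps=50):
    """Sketch of Probability Collectives-style updating: repeatedly
    re-weight a distribution p over candidate solutions toward low
    cost (Boltzmann factor exp(-cost/T)), then renormalize."""
    p = np.array(p, dtype=float)
    costs = np.array(costs, dtype=float)
    for _ in range(steps):
        p = p * np.exp(-costs / T)
        p = p / p.sum()
    return p

costs = [3.0, 0.5, 2.0, 0.5]          # two tied optimal solutions
p = pc_update(np.full(4, 0.25), costs)
# mass concentrates equally on the two minimum-cost solutions
```

Unlike a GA, no individuals are ever mutated or crossed over; the distribution itself is the object being optimized, and in the full PC framework each factor of a product distribution is managed by a separate agent.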
NASA Astrophysics Data System (ADS)
Carlotti, F.; Espinasse, B.; Zhou, M.; Jean-Luc, D.
2016-02-01
Environmental conditions and zooplankton size structure and taxonomic diversity were investigated in the Gulf of Lion in May 2010 and January 2011. The integrated physical and biological measurements provided a 3D view, with high spatial resolution, of the physical and biological variables and their correlations over the whole gulf. The effects of physical processes such as freshwater input, coastal upwelling, and water column mixing by winds on phytoplankton and zooplankton distributions were analyzed using these data. Several analytical tests were performed to define ecoregions representing different habitats of plankton communities. Three habitats were distinguished based on statistical analysis of biological and physical variables: (1) the coastal area, characterized by shallow waters, high chl a concentrations, and a steep slope of the normalized biomass size spectrum (NBSS); (2) the area affected by the Rhône, with high stratification and a flat NBSS slope; and (3) the continental shelf, with a deep mixed layer, relatively low particle concentrations, and a moderate NBSS slope. The zooplankton diversity was characterized by spatial differences in community composition among the Rhône plume area, the coastal shelf, and shelf-break waters. Defining habitats is a relevant approach to designing new zooplankton sampling strategies, validating distribution models, and including the zooplankton compartment in trophodynamic studies.
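The NBSS slope referred to above is conventionally obtained by a log-log linear regression of normalized biomass against size class; a sketch on synthetic octave bins (the bin scheme and values are assumptions, not the survey data):

```python
import numpy as np

def nbss_slope(midpoints, biomass, widths):
    """Normalized biomass size spectrum (NBSS) slope: regress
    log2(biomass / bin width) against log2(size-class midpoint)."""
    x = np.log2(np.asarray(midpoints))
    y = np.log2(np.asarray(biomass) / np.asarray(widths))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# synthetic octave-like size bins whose normalized biomass
# falls off as size^-1, so the NBSS slope should be -1
mids = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
widths = mids                 # assumed bin widths
biomass = np.ones(5)          # biomass/width = 1/mid
slope = nbss_slope(mids, biomass, widths)
```

Steeper (more negative) slopes indicate proportionally less biomass in large size classes, which is why the coastal, Rhône, and shelf habitats separate on this metric.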
Exploring the Concept of Leadership from the Perspective of Physical Therapists in Canada
Nanavaty, Gargi; Ryan, Jeremy; Howell, Phillip; Sunder, Rana; Macdonald, Allan A.; Schleifer Taylor, Jackie; Verrier, Molly C.
2012-01-01
ABSTRACT Purpose: To explore the concept of leadership from the perspective of physical therapists in Canada. Methods: A quantitative, cross-sectional nationwide study was performed using a Web-based survey distributed to all members of the Canadian Physiotherapy Association (CPA) with a registered e-mail address (n=6,156). Frequency distributions and percentages were obtained for all leadership characteristics, and chi-square tests were performed, with significance set at p<0.05. Results: A total of 1,875 members responded, for a 30% response rate. Communication, professionalism, and credibility were rated as extremely important leadership characteristics by the majority of respondents across all three settings (workplace, health care system, and society); practising in the private sector contributed significantly to the perceived importance of business acumen (p<0.001). Overall, 79.6% of respondents self-declared as leaders; male gender, primary work facility in private practice or educational institution, and supervision of students were factors associated with self-declaration as a leader. Conclusions: The top three characteristics that physical therapists perceive as important differ from those reported among other health care professions. Further research is required to understand whether the presence of multiple health care professionals in an acute-care setting facilitates leadership opportunities or whether physical therapists feel overshadowed. Future studies should also investigate whether individuals practising outside the private sector recognize the business aspects of their workplace. PMID:23997391
Distribution-Connected PV's Response to Voltage Sags at Transmission-Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry; Ding, Fei
The ever-increasing amount of residential- and commercial-scale distribution-connected PV generation being installed and operated on the U.S. electric power system necessitates the use of higher-fidelity representative distribution system models for transmission stability studies in order to ensure the continued safe and reliable operation of the grid. This paper describes a distribution model-based analysis that determines the amount of distribution-connected PV that trips off-line for a given voltage sag seen at the distribution circuit's substation. Such sags are what could potentially be experienced over a wide area of an interconnection during a transmission-level line fault. The results of this analysis show that the voltage diversity of the distribution system causes different amounts of PV generation to be lost for differing severities of voltage sag. The variation of the response is most directly a function of the loading of the distribution system. At low load levels the inversion of the circuit's voltage profile results in considerable differences in the aggregated response of distribution-connected PV. Less variation is seen in the response to specific PV deployment scenarios, unless pushed to extremes, and in the total amount of PV penetration attained. A simplified version of the combined CMPLDW and PVD1 models is compared to the results from the model-based analysis. Furthermore, the parameters of the simplified model are tuned to better match the determined response. The resulting tuning parameters do not match the expected physical model of the distribution and PV systems and thus may indicate that another modeling approach would be warranted.
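The core effect described, voltage diversity along a feeder causing only part of the PV fleet to trip, can be illustrated with a toy calculation (the 0.88 pu instantaneous-trip threshold, node voltages, and PV sizes are assumptions; real inverters also have ride-through timers):

```python
import numpy as np

def pv_tripped_fraction(node_voltages_pu, pv_kw, trip_threshold=0.88):
    """Fraction of distribution-connected PV capacity that trips when a
    transmission-level sag depresses feeder voltages below an assumed
    undervoltage threshold (instantaneous-trip approximation)."""
    v = np.asarray(node_voltages_pu)
    kw = np.asarray(pv_kw)
    return kw[v < trip_threshold].sum() / kw.sum()

# assumed per-node voltages during a sag seen as 0.90 pu at the substation:
voltages = np.array([0.90, 0.89, 0.87, 0.85])
pv = np.array([100.0, 50.0, 50.0, 100.0])   # assumed PV kW per node
frac = pv_tripped_fraction(voltages, pv)    # only downstream nodes trip
```

The same substation-level sag thus yields different aggregate PV loss depending on the voltage profile along the feeder, which is the sensitivity to loading the paper reports.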
NASA Astrophysics Data System (ADS)
Peck, Myron A.; Arvanitidis, Christos; Butenschön, Momme; Canu, Donata Melaku; Chatzinikolaou, Eva; Cucco, Andrea; Domenici, Paolo; Fernandes, Jose A.; Gasche, Loic; Huebert, Klaus B.; Hufnagl, Marc; Jones, Miranda C.; Kempf, Alexander; Keyl, Friedemann; Maar, Marie; Mahévas, Stéphanie; Marchal, Paul; Nicolas, Delphine; Pinnegar, John K.; Rivot, Etienne; Rochette, Sébastien; Sell, Anne F.; Sinerchia, Matteo; Solidoro, Cosimo; Somerfield, Paul J.; Teal, Lorna R.; Travers-Trolet, Morgan; van de Wolfshaar, Karen E.
2018-02-01
We review and compare four broad categories of spatially-explicit modelling approaches currently used to understand and project changes in the distribution and productivity of living marine resources including: 1) statistical species distribution models, 2) physiology-based, biophysical models of single life stages or the whole life cycle of species, 3) food web models, and 4) end-to-end models. Single pressures are rare and, in the future, models must be able to examine multiple factors affecting living marine resources such as interactions between: i) climate-driven changes in temperature regimes and acidification, ii) reductions in water quality due to eutrophication, iii) the introduction of alien invasive species, and/or iv) (over-)exploitation by fisheries. Statistical (correlative) approaches can be used to detect historical patterns which may not be relevant in the future. Advancing predictive capacity of changes in distribution and productivity of living marine resources requires explicit modelling of biological and physical mechanisms. New formulations are needed which (depending on the question) will need to strive for more realism in ecophysiology and behaviour of individuals, life history strategies of species, as well as trophodynamic interactions occurring at different spatial scales. Coupling existing models (e.g. physical, biological, economic) is one avenue that has proven successful. However, fundamental advancements are needed to address key issues such as the adaptive capacity of species/groups and ecosystems. The continued development of end-to-end models (e.g., physics to fish to human sectors) will be critical if we hope to assess how multiple pressures may interact to cause changes in living marine resources including the ecological and economic costs and trade-offs of different spatial management strategies. 
Given the strengths and weaknesses of the various types of models reviewed here, confidence in projections of changes in the distribution and productivity of living marine resources will be increased by assessing model structural uncertainty through biological ensemble modelling.
Physical control of the distributions of a key Arctic copepod in the Northeast Chukchi Sea
NASA Astrophysics Data System (ADS)
Elliott, Stephen M.; Ashjian, Carin J.; Feng, Zhixuan; Jones, Benjamin; Chen, Changsheng; Zhang, Yu
2017-10-01
The Chukchi Sea is a highly advective regime dominated by a barotropically driven northward flow modulated by wind driven currents that reach the bottom boundary layer of this shallow environment. A general northward gradient of decreasing temperature and food concentration leads to geographically divergent copepod growth and development rates between north and south. The physics of this system establish the biological connection potential between specific regions. The copepod Calanus glacialis is a key grazer, predator, and food source in Arctic shelf seas. Its summer distribution and abundance have direct effects on much of the food web, from phytoplankton to migrating bowhead whales. In August 2012 and 2013, C. glacialis distributions were quantified over Hanna Shoal in the northeast Chukchi Sea. Here an individual-based model with Lagrangian tracking and copepod life stage development capabilities is used to advect and develop these distributions forward and backward in time to determine the source (production locations) and sink (potential overwintering locations) regions of the transient Hanna Shoal C. glacialis population. Hanna Shoal supplies diapause competent C. glacialis to both the Beaufort Slope and the Chukchi Cap, mainly receives juveniles from the broad slope between Hanna Shoal and Herald Valley and receives second year adults from as far south as the Anadyr Gulf and as near as the broad slope between Hanna Shoal and Herald Valley. The 2013 sink region was shifted west relative to the 2012 region and the 2013 adult source region was shifted north relative to the 2012 adult source region. These connection potentials were not sensitive to precise times and locations of release, but were quite sensitive to depth of release. 
These patterns demonstrate how interannual differences in the physical conditions well south of Hanna Shoal play a critical role in determining the abundance and distribution of a key food source over Hanna Shoal and in the southern Beaufort Sea.
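The individual-based model's core operation, advecting virtual copepods through a velocity field, can be sketched with a forward-Euler particle tracker (the uniform northward flow below is a crude stand-in for the model velocity fields actually used, and life-stage development is omitted):

```python
import numpy as np

def advect(positions, velocity_fn, dt=3600.0, steps=24):
    """Forward-Euler Lagrangian particle tracking: step each particle
    through a (possibly space- and time-dependent) velocity field."""
    x = np.array(positions, dtype=float)
    for k in range(steps):
        x += dt * velocity_fn(x, k * dt)
    return x

# assumed uniform 0.1 m/s northward flow, one day of hourly steps
uniform_north = lambda x, t: np.array([0.0, 0.1])
start = np.array([[0.0, 0.0], [1000.0, 0.0]])   # two particles (m)
end = advect(start, uniform_north)
```

Running such trackers forward identifies sink (overwintering) regions, while reversing the velocity field traces particles back to their source regions, which is how the Hanna Shoal connectivity was mapped.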
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently four). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT matched the actual error distributions to within 1-6% for the 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
A validation study of a stochastic model of human interaction
NASA Astrophysics Data System (ADS)
Burchfield, Mitchel Talmadge
The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time, and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data.
The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.
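The nonlinear estimation of Fermi-Dirac parameters mentioned above can be sketched as follows. This is an illustrative reconstruction only: the data are synthetic, the parameter values are invented, and a coarse grid search stands in for the dissertation's nonlinear regression procedure.

```python
import numpy as np

def fermi_dirac(x, mu, kT):
    """Fermi-Dirac occupancy: falls smoothly from 1 to 0 around mu."""
    return 1.0 / (np.exp((x - mu) / kT) + 1.0)

# Synthetic "attitude position" data drawn from known parameters.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 60)
true_mu, true_kT = 0.5, 0.8
y = fermi_dirac(x, true_mu, true_kT) + rng.normal(0.0, 0.01, x.size)

# Coarse least-squares grid search over (mu, kT).
best = (np.inf, None, None)
for mu in np.linspace(-1.0, 1.0, 81):
    for kT in np.linspace(0.2, 2.0, 91):
        sse = float(np.sum((fermi_dirac(x, mu, kT) - y) ** 2))
        if sse < best[0]:
            best = (sse, mu, kT)
_, mu_hat, kT_hat = best
```

With the small noise level used here, the recovered (mu_hat, kT_hat) land close to the generating values, mirroring the high predicted-versus-observed agreement the study reports.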
Brennan, Gerard P; Hunter, Stephen J; Snow, Greg; Minick, Kate I
2017-12-01
The Centers for Medicare and Medicaid Services (CMS) require that physical therapists document patients' functional limitations, but the process is not standardized, and a systematic approach to determine a patient's functional limitations and responsiveness to change is needed. The purpose of this study is to compare the responsiveness to change of patient-reported outcomes (PROs) using the 7-level severity/complexity modifier scale proposed by Medicare versus a derived scale implemented by Intermountain Healthcare's Rehabilitation Outcomes Management System (ROMS). This was a retrospective, observational cohort design. 165,183 PROs recorded prior to July 1, 2013, were compared to 46,334 records from July 1, 2013, to December 31, 2015. Histograms and ribbon plots illustrate the distribution and change of patients' scores. ROMS raw score ranges were calculated and compared to CMS' severity/complexity levels based on score percentage. The distribution of the population was compared based on the 2 methods. Sensitivity and specificity were compared for responsiveness to change based on the minimal clinically important difference (MCID). Histograms demonstrated that few patient scores placed in the CMS scale levels at the extremes, whereas the majority of scores placed in the 2 middle levels (CJ, CK). ROMS distributed scores more evenly across levels. Ribbon plots illustrated the advantage of ROMS' narrower score ranges. A greater chance for patients to change levels was observed with ROMS when an MCID was achieved. ROMS' narrower scale levels resulted in greater sensitivity and good specificity. Geographic representation for the United States was limited. Without patients' global rating of change, a reference standard to gauge validation of improvement could not be provided. ROMS provides a standard approach to identify functional limitation modifier levels accurately and to detect improvement more accurately than a straight-across transposition using the CMS scale. © 2017 American Physical Therapy Association
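The sensitivity/specificity comparison described above treats "patient changed modifier level" as a detector of "patient achieved the MCID". A minimal sketch of that computation, with a hypothetical cohort and an assumed MCID of 10 points (not the study's data or thresholds):

```python
# Sensitivity/specificity of level change as a test for MCID achievement.
# All inputs here are invented for illustration.
def sens_spec(score_change, level_change, mcid=10):
    pairs = list(zip(score_change, level_change))
    tp = sum(1 for s, moved in pairs if s >= mcid and moved)       # true positives
    fn = sum(1 for s, moved in pairs if s >= mcid and not moved)   # missed MCID movers
    tn = sum(1 for s, moved in pairs if s < mcid and not moved)    # true negatives
    fp = sum(1 for s, moved in pairs if s < mcid and moved)        # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: PRO score changes and whether each patient
# crossed a scale-level boundary (a narrower scale flags more movers).
changes = [12, 15, 8, 3, 11, 22, 9, 14, 2, 10]
moved_level = [True, True, False, False, True, True, True, True, False, True]
sens, spec = sens_spec(changes, moved_level)
```

Narrower score ranges per level, as in ROMS, raise the chance that a true MCID-sized improvement crosses a level boundary, which is why the paper finds greater sensitivity with good specificity.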
Guattery, Jason M; Dardas, Agnes Z; Kelly, Michael; Chamberlain, Aaron; McAndrew, Christopher; Calfee, Ryan P
2018-04-01
The Patient Reported Outcomes Measurement Information System (PROMIS) was developed to provide valid, reliable, and standardized measures to gather patient-reported outcomes for many health domains, including depression, independent of patient condition. Most studies confirming the performance of these measures were conducted with a consented, volunteer study population. A study population that has undergone the process of informed consent may be differentiated from a routine clinical population because its members are educated specifically as to the purpose of the questions and will not have their answers recorded in their permanent health record. (1) When given as part of routine practice to an orthopaedic population, do the PROMIS Physical Function and Depression item banks produce score distributions different than those produced by the populations used to calibrate and validate the item banks? (2) Does the presence of a nonnormal distribution in the PROMIS Depression scores in a clinical population reflect deliberately hasty answering of questions by patients? (3) Are patients who report minimal depressive symptoms by scoring the minimum score on the PROMIS Depression Computer Adaptive Testing (CAT) distinct from other patients according to demographic data or their scores on other PROMIS assessments? Univariate descriptive statistics and graphic histograms were used to describe the frequency distribution of scores for the Physical Function and Depression item banks for all orthopaedic patients 18 years or older who had an outpatient visit between June 2015 and December 2016. The study population was then broken into two groups based on whether they indicated a lack of depressive symptoms and scored the minimum score (34.2) on the Depression CAT assessment (Floor Group) or not (Standard Group). The distribution of Physical Function CAT scores was compared between the two groups. 
Finally, a time-per-question value was calculated for both the Physical Function and Depression CATs and was compared between assessments within each group as well as between the two groups. Bivariate statistics compared the demographic data between the two groups. Physical Function CAT scores in musculoskeletal patients were normally distributed, like those of the calibration population; however, the score distribution of the Depression CAT in musculoskeletal patients was nonnormal, with a spike at the floor score. After excluding the floor spike, the distribution of the Depression CAT scores was not different from that of the population control group. Patients who scored the floor score on the Depression CAT took slightly less time per question on the Physical Function CAT than other musculoskeletal patients (floor patients: 11 ± 9 seconds; normally distributed patients: 12 ± 10 seconds; mean difference: 1 second [0.8-1.1]; p < 0.001, but not clinically relevant). They spent a substantially shorter amount of time per question on the Depression CAT (Floor Group: 4 ± 3 seconds; Standard Group: 7 ± 7 seconds; mean difference: 3 [2.9-3.2]; p < 0.001). Patients who scored the minimum score on the PROMIS Depression CAT were younger than other patients (Floor Group: 50 ± 18 SD; Standard Group: 55 ± 16 SD; mean difference: 4.5 [4.2-4.7]; p < 0.001), with a larger percentage of men (Floor Group: 48.8%; Standard Group: 40.0%; odds ratio 0.6 [0.6-0.7]; p < 0.001) and minor differences in racial breakdown (Floor Group: white 85.2%, black 11.9%, other 0.03%; Standard Group: white 83.9%, black 13.7%, other 0.02%). In an orthopaedic surgery population given the PROMIS CAT as part of routine practice, the Physical Function item bank performed normally, but there is a group of patients who hastily complete the Depression questions, producing a strong floor effect and calling into question the validity of those floor scores that indicate minimal depression. 
Level II, diagnostic study.
Total Water-Vapor Distribution in the Summer Cloudless Atmosphere over the South of Western Siberia
NASA Astrophysics Data System (ADS)
Troshkin, D. N.; Bezuglova, N. N.; Kabanov, M. V.; Pavlov, V. E.; Sokolov, K. I.; Sukovatov, K. Yu.
2017-12-01
The spatial distribution of the total water vapor in different climatic zones of the south of Western Siberia in the summers of 2008-2011 is studied on the basis of Envisat data. The correlation analysis of the water-vapor time series from the Envisat data W and radiosonde observations w for the territory of the Omsk aerological station shows that the absolute values of W and w are linearly correlated with a coefficient of 0.77 (significance level p < 0.05). The distribution functions of the total water vapor are calculated based on the number of its measurements by Envisat for a cloudless sky over three zones with different physical properties of the underlying surface, in particular, steppes to the south of the Vasyugan Swamp and forests to the northeast of the Swamp. The distribution functions are bimodal; each mode follows the lognormal law. The parameters of these functions are given.
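The bimodal, mode-wise lognormal form described above can be sketched numerically. The mode parameters and weights below are illustrative assumptions, not the paper's fitted values; the check simply confirms that a mixture of two well-separated lognormal components yields a two-peaked density.

```python
import numpy as np

def lognorm_pdf(w, mu, sigma):
    """Lognormal density in total water vapor w (w > 0)."""
    return np.exp(-(np.log(w) - mu) ** 2 / (2 * sigma ** 2)) / (
        w * sigma * np.sqrt(2 * np.pi))

# Hypothetical modes: each mode of the bimodal distribution follows
# its own lognormal law, mixed with equal weights.
w = np.linspace(0.1, 60.0, 2000)
pdf = (0.5 * lognorm_pdf(w, np.log(8.0), 0.25)
       + 0.5 * lognorm_pdf(w, np.log(30.0), 0.2))

# Count strict interior local maxima to confirm bimodality.
peaks = int(np.sum((pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])))
```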
The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data
NASA Technical Reports Server (NTRS)
Tesoriero, Roseanne; Zelkowitz, Marvin
1997-01-01
Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.
NASA Astrophysics Data System (ADS)
Yue, C.; Bortnik, J.; Thorne, R. M.; Ma, Q.; An, X.; Chappell, C. R.; Gerrard, A. J.; Lanzerotti, L. J.; Shi, Q.
2017-12-01
Understanding the source and loss processes of various plasma populations is greatly aided by having accurate knowledge of their pitch angle distributions (PADs). Here, we statistically analyze 1 eV to 600 keV hydrogen (H+) PADs near the geomagnetic equator in the inner magnetosphere based on Van Allen Probes measurements, to comprehensively investigate how the H+ PADs vary with different energies, magnetic local times (MLTs), L-shells, and geomagnetic conditions. Our survey clearly indicates four distinct populations with different PADs: (1) a pancake distribution of the plasmaspheric H+ at low L-shells except for dawn sector; (2) a bi-directional field-aligned distribution of the warm plasma cloak; (3) pancake or isotropic distributions of ring current H+; (4) radiation belt particles show pancake, butterfly and isotropic distributions depending on their energy, MLT and L-shell. Meanwhile, the pancake distribution of ring current H+ moves to lower energies as L-shell increases which is primarily caused by adiabatic transport. Furthermore, energetic H+ (> 10 keV) PADs become more isotropic following the substorm injections, indicating wave-particle interactions. The radiation belt H+ butterfly distributions are identified in a narrow energy range of 100 < E < 400 keV at large L (L > 5), which are less significant during quiet times and extend from dusk to dawn sector through midnight during substorms. The different PADs near the equator provide clues of the underlying physical processes that produce the dynamics of these different populations.
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of 3D electron density estimation from measurements alone is an ill-defined problem. Model based 3D electron density estimations provide physically feasible distributions. However, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS based TEC measurements and an ionosphere model known as International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite - receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is cast as an optimization over the parameters of these parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions that are compliant with the IRI-Plas model, GPS TEC measurements, and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated. Results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere. 
Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
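The Kalman-based tracking idea can be sketched in one dimension. The random-walk state model, noise levels, and synthetic "diurnal" signal below are assumptions chosen for illustration; the paper's 4D scheme estimates a full electron density field, not a scalar.

```python
import numpy as np

def kalman_filter(z, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k."""
    x, p = x0, p0
    out = []
    for zk in z:
        p = p + q                 # predict: state is a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (zk - x)      # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = 10.0 + np.sin(np.linspace(0, 2 * np.pi, 200))  # smooth TEC-like signal
z = truth + rng.normal(0, 0.7, truth.size)             # noisy snapshot estimates

xhat = kalman_filter(z, x0=z[0])
rms_raw = float(np.sqrt(np.mean((z - truth) ** 2)))
rms_filt = float(np.sqrt(np.mean((xhat - truth) ** 2)))
```

Exploiting the temporal correlation reduces the error of each snapshot estimate, which is the same effect the 4D extension relies on when few GPS stations are available.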
Experimental validation of a transformation optics based lens for beam steering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianjia; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr; Lustrac, André de
2015-10-12
A transformation optics based lens for beam control is experimentally realized and measured at microwave frequencies. Laplace's equation is adopted to construct the mapping between the virtual and physical spaces. The metamaterial-based lens prototype is designed using electric LC resonators. A planar microstrip antenna source is used as a transverse-electric-polarized wave launcher for the lens. Both the far-field radiation patterns and the near-field distributions have been measured to experimentally demonstrate the beam steering properties. Measurements agree quantitatively and qualitatively with numerical simulations, and operation over a non-narrow frequency bandwidth is observed.
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium or unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
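A minimal sketch of such a rule-based object space, in the chemical/Gamma spirit of the paradigm described above. The specific rule and the random pairing scheme are illustrative choices, not the authors' formalism: here the interaction rule annihilates the smaller of two randomly chosen elements, so the space evolves to an equilibrium holding only the maximum regardless of the (nondeterministic) interaction order.

```python
import random

def react(space, rule):
    """Repeatedly pick two elements at random and replace them with rule(a, b),
    until no further pairwise interaction is possible."""
    space = list(space)
    rng = random.Random(42)
    while len(space) > 1:
        i, j = rng.sample(range(len(space)), 2)   # two distinct elements interact
        products = rule(space[i], space[j])
        for k in sorted((i, j), reverse=True):    # annihilate the reactants
            del space[k]
        space.extend(products)                    # create the products
    return space

# Interaction rule: two elements react and only the larger survives.
max_rule = lambda x, y: [max(x, y)]

result = react([7, 3, 9, 1, 12, 5], max_rule)
```

The invariant retained by the evolution (the maximum of the multiset) is unchanged by every rule application, which is what makes the outcome deterministic despite the probabilistic interaction order.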
Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk
2018-04-01
The purpose of this study is to analyze the differences among hospitalized cancer patients in their perception of exercise and physical activity constraints based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way analysis of variance with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/sociocultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide information necessary to create a patient-centered healthcare service system by analyzing the exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.
Van Tuyckom, Charlotte; Van de Velde, Sarah; Bracke, Piet
2013-06-01
It is well known that European women are less physically active in their leisure time than European men. Attempts to explain this gender difference often do not succeed in raising the problem above the individual level. However, the size of the disadvantage for women varies considerably across countries, indicating that leisure time physical (in)activity takes place in a broader societal context and must also be approached as such. In this sense, some authors have explained women's lack of leisure time physical activity in terms of gendered power relations in society. Therefore, the present article postulates that over and above the individual effect of gender, there is an additional impact of a society's gender-based (in)equality distribution. Using the 2005 Eurobarometer survey (comprising 25,745 adults from 27 European countries), gender differences in leisure time physical inactivity (LTPI) were analysed by means of multilevel logistic regression analysis. National gender-based (in)equality was measured by the Gender Empowerment Measure and the Gender Gap Index. Controlled for compositional effects, gender differences in LTPI varied as a function of gender-related characteristics at the macro-level. In particular, in countries characterized by high levels of gender-based equality, LTPI differences between men and women even disappeared. The findings underscore the need to adopt a society-level approach and to incorporate socio-contextual factors in the study of gender disparities in LTPI.
The evolving energy budget of accretionary wedges
NASA Astrophysics Data System (ADS)
McBeck, Jessica; Cooke, Michele; Maillot, Bertrand; Souloumiac, Pauline
2017-04-01
The energy budget of evolving accretionary systems reveals how deformational processes partition energy as faults slip, topography uplifts, and layer-parallel shortening produces distributed off-fault deformation. The energy budget provides a quantitative framework for evaluating the energetic contribution or consumption of diverse deformation mechanisms. We investigate energy partitioning in evolving accretionary prisms by synthesizing data from physical sand accretion experiments and numerical accretion simulations. We incorporate incremental strain fields and cumulative force measurements from two suites of experiments to design numerical simulations that represent accretionary wedges with stronger and weaker detachment faults. One suite of the physical experiments includes a basal glass bead layer and the other does not. Two physical experiments within each suite implement different boundary conditions (stable base versus moving base configuration). Synthesizing observations from the differing base configurations reduces the influence of sidewall friction because the force vector produced by sidewall friction points in opposite directions depending on whether the base is fixed or moving. With the numerical simulations, we calculate the energy budget at two stages of accretion: at the maximum force preceding the development of the first thrust pair, and at the minimum force following the development of the pair. To identify the appropriate combination of material and fault properties to apply in the simulations, we systematically vary the Young's modulus and the fault static and dynamic friction coefficients in numerical accretion simulations, and identify the set of parameters that minimizes the misfit between the normal force measured on the physical backwall and the numerically simulated force. 
Following this derivation of the appropriate material and fault properties, we calculate the components of the work budget in the numerical simulations and in the simulated increments of the physical experiments. The work budget components of the physical experiments are determined from backwall force measurements and incremental velocity fields calculated via digital image correlation. Comparison of the energy budget preceding and following the development of the first thrust pair quantifies the tradeoff between work done in distributed deformation and work expended in frictional slip due to the development of the first backthrust and forethrust. In both the numerical and physical experiments, after the pair develops, internal work decreases at the expense of frictional work, which increases. Despite the increase in frictional work, the total external work of the system decreases, revealing that accretion faulting leads to gains in efficiency. Comparison of the energy budget of the accretion experiments and simulations with the strong and weak detachments indicates that when the detachment is strong, the total energy consumed in frictional sliding and internal deformation is larger than when the detachment is relatively weak.
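The parameter-identification step above (systematically varying Young's modulus and friction coefficients to minimize the misfit to the measured backwall force) can be sketched as a grid search. The forward model and all numbers below are toy stand-ins for the numerical accretion simulation and the physical force measurement.

```python
import numpy as np

def forward_force(youngs_gpa, mu_static):
    """Toy stand-in for the simulated normal force on the backwall (N)."""
    return 120.0 * mu_static + 0.8 * youngs_gpa

measured_force = 100.0  # hypothetical backwall force measurement (N)

# Sweep candidate (Young's modulus, static friction) pairs and keep
# the combination that minimizes the misfit to the measurement.
best = None
for E in np.linspace(10.0, 50.0, 41):          # GPa candidates
    for mu in np.linspace(0.3, 0.9, 61):       # friction coefficient candidates
        misfit = abs(forward_force(E, mu) - measured_force)
        if best is None or misfit < best[0]:
            best = (misfit, E, mu)
misfit_min, E_best, mu_best = best
```

In the study the forward model is the full numerical simulation, so each evaluation is expensive; the systematic sweep is the same idea at much higher cost per point.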
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terraces, etc.) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can be classified as an empirical method, as it is mathematically deduced to be equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The publications show that fewer studies have applied distributed models than empirical models, as the runoff and sediment load simulations based on distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually not satisfactory owing to the intensive human activities in the Yellow River Basin. 
Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. Besides, we put forward an assessment framework for methods of studying the runoff and sediment load variations in the Yellow River Basin from the point of view of input data, model structure, and result output. The assessment framework was then applied to the Huangfuchuan River.
Ge Sun; Jianbiao Lu; Steven G. McNulty; James M. Vose; Devendra M. Amatya
2006-01-01
A clear understanding of the basic hydrologic processes is needed to restore and manage watersheds across the diverse physiologic gradients in the Southeastern U.S. We evaluated a physically based, spatially distributed watershed hydrologic model called MIKE SHE/MIKE 11 to evaluate disturbance impacts on water use and yield across the region. Long-term forest...
Zhaohua Dai; Carl Trettin; Changsheng Li; Devendra M. Amatya; Ge Sun; Harbin Li
2010-01-01
A physically based distributed hydrological model, MIKE SHE, was used to evaluate the effects of altered temperature and precipitation regimes on the streamflow and water table in a forested watershed on the southeastern Atlantic coastal plain. The model calibration and validation against both streamflow and water table depth showed that the MIKE SHE was applicable for...
NASA Astrophysics Data System (ADS)
Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc
2004-09-01
High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, still remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both strong test time reduction and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is a growing complexity in calculating average lifetimes and failure rates (FITs) using ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be on the order of 10^6 hours under typical ageing conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged during 10,000 hours, mixing different temperatures and drive current conditions leading to acceleration factors above 300-400. These conditions are high-cost and time-consuming and cannot give a complete distribution of times to failure. A new approach consists of using statistical computations to extrapolate the lifetime distribution and failure rates in operating conditions from the physical parameters of experimental degradation laws. In this paper, Distributed Feedback single-mode laser diodes (DFB-LD) used for 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. 
Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.
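The statistical extrapolation approach can be sketched as follows, under the assumption of a linear parameter drift with lognormally distributed per-device drift rates. All parameter values here are hypothetical, not the measured degradation laws from the paper.

```python
import numpy as np

# Assumed degradation law: monitored parameter drifts linearly in time,
# with per-device drift rates (fraction per hour) following a lognormal
# distribution fitted from ageing tests (values invented for this sketch).
rng = np.random.default_rng(7)
n_devices = 10_000
drift_rates = rng.lognormal(mean=np.log(2e-7), sigma=0.5, size=n_devices)

# Failure criterion: 20 % shift of the monitored parameter.
failure_threshold = 0.2
ttf = failure_threshold / drift_rates       # time to failure, hours

median_ttf = float(np.median(ttf))
# Cumulative failure fraction after 10^6 hours of operation.
frac_failed_1e6 = float(np.mean(ttf <= 1e6))
```

Monte Carlo draws over the fitted drift-rate distribution yield a full time-to-failure distribution, which ageing a few hundred real devices could never resolve at such low failure rates.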
NASA Astrophysics Data System (ADS)
Cai, Gaochao; Vanderborght, Jan; Langensiepen, Matthias; Schnepf, Andrea; Hüging, Hubert; Vereecken, Harry
2018-04-01
How much water can be taken up by roots and how this depends on the root and water distributions in the root zone are important questions that need to be answered to describe water fluxes in the soil-plant-atmosphere system. Physically based root water uptake (RWU) models that relate RWU to transpiration, root density, and water potential distributions have been developed but used or tested far less. This study aims at evaluating the simulated RWU of winter wheat using the empirical Feddes-Jarvis (FJ) model and the physically based Couvreur (C) model for different soil water conditions and soil textures compared to sap flow measurements. Soil water content (SWC), water potential, and root development were monitored noninvasively at six soil depths in two rhizotron facilities that were constructed in two soil textures: stony vs. silty, with each of three water treatments: sheltered, rainfed, and irrigated. Soil and root parameters of the two models were derived from inverse modeling and simulated RWU was compared with sap flow measurements for validation. The different soil types and water treatments resulted in different crop biomass, root densities, and root distributions with depth. The two models simulated the lowest RWU in the sheltered plot of the stony soil where RWU was also lower than the potential RWU. In the silty soil, simulated RWU was equal to the potential uptake for all treatments. The variation of simulated RWU among the different plots agreed well with measured sap flow but the C model predicted the ratios of the transpiration fluxes in the two soil types slightly better than the FJ model. The root hydraulic parameters of the C model could be constrained by the field data but not the water stress parameters of the FJ model. This was attributed to differences in root densities between the different soils and treatments which are accounted for by the C model, whereas the FJ model only considers normalized root densities. 
The impact of differences in root density on RWU could be accounted for directly by the physically based RWU model but not by empirical models that use normalized root density functions.
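The empirical Feddes-Jarvis approach referenced above scales potential uptake by a piecewise-linear stress factor of the soil water pressure head. A minimal sketch of a Feddes-type reduction function; the threshold heads below are illustrative defaults, not the calibrated parameters from this study.

```python
# Feddes-type water-stress factor alpha(h) in [0, 1], with pressure head h
# in cm (negative = suction). Thresholds h1 > h2 > h3 > h4 are assumed
# example values: anoxia near saturation, a ramp to an optimal plateau,
# then a ramp down to the wilting point.
def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    if h >= h1 or h <= h4:
        return 0.0                        # too wet (anoxia) or too dry (wilted)
    if h > h2:
        return (h1 - h) / (h1 - h2)       # wet-side ramp up from anoxia
    if h >= h3:
        return 1.0                        # optimal range: full potential uptake
    return (h - h4) / (h3 - h4)           # dry-side ramp down toward wilting

# Actual root water uptake in a layer = potential transpiration
# x root density fraction x feddes_alpha(h_layer).
```

The physically based Couvreur model replaces this local, head-only reduction with root hydraulic parameters, which is why it can account for absolute root density differences between the treatments.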
Gaikwad, Ravi M; Dokukin, Maxim E; Iyer, K Swaminathan; Woodworth, Craig D; Volkov, Dmytro O; Sokolov, Igor
2011-04-07
Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish based on physical adhesion between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of the concept that physical interactions can be used to accurately discriminate normal and cancer cells. © The Royal Society of Chemistry 2011
Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2005-01-01
A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.