Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface-water resources. Legislative action has been undertaken in many countries to protect surface water and groundwater from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide are assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario is a parameterized pesticide fate model for one soil and one time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about soil and weather conditions but still representing the effect of soil and climate on pesticide leaching, using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision the detailed model provides for predicting pesticide leaching at a given site is offset by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a given site but covers the area completely, so that it selects a worst-case condition more accurately.
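The precision-versus-coverage tradeoff described above is what simple screening indices exploit. As a hedged illustration only (the Gustafson Ubiquity Score is a standard leaching screen, but it is not part of the EU registration procedure discussed here, and the compound values below are made up):

```python
import math

def gus(dt50_days: float, koc: float) -> float:
    """Gustafson Ubiquity Score: GUS = log10(DT50) * (4 - log10(Koc)).

    GUS > 2.8 is conventionally read as 'likely leacher',
    GUS < 1.8 as 'unlikely leacher'.
    """
    return math.log10(dt50_days) * (4.0 - math.log10(koc))

# hypothetical compounds: (soil half-life DT50 in days, sorption coefficient Koc)
candidates = {"compound A": (100.0, 10.0), "compound B": (5.0, 5000.0)}
for name, (dt50, koc) in candidates.items():
    score = gus(dt50, koc)
    label = "likely leacher" if score > 2.8 else "unlikely leacher"
    print(f"{name}: GUS = {score:.2f} ({label})")
```

A screen like this is imprecise at any single site, but because it needs only two parameters it can be evaluated everywhere, which is the same coverage argument the abstract makes for the simpler model.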
Large eddy simulation modeling of particle-laden flows in complex terrain
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.
2017-12-01
The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
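The conservation property mentioned above can be sketched in one dimension (a toy column model, not the authors' 3-D cut-cell LES; all parameter values are assumed). Writing the settling and diffusive fluxes at cell faces guarantees that mass is conserved to round-off, since interior fluxes cancel in the sum; both boundaries are set to no-flux here, so deposition is switched off:

```python
import numpy as np

nz, dz, dt, nsteps = 50, 1.0, 0.05, 200
ws, K = 0.5, 1.0                          # settling speed (downward), eddy diffusivity
c = np.zeros(nz); c[nz // 2] = 1.0 / dz   # unit mass released mid-column

def step(c):
    # upward-positive flux at the nz+1 faces; F[0] = F[nz] = 0 (no-flux boundaries)
    F = np.zeros(nz + 1)
    # upwind settling (donor cell is the cell above the face) plus diffusion
    F[1:nz] = -ws * c[1:] - K * (c[1:] - c[:-1]) / dz
    return c - dt * (F[1:] - F[:-1]) / dz   # flux-form update: exactly conservative

for _ in range(nsteps):
    c = step(c)
print(f"total mass after {nsteps} steps: {c.sum() * dz:.6f}")
```

Because the update is the divergence of face fluxes, the column total changes only through the boundary fluxes, which is the discrete analogue of the mass conservation the cut-cell finite volume framework enforces on complex terrain.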
A new method for wind speed forecasting based on copula theory.
Wang, Yuankun; Ma, Huiqun; Wang, Dong; Wang, Guizuo; Wu, Jichun; Bian, Jinyu; Liu, Jiufu
2018-01-01
Determining a representative wind speed is crucial in wind resource assessment, and accurate wind resource assessments are important for wind farm development. Linear regression is usually used to obtain the representative wind speed; however, complex terrain at the wind farm and long distances between wind speed sites often lead to low correlation. In this study, a copula method is used to determine the representative year's wind speed at a wind farm by modeling the dependence between the local wind farm and the meteorological station. The results show that the proposed method can not only quantify the relationship between the local anemometric tower and the nearby meteorological station through Kendall's tau, but can also determine the joint distribution without assuming the variables to be independent. Moreover, representative wind data can be obtained much more reasonably from the conditional distribution. We hope this study provides a scientific reference for accurate wind resource assessments. Copyright © 2017 Elsevier Inc. All rights reserved.
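A minimal sketch of the copula idea follows (a Gaussian copula on synthetic data; the paper's marginals and copula family may differ, and all numbers here are illustrative). Kendall's tau fixes the Gaussian-copula parameter via rho = sin(pi*tau/2), after which the conditional distribution of the farm wind given a station observation can be read off:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(0)
# synthetic paired records with Weibull-like margins and Gaussian dependence
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)
u1 = np.array([nd.cdf(v) for v in z[:, 0]])
u2 = np.array([nd.cdf(v) for v in z[:, 1]])
x = 7.0 * (-np.log1p(-u1)) ** (1 / 2.0)   # "station" speeds, Weibull(k=2, scale=7)
y = 6.0 * (-np.log1p(-u2)) ** (1 / 2.2)   # "farm mast" speeds, Weibull(k=2.2, scale=6)

# Kendall's tau by brute force over pairs (no ties for continuous data)
n = len(x)
conc = 0.0
for i in range(n):
    conc += (np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i])).sum()
tau = conc / (n * (n - 1) / 2)
rho = np.sin(np.pi * tau / 2)             # Gaussian-copula parameter from tau

# conditional median of farm wind given an observed station speed
x_obs = 9.0
u = (x < x_obs).mean()                    # empirical margin of X at x_obs
z1 = nd.inv_cdf(float(u))
v_med = nd.cdf(rho * z1)                  # median of V | U = u under the Gaussian copula
y_med = np.quantile(y, v_med)
print(f"tau = {tau:.2f}, rho = {rho:.2f}, "
      f"median farm wind given {x_obs} m/s at station: {y_med:.2f} m/s")
```

Unlike a linear regression, this keeps the full joint distribution, so any conditional quantile (not just the median) is available for the representative-year construction.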
NASA Astrophysics Data System (ADS)
Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Gonzalez-Nicolas, Ana; Illangasekare, Tissa
2017-01-01
Incorporating hysteresis into models is important for accurately capturing two-phase flow behavior when porous media undergo cycles of drainage and imbibition, as in the injection and post-injection redistribution of CO2 during geological CO2 storage (GCS). In the traditional model of two-phase flow, existing constitutive models that parameterize the hysteresis associated with these processes are generally based on empirical relationships. This manuscript presents the development and testing of mathematical hysteretic capillary pressure-saturation-relative permeability models with the objective of representing the post-injection redistribution of the fluids more accurately. The constitutive models are developed by relating macroscopic variables to the basic physics of pore-scale two-phase capillary displacements and to void space distribution properties. The developed constitutive models, with and without hysteresis, are tested against intermediate-scale flow cell experiments to assess their ability to represent the movement and capillary trapping of immiscible fluids under macroscopically homogeneous and heterogeneous conditions. The hysteretic two-phase flow model predicted the overall plume migration and distribution during and after injection reasonably well and represented the post-injection behavior of the plume more accurately than the nonhysteretic models. Based on the results in this study, neglecting hysteresis in the constitutive models of traditional two-phase flow theory can seriously overpredict or underpredict the injected fluid distribution during post-injection under both homogeneous and heterogeneous conditions, depending on the value selected for the residual saturation in the nonhysteretic models.
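For a flavor of what a hysteretic trapping rule looks like, here is the classic Land relation, a much simpler construction than the pore-physics-based models developed in the study (parameter values are illustrative): the CO2 saturation reached at flow reversal, S_gi, determines the residual trapped saturation S_gr after imbibition, whereas a nonhysteretic model would use one fixed residual saturation regardless of S_gi.

```python
def land_trapped(s_gi, s_gr_max=0.25, s_g_max=0.8):
    """Land trapping model: S_gr = S_gi / (1 + C*S_gi),
    with C = 1/S_gr_max - 1/S_g_max, so that the maximum drainage
    saturation s_g_max maps to the maximum residual s_gr_max."""
    C = 1.0 / s_gr_max - 1.0 / s_g_max
    return s_gi / (1.0 + C * s_gi)

for s_gi in (0.2, 0.5, 0.8):
    print(f"saturation at reversal S_gi = {s_gi:.2f} "
          f"-> trapped S_gr = {land_trapped(s_gi):.3f}")
```

The history dependence is explicit: shallow drainage (small S_gi) traps much less CO2 than deep drainage, which is exactly the behavior a single-valued residual saturation cannot reproduce.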
An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; Aumiller, D.L.
1999-07-01
The RELAP5-3D (version bt) computer code was assessed using the United States Nuclear Regulatory Commission's Standard Problem 1 (Edwards-O'Brien Blowdown Test). The RELAP5-3D standard installation problem based on the Edwards-O'Brien Blowdown Test was modified to model the appropriate initial conditions and to represent the proper locations of the instruments present in the experiment. The results obtained using the modified model differ significantly from the original calculation, indicating that the experimental conditions must be modeled accurately if an accurate assessment of the calculational model is to be obtained.
A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow
NASA Technical Reports Server (NTRS)
Loh, C. Y.; Hui, W. H.
1991-01-01
Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.
Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi
2018-01-01
To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli could be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
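The spreading mechanism can be illustrated with a schematic numpy-only toy (not the authors' pipeline; the lead field and dimensions are arbitrary): a minimum-norm inverse applied to sensor data carries a condition difference confined to a few sources into source locations where no true signal was defined.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sens, n_src = 64, 200
L = rng.standard_normal((n_sens, n_src))        # toy lead field (sensors x sources)
d_true = np.zeros(n_src); d_true[:10] = 1.0     # condition difference: sources 0-9 only

lam = 1.0                                       # Tikhonov regularization
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sens))   # minimum-norm inverse
d_est = W @ (L @ d_true)                        # noiseless estimated difference

in_roi = np.linalg.norm(d_est[:10])             # effect where the sources really are
out_roi = np.linalg.norm(d_est[100:])           # effect where no source was defined
print(f"estimated effect inside true ROI: {in_roi:.2f}, outside: {out_roi:.2f}")
```

Because W @ L is far from the identity when sources outnumber sensors, the out-of-ROI effect is nonzero even without noise, so a decoder applied there can "find" information that was never generated there.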
12 CFR 701.35 - Share, share draft, and share certificate accounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AFFECTING CREDIT UNIONS ORGANIZATION AND OPERATION OF FEDERAL CREDIT UNIONS § 701.35 Share, share draft, and share certificate accounts. (a) Federal credit unions may offer share, share draft, and share...) A Federal credit union shall accurately represent the terms and conditions of its share, share draft...
Pain assessment in animal models of osteoarthritis.
Piel, Margaret J; Kroin, Jeffrey S; van Wijnen, Andre J; Kc, Ranjan; Im, Hee-Jeong
2014-03-10
Assessment of pain in animal models of osteoarthritis is integral to interpretation of a model's utility in representing the clinical condition, and enabling accurate translational medicine. Here we describe behavioral pain assessments available for small and large experimental osteoarthritic pain animal models. Copyright © 2013 Elsevier B.V. All rights reserved.
Calculating Shocks In Flows At Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Palmer, Grant
1988-01-01
Boundary conditions prove critical. This conference paper describes an algorithm for the calculation of shocks in hypersonic flows of gases at chemical equilibrium. Although the algorithm represents an intermediate stage in the development of a reliable, accurate computer code for two-dimensional flow, the research leading up to it contributes to an understanding of what is needed to complete the task.
Assessment of applications of transport models on regional scale solute transport
NASA Astrophysics Data System (ADS)
Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.
2017-12-01
Regional scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aiming to prevent serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously developed upscaling approaches to describe the main solute transport processes accurately, including the capture of late-time tails under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both the heterogeneous and homogeneous-MRMT scenarios under steady-state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed in the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant in regions where flow fields are dramatically affected, highlighting the poor applicability of the MRMT approach to complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to representing the transport process under transient flow conditions and will be the future focus of our study.
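The tailing mechanism at issue can be sketched with the simplest member of the MRMT family, a single-rate mobile-immobile model (a numpy-only 1-D toy with assumed parameters, not the study's 3-D particle-tracking setup). First-order exchange with an immobile zone stores mass during the pulse passage and releases it slowly afterwards, producing the late-time tail that a purely advective model lacks:

```python
import numpy as np

nx, dx, dt, nt = 100, 1.0, 0.4, 800
v, beta, alpha = 1.0, 0.5, 0.02   # velocity, immobile/mobile capacity ratio, exchange rate
cm = np.zeros(nx); cm[:5] = 1.0   # mobile pulse at the inlet
cim = np.zeros(nx)                # immobile-zone concentration
btc = np.empty(nt)                # breakthrough curve at the outlet cell
for k in range(nt):
    grad = (cm - np.roll(cm, 1)) / dx   # upwind gradient
    grad[0] = cm[0] / dx                # zero-concentration inflow boundary
    ex = alpha * (cm - cim)             # first-order mobile-immobile exchange
    cm = cm + dt * (-v * grad - ex)
    cim = cim + dt * ex / beta
    btc[k] = cm[-1]
print(f"peak at step {btc.argmax()}, outlet concentration at end: {btc[-1]:.2e}")
```

The same calibrated alpha and beta only reproduce tails for the flow field they were fitted to, which is why the abstract finds the upscaled model degrading once boundary conditions become transient.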
A Wildfire-relevant climatology of the convective environment of the United States
Brian E. Potter; Matthew A. Anaya
2015-01-01
Convective instability can influence the behaviour of large wildfires. Because wildfires modify the temperature and moisture of air in their plumes, instability calculations using ambient conditions may not accurately represent convective potential for some fire plumes. This study used the North American Regional Reanalysis to develop a climatology of the convective...
Solar Integration National Dataset Toolkit | Grid Modernization | NREL
system with them. As system topology, operation practices, and electric power markets evolve, so does the need for data sets (for solar, wind, and load, among others) that accurately represent system conditions, including solar power injection into the power system at each location. Related Publications: NREL Develops Sub-Hour Solar Power
The Neural Correlates of Health Risk Perception in Individuals with Low and High Numeracy
ERIC Educational Resources Information Center
Vogel, Stephan E.; Keller, Carmen; Koschutnig, Karl; Reishofer, Gernot; Ebner, Franz; Dohle, Simone; Siegrist, Michael; Grabner, Roland H.
2016-01-01
The ability to use numerical information in different contexts is a major goal of mathematics education. In health risk communication, outcomes of a medical condition are frequently expressed in probabilities. Difficulties to accurately represent probability information can result in unfavourable medical decisions. To support individuals with…
Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2016-01-01
Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and over a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases; endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Delta(x)+ = 45, Delta(y)+ = 2, and Delta(z)+ = 17 in the streamwise, wall-normal, and spanwise directions, respectively. Various subgrid-scale (SGS) models have been used, and except for the Smagorinsky model all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to represent them correctly.
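For orientation, wall-unit spacings translate into physical grid spacings via Delta = Delta+ * nu / u_tau; the viscosity and friction velocity below are assumed round numbers for air, not values from the report:

```python
nu = 1.5e-5      # kinematic viscosity of air, m^2/s (assumed)
u_tau = 0.5      # friction velocity, m/s (assumed)
# convert the quoted wall-unit spacings to physical spacings
spacing = {name: dplus * nu / u_tau
           for name, dplus in (("dx+", 45.0), ("dy+", 2.0), ("dz+", 17.0))}
for name, d in spacing.items():
    print(f"{name} -> {d * 1e3:.3f} mm")
```

The strong anisotropy (wall-normal cells roughly twenty times finer than streamwise ones) is what drives the grid counts, and hence the cost, of wall-resolved LES at turbine Reynolds numbers.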
Active machine learning-driven experimentation to determine compound effects on protein patterns.
Naik, Armaghan W; Kangas, Joshua D; Sullivan, Devin P; Murphy, Robert F
2016-02-03
High throughput screening determines the effects of many conditions on a given biological target. Currently, to estimate the effects of those conditions on other targets requires either strong modeling assumptions (e.g. similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance.
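The flavor of the approach can be conveyed with a generic uncertainty-sampling loop (a toy nearest-centroid learner on synthetic 2-D data; this is not the authors' algorithm, and nothing here corresponds to their 48-compound by 48-protein experiment matrix): the learner requests labels only for the pool points it is currently least sure about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# two well-separated Gaussian clusters standing in for two phenotypes
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)
labeled = {0, 1, n // 2, n // 2 + 1}          # two seed labels per class

def centroids():
    lab = list(labeled)
    return [X[[i for i in lab if y[i] == k]].mean(axis=0) for k in (0, 1)]

for _ in range(20):                           # actively query 20 more labels
    c0, c1 = centroids()
    # ambiguity = how close a point sits to the decision boundary
    d = np.abs(np.linalg.norm(X - c0, axis=1) - np.linalg.norm(X - c1, axis=1))
    d[list(labeled)] = np.inf                 # never re-query a labeled point
    labeled.add(int(d.argmin()))              # query the most ambiguous point

c0, c1 = centroids()
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"labeled {len(labeled)}/{n} points, accuracy {acc:.2f}")
```

The design choice mirrors the abstract's point: by spending its label budget where the current model is uncertain, the learner reaches high accuracy after measuring only a small fraction of the pool.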
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.
Franke, O. Lehn; Reilly, Thomas E.; Bennett, Gordon D.
1987-01-01
Accurate definition of boundary and initial conditions is an essential part of conceptualizing and modeling ground-water flow systems. This report describes the properties of the seven most common boundary conditions encountered in ground-water systems and discusses major aspects of their application. It also discusses the significance and specification of initial conditions and evaluates some common errors in applying this concept to ground-water-system models. An appendix is included that discusses what the solution of a differential equation represents and how the solution relates to the boundary conditions defining the specific problem. This report considers only boundary conditions that apply to saturated ground-water systems.
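Two of the most common boundary types discussed above, specified head (Dirichlet) and specified flux (Neumann), can be shown in a minimal 1-D finite-difference example (illustrative values, not taken from the report):

```python
import numpy as np

n, dx, K = 51, 10.0, 5.0          # nodes, spacing (m), hydraulic conductivity (m/d)
h_left, q_right = 100.0, 0.5      # fixed head (m); Darcy outflow flux (m/d) at right
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = 1.0; b[0] = h_left                  # Dirichlet: h(0) = 100 m
for i in range(1, n - 1):                     # interior: d2h/dx2 = 0 (steady, no recharge)
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[-1, -1], A[-1, -2] = 1.0, -1.0              # Neumann: dh/dx = -q/K at x = L
b[-1] = -q_right * dx / K
h = np.linalg.solve(A, b)
print(f"h(0) = {h[0]:.1f} m, h(L) = {h[-1]:.1f} m (linear profile, slope {-q_right / K:.2f})")
```

Swapping the right-hand boundary from specified flux to a second specified head changes the computed system completely even though the interior equation is untouched, which is the report's point about boundary conditions defining the specific problem a model actually solves.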
Internet Access and Youth of Yakutia Awareness on the Health-Promotion Factor
ERIC Educational Resources Information Center
Barakhsanova, Elizabeth Afanasyevna; Ignatyev, Vladimir Petrovich; Savvinov, Vasily Mikhaylovich; Olesova, Sargulana Gavrilievna
2016-01-01
Thematic justification is determined by the fact that, under conditions of the steady growth of mobile technology, young people do not have an accurate picture of the health-promotion implications of using the Internet at home, at school and in other leisure and recreation settings. In this regard, this paper is aimed at monitoring the general awareness of seniors…
Development of mapped stress-field boundary conditions based on a Hill-type muscle model.
Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A
2014-09-01
Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
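A generic Hill-type force law of the kind referenced above can be sketched as follows (the curve shapes and all parameter values are illustrative placeholders, not those of the paper): active force is the product of activation, a force-length curve, and a force-velocity curve, plus a passive-elastic term.

```python
import math

def hill_force(a, l, v, f_max=1000.0, l_opt=0.1, v_max=1.0):
    """Generic Hill-type muscle force.

    a: activation in [0, 1] (e.g. derived from EMG);
    l: fiber length (m); v: shortening velocity (m/s, positive = shortening).
    """
    fl = math.exp(-((l / l_opt - 1.0) / 0.45) ** 2)        # Gaussian force-length
    fv = (1.0 - v / v_max) / (1.0 + 4.0 * v / v_max) if v >= 0 else 1.0  # concentric
    fp = 0.01 * math.exp(10.0 * (l / l_opt - 1.0)) if l > l_opt else 0.0  # passive
    return f_max * (a * fl * fv + fp)

print(hill_force(1.0, 0.1, 0.0))   # -> 1000.0 (isometric maximum at optimal length)
```

In the paper's mapped-boundary setting, a force computed this way would be distributed as a stress field over the tomography-derived attachment area rather than applied at a single node, which is exactly the point-load simplification the study argues against.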
Sensing and Active Flow Control for Advanced BWB Propulsion-Airframe Integration Concepts
NASA Technical Reports Server (NTRS)
Fleming, John; Anderson, Jason; Ng, Wing; Harrison, Neal
2005-01-01
In order to realize the substantial performance benefits of serpentine boundary layer ingesting diffusers, this study investigated the use of enabling flow control methods to reduce engine-face flow distortion. Computational methods and novel flow control modeling techniques were utilized that allowed for rapid, accurate analysis of flow control geometries. Results were validated experimentally using the Techsburg Ejector-based wind tunnel facility; this facility is capable of simulating the high-altitude, high subsonic Mach number conditions representative of BWB cruise conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.
Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. A material stability analysis is used to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.
Numerical Modeling of Gas Turbine Combustor Utilizing One-Dimensional Acoustics
NASA Astrophysics Data System (ADS)
Caley, Thomas M.
This study focuses on the numerical modeling of a gas turbine combustor set-up with known regions of thermoacoustic instability. The proposed model takes the form of a hybrid thermoacoustic network, with lumped elements representing boundary conditions and the flame, and three-dimensional volumes representing the combustor geometry. The model is analyzed using the commercial 3-D finite element method (FEM) software COMSOL Multiphysics. A great deal of literature is available covering thermoacoustic modeling, but much of it utilizes more computationally expensive techniques such as large-eddy simulation, or relies on analytical modeling that is limited to specific test cases or proprietary software. The present study models the 3-D geometry of a high-pressure combustion chamber accurately, and uses the lumped elements of a thermoacoustic network to represent parts of the combustor system that can be experimentally tested under stable conditions, ensuring that the recorded acoustic responses can be attributed to that element alone. The numerical model has been tested against the experimental model with and without an experimentally determined impedance boundary condition. Eigenfrequency studies are used to compare the frequency and growth rates (and from that, the thermoacoustic stability) of resonant modes in the combustor. The flame in the combustor is modeled with a flame transfer function that was determined from experimental testing using frequency forcing. The effect of flow rate on the impedance boundary condition is also examined experimentally and numerically to qualify the practice of modeling an orifice plate as an acoustically closed boundary. Using the experimental flame transfer function and boundary conditions in the numerical model produced results that closely matched previous experimental tests in frequency, but not in stability characteristics.
The lightweight nature of the numerical model means additional lumped elements can be easily added when experimental data is available, creating a more accurate model without noticeably increasing the complexity or computational time.
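As an illustration of what a lumped flame element encodes, here is the standard n-tau model, a common textbook simplification rather than the experimentally measured transfer function used in the study (the gain, delay, and frequencies below are assumed): the flame responds to upstream fluctuations with gain n and time delay tau, so FTF(f) = n * exp(-2j*pi*f*tau), and it is the frequency-dependent phase that determines which acoustic modes are driven unstable.

```python
import numpy as np

n_gain, tau = 1.5, 2.0e-3             # interaction index (-), time delay (s): assumed
f = np.array([100.0, 250.0, 500.0])   # frequencies of interest, Hz
ftf = n_gain * np.exp(-2j * np.pi * f * tau)
for fi, h in zip(f, ftf):
    print(f"f = {fi:5.0f} Hz: |FTF| = {abs(h):.2f}, "
          f"phase = {np.angle(h, deg=True):7.1f} deg")
```

In an eigenfrequency study such as the one described above, this complex-valued element enters the acoustic equations as a heat-release source term, shifting both the resonant frequency and the growth rate of each mode.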
Gauge Conditions for Moving Black Holes Without Excision
NASA Technical Reports Server (NTRS)
van Meter, James; Baker, John G.; Koppitz, Michael; Dae-IL, Choi
2006-01-01
Recent demonstrations of unexcised, puncture black holes traversing freely across computational grids represent a significant advance in numerical relativity. Stable and accurate simulations of multiple orbits, and their radiated waves, result. This capability is critically undergirded by a careful choice of gauge. Here we present analytic considerations which suggest certain gauge choices, and numerically demonstrate their efficacy in evolving a single moving puncture.
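The gauge family at issue is, schematically, the 1+log slicing and Gamma-driver shift of the moving-puncture approach, written here in one common form (the paper's analysis concerns variants of these conditions, in particular the advection terms):

```latex
\partial_0 \alpha = -2\alpha K, \qquad
\partial_0 \beta^i = \tfrac{3}{4} B^i, \qquad
\partial_0 B^i = \partial_0 \tilde{\Gamma}^i - \eta B^i,
\qquad \text{with } \partial_0 \equiv \partial_t - \beta^j \partial_j,
```

where alpha is the lapse, beta^i the shift, K the trace of the extrinsic curvature, Gamma-tilde^i the conformal connection functions, and eta a damping parameter.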
Anismus: the cause of constipation? Results of investigation and treatment.
Duthie, G S; Bartolo, D C
1992-01-01
Anismus, or failure of the somatic sphincter apparatus to relax at defecation, has been implicated as a major contributor to the problem of obstructed defecation. Current diagnostic methods depend on laboratory measurements of attempted defecation, and the most complex of these, dynamic proctography, has been the mainstay of diagnosis. Using a new computerized ambulatory method of recording sphincter function in these patients at home, we report an 80% reduction in our diagnostic rate, suggesting that conventional tests fail to diagnose this condition accurately, probably because they poorly represent the natural physiology of defecation. Treatment of this distressing condition is more complex, and a variety of surgical and pharmacological measures have failed. Biofeedback retraining of anorectal function in these patients has been very successful and represents the management of choice.
A hydroelastic model of hydrocephalus
NASA Astrophysics Data System (ADS)
Smillie, Alan; Sobey, Ian; Molnar, Zoltan
2005-09-01
We combine elements of poroelasticity and of fluid mechanics to construct a mathematical model of the human brain and ventricular system. The model is used to study hydrocephalus, a pathological condition in which the normal flow of the cerebrospinal fluid is disturbed, causing the brain to become deformed. Our model extends recent work in this area by including flow through the aqueduct, by incorporating boundary conditions that we believe accurately represent the anatomy of the brain and by including time dependence. This enables us to construct a quantitative model of the onset, development and treatment of this condition. We formulate and solve the governing equations and boundary conditions for this model and give results that are relevant to clinical observations.
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potiron, A.; Gerometta, C.; Plun, J.M.
Simulation of casting processes is now industrially available, with different software packages offered to foundrymen. Yet it remains difficult to specify boundary conditions that represent the environment of the mould as accurately as possible. Knowledge of the heat transfer coefficients used to model the cooling devices in permanent moulds is very important, as is the acquisition of accurate data regarding die coatings and physical properties. After conducting a sample survey of French foundries, the experimental conditions were defined. Two main types of cooling device have been studied: water running in a pipe, and air flowing in a special shape to provide localized cooling. Some of the heat transfer coefficients were simply calculated using Colburn's law; others were determined using a 1D or 2D inverse method. Auto-validation results obtained on the experimental device simulated with SIMULOR, a 3D finite volume software package, are encouraging.
Quantification of soil mapping by digital analysis of LANDSAT data. [Clinton County, Indiana
NASA Technical Reports Server (NTRS)
Kirschner, F. R.; Kaminsky, S. A.; Hinzel, E. J.; Sinclair, H. R.; Weismiller, R. A.
1977-01-01
Soil survey mapping units are designed such that the dominant soil represents the major proportion of the unit. At times, soil mapping delineations do not adequately represent conditions as stated in the mapping unit descriptions. Digital analysis of LANDSAT multispectral scanner (MSS) data provides a means of accurately describing and quantifying soil mapping unit composition. Digital analysis of LANDSAT MSS data collected on 9 June 1973 was used to prepare a spectral soil map for a 430-hectare area in Clinton County, Indiana. Fifteen spectral classes were defined, representing 12 soil and 3 vegetation classes. The 12 soil classes were grouped into 4 moisture regimes based upon their spectral responses; the 3 vegetation classes were grouped into one all-inclusive class.
Statistical Compression for Climate Model Output
NASA Astrophysics Data System (ADS)
Hammerling, D.; Guinness, J.; Soh, Y. J.
2017-12-01
Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
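The decompression step described above, a conditional expectation for the best estimate plus a conditional simulation for realistic texture, can be illustrated with a toy Gaussian model. The field, its exponential covariance, and the choice of every 8th value as the stored "summary statistics" are assumptions for the sketch, far simpler than the nonstationary model the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated "climate field" of n values (stand-in for a time series).
n, ell = 200, 10.0
idx = np.arange(n)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / ell)   # exponential covariance
field = np.linalg.cholesky(cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# "Compression": keep every 8th value as the stored summary statistics.
obs = idx[::8]
mis = np.setdiff1d(idx, obs)
stored = field[obs]

# Decompression: conditional mean and covariance of the missing values
# given the stored ones, under the multivariate normal model.
Koo = cov[np.ix_(obs, obs)]
Kmo = cov[np.ix_(mis, obs)]
Kmm = cov[np.ix_(mis, mis)]
w = np.linalg.solve(Koo, stored)
cond_mean = Kmo @ w                                    # smooth best estimate
cond_cov = Kmm - Kmo @ np.linalg.solve(Koo, Kmo.T)     # residual uncertainty

# Conditional simulation: add correlated noise so the reconstruction is
# neither too smooth nor too rough relative to the original.
L = np.linalg.cholesky(cond_cov + 1e-8 * np.eye(len(mis)))
cond_sim = cond_mean + L @ rng.standard_normal(len(mis))

rmse_mean = np.sqrt(np.mean((cond_mean - field[mis]) ** 2))
```

The conditional mean minimizes reconstruction error but smooths out small-scale variability; `cond_sim` restores that variability at the cost of slightly higher pointwise error, which is exactly the trade-off the abstract notes.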
NASA Astrophysics Data System (ADS)
Lazzi Gazzini, S.; Schädler, R.; Kalfas, A. I.; Abhari, R. S.
2017-02-01
It is technically challenging to measure heat fluxes on the rotating components of gas turbines, yet accurate knowledge of local heat loads under engine-representative conditions is crucial for ensuring the reliability of the designs. In this work, quantitative image processing tools were developed to perform fast and accurate infrared thermography measurements on 3D-shaped film-heaters directly deposited on the turbine endwalls. The newly developed image processing method and instrumentation were used to measure the heat load on the rotor endwalls of an axial turbine. A step-transient heat flux calibration technique is applied to measure the heat flux generated locally by the film heater, thus eliminating the need for a rigorously iso-energetic boundary condition. On-board electronics installed on the rotor record the temperature readings of RTDs installed in the substrate below the heaters in order to evaluate the conductive losses in the solid. Full maps of heat transfer coefficient and adiabatic wall temperature are produced for two different operating conditions, demonstrating the sensitivity of the technique to local flow features and variations in heat transfer due to Reynolds number effect.
The applicability and effectiveness of cluster analysis
NASA Technical Reports Server (NTRS)
Ingram, D. S.; Actkinson, A. L.
1973-01-01
An insight into the characteristics which determine the performance of a clustering algorithm is presented. In order for the techniques which are examined to accurately cluster data, two conditions must be simultaneously satisfied. First, the data must have a particular structure, and second, the parameters chosen for the clustering algorithm must be correct. By examining the structure of the data from the C1 flight line, it is clear that no single set of parameters can be used to accurately cluster all the different crops. The effectiveness of either a noniterative or iterative clustering algorithm to accurately cluster data representative of the C1 flight line is questionable. Thus extensive a priori knowledge is required in order to use cluster analysis in its present form for applications like assisting in the definition of field boundaries and evaluating the homogeneity of a field. New or modified techniques are necessary for clustering to be a reliable tool.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories.
Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses
Myers, Risa B.; Herskovic, Jorge R.
2011-01-01
Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing's sensitivity and specificity both by conducting a "Simulated Expert Review", in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a "Bayesian Chain", using Bayes' Theorem to calculate the probability of a patient having a condition after each visit. The second method is a "one-shot" approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes' Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework.
Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
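The "Bayesian Chain" idea, treating each billing event as an imperfect diagnostic test and chaining Bayes' theorem over a patient's visits, can be sketched as follows. The prior, sensitivity, and specificity values here are illustrative assumptions, not figures from the study.

```python
def posterior_after_visits(prior, billed_flags, sensitivity=0.85, specificity=0.95):
    """Chain Bayes' theorem over a patient's visits.

    billed_flags: True if the patient was billed for the condition at that visit.
    The sensitivity/specificity values are placeholders for estimates that would
    come from an expert review of a record sample.
    """
    p = prior
    for billed in billed_flags:
        if billed:
            num = sensitivity * p
            den = sensitivity * p + (1 - specificity) * (1 - p)
        else:
            num = (1 - sensitivity) * p
            den = (1 - sensitivity) * p + specificity * (1 - p)
        p = num / den      # posterior becomes the prior for the next visit
    return p

# Expected patient count = sum of per-patient posteriors, versus the raw count
# of patients ever billed for the condition.
patients = [[True, True], [False, False], [True, False]]
prior = 0.10
posteriors = [posterior_after_visits(prior, visits) for visits in patients]
expected_count = sum(posteriors)
raw_count = sum(any(v) for v in patients)
```

Summing posteriors rather than counting billed patients is what lets the probabilistic approach correct for both missed and spurious billing, the source of the error-rate gap reported in the abstract.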
Earliest evidence for arthrogryposis multiplex congenita or Larsen syndrome?
Anderson, T
1997-08-08
A sixteenth-century illustrated pamphlet from Great Britain suggests that documentary evidence may permit accurate diagnosis of pathological conditions in earlier societies. The document is of particular importance, since the presented congenital abnormalities, including cleft lip, spina bifida cystica, genu recurvatum, and talipes deformity are reported rarely in archaeological skeletal material. It is suggested that the combination of abnormalities may represent the earliest case of arthrogryposis multiplex congenita or Larsen syndrome.
Tribology experiment. [journal bearings and liquid lubricants
NASA Technical Reports Server (NTRS)
Wall, W. A.
1981-01-01
A two-dimensional concept for Spacelab rack 7 was developed to study the interaction of liquid lubricants and surfaces under static and dynamic conditions in a low-gravity environment. Fluid wetting and spreading experiments, journal bearing experiments, and means to accurately measure and record the low-gravity environment during experimentation are planned. The wetting and spreading processes of selected commercial lubricants on representative surfaces are to be observed in a near-zero gravity environment.
The Voronoi Implicit Interface Method for computing multiphase physics
Saye, Robert I.; Sethian, James A.
2011-01-01
We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions. This corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Representing vision and blindness.
Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D
2016-01-01
There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. 
This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be implemented in many domain-specific ontologies.
Hydrologic analysis of the Rio Grande Basin north of Embudo, New Mexico; Colorado and New Mexico
Hearne, G.A.; Dewey, J.D.
1988-01-01
Water yield was estimated for each of the five regions that represent contrasting hydrologic regimes in the 10,400 square miles of the Rio Grande basin above Embudo, New Mexico. Water yield was estimated as 2,800 cubic feet per second for the San Juan Mountains, and 28 cubic feet per second for the Taos Plateau. Evapotranspiration exceeded precipitation by 150 cubic feet per second on the Costilla Plains and 2,400 cubic feet per second on the Alamosa Basin. A three-dimensional model was constructed to represent the aquifer system in the Alamosa Basin. A preliminary analysis concluded that: (1) a seven-layer model representing 3,200 feet of saturated thickness could accurately simulate the behavior of the flow equation; and (2) the 1950 condition was approximately stable and would be a satisfactory initial condition. Reasonable modifications to groundwater withdrawals simulated 1950-79 water-level declines close to measured values. Sensitivity tests indicated that evapotranspiration salvage was the major source, 69 to 82 percent, of groundwater withdrawals. Evapotranspiration salvage was projected to be the source of most withdrawals. (USGS)
Verification of a ground-based method for simulating high-altitude, supersonic flight conditions
NASA Astrophysics Data System (ADS)
Zhou, Xuewen; Xu, Jian; Lv, Shuiyan
Ground-based methods for accurately representing high-altitude, high-speed flight conditions have been an important research topic in the aerospace field. Based on an analysis of the requirements for high-altitude supersonic flight tests, a ground-based test bed was designed combining a Laval nozzle, of the kind often found in wind tunnels, with a rocket sled system. Sled tests were used to verify the performance of the test bed. The test results indicated that the test bed produced a uniform-flow field with a static pressure and density equivalent to atmospheric conditions at an altitude of 13-15 km and at a flow velocity of approximately Mach 2.4. This test method has the advantages of accuracy, fewer experimental limitations, and reusability.
Coltharp, Carla; Kessler, Rene P.; Xiao, Jie
2012-01-01
Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. 
This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
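The clustering step described above, grouping localizations that plausibly originate from the same blinking molecule using a spatial threshold and a temporal threshold, can be sketched with a simple greedy pass. The thresholds and the toy localization list are illustrative assumptions; the study's algorithm and its tThresh/dThresh values may differ.

```python
def cluster_localizations(locs, d_thresh=0.05, t_thresh=3):
    """Greedy grouping of (x, y, frame) localizations into molecules.

    A localization joins an existing cluster when it lies within d_thresh
    (same units as x, y) of the cluster centroid and within t_thresh frames
    of the cluster's most recent localization; otherwise it starts a new one.
    """
    clusters = []   # each: dict with 'xy' centroid, 'last_frame', 'n'
    for x, y, t in sorted(locs, key=lambda l: l[2]):
        for c in clusters:
            cx, cy = c["xy"]
            close = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= d_thresh
            recent = (t - c["last_frame"]) <= t_thresh
            if close and recent:
                n = c["n"]
                # update the running centroid and bookkeeping
                c["xy"] = ((cx * n + x) / (n + 1), (cy * n + y) / (n + 1))
                c["last_frame"], c["n"] = t, n + 1
                break
        else:
            clusters.append({"xy": (x, y), "last_frame": t, "n": 1})
    return clusters

# One emitter blinking three times near (0.1, 0.1), plus a distant molecule.
locs = [(0.10, 0.10, 1), (0.11, 0.10, 2), (0.10, 0.11, 4),
        (0.90, 0.50, 2)]
molecules = cluster_localizations(locs)
```

Counting `molecules` rather than raw localizations is what corrects the photoblinking overcounting: four localizations collapse to two molecules, each with a position and a localization count.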
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Warren, J.; Guha, A.
2017-12-01
While carbon and energy fluxes in current Earth system models generally have reasonable instantaneous responses to extreme temperature and precipitation events, they often do not adequately represent the long-term impacts of these events. For example, simulated net primary productivity (NPP) may decrease during an extreme heat wave or drought, but may recover rapidly to pre-event levels following the conclusion of the extreme event. However, field measurements indicate that long-lasting damage to leaves and other plant components often occur, potentially affecting the carbon and energy balance for months after the extreme event. The duration and frequency of such extreme conditions is likely to shift in the future, and therefore it is critical for Earth system models to better represent these processes for more accurate predictions of future vegetation productivity and land-atmosphere feedbacks. Here we modify the structure of the Accelerated Climate Model for Energy (ACME) land surface model to represent long-term impacts and test the improved model against observations from experiments that applied extreme conditions in growth chambers. Additionally, we test the model against eddy covariance measurements that followed extreme conditions at selected locations in North America, and against satellite-measured vegetation indices following regional extreme events.
NASA Astrophysics Data System (ADS)
Soltanzadeh, Iman; Bonnardot, Valérie; Sturman, Andrew; Quénol, Hervé; Zawar-Reza, Peyman
2017-08-01
Global warming has implications for thermal stress for grapevines during ripening, so that wine producers need to adapt their viticultural practices to ensure optimum physiological response to environmental conditions in order to maintain wine quality. The aim of this paper is to assess the ability of the Weather Research and Forecasting (WRF) model to accurately represent atmospheric processes at high resolution (500 m) during two events during the grapevine ripening period in the Stellenbosch Wine of Origin district of South Africa. Two case studies were selected to identify areas of potentially high daytime heat stress when grapevine photosynthesis and grape composition were expected to be affected. The results of high-resolution atmospheric model simulations were compared to observations obtained from an automatic weather station (AWS) network in the vineyard region. Statistical analysis was performed to assess the ability of the WRF model to reproduce spatial and temporal variations of meteorological parameters at 500-m resolution. The model represented the spatial and temporal variation of meteorological variables very well, with an average model air temperature bias of 0.1 °C, while that for relative humidity was -5.0 % and that for wind speed 0.6 m s-1. Model performance varied between AWS sites and with time of day, as WRF was not always able to accurately represent the effects of nocturnal cooling within the complex terrain. Variations in performance between the two case studies resulted from effects of atmospheric boundary layer processes in complex terrain under the influence of the different synoptic conditions prevailing during the two periods.
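The statistical comparison against AWS records reduces, at each station, to verification statistics such as bias, RMSE, and mean absolute error. A minimal sketch follows; the temperature values are made-up illustrations, not data from the study.

```python
import numpy as np

def verification_stats(model, obs):
    """Bias, RMSE, and mean absolute error of model values against observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    return {"bias": err.mean(),
            "rmse": np.sqrt((err ** 2).mean()),
            "mae": np.abs(err).mean()}

# Illustrative hourly 2-m temperatures (degC) at one hypothetical AWS site.
obs_t   = [21.4, 23.0, 25.1, 26.8, 27.5, 26.2]
model_t = [21.0, 23.4, 25.0, 27.3, 27.9, 26.1]
stats = verification_stats(model_t, obs_t)
```

Computing these per station and per hour of day is what reveals the pattern noted above: a small network-average bias can coexist with systematic nighttime errors at individual sites in complex terrain.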
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Krishnamurthy, Thiagarajan; Ross, Richard W.
2016-01-01
The development of multidisciplinary Integrated Vehicle Health Management (IVHM) tools will enable accurate detection, diagnosis and prognosis of damage under normal and adverse conditions during flight. The adverse conditions include loss of control caused by environmental factors, actuator and sensor faults or failures, and structural damage conditions. A major concern is the growth of undetected damage/cracks due to fatigue and low velocity foreign object impact that can reach a critical size during flight, resulting in loss of control of the aircraft. To avoid unstable catastrophic propagation of damage during a flight, load levels must be maintained that are below the load-carrying capacity for damaged aircraft structures. Hence, a capability is needed for accurate real-time predictions of safe load carrying capacity for aircraft structures with complex damage configurations. In the present work, a procedure is developed that uses guided wave responses to interrogate damage. As the guided wave interacts with damage, the signal attenuates in some directions and reflects in others. This results in a difference in signal magnitude as well as phase shifts between signal responses for damaged and undamaged structures. Accurate estimation of damage size and location is made by evaluating the cumulative signal responses at various pre-selected sensor locations using a genetic algorithm (GA) based optimization procedure. The damage size and location is obtained by minimizing the difference between the reference responses and the responses obtained by wave propagation finite element analysis of different representative cracks, geometries and sizes.
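The inverse step above, estimating damage parameters by minimizing the misfit between reference sensor responses and model-predicted responses with a genetic algorithm, can be sketched as follows. The forward model here is a toy distance-attenuation function standing in for the finite element wave propagation analysis, and all sensor positions, parameters, and GA settings are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Four hypothetical sensor positions on a unit plate.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def response(params):
    # Toy forward model: signal amplitude at each sensor decays with distance
    # from the damage and is further attenuated by the damage zone's size.
    x, y, size = params
    d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return np.exp(-d) * (1.0 - size / (size + d))

true_params = np.array([0.6, 0.4, 0.2])     # hidden damage: location and size
reference = response(true_params)           # plays the role of measured responses

def fitness(p):
    # Negative squared misfit between candidate and reference responses.
    return -np.sum((response(p) - reference) ** 2)

# Minimal GA: elitism, tournament selection, blend crossover, Gaussian mutation.
lo = np.array([0.0, 0.0, 0.01])
hi = np.array([1.0, 1.0, 0.5])
pop = rng.uniform(lo, hi, size=(60, 3))
for _ in range(80):
    scores = np.array([fitness(p) for p in pop])
    new_pop = [pop[scores.argmax()]]                 # elitism: keep the best
    while len(new_pop) < len(pop):
        pa = max(pop[rng.choice(len(pop), 2)], key=fitness)   # tournament of 2
        pb = max(pop[rng.choice(len(pop), 2)], key=fitness)
        w = rng.uniform(size=3)
        child = w * pa + (1.0 - w) * pb              # blend crossover
        child += rng.normal(0.0, 0.02, size=3)       # Gaussian mutation
        new_pop.append(np.clip(child, lo, hi))
    pop = np.array(new_pop)

best = pop[np.array([fitness(p) for p in pop]).argmax()]
```

The GA needs only repeated evaluations of the forward model, which is why the surrogate and reduced-order modeling mentioned in the neighboring abstracts matter: each fitness call would otherwise be a full finite element analysis.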
DREAM-3D and the importance of model inputs and boundary conditions
NASA Astrophysics Data System (ADS)
Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue
2015-04-01
Recent work on radiation belt 3D diffusion codes such as the Los Alamos "DREAM-3D" code have demonstrated the ability of such codes to reproduce realistic magnetospheric storm events in the relativistic electron dynamics - as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low energy boundary conditions, background plasma densities, and last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent key physical processes that govern the dynamics of the radiation belts (radial, pitch angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use here DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
NASA Technical Reports Server (NTRS)
Gokoglu, S. A.; Rosner, D. E.
1984-01-01
The code STAN5 was modified to properly include thermophoretic mass transport, and selected test cases of developing boundary layers were examined, including variable properties, viscous dissipation, transition to turbulence, and transpiration cooling. Under conditions representative of current and projected GT operation, local application of St(M)/St(M),o correlations evidently provides accurate and economical engineering design predictions, especially for suspended particles characterized by Schmidt numbers outside of the heavy vapor range.
Toward Robust and Efficient Climate Downscaling for Wind Energy
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Rife, D.; Pinto, J. O.; Monaghan, A. J.; Davis, C. A.
2011-12-01
This presentation describes a more accurate and economical (less time, money and effort) wind resource assessment technique for the renewable energy industry, which incorporates innovative statistical techniques and new global mesoscale reanalyses. The technique judiciously selects a collection of "case days" that accurately represent the full range of wind conditions observed at a given site over a 10-year period, in order to estimate the long-term energy yield. We will demonstrate that this new technique provides a very accurate and statistically reliable estimate of the 10-year record of the wind resource by intelligently choosing a sample of approximately 120 case days. This means that the expense of downscaling to quantify the wind resource at a prospective wind farm can be cut by two thirds from the current industry practice of downscaling a randomly chosen 365-day sample to represent winds over a "typical" year. This new estimate of the long-term energy yield at a prospective wind farm also has far less statistical uncertainty than the current industry standard approach. This key finding has the potential to significantly reduce market barriers to both onshore and offshore wind farm development, since insurers and financiers charge prohibitive premiums on investments that are deemed to be high risk. Lower uncertainty directly translates to lower perceived risk, and therefore far more attractive financing terms could be offered to wind farm developers who employ this new technique.
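One simple way to realize the "judicious case-day selection" idea is stratified sampling across the site's wind climatology, so a small sample spans the full range of observed conditions. This sketch uses a synthetic Weibull wind record and plain equal-probability strata; the actual selection method in the work above is not specified here and these choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten years of synthetic daily mean wind speed (m/s) at a hypothetical site.
days = 3650
wind = rng.weibull(2.0, size=days) * 8.0

# Stratified case-day selection: sort the record, split it into 120
# equal-probability strata, and draw one representative day from each.
n_cases = 120
order = np.argsort(wind)
strata = np.array_split(order, n_cases)
case_days = np.array([rng.choice(s) for s in strata])

stratified_estimate = wind[case_days].mean()

# Baseline: a randomly chosen 365-day sample, as in current industry practice.
random_estimate = wind[rng.choice(days, 365, replace=False)].mean()
true_mean = wind.mean()
```

Because every part of the wind distribution is guaranteed representation, the 120-day stratified sample tracks the 10-year mean with far less sampling variability than a random draw of comparable or even larger size, which is the statistical basis for the cost reduction claimed above.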
Acute Perforated Diverticulitis: Assessment With Multidetector Computed Tomography.
Sessa, Barbara; Galluzzo, Michele; Ianniello, Stefania; Pinto, Antonio; Trinci, Margherita; Miele, Vittorio
2016-02-01
Colonic diverticulitis is a common condition in the Western population. Complicated diverticulitis is defined as the presence of extraluminal air or abscess, peritonitis, colonic occlusion, or fistulas. Multidetector row computed tomography (MDCT) is the modality of choice for the diagnosis and staging of diverticulitis and its complications, enabling an accurate differential diagnosis and guiding patients to correct management. MDCT is accurate in diagnosing the site of perforation in approximately 85% of cases, through the detection of direct signs (focal bowel wall discontinuity, extraluminal gas, and extraluminal enteric contrast) and indirect signs, which are represented by segmental bowel wall thickening, abnormal bowel wall enhancement, perivisceral fat stranding or fluid, and abscess. MDCT is also accurate in differentiating complicated colonic diverticulitis from colon cancer, which often have similar imaging appearances. The computed tomography-guided classification is recommended to discriminate patients with mild diverticulitis, generally treated with antibiotics, from those with severe diverticulitis with a large abscess, which may be drained with a percutaneous approach. Copyright © 2016 Elsevier Inc. All rights reserved.
Optimal plant nitrogen use improves model representation of vegetation response to elevated CO2
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Kern, Melanie; Engel, Jan; Zaehle, Sönke
2017-04-01
Existing global vegetation models often cannot accurately represent observed ecosystem behaviour under transient conditions such as elevated atmospheric CO2, a problem that can be attributed to an inflexibility in model representation of plant responses. Plant optimality concepts have been proposed as a solution to this problem as they offer a way to represent plastic plant responses in complex models. Here we present a novel, next generation vegetation model which includes optimal nitrogen allocation to and within the canopy as well as optimal biomass allocation between above- and belowground components in response to nutrient and water availability. The underlying hypothesis is that plants adjust their use of nitrogen in response to environmental conditions and nutrient availability in order to maximise biomass growth. We show that for two FACE (Free Air CO2 Enrichment) experiments, the Duke forest and Oak Ridge forest sites, the model can better predict vegetation responses over the duration of the experiment when optimal processes are included. Specifically, under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration as well as increased biomass allocation to fine roots, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry predict a quick onset of N limitation.
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among them environmental monitoring. One difficult task is acquiring accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle, and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves 1-2 times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from strong dependence on initial values, making it unable to converge quickly or to a stable state. On the contrary, the APCP method can deal with quite complex UAS monitoring conditions because it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method.
In addition, we analyze the influence on convergence and the dependence on initial values through mathematical formulas. Finally, the paper conducts experiments using real aerial data and shows that the new model can, to a certain degree, effectively overcome the bottlenecks of the classical method, providing a new idea and solution for faster and more efficient environmental monitoring.
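The angular parametrization described above can be sketched as a mapping from a Euclidean point to (elevation, azimuth, parallax) relative to a main and an associated camera centre. This is an illustrative sketch of the idea, not the authors' exact formulation:

```python
import math

def parallax_angles(point, main_cam, assoc_cam):
    """Represent a 3D point by (elevation, azimuth, parallax) angles:
    elevation and azimuth of the ray from the main camera, plus the
    parallax angle subtended between the rays from the main and the
    associated camera. A sketch of the parametrization concept."""
    dx, dy, dz = (point[i] - main_cam[i] for i in range(3))
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    # Parallax angle: angle between the two observation rays.
    v1 = (dx, dy, dz)
    v2 = tuple(point[i] - assoc_cam[i] for i in range(3))
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    parallax = math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
    return elevation, azimuth, parallax
```

Points at infinity or observed along nearly parallel rays simply get a parallax angle near zero, which is what keeps the optimization well conditioned where an XYZ parametrization degenerates.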
Optimal Loading for Maximizing Power During Sled-Resisted Sprinting.
Cross, Matt R; Brughelli, Matt; Samozino, Pierre; Brown, Scott R; Morin, Jean-Benoit
2017-09-01
To ascertain whether force-velocity-power relationships could be compiled from a battery of sled-resisted overground sprints and to clarify and compare the optimal loading conditions for maximizing power production for different athlete cohorts. Recreational mixed-sport athletes (n = 12) and sprinters (n = 15) performed multiple trials of maximal sprints unloaded and towing a selection of sled masses (20-120% body mass [BM]). Velocity data were collected by sports radar, and kinetics at peak velocity were quantified using friction coefficients and aerodynamic drag. Individual force-velocity and power-velocity relationships were generated using linear and quadratic relationships, respectively. Mechanical and optimal loading variables were subsequently calculated and test-retest reliability assessed. Individual force-velocity and power-velocity relationships were accurately fitted with regression models (R² > .977, P < .001) and were reliable (ES = 0.05-0.50, ICC = .73-.97, CV = 1.0-5.4%). The normal loading that maximized peak power was 78% ± 6% and 82% ± 8% of BM, representing a resistance of 3.37 and 3.62 N/kg at 4.19 ± 0.19 and 4.90 ± 0.18 m/s (recreational athletes and sprinters, respectively). Optimal force and normal load did not clearly differentiate between cohorts, although sprinters developed greater maximal power (17.2-26.5%, ES = 0.97-2.13, P < .02) at much greater velocities (16.9%, ES = 3.73, P < .001). Mechanical relationships can be accurately profiled using common sled-training equipment. Notably, the optimal loading conditions determined in this study (69-96% of BM, dependent on friction conditions) represent much greater resistance than current guidelines (~7-20% of BM). This method has potential value in quantifying individualized training parameters for optimized development of horizontal power.
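The profiling described above rests on a linear force-velocity fit whose parabolic power curve peaks at half the velocity intercept. The function below is an illustrative simplification (a single ordinary least-squares fit, without the friction and drag corrections used in the study):

```python
def fv_profile(velocities, forces):
    """Fit a linear force-velocity relationship F = F0 + slope*v by
    least squares, then derive the apex of the parabolic
    power-velocity curve P = F*v. Units follow the inputs
    (e.g. N/kg and m/s)."""
    n = len(velocities)
    mv = sum(velocities) / n
    mf = sum(forces) / n
    cov = sum((v - mv) * (f - mf) for v, f in zip(velocities, forces))
    var = sum((v - mv) ** 2 for v in velocities)
    slope = cov / var            # negative for a valid sprint profile
    F0 = mf - slope * mv         # force intercept at zero velocity
    v0 = -F0 / slope             # velocity intercept at zero force
    v_opt = v0 / 2.0             # velocity maximising power
    P_max = F0 * v0 / 4.0        # peak of the power-velocity parabola
    return F0, v0, v_opt, P_max
```

The optimal sled load is then whatever resistance slows the athlete to `v_opt`, which is how a percent-of-body-mass recommendation is obtained.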
NASA Astrophysics Data System (ADS)
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference method on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and is second order accurate in the solution and the gradient itself. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, that represents the numerical solution on that grid point, and a ghost value, that represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative and therefore suitable stencils must be chosen to discretize interface conditions in order to achieve second order accuracy in the solution and its gradient. A proper treatment of the interface conditions allows the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: the order of accuracy, monotonicity of the errors, and a good convergence factor are maintained by the scheme.
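A minimal 1D analogue can illustrate the ghost-value idea: stencils that cross the interface use a ghost value obtained by applying the prescribed solution jump to the neighbour from the other side. This is a sketch under simplifying assumptions (constant coefficient, zero flux jump, plain Jacobi iteration rather than multigrid), not the paper's 2D/3D method:

```python
def solve_with_ghosts(N=40, jump=1.0, iters=20000):
    """Solve u'' = 0 on [0,1] with u(0)=0, u(1)=2 and a prescribed
    jump [u] = jump at an interface between nodes N//2 and N//2 + 1
    (flux continuous). Stencils crossing the interface replace the
    neighbour with a ghost value shifted by the jump. The exact
    solution is u = x on the left and u = x + jump on the right."""
    u = [0.0] * (N + 1)
    u[N] = 2.0
    k = N // 2  # interface lies between nodes k and k+1
    for _ in range(iters):
        new = u[:]
        for i in range(1, N):
            left, right = u[i - 1], u[i + 1]
            if i == k:        # right neighbour is across the interface
                right = u[i + 1] - jump
            elif i == k + 1:  # left neighbour is across the interface
                left = u[i - 1] + jump
            new[i] = 0.5 * (left + right)  # Jacobi update of u'' = 0
        u = new
    return u
```

Because the jump is enforced in the stencil rather than smeared across it, the discrete solution is exact up to iteration error, which is the mechanism that preserves second-order accuracy at the interface.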
NASA Astrophysics Data System (ADS)
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
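A feature-selection and health-indicator step of the kind described could, for example, rank candidate degradation features by a monotonicity score and fuse the best ones into a single indicator. The scoring heuristic and fusion rule below are illustrative assumptions, not the authors' exact scheme:

```python
def monotonicity(feature):
    """Monotonicity score in [0, 1]: the imbalance between positive
    and negative one-step changes. A purely trending feature scores
    1.0; an oscillating one scores near 0."""
    diffs = [b - a for a, b in zip(feature, feature[1:])]
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    return abs(pos - neg) / max(len(diffs), 1)

def health_indicator(features, top_k=2):
    """Fuse the top_k most monotonic features into one indicator by
    averaging their min-max normalised trajectories."""
    ranked = sorted(features, key=monotonicity, reverse=True)[:top_k]
    normed = []
    for f in ranked:
        lo, hi = min(f), max(f)
        normed.append([(x - lo) / (hi - lo + 1e-12) for x in f])
    return [sum(col) / len(normed) for col in zip(*normed)]
```

A smooth, monotonic indicator of this kind is what a delay-time or similar degradation model is then fitted to for remaining-life estimation.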
HIPPI: highly accurate protein family classification with ensembles of HMMs.
Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy
2016-11-11
Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .
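The ensemble decision rule can be sketched as follows: a query is assigned to the family whose ensemble contains the best-scoring profile HMM. The `score` callback and the data layout here are hypothetical placeholders (HIPPI itself scores with HMMER bit scores), shown only to illustrate the max-over-ensemble rule:

```python
def ensemble_classify(query, family_ensembles, score):
    """Assign `query` to the family whose HMM ensemble attains the
    highest score. `family_ensembles` maps family name -> list of
    HMMs; `score(hmm, query)` returns a numeric score (a stand-in
    for an HMMER bit score)."""
    best_family, best_score = None, float("-inf")
    for family, hmms in family_ensembles.items():
        s = max(score(h, query) for h in hmms)  # best HMM in ensemble
        if s > best_score:
            best_family, best_score = family, s
    return best_family, best_score
```

Because each HMM in an ensemble models one subset of the family alignment, a divergent or fragmentary query only needs to match one subset well, which is the intuition behind the reported robustness.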
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.
Improved Classification of Mammograms Following Idealized Training
Hornsby, Adam N.; Love, Bradley C.
2014-01-01
People often make decisions by stochastically retrieving a small set of relevant memories. This limited retrieval implies that human performance can be improved by training on idealized category distributions (Giguère & Love, 2013). Here, we evaluate whether the benefits of idealized training extend to categorization of real-world stimuli, namely classifying mammograms as normal or tumorous. Participants in the idealized condition were trained exclusively on items that, according to a norming study, were relatively unambiguous. Participants in the actual condition were trained on a representative range of items. Despite being exclusively trained on easy items, idealized-condition participants were more accurate than those in the actual condition when tested on a range of item types. However, idealized participants experienced difficulties when test items were very dissimilar from training cases. The benefits of idealization, attributable to reducing noise arising from cognitive limitations in memory retrieval, suggest ways to improve real-world decision making. PMID:24955325
Improved Classification of Mammograms Following Idealized Training.
Hornsby, Adam N; Love, Bradley C
2014-06-01
People often make decisions by stochastically retrieving a small set of relevant memories. This limited retrieval implies that human performance can be improved by training on idealized category distributions (Giguère & Love, 2013). Here, we evaluate whether the benefits of idealized training extend to categorization of real-world stimuli, namely classifying mammograms as normal or tumorous. Participants in the idealized condition were trained exclusively on items that, according to a norming study, were relatively unambiguous. Participants in the actual condition were trained on a representative range of items. Despite being exclusively trained on easy items, idealized-condition participants were more accurate than those in the actual condition when tested on a range of item types. However, idealized participants experienced difficulties when test items were very dissimilar from training cases. The benefits of idealization, attributable to reducing noise arising from cognitive limitations in memory retrieval, suggest ways to improve real-world decision making.
Atmospheric densities derived from CHAMP/STAR accelerometer observations
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Tamagnan, D.; Biancale, R.
2004-03-01
The satellite CHAMP carries the accelerometer STAR in its payload, and thanks to the GPS and SLR tracking systems accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%.
The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models model the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, while the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
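The evaluation statistic used above, the "observed to computed" density ratio, can be sketched as a simple mean-and-scatter computation; this is an illustrative reduction of the analysis, not the paper's full binning by local time, latitude, and solar activity:

```python
def ratio_stats(observed, computed):
    """Mean and rms scatter of the 'observed/computed' density
    ratios: a mean above 1 indicates the model underestimates
    density; the rms measures the remaining scatter."""
    ratios = [o / c for o, c in zip(observed, computed)]
    n = len(ratios)
    mean = sum(ratios) / n
    rms = (sum((r - mean) ** 2 for r in ratios) / n) ** 0.5
    return mean, rms
```

Binning the same ratios by local time or latitude, instead of pooling them globally, is what exposes the tidal and latitudinal modelling errors discussed above.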
Exploiting Continuous Scanning Laser Doppler Vibrometry in timing belt dynamic characterisation
NASA Astrophysics Data System (ADS)
Chiariotti, P.; Martarelli, M.; Castellini, P.
2017-03-01
Dynamic behaviour of timing belts has always interested the engineering community over the years. Nowadays, there are several numerical methods to predict the dynamics of these systems. However, the tuning of such models by experimental approaches still represents an issue: an accurate characterisation does require a measurement in operating conditions since the belt mounting condition might severely affect its dynamic behaviour. Moreover, since the belt is constantly moving during running conditions, non-contact measurement methods are needed. Laser Doppler Vibrometry (LDV) and imaging techniques do represent valid candidates for this purpose. This paper aims at describing the use of Continuous Scanning LDV (CSLDV) as a tool for the dynamic characterisation of timing belts in IC (Internal Combustion) engines (cylinder head). The high-spatial resolution data that can be collected in short testing time makes CSLDV highly suitable for such application. The measurement on a moving surface, however, represents a challenge for CSLDV. The paper discusses how the belt in-plane speed influences CSLDV signal and how an order-based multi-harmonic excitation might affect the recovery of Operational Deflection Shapes in a CSLDV test. A comparison with a standard Discrete Scanning LDV measurement is also given in order to show that a CSLDV test, if well designed, can indeed provide the same amount of information in a drastically reduced amount of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuniga, Cristal; Li, Chien -Ting; Huelsman, Tyler
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
Zuniga, Cristal; Li, Chien -Ting; Huelsman, Tyler; ...
2016-07-02
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten
2016-09-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.
Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten
2016-01-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244
Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits
NASA Technical Reports Server (NTRS)
Kopasakis, George
2015-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectral to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
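The "products of first-order transfer functions" approximation can be sketched with a recursive zero/pole placement over the frequency band of interest; the spacing below is a generic Oustaloup-style choice and not necessarily the paper's exact formulation:

```python
import math

def fractional_sections(alpha, w_lo, w_hi, n):
    """Corner frequencies for approximating s**alpha over [w_lo, w_hi]
    with a cascade of n first-order zero/pole sections, geometrically
    spaced so the cascade's log-magnitude follows the fractional-order
    slope of alpha decades per decade."""
    r = w_hi / w_lo
    zeros = [w_lo * r ** ((k + 0.5 - alpha / 2) / n) for k in range(n)]
    poles = [w_lo * r ** ((k + 0.5 + alpha / 2) / n) for k in range(n)]
    return zeros, poles

def magnitude(w, zeros, poles):
    """|H(jw)| for H(s) = prod (s + wz)/(s + wp)."""
    m = 1.0
    for wz, wp in zip(zeros, poles):
        m *= math.hypot(w, wz) / math.hypot(w, wp)
    return m
```

In the mid-band the cascade's log-magnitude rises with slope close to alpha decades per decade, mimicking |jw|**alpha, which is what enables ordinary state-space time-domain simulation of the fractional-order spectrum.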
Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits
NASA Technical Reports Server (NTRS)
Kopasakis, George
2010-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectral to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in frequency domain to approximate the fractional order with the products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we use a simulation sensitivity study to examine the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are postulated, highlighting the need for improved metrology and awareness.
Validation of WRF forecasts for the Chajnantor region
NASA Astrophysics Data System (ADS)
Pozo, Diana; Marín, J. C.; Illanes, L.; Curé, M.; Rabanus, D.
2016-06-01
This study assesses the performance of the Weather Research and Forecasting (WRF) model in representing the near-surface weather conditions and the precipitable water vapour (PWV) on the Chajnantor plateau, in the north of Chile, from 2007 April to December. The WRF model shows very good performance in forecasting the near-surface temperature and zonal wind component, although it overestimates the 2 m water vapour mixing ratio and underestimates the 10 m meridional wind component. The model represents very well the seasonal, intraseasonal and diurnal variation of PWV. However, the PWV errors increase after 12 h of simulation. Errors in the simulations are larger than 1.5 mm only during 10 per cent of the study period; they do not exceed 0.5 mm during 65 per cent of the time and are below 0.25 mm more than 45 per cent of the time, which emphasizes the good performance of the model in forecasting the PWV over the region. The misrepresentation of the near-surface humidity in the region by the WRF model may have a negative impact on the PWV forecasts. Thus, having accurate forecasts of humidity near the surface may result in more accurate PWV forecasts. Overall, results from this as well as recent studies support the use of the WRF model to provide accurate weather forecasts for the region, particularly for the PWV, which can be of great benefit to astronomers in the planning of their scientific operations and observing time.
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
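The frame-differencing stage mentioned above is simple to sketch. This is an illustrative toy version only (the paper's system also evaluated maximum likelihood estimation and mean/median tests); the threshold value and the synthetic frames are assumptions.

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Flag pixels whose grayscale intensity changed by more than
    `thresh` between two consecutive frames (simple frame differencing)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Synthetic frames: a bright 5x5 "object" moves right over a static background.
rng = np.random.default_rng(0)
background = rng.integers(0, 30, size=(64, 64)).astype(np.uint8)
frame1, frame2 = background.copy(), background.copy()
frame1[10:15, 10:15] = 200   # object's old position
frame2[10:15, 20:25] = 200   # object's new position

mask = motion_mask(frame1, frame2)
# mask is True exactly at the old and new object positions (50 pixels).
```

As the abstract notes, a mask like this only localizes change; deciding whether the change is a person crossing requires the later tracking and identification stages.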
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximation is made.
On the Daubechies-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it is proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate of order 2M, even though the approximation subspace can represent exactly only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.
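The finite-difference analogy can be made concrete with a toy differentiation matrix. The sketch below builds an ordinary second-order central-difference matrix with periodic boundary conditions (not the Daubechies wavelet construction itself) and checks its order of accuracy numerically; the function names and grid sizes are illustrative.

```python
import numpy as np

def periodic_diff_matrix(n, length=2 * np.pi):
    """Second-order central-difference differentiation matrix on n points
    with periodic boundary conditions over [0, length)."""
    h = length / n
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 1.0 / (2 * h)   # forward neighbour (wraps around)
        D[i, (i - 1) % n] = -1.0 / (2 * h)  # backward neighbour (wraps around)
    return D

def max_error(n):
    """Worst-case error when differentiating sin(x) on an n-point grid."""
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.max(np.abs(periodic_diff_matrix(n) @ np.sin(x) - np.cos(x)))

# Doubling the resolution cuts the error by ~4x, confirming second order.
```

A wavelet differentiation matrix with M vanishing moments plays the same role but, per the abstract, achieves order 2M under periodic boundary conditions.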
Sediment acoustic index method for computing continuous suspended-sediment concentrations
Landers, Mark N.; Straub, Timothy D.; Wood, Molly S.; Domanski, Marian M.
2016-07-11
Once developed, sediment acoustic index ratings must be validated with additional suspended-sediment samples, beyond the period of record used in the rating development, to verify that the regression model continues to adequately represent sediment conditions within the stream. Changes in ADVM configuration or installation, or replacement with another ADVM, may require development of a new rating. The best practices described in this report can be used to develop continuous estimates of suspended-sediment concentration and load using sediment acoustic surrogates to enable more informed and accurate responses to diverse sedimentation issues.
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparison between the latter and cubic spline approximations is made.
Torous, John; Friedman, Rohn; Keshavan, Matcheri
2014-01-01
Background Patient retrospective recollection is a mainstay of assessing symptoms in mental health and psychiatry. However, evidence suggests that these retrospective recollections may not be as accurate as data collection through the experience sampling method (ESM), which captures patient data in “real time” and “real life.” However, the difficulties in practical implementation of ESM data collection have limited its impact in psychiatry and mental health. Smartphones with the capability to run mobile applications may offer a novel method of collecting ESM data that may represent a practical and feasible tool for mental health and psychiatry. Objective This paper aims to provide data on psychiatric patients’ prevalence of smartphone ownership, patterns of use, and interest in utilizing mobile applications to monitor their mental health conditions. Methods One hundred psychiatric outpatients at a large urban teaching hospital completed a paper-and-pencil survey regarding smartphone ownership, use, and interest in utilizing mobile applications to monitor their mental health condition. Results Ninety-seven percent of patients reported owning a phone and 72% reported that their phone was a smartphone. Patients in all age groups indicated greater than 50% interest in using a mobile application on a daily basis to monitor their mental health condition. Conclusions Smartphone and mobile applications represent a practical opportunity to explore new modalities of monitoring, treatment, and research of psychiatric and mental health conditions. PMID:25098314
A model of cause—effect relations in the study of behavior
Chisholm, Drake C.; Cook, Donald A.
1995-01-01
A three-phase model useful in teaching the analysis of behavior is presented. The model employs a “black box” behavior inventory diagram (BID), with a single output arrow representing behavior and three input arrows representing stimulus field, reversible states, and conditioning history. The first BID describes the organism at Time 1, and the second describes it at Time 2. Separating the two inventory diagrams is a column for the description of the intervening procedure. The model is used as a one-page handout, and students fill in the corresponding empty areas on the sheet as they solve five types of application problems. Instructors can use the BID to shape successive approximations in the accurate use of behavior-analytic vocabulary, conceptual analysis, and applications of behavior-change strategies. PMID:22478209
12 CFR 210.12 - Return of cash items and handling of returned checks.
Code of Federal Regulations, 2012 CFR
2012-01-01
... FEDWIRE (REGULATION J) Collection of Checks and Other Items By Federal Reserve Banks § 210.12 Return of...— (i) The electronic image portion of the item accurately represents all of the information on the... electronic image portion of the item accurately represents all of the information on the front and back of...
12 CFR 210.12 - Return of cash items and handling of returned checks.
Code of Federal Regulations, 2014 CFR
2014-01-01
... FEDWIRE (REGULATION J) Collection of Checks and Other Items By Federal Reserve Banks § 210.12 Return of...— (i) The electronic image portion of the item accurately represents all of the information on the... electronic image portion of the item accurately represents all of the information on the front and back of...
12 CFR 210.12 - Return of cash items and handling of returned checks.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FEDWIRE (REGULATION J) Collection of Checks and Other Items By Federal Reserve Banks § 210.12 Return of...— (i) The electronic image portion of the item accurately represents all of the information on the... electronic image portion of the item accurately represents all of the information on the front and back of...
12 CFR 210.12 - Return of cash items and handling of returned checks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FEDWIRE (REGULATION J) Collection of Checks and Other Items By Federal Reserve Banks § 210.12 Return of...— (i) The electronic image portion of the item accurately represents all of the information on the... electronic image portion of the item accurately represents all of the information on the front and back of...
12 CFR 210.12 - Return of cash items and handling of returned checks.
Code of Federal Regulations, 2013 CFR
2013-01-01
... FEDWIRE (REGULATION J) Collection of Checks and Other Items By Federal Reserve Banks § 210.12 Return of...— (i) The electronic image portion of the item accurately represents all of the information on the... electronic image portion of the item accurately represents all of the information on the front and back of...
A numerical strategy for modelling rotating stall in core compressors
NASA Astrophysics Data System (ADS)
Vahdati, M.
2007-03-01
The paper will focus on one specific core-compressor instability, rotating stall, because of the pressing industrial need to improve current design methods. The determination of the blade response during rotating stall is a difficult problem for which there is no reliable procedure. During rotating stall, the blades encounter the stall cells and the excitation depends on the number, size, exact shape and rotational speed of these cells. The long-term aim is to minimize the forced response due to rotating stall excitation by avoiding potential matches between the vibration modes and the rotating stall pattern characteristics. Accurate numerical simulations of core-compressor rotating stall phenomena require the modelling of a large number of bladerows using grids containing several tens of millions of points. The time-accurate unsteady-flow computations may need to be run for several engine revolutions for rotating stall to get initiated and many more before it is fully developed. The difficulty in rotating stall initiation arises from a lack of representation of the triggering disturbances which are inherently present in aeroengines. Since the numerical model represents a symmetric assembly, the only random mechanism for rotating stall initiation is provided by numerical round-off errors. In this work, rotating stall is initiated by introducing a small amount of geometric mistuning to the rotor blades. Another major obstacle in modelling flows near stall is the specification of appropriate upstream and downstream boundary conditions. Obtaining reliable boundary conditions for such flows can be very difficult. In the present study, the low-pressure compression (LPC) domain is placed upstream of the core compressor. With such an approach, only far field atmospheric boundary conditions are specified which are obtained from aircraft speed and altitude. 
A choked variable-area nozzle, placed after the last compressor bladerow in the model, is used to impose boundary conditions downstream. Such an approach is representative of modelling an engine. Using a 3D viscous time-accurate flow representation, the front bladerows of a core compressor were modelled in a whole-annulus fashion whereas the rest of the bladerows were modelled in a single-passage fashion. The rotating stall behaviour at two different compressor operating points was studied by considering two different variable-vane scheduling conditions for which experimental data were available. Using a model with nine whole-assembly bladerows, the unsteady-flow calculations were conducted on 32 CPUs of a parallel cluster, typical run times being around 3-4 weeks for a grid with about 60 million points. The simulations were conducted over several engine rotations. As observed on the actual development engine, there was no rotating stall for the first scheduling condition, while mal-scheduling of the stator vanes created a 12-band rotating stall which excited the 1st flap mode.
Critical heat flux in subcooled flow boiling
NASA Astrophysics Data System (ADS)
Hall, David Douglas
The critical heat flux (CHF) phenomenon was investigated for water flow in tubes with particular emphasis on the development of methods for predicting CHF in the subcooled flow boiling regime. The Purdue University Boiling and Two-Phase Flow Laboratory (PU-BTPFL) CHF database for water flow in a uniformly heated tube was compiled from the world literature dating back to 1949 and represents the largest CHF database ever assembled with 32,544 data points from over 100 sources. The superiority of this database was proven via a detailed examination of previous databases. The PU-BTPFL CHF database is an invaluable tool for the development of CHF correlations and mechanistic models that are superior to existing ones developed with smaller, less comprehensive CHF databases. In response to the many inaccurate and inordinately complex correlations, two nondimensional, subcooled CHF correlations were formulated, containing only five adjustable constants and whose unique functional forms were determined without using a statistical analysis but rather using the parametric trends observed in less than 10% of the subcooled CHF data. The correlation based on inlet conditions (diameter, heated length, mass velocity, pressure, inlet quality) was by far the most accurate of all known subcooled CHF correlations, having mean absolute and root-mean-square (RMS) errors of 10.3% and 14.3%, respectively. The outlet (local) conditions correlation was the most accurate correlation based on local CHF conditions (diameter, mass velocity, pressure, outlet quality) and may be used with a nonuniform axial heat flux. Both correlations proved more accurate than a recent CHF look-up table commonly employed in nuclear reactor thermal hydraulic computer codes. An interfacial lift-off, subcooled CHF model was developed from a consideration of the instability of the vapor-liquid interface and the fraction of heat required for liquid-vapor conversion as opposed to that for bulk liquid heating. 
Severe vapor effusion in an upstream wetting front lifts the vapor-liquid interface off the surface, triggering CHF. Since the model is entirely based on physical observations, it has the potential to accurately predict CHF for other fluids and flow geometries which are beyond the conditions for which it was validated.
Development of plant condition measurement - The Jimah Model
NASA Astrophysics Data System (ADS)
Evans, Roy F.; Syuhaimi, Mohd; Mazli, Mohammad; Kamarudin, Nurliyana; Maniza Othman, Faiz
2012-05-01
The Jimah Model is an information management model. The model has been designed to facilitate analysis of machine condition by integrating diagnostic data with quantitative and qualitative information. The model treats data as a single strand of information - metaphorically a 'genome' of data. The 'genome' is structured to be representative of plant function and identifies the condition of selected components (or genes) in each machine. To date in industry, computer-aided work processes used with traditional industrial practices have been unable to consistently deliver a standard of information suitable for holistic evaluation of machine condition and change. Significantly, the reengineered site strategies necessary for implementation of this "data genome concept" have resulted in enhanced knowledge and management of plant condition. In large plant with high initial equipment cost and subsequent high maintenance costs, accurate measurement of major component condition becomes central to whole-of-life management and replacement decisions. A case study following implementation of the model at a major power station site in Malaysia (Jimah) shows that modeling of plant condition and wear (in real time) can be made a practical reality.
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor equation has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points - the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the form (1 - x_i/x_f)^(e_i), where x_i is the initial value, usually at ambient conditions, x_f the final value, and e_i the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are either evaluated from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions.
Seven factors were required to represent the ejected foam. The exponents were evaluated by least squares method from experimental data. The equation is used and it can represent multiple factors in other problems as well; for example, evaluation of fatigue life, creep life, fracture toughness, and structural fracture, as well as optimization functions. The software is rather simplistic. Required inputs are initial value, final value, and an exponent for each factor. The number of factors is open-ended. The value is updated as each factor is evaluated. If a factor goes to zero, the previous value is used in the evaluation.
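The product form described above is easy to sketch in code. A minimal illustration, assuming the conventional minus sign inside each factor and purely hypothetical factor values (the actual exponents in the study were fit by least squares to test data):

```python
import numpy as np

def mfim(factors):
    """Multi-Factor Interaction Model: product of (1 - x_i/x_f)**e_i terms.
    `factors` is a sequence of (x_i, x_f, e_i) tuples: current value,
    final (limit) value, and fitted exponent for each factor."""
    return float(np.prod([(1.0 - xi / xf) ** ei for xi, xf, ei in factors]))

# Hypothetical two-factor example (values are illustrative only):
ratio = mfim([
    (300.0, 1500.0, 0.5),   # e.g. a temperature factor
    (0.02, 0.10, 1.0),      # e.g. a moisture factor
])
# ratio = (1 - 0.2)**0.5 * (1 - 0.2)**1.0, approximately 0.716
```

Because the factors multiply, the number of factors is open-ended, matching the abstract's note that the software simply updates the running product as each factor is evaluated.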
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
Communication: Adaptive boundaries in multiscale simulations
NASA Astrophysics Data System (ADS)
Wagoner, Jason A.; Pande, Vijay S.
2018-04-01
Combined-resolution simulations are an effective way to study molecular properties across a range of length and time scales. These simulations can benefit from adaptive boundaries that allow the high-resolution region to adapt (change size and/or shape) as the simulation progresses. The number of degrees of freedom required to accurately represent even a simple molecular process can vary by several orders of magnitude throughout the course of a simulation, and adaptive boundaries react to these changes to include an appropriate but not excessive amount of detail. Here, we derive the Hamiltonian and distribution function for such a molecular simulation. We also design an algorithm that can efficiently sample the boundary as a new coordinate of the system. We apply this framework to a mixed explicit/continuum simulation of a peptide in solvent. We use this example to discuss the conditions necessary for a successful implementation of adaptive boundaries that is both efficient and accurate in reproducing molecular properties.
Simultaneous detection of iodine and iodide on boron doped diamond electrodes.
Fierro, Stéphane; Comninellis, Christos; Einaga, Yasuaki
2013-01-15
Individual and simultaneous electrochemical detection of iodide and iodine has been performed via cyclic voltammetry on boron-doped diamond (BDD) electrodes in a 1 M NaClO4 (pH 8) solution, representative of typical environmental water conditions. It is feasible to compute accurate calibration curves for both compounds from cyclic voltammetry measurements by determining the peak current intensities as a function of concentration. A lower detection limit of about 20 μM was obtained for iodide and 10 μM for iodine. Based on the comparison between the peak current intensities recorded during the oxidation of KI, it is probable that iodide (I-) is first oxidized in a single step to yield iodine (I2), which is further oxidized to iodate (IO3-). This technique, however, did not allow for a reasonably accurate detection of iodate (IO3-) on a BDD electrode.
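The calibration-curve step reduces to a least-squares line through (concentration, peak current) pairs, inverted to estimate an unknown concentration. A sketch with made-up numbers (the data below are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration points: peak current (uA) at known
# iodide concentrations (uM), assumed to follow a linear response.
conc = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
peak = np.array([0.41, 1.02, 2.05, 4.10, 8.20])

slope, intercept = np.polyfit(conc, peak, 1)  # least-squares calibration line

def concentration_from_peak(i_peak):
    """Invert the calibration line: estimate concentration from a peak current."""
    return (i_peak - intercept) / slope

# A measured peak of ~2.05 uA maps back to roughly 100 uM.
```

In practice the detection limits quoted in the abstract (about 20 μM for iodide, 10 μM for iodine) bound the lowest concentrations for which such an inversion is meaningful.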
NASA Astrophysics Data System (ADS)
Jackson, S. J.; Krevor, S. C.; Agada, S.
2017-12-01
A number of studies have demonstrated the prevalent impact that small-scale rock heterogeneity can have on larger scale flow in multiphase flow systems including petroleum production and CO2 sequestration. Larger scale modeling has shown that this has a significant impact on fluid flow and is possibly a significant source of inaccuracy in reservoir simulation. Yet no core analysis protocol has been developed that faithfully represents the impact of these heterogeneities on flow functions used in modeling. Relative permeability is derived from core floods performed at conditions with high flow potential in which the impact of capillary heterogeneity is voided. A more accurate representation would be obtained if measurements were made at flow conditions where the impact of capillary heterogeneity on flow is scaled to be representative of the reservoir system. This, however, is generally impractical due to laboratory constraints and the role of the orientation of the rock heterogeneity. We demonstrate a workflow of combined observations and simulations, in which the impact of capillary heterogeneity may be faithfully represented in the derivation of upscaled flow properties. Laboratory measurements that are a variation of conventional protocols are used for the parameterization of an accurate digital rock model for simulation. The relative permeability at the range of capillary numbers relevant to flow in the reservoir is derived primarily from numerical simulations of core floods that include capillary pressure heterogeneity. This allows flexibility in the orientation of the heterogeneity and in the range of flow rates considered.
We demonstrate the approach in which digital rock models have been developed alongside core flood observations for three applications: (1) A Bentheimer sandstone with a simple axial heterogeneity to demonstrate the validity and limitations of the approach, (2) a set of reservoir rocks from the Captain sandstone in the UK North Sea targeted for CO2 storage, and for which the use of capillary pressure hysteresis is necessary, and (3) a secondary CO2-EOR production of residual oil from a Berea sandstone with layered heterogeneities. In all cases the incorporation of heterogeneity is shown to be key to the ultimate derivation of flow properties representative of the reservoir system.
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Brennan, John
2015-06-01
Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of difficulty or inaccuracy when simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long-time runs to seek out what frequently results in an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spanner, Michael; Batista, Victor S.; Brumer, Paul
2005-02-22
The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior were easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].
Radionuclide Retention in Concrete Wasteforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovaird, Chase C.; Jansik, Danielle P.; Wellman, Dawn M.
2011-09-30
Assessing long-term performance of Category 3 waste cement grouts for radionuclide encasement requires knowledge of the radionuclide-cement interactions and mechanisms of retention (i.e., sorption or precipitation); the mechanism of contaminant release; the significance of contaminant release pathways; how wasteform performance is affected by the full range of environmental conditions within the disposal facility; the process of wasteform aging under conditions that are representative of processes occurring in response to changing environmental conditions within the disposal facility; the effect of wasteform aging on chemical, physical, and radiological properties; and the associated impact on contaminant release. This knowledge will enable accurate prediction of radionuclide fate when the wasteforms come in contact with groundwater. The information presented in this report provides data that (1) measure the effect of concrete wasteform properties likely to influence radionuclide migration and (2) quantify the rate of carbonation of concrete materials in a simulated vadose zone repository.
The cardiac tumors - some exceptional heart conditions.
Cristian, Ana Maria; Moraru, Oriana Elena; Goleanu, Viorel Constantin; Butuşină, Marian; Pinte, Florina; Cotoi, Bogdan Virgil; Cristian, Gabriel
2018-01-01
Cardiac tumors are exceptional cardiac conditions with a minimal statistical occurrence; cardiac myxomas are the most common representative examples of these clinical situations. Because these tumors are benign, patients enjoy a reasonable life expectancy provided they receive an early diagnosis. In the absence of potential complications, the symptoms can vary widely and are often non-specific, which makes it more difficult to establish a proper diagnosis and to quickly tailor the optimal therapeutic solution. Surgery is, in most cases, a reliable solution, allowing the condition to be permanently cured. Nowadays, cardiac surgery provides all the facilities needed to diagnose cases at an early stage, when diagnosis is quick and accurate. This paper illustrates, by means of two suggestive cases, how difficult it is to establish a quick positive diagnosis, which is vital for healing a condition whose evolutionary risk is frequently worsened by major complications.
A Microfluidic Interface for the Culture and Sampling of Adiponectin from Primary Adipocytes
Godwin, Leah A.; Brooks, Jessica C.; Hoepfner, Lauren D.; Wanders, Desiree; Judd, Robert L.; Easley, Christopher J.
2014-01-01
Secreted from adipose tissue, adiponectin is a vital endocrine hormone that acts in glucose metabolism, thereby establishing its crucial role in diabetes, obesity, and other metabolic disease states. Insulin exposure to primary adipocytes cultured in static conditions has been shown to stimulate adiponectin secretion. However, conventional, static methodology for culturing and stimulating adipocytes falls short of truly mimicking physiological environments. Along with decreases in experimental costs and sample volume, and increased temporal resolution, microfluidic platforms permit small-volume flowing cell culture systems, which more accurately represent the constant flow conditions through vasculature in vivo. Here, we have integrated a customized primary tissue culture reservoir into a passively operated microfluidic device made of polydimethylsiloxane (PDMS). Fabrication of the reservoir was accomplished through unique PDMS “landscaping” above sampling channels, with a design strategy targeted to primary adipocytes to overcome issues of positive cell buoyancy. This reservoir allowed three-dimensional culture of primary murine adipocytes, accurate control over stimulants via constant perfusion, and sampling of adipokine secretion during various treatments. As the first report of primary adipocyte culture and sampling within microfluidic systems, this work sets the stage for future studies in adipokine secretion dynamics. PMID:25423362
2-D and 3-D oscillating wing aerodynamics for a range of angles of attack including stall
NASA Technical Reports Server (NTRS)
Piziali, R. A.
1994-01-01
A comprehensive experimental investigation of the pressure distribution over a semispan wing undergoing pitching motions representative of a helicopter rotor blade was conducted. Testing the wing in the nonrotating condition isolates the three-dimensional (3-D) blade aerodynamic and dynamic stall characteristics from the complications of the rotor blade environment. The test has generated a very complete, detailed, and accurate body of data. These data include static and dynamic pressure distributions, surface flow visualizations, two-dimensional (2-D) airfoil data from the same model and installation, and important supporting blockage and wall pressure distributions. This body of data is sufficiently comprehensive and accurate that it can be used for the validation of rotor blade aerodynamic models over a broad range of the important parameters including 3-D dynamic stall. This data report presents all the cycle-averaged lift, drag, and pitching moment coefficient data versus angle of attack obtained from the instantaneous pressure data for the 3-D wing and the 2-D airfoil. Also presented are examples of the following: cycle-to-cycle variations occurring for incipient or lightly stalled conditions; 3-D surface flow visualizations; supporting blockage and wall pressure distributions; and underlying detailed pressure results.
Bongers, Coen C.W.G.; Hopman, Maria T.E.; Eijsvogels, Thijs M.H.
2015-01-01
Exercise results in an increase in core body temperature (Tc), which may reduce exercise performance and eventually can lead to the development of heat-related disorders. Therefore, accurate measurement of Tc during exercise is of great importance, especially in athletes who have to perform in challenging ambient conditions. In the current literature a number of methods have been described to measure the Tc (esophageal, external tympanic membrane, mouth or rectum). However, these methods are suboptimal to measure Tc during exercise since they are invasive, have a slow response or are influenced by environmental conditions. Studies described the use of an ingestible telemetric temperature pill as a reliable and valid method to assess gastrointestinal temperature (Tgi), which is a representative measurement of Tc. Therefore, the goal of this study was to provide a detailed description of the measurement of Tgi using an ingestible telemetric temperature pill. This study addresses important methodological factors that must be taken into account for an accurate measurement. It is recommended to read the instructions carefully in order to ensure that the ingestible telemetric temperature pill is a reliable method to assess Tgi at rest and during exercise. PMID:26485169
The influence of collective neutrino oscillations on a supernova r process
NASA Astrophysics Data System (ADS)
Duan, Huaiyu; Friedland, Alexander; McLaughlin, Gail C.; Surman, Rebecca
2011-03-01
Recently, it has been demonstrated that neutrinos in a supernova oscillate collectively. This process occurs much deeper than the conventional matter-induced Mikheyev-Smirnov-Wolfenstein effect and hence may have an impact on nucleosynthesis. In this paper we explore the effects of collective neutrino oscillations on the r-process, using representative late-time neutrino spectra and outflow models. We find that accurate modeling of the collective oscillations is essential for this analysis. As an illustration, the often-used 'single-angle' approximation makes grossly inaccurate predictions for the yields in our setup. With the proper multiangle treatment, the effect of the oscillations is found to be less dramatic, but still significant. Since the oscillation patterns are sensitive to the details of the emitted fluxes and the sign of the neutrino mass hierarchy, so are the r-process yields. The magnitude of the effect also depends sensitively on the astrophysical conditions—in particular on the interplay between the time when nuclei begin to exist in significant numbers and the time when the collective oscillation begins. A more definitive understanding of the astrophysical conditions, and accurate modeling of the collective oscillations for those conditions, is necessary.
Use of a Principal Components Analysis for the Generation of Daily Time Series.
NASA Astrophysics Data System (ADS)
Dreveton, Christine; Guillou, Yann
2004-07-01
A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent correctly the present climate. The results obtained with the generator show that it represents correctly the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent accurately the observed interannual climate variance—this discrepancy is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, the knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
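The generation scheme described above can be sketched in a few lines (a simplified illustration on synthetic data; the 30-year training matrix, the assumption of Gaussian component scores, and all variable names are hypothetical and not taken from the paper):

```python
import numpy as np

def fit_pca(X):
    """Fit principal components to a years-by-days temperature matrix X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the covariance yields orthogonal components
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]  # sort by explained variance, descending
    return mean, vals[order], vecs[:, order]

def generate(mean, vals, vecs, n_years, rng):
    """Generate synthetic years: each PC score is drawn independently."""
    scores = rng.standard_normal((n_years, len(vals))) * np.sqrt(np.clip(vals, 0, None))
    return mean + scores @ vecs.T

rng = np.random.default_rng(0)
# Hypothetical training data: 30 years of 365 daily temperature anomalies
X = rng.standard_normal((30, 365))
mean, vals, vecs = fit_pca(X)
synthetic = generate(mean, vals, vecs, n_years=100, rng=rng)
print(synthetic.shape)  # (100, 365)
```

Because the scores are generated independently per component, the reconstructed series inherits the observed covariance structure, which is what allows the interannual variance to be preserved.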
Measured values of coal mine stopping resistance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oswald, N.; Prosser, B.; Ruckman, R.
2008-12-15
As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path which must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine a resistance for a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. Through MVS data collected from ventilation surveys of different subsurface coal mines, average resistances were determined for stoppings in poor, average, good, and excellent conditions. Average stopping resistances were calculated for concrete-block and Kennedy stoppings. Using the average stopping resistance, measured and calculated using the MVS method, provides a ventilation modeling tool which can be used to construct more accurate and useful ventilation models. 3 refs., 3 figs.
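The square-law resistance relations behind both methods can be illustrated as follows (a generic ventilation-network sketch; the numerical values are hypothetical, and the parallel-combination step is standard square-law algebra rather than the specific MVS procedure):

```python
def stopping_resistance(pressure_drop_pa, leakage_m3s):
    """USBM-style single-stopping resistance from the square law p = R*Q^2."""
    return pressure_drop_pa / leakage_m3s**2

def parallel_equivalent(resistances):
    """Equivalent resistance of leakage paths in parallel under the square
    law: 1/sqrt(R_eq) = sum of 1/sqrt(R_i)."""
    return 1.0 / sum(r**-0.5 for r in resistances)**2

# Hypothetical survey values: 50 Pa across a stopping leaking 0.5 m^3/s
R = stopping_resistance(50.0, 0.5)        # 200 N*s^2/m^8
print(R)
print(parallel_equivalent([R, R, R, R]))  # four identical stoppings leak more
```

The parallel combination shows why many stoppings of even moderate quality matter: four identical stoppings present one-sixteenth the resistance of a single one, so leakage grows quickly with stopping count.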
Dynamic Biological Functioning Important for Simulating and Stabilizing Ocean Biogeochemistry
NASA Astrophysics Data System (ADS)
Buchanan, P. J.; Matear, R. J.; Chase, Z.; Phipps, S. J.; Bindoff, N. L.
2018-04-01
The biogeochemistry of the ocean exerts a strong influence on the climate by modulating atmospheric greenhouse gases. In turn, ocean biogeochemistry depends on numerous physical and biological processes that change over space and time. Accurately simulating these processes is fundamental for accurately simulating the ocean's role within the climate. However, our simulation of these processes is often simplistic, despite a growing understanding of underlying biological dynamics. Here we explore how new parameterizations of biological processes affect simulated biogeochemical properties in a global ocean model. We combine 6 different physical realizations with 6 different biogeochemical parameterizations (36 unique ocean states). The biogeochemical parameterizations, all previously published, aim to more accurately represent the response of ocean biology to changing physical conditions. We make three major findings. First, oxygen, carbon, alkalinity, and phosphate fields are more sensitive to changes in the ocean's physical state. Only nitrate is more sensitive to changes in biological processes, and we suggest that assessment protocols for ocean biogeochemical models formally include the marine nitrogen cycle to assess their performance. Second, we show that dynamic variations in the production, remineralization, and stoichiometry of organic matter in response to changing environmental conditions benefit the simulation of ocean biogeochemistry. Third, dynamic biological functioning reduces the sensitivity of biogeochemical properties to physical change. Carbon and nitrogen inventories were 50% and 20% less sensitive to physical changes, respectively, in simulations that incorporated dynamic biological functioning. These results highlight the importance of a dynamic biology for ocean properties and climate.
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
Electrofishing effort required to estimate biotic condition in southern Idaho Rivers
Maret, Terry R.; Ott, Douglas S.; Herlihy, Alan T.
2007-01-01
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions.
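The reported threshold translates into a simple field rule (a trivial sketch; the 36x multiplier comes from the study, while the 25 m example width is hypothetical):

```python
def minimum_reach_length(mean_channel_width_m, multiplier=36):
    """Reach length (m) judged sufficient for a stable IBI score, using
    the ~36 x MCW threshold reported in the study."""
    return multiplier * mean_channel_width_m

# Hypothetical river with a 25 m mean wetted channel width at base flow
print(minimum_reach_length(25.0))  # 900.0 m of river to electrofish
```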
How disgust facilitates avoidance: an ERP study on attention modulation by threats.
Liu, Yunzhe; Zhang, Dandan; Luo, Yuejia
2015-04-01
This study investigated the attention modulation of disgust in comparison with anger in a dot-probe task. Results indicated a two-stage processing of attention modulation by threats. When participants viewed the cues that were represented by Chinese faces (i.e. the in-group condition), it was found at the early processing stage that an angry face elicited a larger occipital P1 component whereas a disgusted face elicited a smaller P1 for validly than for invalidly cued targets. However, the result pattern was reversed at the later processing stage: the P3 amplitudes were larger for valid disgust cues but were smaller for valid angry cues, when both were compared with invalid cue conditions. In addition, when participants viewed the cues that were represented by foreign faces (i.e. the out-group condition), the attention modulation of disgust/anger diminished at the early stage, whereas enhanced P3 amplitudes were observed in response to validly cued targets in both disgusting and angry conditions at the later stage. The current result implied that although people can perceptually differentiate the emotional categories of out-group faces as accurately as in-group faces, they may still be not able to psychologically understand the subtle differences behind different categories of out-group facial expressions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Measurements of the response of transport aircraft ceiling panels to fuel pool fires
NASA Technical Reports Server (NTRS)
Bankston, C. P.; Back, L. H.
1985-01-01
Tests were performed to characterize the responses of various aircraft ceiling panel configurations to a simulated post-crash fire. Attention was given to one currently used and four new ceiling configurations exposed to a fuel pool fire in a circulated air enclosure. The tests were controlled to accurately represent conditions in a real fire. The panels were constructed of fiberglass-epoxy, graphite-phenolic resin, fiberglass-phenolic resin, Kevlar-epoxy, and Kevlar-phenolic resin materials. The phenolic resin-backed sheets performed the best under the circumstances, except when combined with Kevlar, which became porous when charred.
Principles of laser surgery. Advantages and disadvantages.
Ballow, E B
1992-07-01
An attempt has been made in this article to present an honest and accurate state-of-the-art narrative of laser surgery for pedal conditions. The theory of operation, physiologic effects, and procedural comparisons have been presented regarding those procedures and lasers that are available for use by podiatric surgeons and others treating the foot and leg. Although some information described within this article is anecdotal, it is elaborated with representative expert material from the scientific literature. Overall, a sound theoretic understanding of the mode of action of lasers and extensive training and experience are encouraged when engaging in this exciting discipline of surgery.
NASA Technical Reports Server (NTRS)
Hauser, Cavour H; Plohr, Henry W
1951-01-01
The nature of the flow at the exit of a row of turbine blades for the range of conditions represented by four different blade configurations was evaluated by the conservation-of-momentum principle using static-pressure surveys and by analysis of Schlieren photographs of the flow. It was found that for blades of the type investigated, the maximum exit tangential-velocity component is a function of the blade geometry only and can be accurately predicted by the method of characteristics. A maximum value of exit velocity coefficient is obtained at a pressure ratio immediately below that required for maximum blade loading followed by a sharp drop after maximum blade loading occurs.
The wire-mesh sensor as a two-phase flow meter
NASA Astrophysics Data System (ADS)
Shaban, H.; Tavoularis, S.
2015-01-01
A novel gas and liquid flow rate measurement method is proposed for use in vertical upward and downward gas-liquid pipe flows. This method is based on the analysis of the time history of area-averaged void fraction that is measured using a conductivity wire-mesh sensor (WMS). WMS measurements were collected in vertical upward and downward air-water flows in a pipe with an internal diameter of 32.5 mm at nearly atmospheric pressure. The relative frequencies and the power spectral density of area-averaged void fraction were calculated and used as representative properties. Independent features, extracted from these properties using Principal Component Analysis and Independent Component Analysis, were used as inputs to artificial neural networks, which were trained to give the gas and liquid flow rates as outputs. The present method was shown to be accurate for all four encountered flow regimes and for a wide range of flow conditions. Besides providing accurate predictions for steady flows, the method was also tested successfully in three flows with transient liquid flow rates. The method was augmented by the use of the cross-correlation function of area-averaged void fraction determined from the output of a dual WMS unit as an additional representative property, which was found to improve the accuracy of flow rate prediction.
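The feature-extraction pipeline can be sketched on synthetic data (an illustration only: the paper uses PCA/ICA features feeding artificial neural networks, whereas this sketch substitutes a linear least-squares map for the network, and the PSD matrix and flow-rate targets are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: void-fraction power spectra (runs x frequency
# bins) with known gas/liquid flow rates as regression targets.
psd = rng.random((200, 64))
flows = rng.random((200, 2))        # columns: gas, liquid flow rate

# Feature extraction: project each spectrum onto leading principal components
mean = psd.mean(axis=0)
U, s, Vt = np.linalg.svd(psd - mean, full_matrices=False)
features = (psd - mean) @ Vt[:5].T  # 5 leading PC scores per run

# The paper trains artificial neural networks on such features; a linear
# least-squares fit stands in for that step here.
A = np.column_stack([features, np.ones(len(features))])
coef, *_ = np.linalg.lstsq(A, flows, rcond=None)

predicted = A @ coef
print(predicted.shape)  # (200, 2)
```

The dimensionality reduction is the essential step: it compresses a long spectrum into a handful of independent features so the downstream regressor generalizes across flow regimes.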
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
Cerebellar ataxia: abnormal control of interaction torques across multiple joints.
Bastian, A J; Martin, T A; Keating, J G; Thach, W T
1996-07-01
1. We studied seven subjects with cerebellar lesions and seven control subjects as they made reaching movements in the sagittal plane to a target directly in front of them. Reaches were made under three different conditions: 1) "slow-accurate," 2) "fast-accurate," and 3) "fast as possible." All subjects were videotaped moving in a sagittal plane with markers on the index finger, wrist, elbow, and shoulder. Marker positions were digitized and then used to calculate joint angles. For each of the shoulder, elbow and wrist joints, inverse dynamics equations based on a three-segment limb model were used to estimate the net torque (sum of components) and each of the component torques. The component torques consisted of the torque due to gravity, the dynamic interaction torques induced passively by the movement of the adjacent joint, and the torque produced by the muscles and passive tissue elements (sometimes called "residual" torque). 2. A kinematic analysis of the movement trajectory and the change in joint angles showed that the reaches of subjects with cerebellar lesions were abnormal compared with reaches of control subjects. In both the slow-accurate and fast-accurate conditions the cerebellar subjects made abnormally curved wrist paths; the curvature was greater in the slow-accurate condition. During the slow-accurate condition, cerebellar subjects showed target undershoot and tended to move one joint at a time (decomposition). During the fast-accurate reaches, the cerebellar subjects showed target overshoot. Additionally, in the fast-accurate condition, cerebellar subjects moved the joints at abnormal rates relative to one another, but the movements were less decomposed. Only three subjects were tested in the fast as possible condition; this condition was analyzed only to determine maximal reaching speeds of subjects with cerebellar lesions. Cerebellar subjects moved more slowly than controls in all three conditions. 3. 
A kinetic analysis of torques generated at each joint during the slow-accurate reaches and the fast-accurate reaches revealed that subjects with cerebellar lesions produced very different torque profiles compared with control subjects. In the slow-accurate condition, the cerebellar subjects produced abnormal elbow muscle torques that prevented the normal elbow extension early in the reach. In the fast-accurate condition, the cerebellar subjects produced inappropriate levels of shoulder muscle torque and also produced elbow muscle torques that did not vary appropriately with the dynamic interaction torques that occurred at the elbow. Lack of appropriate muscle torque resulted in excessive contributions of the dynamic interaction torque during the fast-accurate reaches. 4. The inability to produce muscle torques that predict, accommodate, and compensate for the dynamic interaction torques appears to be an important cause of the classic kinematic deficits shown by cerebellar subjects during attempted reaching. These kinematic deficits include incoordination of the shoulder and the elbow joints, a curved trajectory, and overshoot. In the fast-accurate condition, cerebellar subjects often made inappropriate muscle torques relative to the dynamic interaction torques. Because of this, interaction torques often determined the pattern of incoordination of the elbow and shoulder that produced the curved trajectory and target overshoot. In the slow-accurate condition, we reason that the cerebellar subjects may use a decomposition strategy so as to simplify the movement and not have to control both joints simultaneously. From these results, we suggest that a major role of the cerebellum is in generating muscle torques at a joint that will predict the interaction torques being generated by other moving joints and compensate for them as they occur.
Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz
2012-01-01
The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.
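The "20-watt zone" statistic, i.e. the fraction of a ride spent within 10 W of the ride's own average power, can be computed directly from a power trace (a sketch on synthetic data; the simulated 1 Hz, 40-minute trace is hypothetical):

```python
import numpy as np

def time_in_zone(power_w, half_width=10.0):
    """Fraction of samples within +/- half_width of the trace's mean power."""
    power = np.asarray(power_w, dtype=float)
    return float(np.mean(np.abs(power - power.mean()) <= half_width))

# Hypothetical 1 Hz power trace from a ~40-minute time trial
rng = np.random.default_rng(2)
trace = 250 + 15 * rng.standard_normal(2400)
print(round(time_in_zone(trace), 2))
```

A perfectly even pace gives a value of 1.0; the study's finding was that the fastest trials clustered near the high end of this statistic.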
Luu, Betty; Rosnay, Marc de; Harris, Paul L
2013-10-01
This study employed the selective trust paradigm to examine how children interpret novel labels when compared with labels they already know to be accurate or inaccurate within the biological domain. The participants--3-, 4-, and 5-year-olds (N=144)--were allocated to one of three conditions. In the accurate versus inaccurate condition, one informant labeled body parts correctly, whereas the other labeled them incorrectly (e.g., calling an eye an "arm"). In the accurate versus novel condition, one informant labeled body parts accurately, whereas the other provided novel labels (e.g., calling an eye a "roke"). Finally, in the inaccurate versus novel condition, one informant labeled body parts incorrectly, whereas the other offered novel labels. In subsequent test trials, the two informants provided conflicting labels for unfamiliar internal organs. In the accurate versus inaccurate condition, children sought and endorsed labels from the accurate informant. In the accurate versus novel condition, only 4- and 5-year-olds preferred the accurate informant, whereas 3-year-olds did not selectively prefer either informant. In the inaccurate versus novel condition, only 5-year-olds preferred the novel informant, whereas 3- and 4-year-olds did not demonstrate a selective preference. Results are supportive of previous studies suggesting that 3-year-olds are sensitive to inaccuracy and that 4-year-olds privilege accuracy. However, 3- and 4-year-olds appear to be unsure as to how the novel informant should be construed. In contrast, 5-year-olds appreciate that speakers offering new information are more trustworthy than those offering inaccurate information, but they are cautious in judging such informants as being "better" at providing that information. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard estimate. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a 'first guess' source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the estimated source location by several hundred percent (error normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne-contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
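The forward Gaussian puff model named above has a standard closed form for an instantaneous release; a minimal sketch follows (textbook formulation, ground reflection omitted; all parameter values are illustrative, not VIRSA's).

```python
import math

def gaussian_puff(q, x, y, z, t, u, sx, sy, sz):
    """Concentration from an instantaneous release of mass q, advected at
    wind speed u along x, with dispersion sigmas sx, sy, sz.
    Standard textbook form; ground reflection omitted for brevity."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return (norm
            * math.exp(-((x - u * t) ** 2) / (2.0 * sx ** 2))
            * math.exp(-(y ** 2) / (2.0 * sy ** 2))
            * math.exp(-(z ** 2) / (2.0 * sz ** 2)))

# The concentration peak travels with the puff centre (x = u * t):
c_centre = gaussian_puff(1.0, 100.0, 0.0, 0.0, 20.0, 5.0, 10.0, 10.0, 5.0)
c_offaxis = gaussian_puff(1.0, 100.0, 30.0, 0.0, 20.0, 5.0, 10.0, 10.0, 5.0)
assert c_centre > c_offaxis
```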
Assessment of distraction from erotic stimuli by nonerotic interference.
Anderson, Alex B; Hamilton, Lisa Dawn
2015-01-01
Distraction from erotic cues during sexual encounters is a major contributor to sexual difficulties in men and women. Being able to assess distraction in studies of sexual arousal will help clarify underlying contributions to sexual problems. The current study aimed to identify the most accurate assessment of distraction from erotic cues in healthy men (n = 29) and women (n = 38). Participants were assigned to a no distraction, low distraction, or high distraction condition. Distraction was induced using an auditory distraction task presented during the viewing of an erotic video. Attention to erotic cues was assessed using three methods: a written quiz, a visual quiz, and a self-reported distraction measure. Genital and psychological sexual responses were also measured. Self-reported distraction and written quiz scores most accurately represented the level of distraction present, while self-reported distraction also corresponded with a decrease in genital arousal. Findings support the usefulness of self-report measures in conjunction with a brief quiz on the erotic material as the most accurate and sensitive ways to simply measure experimentally-induced distraction. Insight into distraction assessment techniques will enable evaluation of naturally occurring distraction in patients suffering from sexual problems.
Thermal responses and perceptions under distinct ambient temperature and wind conditions.
Shimazaki, Yasuhiro; Yoshida, Atsumasa; Yamamoto, Takanori
2015-01-01
Wind conditions are widely recognized to influence the thermal states of humans. In this study, we investigated the relationship between wind conditions and thermal perception and energy balance in humans. The study participants were exposed for 20 min to 3 distinct ambient temperatures, wind speeds, and wind angles. During the exposure, skin temperatures (a physiological response) and the perceptual responses of the human body were measured, and the energy balance was calculated based on the human thermal-load method. The results indicate that the human thermal load is an accurate indicator of human thermal states under all wind conditions. Furthermore, wind speed and direction by themselves do not account for the human thermal experience. Because of the thermoregulation that occurs to prevent heat loss and protect the core of the body, a low skin temperature was maintained and regional differences in skin temperature were detected under cool ambient conditions. Thus, the human thermal load, which represents physiological parameters such as skin-temperature change, adequately describes the mixed sensation of the human thermal experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
2017-01-01
Objective: Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods: In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential, random order (non-sequential group). Results: In the video-based retention test, the sequential group was significantly more accurate in its anticipatory judgments than the non-sequential group when the retention condition replicated the sequential structure. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group than in the non-sequential group. Conclusion: Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263
Groschen, George E.; King, Robin B.
2005-01-01
Eight streams, representing a wide range of environmental and water-quality conditions across Illinois, were monitored from July 2001 to October 2003 for five water-quality parameters as part of a pilot study by the U.S. Geological Survey (USGS) in cooperation with the Illinois Environmental Protection Agency (IEPA). Continuous recording multi-parameter water-quality monitors were installed to collect data on water temperature, dissolved-oxygen concentrations, specific conductivity, pH, and turbidity. The monitors were near USGS streamflow-gaging stations where stage and streamflow are continuously recorded. During the study period, the data collected for these five parameters generally met the data-quality objectives established by the USGS and IEPA at all eight stations. A similar pilot study during this period for measurement of chlorophyll concentrations failed to achieve the data-quality objectives. Of all the sensors used, the temperature sensors provided the most accurate and reliable measurements (generally within ±5 percent of a calibrated thermometer reading). Signal adjustments and calibration of all other sensors are dependent upon an accurate and precise temperature measurement. The dissolved-oxygen sensors were the next most reliable during the study and were responsive to changing conditions and accurate at all eight stations. Specific conductivity was the third most accurate and reliable measurement collected from the multi-parameter monitors. Specific conductivity at the eight stations varied widely, from less than 40 microsiemens (µS) at Rayse Creek near Waltonville to greater than 3,500 µS at Salt Creek at Western Springs. In individual streams, specific conductivity often changed quickly (greater than 25 percent in less than 3 hours) and the sensors generally provided a good to excellent record of these variations at all stations.
The widest range of specific-conductivity measurements was in Salt Creek at Western Springs in the Greater Chicago metropolitan area. Unlike temperature, dissolved oxygen, and specific conductivity that have been typically measured over a wide range of historical streamflow conditions in many streams, there are few historical turbidity data and the full range of turbidity values is not well known for many streams. Because proposed regional criteria for turbidity in regional streams are based on upper 25th percentiles of concentration in reference streams, accurate determination of the distribution of turbidity in monitored streams is important. Digital data from all five sensors were recorded within each of the eight sondes deployed in the streams and in automated data recorders in the nearby streamflow-gaging houses at each station. The data recorded on each sonde were retrieved to a field laptop computer at each station visit. The feasibility of transmitting these data in near-real time to a central processing point for dissemination on the World-Wide Web was tested successfully. Data collected at all eight stations indicate that a number of factors affect the dissolved-oxygen concentration in the streams and rivers monitored. These factors include: temperature, biological activity, nutrient runoff, and weather (storm runoff). During brief periods usually in late summer, dissolved-oxygen concentrations in half or more of the eight streams and rivers monitored were below the 5 milligrams per liter minimum established by the Illinois Pollution Control Board to protect aquatic life. Because the streams monitored represent a wide range in water-quality and environmental conditions, including diffuse (non-point) runoff and wastewater-effluent contributions, this result indicates that deleterious low dissolved-oxygen concentrations during late summer may be widespread in Illinois streams.
Influence of operating conditions on the optimum design of electric vehicle battery cooling plates
NASA Astrophysics Data System (ADS)
Jarrett, Anthony; Kim, Il Yong
2014-01-01
The efficiency of cooling plates for electric vehicle batteries can be improved by optimizing the geometry of internal fluid channels. In practical operation, a cooling plate is exposed to a range of operating conditions dictated by the battery, environment, and driving behaviour. To formulate an efficient cooling plate design process, the optimum design sensitivity with respect to each boundary condition is desired. This determines which operating conditions must be represented in the design process, and therefore the complexity of designing for multiple operating conditions. The objective of this study is to determine the influence of different operating conditions on the optimum cooling plate design. Three important performance measures were considered: temperature uniformity, mean temperature, and pressure drop. It was found that of these three, temperature uniformity was most sensitive to the operating conditions, especially with respect to the distribution of the input heat flux, and also to the coolant flow rate. An additional focus of the study was the distribution of heat generated by the battery cell: while it is easier to assume that heat is generated uniformly, by using an accurate distribution for design optimization, this study found that cooling plate performance could be significantly improved.
Qin, Yu; Yi, Shuhua
2013-01-01
Accurately estimating the daily mean ecosystem respiration rate (Re) is important for understanding how ecosystem carbon budgets will respond to climate change. On alpine meadow ecosystems, daily mean Re is usually represented directly by static-chamber measurements taken between 9:00 and 11:00 a.m. local time. In the present study, however, we found that the daily mean Re calculated from the 9:00 to 11:00 a.m. window was significantly higher than that calculated from 0:00 to 23:30 local time at an alpine meadow site, which might be caused by the special climatic conditions of the Qinghai-Tibetan Plateau. Our results indicate that the daily mean Re calculated from 9:00 to 11:00 a.m. local time cannot be used to represent the daily mean Re directly.
A multiscale model for reinforced concrete with macroscopic variation of reinforcement slip
NASA Astrophysics Data System (ADS)
Sciegaj, Adam; Larsson, Fredrik; Lundgren, Karin; Nilenius, Filip; Runesson, Kenneth
2018-06-01
A single-scale model for reinforced concrete, comprising the plain concrete continuum, reinforcement bars and the bond between them, is used as a basis for deriving a two-scale model. The large-scale problem, representing the "effective" reinforced concrete solid, is enriched by an effective reinforcement slip variable. The subscale problem on a Representative Volume Element (RVE) is defined by Dirichlet boundary conditions. The response of the RVEs of different sizes was investigated by means of pull-out tests. The resulting two-scale formulation was used in an FE^2 analysis of a deep beam. Load-deflection relations, crack widths, and strain fields were compared to those obtained from a single-scale analysis. Incorporating the independent macroscopic reinforcement slip variable resulted in a more pronounced localisation of the effective strain field. This produced a more accurate estimation of the crack widths than the two-scale formulation neglecting the effective reinforcement slip variable.
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
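The "counting the different types of lines" step can be illustrated with a tiny sketch: each detected line segment is binned by orientation and length, and the scene descriptor is the histogram of bin counts. The bin edges and thresholds below are invented for illustration, not the paper's.

```python
import math
from collections import Counter

# Illustrative bag-of-lines style descriptor: each segment ((x1, y1),
# (x2, y2)) is labeled by orientation and length; the scene is
# represented by the label counts. Bin edges are assumptions.

def line_type(p1, p2, long_thresh=50.0):
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = abs(math.degrees(math.atan2(dy, dx))) % 180.0
    orient = "horizontal" if (angle < 22.5 or angle > 157.5) else (
        "vertical" if 67.5 <= angle <= 112.5 else "diagonal")
    length = "long" if math.hypot(dx, dy) >= long_thresh else "short"
    return f"{orient}-{length}"

def bag_of_lines(segments):
    return Counter(line_type(p1, p2) for p1, p2 in segments)

segments = [((0, 0), (100, 0)), ((0, 0), (0, 100)), ((0, 0), (10, 10))]
print(bag_of_lines(segments))
```

Because the descriptor counts line *types* rather than absolute coordinates, uniformly rescaling a scene (with a scale-relative length threshold) leaves the histogram unchanged, which is the intuition behind the claimed scale invariance.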
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
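The non-linear affinity-to-probability relationship described above can be made concrete with a standard two-state occupancy sketch: binding probability follows a sigmoid of binding energy, so the strongest sites saturate at high protein concentration and the ratios a probabilistic model implies are not observed. The energies and chemical potential below are illustrative.

```python
import math

# Sketch: occupancy follows a sigmoidal law in binding energy, so it
# saturates for the strongest sites at high protein concentration.
# Energies (lower = stronger) and mu are illustrative values.

def occupancy(energy, mu):
    """Fractional occupancy of a site with binding energy `energy`
    at protein chemical potential mu (arbitrary units)."""
    return 1.0 / (1.0 + math.exp(energy - mu))

strong, weak = -4.0, -1.0   # two sites 3 energy units apart
high_mu, low_mu = 2.0, -4.0  # high vs. low protein concentration

# At high concentration both sites are nearly saturated, so the large
# relative difference a probabilistic model would imply disappears:
print(occupancy(strong, high_mu), occupancy(weak, high_mu))
# At low concentration the energy difference dominates:
print(occupancy(strong, low_mu), occupancy(weak, low_mu))
```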
Algodystrophy: complex regional pain syndrome and incomplete forms
Giannotti, Stefano; Bottai, Vanna; Dell’Osso, Giacomo; Bugelli, Giulia; Celli, Fabio; Cazzella, Niki; Guido, Giulio
2016-01-01
Summary Algodystrophy, also known as complex regional pain syndrome (CRPS), is a painful disease characterized by erythema, edema, functional impairment, and sensory and vasomotor disturbance. The diagnosis of CRPS is based solely on clinical signs and symptoms and on the exclusion of other forms of chronic pain. There is no specific diagnostic procedure; careful clinical evaluation and additional tests should lead to an accurate diagnosis. There are similar forms of chronic pain, known as bone marrow edema syndromes, in which the history of trauma or triggering events, the dystrophic skin changes, and the vasomotor alterations are absent. These incomplete forms are self-limited, and surgical treatment is generally not needed. It is still controversial whether these forms represent a distinct self-limiting entity or an incomplete variant of CRPS. In painful unexplained conditions such as frozen shoulder, post-operative stiff shoulder or painful knee prosthesis, algodystrophy, especially in its incomplete forms, could represent the cause. PMID:27252736
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques
2016-10-01
Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
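The multiplicative structure described above (performance with caffeine = dose-dependent factor × caffeine-free estimate) can be sketched as follows. The one-compartment pharmacokinetics and every constant here are assumptions for illustration, not the published UMP parameter values.

```python
import math

# Hedged sketch of a multiplicative caffeine factor applied to a
# caffeine-free performance estimate. All constants are illustrative.

def caffeine_factor(dose_mg, t_hours, m0=1.1e-3, k_abs=1.0, k_elim=0.12):
    """Dose-dependent factor <= 1 that scales (improves) a deficit-like
    performance metric after a caffeine dose at t = 0."""
    conc = dose_mg * (math.exp(-k_elim * t_hours) - math.exp(-k_abs * t_hours))
    return max(0.0, 1.0 - m0 * conc)

def predicted_performance(baseline, dose_mg, t_hours):
    """Performance with caffeine = caffeine factor x caffeine-free estimate."""
    return caffeine_factor(dose_mg, t_hours) * baseline

# A 200 mg dose transiently lowers (improves) a lapse-like metric:
assert predicted_performance(10.0, 200.0, 1.0) < 10.0
```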
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, J.; Winkler, J.; Christensen, D.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
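The "least-squares fit of an analytical solution" step can be illustrated with a much simpler stand-in: fitting a single-exponential absorption response to a measured curve by brute-force least squares. The model form, parameter grids, and data below are illustrative, not the paper's EMPD solution.

```python
import math

# Sketch: fit w(t) = w_inf * (1 - exp(-t / tau)) to a moisture-absorption
# curve by brute-force least squares. Data and model are illustrative
# stand-ins for the analytical EMPD solution used in the study.

def model(t, w_inf, tau):
    return w_inf * (1.0 - math.exp(-t / tau))

def fit(times, data):
    best = None
    for w_inf in [x * 0.1 for x in range(1, 51)]:    # search 0.1 .. 5.0
        for tau in [x * 0.5 for x in range(1, 41)]:  # search 0.5 .. 20.0
            sse = sum((model(t, w_inf, tau) - w) ** 2
                      for t, w in zip(times, data))
            if best is None or sse < best[0]:
                best = (sse, w_inf, tau)
    return best[1], best[2]

times = [0.0, 1.0, 2.0, 4.0, 8.0, 16.0]
truth = [model(t, 2.0, 4.0) for t in times]   # synthetic "measurements"
w_inf, tau = fit(times, truth)
assert abs(w_inf - 2.0) < 0.05 and abs(tau - 4.0) < 0.3
```

In practice a gradient-based optimizer would replace the grid search; the grid keeps the sketch dependency-free.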
NASA Astrophysics Data System (ADS)
Perreard, S.; Wildner, E.
1994-12-01
Many processes are controlled by experts using some kind of mental model to decide on actions and make conclusions. This model, based on heuristic knowledge, can often be represented by rules and does not have to be particularly accurate. Such is the case for the problem of conditioning high voltage RF cavities; the expert has to decide, by observing some criteria, whether to increase or to decrease the voltage and by how much. A program has been implemented which can be applied to a class of similar problems. The kernel of the program is a small rule base, which is independent of the kind of cavity. To model a specific cavity, we use fuzzy logic which is implemented as a separate routine called by the rule base, to translate from numeric to symbolic information.
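The rule-base-plus-fuzzy-logic idea can be sketched with one symbolic rule ("if vacuum pressure is high, back off the voltage; otherwise raise it") evaluated on a fuzzified numeric reading. The membership function and setpoints below are invented for illustration and are not taken from the implementation described above.

```python
# Minimal fuzzy-control sketch: a symbolic rule acts on a fuzzified
# numeric reading. Membership ramp and step sizes are illustrative.

def mu_high_pressure(p):
    """Membership of pressure p (arbitrary units) in the fuzzy set
    'high'; ramps linearly from 0 to 1 over p = 1..3."""
    return min(1.0, max(0.0, (p - 1.0) / 2.0))

def voltage_step(p, max_step=5.0):
    """Signed voltage adjustment: raise when pressure is low, back off
    when it is high, weighted by the fuzzy membership."""
    m = mu_high_pressure(p)
    return (1.0 - m) * max_step - m * max_step

assert voltage_step(0.5) == 5.0      # clearly low -> full increase
assert voltage_step(3.5) == -5.0     # clearly high -> full decrease
assert abs(voltage_step(2.0)) < 5.0  # intermediate -> cautious step
```

The point of the fuzzy layer is exactly what the abstract describes: the rule base stays symbolic and cavity-independent, while the membership functions carry the cavity-specific numeric model.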
Börlin, Christoph S; Lang, Verena; Hamacher-Brady, Anne; Brady, Nathan R
2014-09-10
Autophagy is a vesicle-mediated pathway for lysosomal degradation, essential under basal and stressed conditions. Various cellular components, including specific proteins, protein aggregates, organelles and intracellular pathogens, are targets for autophagic degradation. Thereby, autophagy controls numerous vital physiological and pathophysiological functions, including cell signaling, differentiation, turnover of cellular components and pathogen defense. Moreover, autophagy enables the cell to recycle cellular components to metabolic substrates, thereby permitting prolonged survival under low nutrient conditions. Due to the multi-faceted roles for autophagy in maintaining cellular and organismal homeostasis and responding to diverse stresses, malfunction of autophagy contributes to both chronic and acute pathologies. We applied a systems biology approach to improve the understanding of this complex cellular process of autophagy. All autophagy pathway vesicle activities, i.e. creation, movement, fusion and degradation, are highly dynamic, temporally and spatially, and under various forms of regulation. We therefore developed an agent-based model (ABM) to represent individual components of the autophagy pathway, subcellular vesicle dynamics and metabolic feedback with the cellular environment, thereby providing a framework to investigate spatio-temporal aspects of autophagy regulation and dynamic behavior. The rules defining our ABM were derived from literature and from high-resolution images of autophagy markers under basal and activated conditions. Key model parameters were fit with an iterative method using a genetic algorithm and a predefined fitness function. From this approach, we found that accurate prediction of spatio-temporal behavior required increasing model complexity by implementing functional integration of autophagy with the cellular nutrient state. 
The resulting model is able to reproduce short-term autophagic flux measurements (up to 3 hours) under basal and activated autophagy conditions, and to measure the degree of cell-to-cell variability. Moreover, we experimentally confirmed two model predictions, namely (i) peri-nuclear concentration of autophagosomes and (ii) inhibitory lysosomal feedback on mTOR signaling. Agent-based modeling represents a novel approach to investigate autophagy dynamics, function and dysfunction with high biological realism. Our model accurately recapitulates short-term behavior and cell-to-cell variability under basal and activated conditions of autophagy. Further, this approach also allows investigation of long-term behaviors emerging from biologically-relevant alterations to vesicle trafficking and metabolic state.
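A toy illustration of the agent-based idea, far simpler than the published model: vesicle agents are created at the cell periphery, drift toward the nucleus, and are removed on "fusion". The 1-D geometry and all rates are invented for illustration.

```python
import random

# Toy agent-based sketch of vesicle dynamics: autophagosome agents are
# created, drift toward the nucleus (x = 0), and are degraded on fusion.
# All rates and the 1-D geometry are illustrative assumptions.

random.seed(1)

def step(vesicles, create_rate=0.3, drift=0.1, fuse_at=0.1):
    if random.random() < create_rate:            # creation at the periphery
        vesicles = vesicles + [1.0]
    vesicles = [x - drift for x in vesicles]     # drift toward the nucleus
    return [x for x in vesicles if x > fuse_at]  # fusion / degradation

vesicles = []
counts = []
for _ in range(200):
    vesicles = step(vesicles)
    counts.append(len(vesicles))

# Steady state: ~0.3 creations/step with a ~9-step lifetime gives a
# small, fluctuating population, mimicking flux measurements.
print(sum(counts[100:]) / 100.0)
```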
The accurate reconstruction of gene regulatory networks from large scale molecular profile datasets represents one of the grand challenges of Systems Biology. The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) represents one of the most effective tools to accomplish this goal. However, the initial Fixed Bandwidth (FB) implementation is both inefficient and unable to deal with sample sets providing largely uneven coverage of the probability density space.
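At the heart of ARACNe-style inference is a pairwise mutual-information estimate between expression profiles. As a sketch, an equal-width histogram estimator plays the role of the fixed-bandwidth estimate criticized above; the bin count and synthetic data are illustrative.

```python
import math
import random

# Sketch: pairwise mutual information (nats) from a 2-D histogram with
# equal-width bins, standing in for the fixed-bandwidth estimator.

def mutual_information(xs, ys, bins=8):
    n = len(xs)
    def idx(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    lox, hix, loy, hiy = min(xs), max(xs), min(ys), max(ys)
    pxy, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        i, j = idx(x, lox, hix), idx(y, loy, hiy)
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(2000)]
dependent = [x + 0.1 * random.gauss(0, 1) for x in xs]
independent = [random.gauss(0, 1) for _ in range(2000)]
assert mutual_information(xs, dependent) > mutual_information(xs, independent)
```

The uneven-coverage problem the abstract mentions shows up here directly: with equal-width bins, densely and sparsely sampled regions of the probability space get the same resolution, motivating adaptive partitioning.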
A sophisticated simulation for the fracture behavior of concrete material using XFEM
NASA Astrophysics Data System (ADS)
Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili
2017-10-01
The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
NASA Technical Reports Server (NTRS)
Heady, Joel; Pereira, J. Michael; Ruggeri, Charles R.; Bobula, George A.
2009-01-01
A test methodology currently employed for large engines was extended to quantify the ballistic containment capability of a small turboshaft engine compressor case. The approach involved impacting the inside of a compressor case with a compressor blade. A gas gun propelled the blade into the case at energy levels representative of failed compressor blades. The test target was a full compressor case. The aft flange was rigidly attached to a test stand and the forward flange was attached to a main frame to provide accurate boundary conditions. A window machined in the case allowed the projectile to pass through and impact the case wall from the inside with the orientation, direction and speed that would occur in a blade-out event. High-speed digital-video cameras provided accurate velocity and orientation data. Calibrated cameras and digital image correlation software generated full-field displacement and strain information at the back side of the impact point.
On the stress calculation within phase-field approaches: a model for finite deformations
NASA Astrophysics Data System (ADS)
Schneider, Daniel; Schwab, Felix; Schoof, Ephraim; Reiter, Andreas; Herrmann, Christoph; Selzer, Michael; Böhlke, Thomas; Nestler, Britta
2017-08-01
Numerical simulations based on phase-field methods are indispensable in order to investigate interesting and important phenomena in the evolution of microstructures. Microscopic phase transitions are highly affected by mechanical driving forces and therefore the accurate calculation of the stresses in the transition region is essential. We present a method for stress calculations within the phase-field framework, which satisfies the mechanical jump conditions corresponding to sharp interfaces, although the sharp interface is represented as a volumetric region using the phase-field approach. This model is formulated for finite deformations, is independent of constitutive laws, and allows using any type of phase inherent inelastic strains.
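The sharp-interface mechanical jump conditions that the method is designed to satisfy can be stated in standard form (notation chosen here for illustration, not quoted from the paper): continuity of traction across the interface, and the Hadamard compatibility condition on the deformation gradient.

```latex
% Standard sharp-interface jump conditions at an interface with normal n
% (\llbracket \cdot \rrbracket requires the stmaryrd package):
%   traction continuity, and Hadamard compatibility of F.
\llbracket \boldsymbol{\sigma} \rrbracket \, \boldsymbol{n} = \boldsymbol{0},
\qquad
\llbracket \boldsymbol{F} \rrbracket = \boldsymbol{a} \otimes \boldsymbol{n}
```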
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.
2014-10-01
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
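The flow-map composition idea can be sketched on the scalar ODE dx/dt = -x: the short-time map x → x·exp(-dt) is approximated by a low-degree polynomial, and long-time propagation is its repeated composition. The ODE, step size, and degree are chosen for illustration.

```python
import math

# Sketch of flow-map composition for dx/dt = -x: a degree-2 Taylor
# polynomial approximates the short-time map x -> x * exp(-DT), and
# repeated composition builds the long-time flow map.

DT = 0.1
def short_map(x):
    return x * (1.0 - DT + DT * DT / 2.0)   # Taylor expansion of exp(-DT)

def long_map(x0, n_steps):
    x = x0
    for _ in range(n_steps):
        x = short_map(x)   # composition of short-time maps
    return x

exact = math.exp(-1.0)        # x(1) for x0 = 1
approx = long_map(1.0, 10)    # ten composed short-time maps
assert abs(approx - exact) < 1e-3
```

Note the key property from the abstract: the composed map is effectively a polynomial of exponentially growing degree in the number of intervals, while only the fixed short-time coefficients are ever stored.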
Attrition Rate of Oxygen Carriers in Chemical Looping Combustion Systems
NASA Astrophysics Data System (ADS)
Feilen, Harry Martin
This project developed an evaluation methodology for determining, accurately and rapidly, the attrition resistance of oxygen carrier materials used in chemical looping technologies. Existing test protocols for evaluating the attrition resistance of granular materials are conducted under non-reactive, ambient-temperature conditions. They do not accurately reflect behavior under the unique process conditions of chemical looping, including high temperatures and cyclic operation between oxidizing and reducing atmospheres. This project developed a test method and equipment that represent a significant improvement over existing protocols. Experimental results obtained from this project have shown that hematite exhibits multiple modes of attrition, arising both from mechanical stresses and from structural changes in the particles caused by chemical reaction at high temperature. The test methodology has also proven effective in tracking reactivity changes of the material with continued use, a property that, in addition to attrition, determines material life. Consumption/replacement cost due to attrition or loss of reactivity is a critical factor in the economic application of chemical looping technology. This test method will allow rapid evaluation of a wide range of materials to identify those best suited for this technology. The most important anticipated public benefit of this project is the acceleration of the development of chemical looping technology for lowering greenhouse gas emissions from fossil fuel combustion.
Putting emotions in routes: the influence of emotionally laden landmarks on spatial memory.
Ruotolo, F; Claessen, M H G; van der Ham, I J M
2018-04-16
The aim of this study was to assess how people memorize spatial information about emotionally laden landmarks along a route and whether the emotional value of the landmarks affects the way metric and configurational properties of the route itself are represented. Three groups of participants were asked to watch a movie of a virtual walk along a route. The route contained positive, negative, or neutral landmarks. Afterwards, participants were asked to: (a) recognize the landmarks; (b) imagine walking the distances between landmarks; (c) indicate the position of the landmarks along the route; (d) judge the length of the route; (e) draw the route. Results showed that participants who watched the route with positive landmarks were more accurate in locating the landmarks along the route and drawing the route. On the other hand, participants in the negative condition judged the route as longer than participants in the other two conditions did and were less accurate in mentally reproducing distances between landmarks. The data are interpreted in the light of the "feelings-as-information" theory by Schwarz (2010) and the most recent evidence on the effect of emotions on spatial memory. In brief, the evidence collected in this study supports the idea that spatial cognition emerges from the interaction between an organism and contextual characteristics.
NSRD-15:Computational Capability to Substantiate DOE-HDBK-3010 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Bignell, John; Dingreville, Remi Philippe Michel
Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook’s bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-the-art multi-physics-based computer codes.
Constitutive Soil Properties for Mason Sand and Kennedy Space Center
NASA Technical Reports Server (NTRS)
Thomas, Michael A.; Chitty, Daniel E.
2011-01-01
Accurate soil models are required for numerical simulations of land landings for the Orion Crew Exploration Vehicle (CEV). This report provides constitutive material models for two soil conditions at Kennedy Space Center (KSC) and four conditions of Mason Sand. The Mason Sand is the test sand for LaRC's drop tests and swing tests of the Orion. The soil models are based on mechanical and compressive behavior observed during geotechnical laboratory testing of remolded soil samples. The test specimens were reconstituted to the measured in situ density and moisture content. Tests included triaxial compression, hydrostatic compression, and uniaxial strain. A fit to the triaxial test results defines the strength envelope; hydrostatic and uniaxial tests define the compressibility. The constitutive properties are presented in the format of LS-DYNA Material Model 5: Soil and Foam. However, the laboratory test data provided can be used to construct other material models. The soil models are intended to be specific to the conditions at which they were tested. The two KSC models represent two conditions at KSC: low-density dry sand and high-density in-situ-moisture sand. The Mason Sand model was tested at four conditions which encompass measured conditions at LaRC's drop test site.
Murdoch, Maureen; Simon, Alisha Baines; Polusny, Melissa Anderson; Bangerter, Ann Kay; Grill, Joseph Patrick; Noorbaloochi, Siamak; Partin, Melissa Ruth
2014-07-16
Anonymous survey methods appear to promote greater disclosure of sensitive or stigmatizing information compared to non-anonymous methods. Higher disclosure rates have traditionally been interpreted as being more accurate than lower rates. We examined the impact of 3 increasingly private mailed survey conditions, ranging from potentially identifiable to completely anonymous, on survey response and on respondents' representativeness of the underlying sampling frame, completeness in answering sensitive survey items, and disclosure of sensitive information. We also examined the impact of 2 incentives ($10 versus $20) on these outcomes. We conducted a 3 × 2 factorial, randomized controlled trial of 324 representatively selected, male Gulf War I era veterans who had applied for United States Department of Veterans Affairs (VA) disability benefits. Men were asked about past sexual assault experiences, childhood abuse, combat, other traumas, mental health symptoms, and sexual orientation. We used a novel technique, the pre-merged questionnaire, to link anonymous responses to administrative data. Response rates ranged from 56.0% to 63.3% across privacy conditions (p = 0.49) and from 52.8% to 68.1% across incentives (p = 0.007). Respondents' characteristics differed by privacy and by incentive assignments, with completely anonymous respondents and $20 respondents appearing least different from their non-respondent counterparts. Survey completeness did not differ by privacy or by incentive. No clear pattern of disclosing sensitive information by privacy condition or by incentive emerged. For example, although all respondents came from the same sampling frame, estimates of sexual abuse ranged from 13.6% to 33.3% across privacy conditions, with the highest estimate coming from the intermediate privacy condition (p = 0.007). Greater privacy and larger incentives do not necessarily result in higher disclosure rates of sensitive information than lesser privacy and lower incentives.
Furthermore, disclosure of sensitive or stigmatizing information under differing privacy conditions may have less to do with promoting or impeding participants' "honesty" or "accuracy" than with selectively recruiting or attracting subpopulations that are higher or lower in such experiences. Pre-merged questionnaires bypassed many historical limitations of anonymous surveys and hold promise for exploring non-response issues in future research.
Caçola, Priscila; Gabbard, Carl
2012-04-01
This study examined age-related characteristics associated with tool use in the perception and modulation of peripersonal and extrapersonal space. Seventy-six children, representing age groups of 7, 9, and 11 years, and 36 adults were presented with two experiments using an estimation-of-reach paradigm involving arm and tool conditions and a switch-block of the opposite condition. Experiment 1 tested arm and tool (20 cm length) estimation and found a significant effect for Age, Space, and an Age × Space interaction (ps < 0.05). Both children and adults were less accurate in extrapersonal space, indicating an overestimation bias. Interestingly, the adjustment period during the switch-block condition was immediate and similar across ages. Experiment 2 was similar to Experiment 1 with the exception of using a 40-cm-length tool. Results again revealed an age effect and a difference in Space (ps < 0.05); in this case, however, participants underestimated. Speculatively, participants were less confident when presented with a longer tool, even though the adjustment period with both tool lengths was similar. Considered together, these results hint that: (1) children as young as 7 years of age are capable of being as accurate when estimating reach with a tool as they are with their arm, (2) the adjustment period associated with extending and retracting spaces is immediate rather than gradual, and (3) tool length influences estimations of reach.
NASA Technical Reports Server (NTRS)
Gnoffo, P. A.
1978-01-01
The ability of a method of integral relations to calculate inviscid, zero-degree-angle-of-attack radiative heating distributions over blunt, sonic-corner bodies for representative outer planet entry conditions is investigated. Comparisons have been made with a more detailed numerical method, a time-asymptotic technique, using the same equilibrium chemistry and radiation transport subroutines. An effort to produce a second-order approximation (two-strip) method of integral relations code to aid in this investigation is also described, and a modified two-strip routine is presented. Results indicate that the one-strip method of integral relations cannot be used to obtain accurate estimates of the radiative heating distribution because of its inability to resolve thermal gradients near the wall. The two-strip method can sometimes be used to improve these estimates; however, it yields significant improvement over the one-strip method for only a small range of conditions.
Flame extinction limit and particulates formation in fuel blends
NASA Astrophysics Data System (ADS)
Subramanya, Mahesh
Many fuels used in material processing and power generation applications are blends of various hydrocarbons. Although the combustion and aerosol formation dynamics of individual fuels are well understood, the flame dynamics of fuel blends are yet to be characterized. This research uses a twin-flame counterflow burner to measure flame velocity, flame extinction, particulate formation, and particulate morphology of hydrogen fuel blend flames at different H2 concentrations, oscillation frequencies, and stretch conditions. Phase-resolved spectroscopic measurements (emission spectra) of OH, H, O, and CH radical/atom concentrations are used to characterize the heat release processes of the flame. In addition, flame-generated particulates are collected using a thermophoretic sampling technique and are qualitatively analyzed using Raman spectroscopy and SEM. Such measurements are essential for the development of advanced computational tools capable of predicting fuel blend flame characteristics at realistic combustor conditions. The data generated through the measurements of this research are representative yet accurate, with well-defined boundary conditions that can be reproduced in numerical computations for kinetic code validations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, G.S.; Trudeau, D.A.; Savard, C.S.
The underground nuclear testing program of the US Department of Energy (USDOE) takes place at the Nevada Test Site (NTS), about 65 mi northwest of Las Vegas, Nevada. Underground nuclear tests at Yucca Flat, one of the USDOE test areas at NTS, have affected hydrologic conditions, including groundwater levels. The purpose of this map report, prepared in cooperation with USDOE, is to present selected water-level data collected from wells and test holes through December 1991, and to show potentiometric contours representing 1991 water-table conditions in the Yucca Flat area. The more generic term, potentiometric contours, is used herein rather than "water-table contours" because the hydrologic units contributing water to wells and test holes may not accurately represent the water table. The water table is that surface in an unconfined water body at which the pressure is atmospheric. It is defined by the altitude at which non-perched ground water is first found in wells and test holes. Perched ground water is defined as unconfined ground water separated from an underlying body of ground water by an unsaturated zone. This map report updates information on water levels in some wells and test holes and the resulting water-table contours in rocks of Cenozoic and Paleozoic age shown by Doty and Thordarson for 1980 conditions.
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Carlson, J.-R.; Hannon, J. A.; Jenkins, L. N.; Bartram, S. M.; Pulliam, T. H.; Lee, H. C.
2017-01-01
Because future wind tunnel tests associated with the NASA Juncture Flow project are being designed for the purpose of CFD validation, considerable effort is going into the characterization of the wind tunnel boundary conditions, particularly at inflow. This is important not only because wind tunnel flowfield nonuniformities can play a role in integrated testing uncertainties, but also because the better the boundary conditions are known, the more accurately CFD can represent the experiment. This paper describes recent investigative wind tunnel tests involving two methods to measure and characterize the oncoming flow in the NASA Langley 14- by 22-Foot Subsonic Tunnel. The features of each method, as well as some of their pros and cons, are highlighted. Boundary conditions and modeling tactics currently used by CFD for empty-tunnel simulations are also described, and some results using three different CFD codes are shown. Preliminary CFD parametric studies associated with the Juncture Flow model are summarized to determine sensitivities of the flow near the wing-body juncture region of the model to a variety of modeling decisions.
Structural Health Management of Damaged Aircraft Structures Using the Digital Twin Concept
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Krishnamurthy, Thiagarajan
2017-01-01
The development of multidisciplinary integrated Structural Health Management (SHM) tools will enable accurate detection and prognosis of damage in aircraft under normal and adverse conditions during flight. As part of the digital twin concept, methodologies are developed by using integrated multiphysics models, sensor information and input data from an in-service vehicle to mirror and predict the life of its corresponding physical twin. SHM tools are necessary for both damage diagnostics and prognostics for continued safe operation of damaged aircraft structures. The adverse conditions include loss of control caused by environmental factors, actuator and sensor faults or failures, and structural damage conditions. A major concern in these structures is the growth of undetected damage/cracks due to fatigue and low-velocity foreign object impact that can reach a critical size during flight, resulting in loss of control of the aircraft. To avoid unstable, catastrophic propagation of damage during a flight, load levels must be maintained that are below a reduced load-carrying capacity for continued safe operation of an aircraft. Hence, a capability is needed for accurate real-time predictions of damage size and safe load-carrying capacity for structures with complex damage configurations. In the present work, a procedure is developed that uses guided wave responses to interrogate damage. As the guided wave interacts with damage, the signal attenuates in some directions and reflects in others. This results in a difference in signal magnitude as well as phase shifts between signal responses for damaged and undamaged structures. Accurate estimation of damage size, location, and orientation is made by evaluating the cumulative signal responses at various pre-selected sensor locations using a genetic algorithm (GA) based optimization procedure.
The damage size, location, and orientation are obtained by minimizing the difference between the reference responses and the responses obtained by wave-propagation finite element analysis of different representative crack geometries and sizes.
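The GA-based inversion step can be sketched with a toy stand-in for the forward model. Everything below — the exponential-attenuation "response", the sensor layout, and the GA settings — is a hypothetical illustration, not the paper's wave-propagation finite element analysis; only the structure (minimize the misfit between reference and simulated responses over damage parameters) matches the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pre-selected sensor locations on a unit panel
sensors = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9], [0.5, 0.5]])

def response(params):
    """Toy forward model: signal amplitude decays with distance from damage."""
    x, y, size = params
    d = np.linalg.norm(sensors - [x, y], axis=1)
    return size * np.exp(-3.0 * d)

truth = np.array([0.62, 0.37, 0.8])      # hidden damage (x, y, size)
reference = response(truth)              # "measured" sensor responses

def misfit(params):
    return float(np.sum((response(params) - reference) ** 2))

# Minimal GA: elitism, blend crossover, shrinking Gaussian mutation
lo, hi = np.array([0.0, 0.0, 0.1]), np.array([1.0, 1.0, 1.0])
pop = rng.uniform(lo, hi, (60, 3))
for gen in range(100):
    fit = np.array([misfit(p) for p in pop])
    elite = pop[np.argsort(fit)[:10]]                    # keep the 10 best
    children = []
    while len(children) < 50:
        a, b = elite[rng.integers(10, size=2)]
        w = rng.uniform(size=3)
        child = w * a + (1 - w) * b                      # blend crossover
        child += rng.normal(0, 0.1 * 0.97 ** gen, 3)     # decaying mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([elite, children])

best = pop[np.argmin([misfit(p) for p in pop])]
```

In the paper's setting the `response` call would be replaced by (precomputed) finite element wave-propagation responses, and the parameter vector would include crack orientation.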
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we found that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
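As a concrete illustration of the model's form, a Weibull saccharification curve Y(t) = Ymax·(1 − exp(−(t/λ)^n)) can be fitted by linearization, since ln(−ln(1 − Y/Ymax)) = n·ln t − n·ln λ. The data and parameter values below are synthetic placeholders, not the study's measurements:

```python
import numpy as np

# Synthetic saccharification curve with known parameters (placeholders)
lam_true, n_true, y_max = 24.0, 0.8, 90.0   # hours, shape, % of max yield
t = np.array([2.0, 4.0, 8.0, 16.0, 24.0, 48.0, 72.0])
y = y_max * (1.0 - np.exp(-(t / lam_true) ** n_true))

# Linearize: ln(-ln(1 - Y/Ymax)) = n*ln(t) - n*ln(lambda),
# then recover n (slope) and lambda (from the intercept) by least squares
lhs = np.log(-np.log(1.0 - y / y_max))
n_fit, intercept = np.polyfit(np.log(t), lhs, 1)
lam_fit = float(np.exp(-intercept / n_fit))
```

Here λ is the characteristic time at which the yield reaches 1 − e⁻¹ ≈ 63% of Ymax, matching its interpretation above as an overall performance indicator of the saccharification system.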
On Efficient Multigrid Methods for Materials Processing Flows with Small Particles
NASA Technical Reports Server (NTRS)
Thomas, James (Technical Monitor); Diskin, Boris; Harik, Vasyl Michael
2004-01-01
Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical detail. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
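The core multigrid mechanism — different discretizations interacting across scales — can be illustrated with a classical two-grid cycle for the 1-D Poisson equation. This is a generic textbook sketch, not the flow-with-particles algorithm of the report: damped Jacobi sweeps remove oscillatory error on the fine grid, and an exact solve on a grid of half the resolution removes the smooth remainder.

```python
import numpy as np

def poisson(n, h):
    """Standard 3-point Laplacian with Dirichlet boundaries."""
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, f, sweeps, omega=2/3):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

n, h = 63, 1.0 / 64                  # fine grid: 63 interior points
A  = poisson(n, h)
Ac = poisson(31, 2*h)                # coarse grid: every other point
f = np.sin(3*np.pi*h*np.arange(1, n+1)) + 1.0
u_exact = np.linalg.solve(A, f)

def two_grid(u):
    u = jacobi(A, u, f, 3)                                  # pre-smooth
    r = f - A @ u                                           # fine residual
    rc = 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]      # full weighting
    ec = np.linalg.solve(Ac, rc)                            # coarse solve
    e = np.zeros(n)
    e[1::2] = ec                                            # coarse points
    e[2:-1:2] = 0.5*(ec[:-1] + ec[1:])                      # interpolate
    e[0], e[-1] = 0.5*ec[0], 0.5*ec[-1]
    return jacobi(A, u + e, f, 3)                           # post-smooth

u_tg = np.zeros(n)
for _ in range(5):                   # 5 cycles = 30 fine-grid sweeps total
    u_tg = two_grid(u_tg)
u_jac = jacobi(A, np.zeros(n), f, 30)   # same fine-grid work, no coarse grid

err_tg  = np.linalg.norm(u_tg  - u_exact) / np.linalg.norm(u_exact)
err_jac = np.linalg.norm(u_jac - u_exact) / np.linalg.norm(u_exact)
```

Five two-grid cycles drive the error far below what the same number of fine-grid Jacobi sweeps achieves alone; recursing on the coarse solve instead of solving it exactly gives the full multigrid V-cycle.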
Three-dimensional Finite Element Modelling of Composite Slabs for High Speed Rails
NASA Astrophysics Data System (ADS)
Mlilo, Nhlanganiso; Kaewunruen, Sakdirat
2017-12-01
Currently, precast steel-concrete composite slabs are being considered on railway bridges as a viable alternative replacement for timber sleepers. However, due to their nature and the loading conditions, their behaviour is often complex. Present knowledge of the behaviour of precast steel-concrete composite slabs subjected to rail loading is limited. FEA is an important tool used to simulate real-life behaviour and is widely accepted in many disciplines of engineering as an alternative to experimental test methods, which are often costly and time consuming. This paper seeks to detail FEM of precast steel-concrete slabs subjected to standard in-service loading in high-speed rail, with focus on the importance of accurately defining material properties, element type, mesh size, contacts, interactions, and boundary conditions that will give results representative of real-life behaviour. Initial finite element models show very good results, confirming the accuracy of the modelling procedure.
An inversion strategy for energy saving in smart building through wireless monitoring
NASA Astrophysics Data System (ADS)
Anselmi, N.; Moriyama, T.
2017-10-01
Building plants represent one of the main sources of power consumption and greenhouse gas emissions in urban scenarios. The efficiency of energy management is also related to the indoor environmental conditions that determine user comfort. The constant monitoring of comfort indicators enables the accurate management of building plants, with the final objective of reducing energy waste and satisfying user needs. This paper presents an inversion methodology based on support vector regression for the reconstruction and forecasting of the thermal comfort of users starting from the indoor environmental features of the building. The environmental monitoring is performed by means of a wireless sensor network, which pervasively measures the spatial variability of indoor conditions. The proposed system has been experimentally validated at a real test site to assess its advantages and limitations in supporting the management of building plants towards energy saving.
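A minimal sketch of such a support-vector-regression comfort estimator, assuming scikit-learn is available; the two features, the synthetic "comfort index", and all hyperparameters are illustrative assumptions, not the paper's trained model:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Hypothetical sensor features: air temperature (degC), relative humidity (%)
X = np.column_stack([rng.uniform(16, 30, 300), rng.uniform(20, 70, 300)])
# Hypothetical comfort index: 0 is neutral at 22 degC and 45 %RH
y = 0.25 * (X[:, 0] - 22.0) + 0.01 * (X[:, 1] - 45.0) + rng.normal(0, 0.05, 300)

# Train on the first 250 readings, then forecast comfort for the rest
model = SVR(kernel="linear", C=10.0, epsilon=0.01).fit(X[:250], y[:250])
pred = model.predict(X[250:])
rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
```

In the deployed system the features would be the wireless-network measurements, and the regression target a measured or survey-derived comfort indicator.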
NASA Astrophysics Data System (ADS)
Saide, Pablo E.; Carmichael, Gregory R.; Spak, Scott N.; Gallardo, Laura; Osses, Axel E.; Mena-Carrasco, Marcelo A.; Pagowski, Mariusz
2011-05-01
This study presents a system to predict high pollution events that develop in connection with enhanced subsidence due to coastal lows, particularly in winter over Santiago de Chile. An accurate forecast of these episodes is of interest since the local government is entitled by law to take actions in advance to prevent public exposure to PM10 concentrations in excess of 150 μg m⁻³ (24 h running averages). The forecasting system is based on accurately simulating carbon monoxide (CO) as a PM10/PM2.5 surrogate, since during episodes and within the city there is a high correlation (over 0.95) among these pollutants. Thus, by accurately forecasting CO, which behaves nearly as a passive tracer at this scale, a PM estimate can be made without involving aerosol-chemistry modeling. Nevertheless, the very stable nocturnal conditions over steep topography associated with maxima in concentrations are hard to represent in models. Here we propose a forecast system based on the WRF-Chem model with optimum settings, determined through extensive testing, that best describe both the meteorological and air quality measurements available. Some of the important configuration choices involve the boundary layer (PBL) scheme, model grid resolution (both vertical and horizontal), meteorological initial and boundary conditions, and the spatial and temporal distribution of the emissions. A forecast for the 2008 winter is performed, showing that this forecasting system is able to perform similarly to the authority decision for PM10 and better than persistence when forecasting PM10 and PM2.5 high pollution episodes. Problems regarding false alarm predictions could be related to different uncertainties in the model, such as day-to-day emission variability, inability of the model to completely resolve the complex topography, and inaccuracy in meteorological initial and boundary conditions.
Finally, according to our simulations, emissions from previous days dominate episode concentrations, which highlights the need for 48 h forecasts that can be achieved by the system presented here. This is in fact the largest advantage of the proposed system.
Influence of different land surfaces on atmospheric conditions measured by a wireless sensor network
NASA Astrophysics Data System (ADS)
Lengfeld, Katharina; Ament, Felix
2010-05-01
Atmospheric conditions close to the surface, like temperature, wind speed and humidity, vary on small scales because of surface heterogeneities. Therefore, the traditional measuring approach of using a single, highly accurate station is of limited representativeness for a larger domain, because it is not able to capture these small-scale variabilities. However, both the variability and the domain averages are important information for the development and validation of atmospheric models and soil-vegetation-atmosphere-transfer (SVAT) schemes. Due to progress in microelectronics, it is possible to construct networks of comparably cheap meteorological stations with moderate accuracy. Such a network provides data at high spatial and temporal resolution. The EPFL Lausanne developed such a network, called SensorScope, consisting of low-cost autonomous stations. Each station observes air and surface temperature, humidity, wind direction and speed, incoming solar radiation, precipitation, soil moisture, and soil temperature, and sends the data via radio communication to a base station. This base station forwards the collected data via GSM/GPRS to a central server. Within the FLUXPAT project, in August 2009 we deployed 15 stations as a twin transect near Jülich, Germany. One aim of this first experiment was to test the quality of the low-cost sensors by comparing them to more accurate reference measurements. It turned out that, although the network is not highly accurate, the measurements are consistent. Consequently, an analysis of the pattern of atmospheric conditions is feasible. For example, we detect a variability of ± 0.5 K in the mean temperature at a distance of only 2.3 km. The transect covers different types of vegetation and a small river. Therefore, we analyzed the influence of different land surfaces and of the distance to the river on meteorological conditions. On the one hand, some results meet our expectations, e.g.
the relative humidity decreases with increasing distance to the river. But on the other hand we found unexpected anomalies in the air temperature, which will be discussed in detail by selected case studies.
Repairable-conditionally repairable damage model based on dual Poisson processes.
Lind, B K; Persson, L M; Edgren, M R; Hedlöf, I; Brahme, A
2003-09-01
The advent of intensity-modulated radiation therapy makes it increasingly important to model the response accurately when large volumes of normal tissues are irradiated by controlled graded dose distributions aimed at maximizing tumor cure and minimizing normal tissue toxicity. The cell survival model proposed here is very useful and flexible for accurate description of the response of healthy tissues as well as tumors in classical and truly radiobiologically optimized radiation therapy. The repairable-conditionally repairable (RCR) model distinguishes between two different types of damage: the potentially repairable, which may also be lethal if unrepaired or misrepaired, and the conditionally repairable, which may be repaired or may lead to apoptosis if it has not been repaired correctly. While potentially repairable damage is being repaired, for example by nonhomologous end joining, conditionally repairable damage may in addition require a high-fidelity correction by homologous repair. The induction of both types of damage is assumed to be described by Poisson statistics. The resultant cell survival expression has the unique ability to fit most experimental data well at low doses (the initial hypersensitive range), intermediate doses (on the shoulder of the survival curve), and high doses (on the quasi-exponential region of the survival curve). The complete Poisson expression can be approximated well by a simple bi-exponential cell survival expression, S(D) = e^(-aD) + bD·e^(-cD), where the first term describes the survival of undamaged cells and the second term represents survival after complete repair of sublethal damage. The bi-exponential expression makes it easy to derive D0, Dq, n and α/β values to facilitate comparison with classical cell survival models.
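The bi-exponential approximation S(D) = e^(−aD) + bD·e^(−cD) is straightforward to evaluate; in the sketch below the parameter values are arbitrary illustrative choices, not fitted values from the paper:

```python
import math

def rcr_survival(dose, a=1.0, b=0.8, c=0.5):
    """RCR bi-exponential survival (illustrative parameters).
    First term: cells that escaped lethal damage entirely.
    Second term: damaged cells that were completely repaired."""
    return math.exp(-a * dose) + b * dose * math.exp(-c * dose)

doses = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]   # Gy, illustrative
curve = [rcr_survival(d) for d in doses]
```

With these parameters the curve starts at S(0) = 1, flattens over a shoulder region as the repair term bD·e^(−cD) grows, and decays quasi-exponentially at high dose — the qualitative behavior described above.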
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied the performance estimated in the absence of caffeine by a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine). We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
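The multiplicative structure described in Methods can be sketched as follows. The one-compartment pharmacokinetics, the linear "lapses" baseline, and every constant below are illustrative placeholders, not the UMP's published parameterization:

```python
import math

def caffeine_factor(t_hr, dose_mg, t_dose_hr, k_a=1.0, k_e=0.14, amp=0.002):
    """Dose-dependent multiplicative factor in (0, 1]; constants illustrative.
    One-compartment PK: absorption rate k_a, elimination rate k_e."""
    dt = t_hr - t_dose_hr
    if dt < 0:
        return 1.0                        # no effect before the dose
    conc = dose_mg * (math.exp(-k_e * dt) - math.exp(-k_a * dt))
    return max(0.0, 1.0 - amp * conc)     # caffeine scales impairment down

def predicted_lapses(t_hr, baseline=5.0, rate=0.4):
    """Toy stand-in for the caffeine-free performance prediction:
    vigilance lapses grow with time awake."""
    return baseline + rate * t_hr

def lapses_with_caffeine(t_hr, dose_mg, t_dose_hr):
    # Multiplicative combination, as hypothesized in the study
    return predicted_lapses(t_hr) * caffeine_factor(t_hr, dose_mg, t_dose_hr)
```

The factor equals 1 before the dose and drifts back toward 1 as caffeine is eliminated, so the caffeine-free prediction is recovered in both limits.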
NASA Astrophysics Data System (ADS)
O'Neill, A.; Erikson, L. H.; Barnard, P.
2013-12-01
While Global Climate Models (GCMs) provide useful projections of near-surface wind vectors into the 21st century, their resolution is not sufficient for use in regional wave modeling. Statistically downscaled GCM projections from Multivariate Adaptive Constructed Analogues (MACA) provide daily near-surface winds at an appropriate spatial resolution for wave modeling within San Francisco Bay. Using 30 years (1975-2004) of climatological data from four representative stations around San Francisco Bay, a library of example daily wind conditions for four corresponding over-water sub-regions is constructed. Empirical cumulative distribution functions (ECDFs) of station conditions are compared to MACA GFDL hindcasts to create correction factors, which are then applied to 21st-century MACA wind projections. For each projection day, a best-match example is identified via least-squares error among all stations in the library. The best match's daily variation in velocity components (u/v) is used as an analogue of representative wind variation and is applied at 3-hour increments about the corresponding sub-region's projected u/v values. High-temporal-resolution reconstructions using this methodology on hindcast MACA fields from 1975-2004 accurately recreate extreme wind values within San Francisco Bay, and because these extremes in wind forcing are of key importance in wave and subsequent coastal flood modeling, this represents a valuable method of generating near-surface wind vectors for use in coastal flood modeling.
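The ECDF-based correction step above is a form of quantile mapping. A minimal sketch, with synthetic stand-in data (the real inputs would be station observations and MACA hindcast winds): each projected value is assigned its quantile in the hindcast ECDF and then mapped to the observed value at the same quantile.

```python
import numpy as np

def quantile_map(model_hindcast, observed, projected):
    """Bias-correct projected values by ECDF matching (quantile mapping)."""
    model_sorted = np.sort(model_hindcast)
    obs_sorted = np.sort(observed)
    # quantile of each projected value within the model-hindcast ECDF
    q = np.searchsorted(model_sorted, projected, side="right") / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    # observed value at the same quantile
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)        # stand-in for station wind speeds
model = obs * 0.8 + 1.0                # hindcast with a systematic bias
corrected = quantile_map(model, obs, model)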
Claassen, Hans C.
1982-01-01
Obtaining ground-water samples that accurately represent the water chemistry of an aquifer is a complex task. Before a ground-water sampling program can be started, an understanding of the kind of chemical data needed and the potential changes in water chemistry resulting from various drilling, well-completion, and sampling techniques is needed. This report provides a basis for such an evaluation and permits a choice of techniques that will result in obtaining the best possible data for the time and money allocated.
Mammalian choices: combining fast-but-inaccurate and slow-but-accurate decision-making systems.
Trimmer, Pete C; Houston, Alasdair I; Marshall, James A R; Bogacz, Rafal; Paul, Elizabeth S; Mendl, Mike T; McNamara, John M
2008-10-22
Empirical findings suggest that the mammalian brain has two decision-making systems that act at different speeds. We represent the faster system using standard signal detection theory. We represent the slower (but more accurate) cortical system as the integration of sensory evidence over time until a certain level of confidence is reached. We then consider how two such systems should be combined optimally for a range of information linkage mechanisms. We conclude with some performance predictions that will hold if our representation is realistic.
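The two systems contrasted above can be caricatured in a few lines. This is an illustrative simulation under assumed parameters (drift, bound, noise), not the authors' formal analysis: the fast system thresholds a single noisy sample, while the slow system accumulates evidence to a confidence bound.

```python
import random

def fast_decision(evidence_sample, criterion=0.0):
    """Fast system: compare one noisy sample against a fixed criterion
    (standard signal detection)."""
    return evidence_sample > criterion

def slow_decision(drift, bound=3.0, noise=1.0, rng=None):
    """Slow system: accumulate noisy evidence until a confidence bound is
    reached (sequential sampling); returns (choice, steps taken)."""
    rng = rng or random.Random(0)
    total, steps = 0.0, 0
    while abs(total) < bound:
        total += drift + rng.gauss(0.0, noise)
        steps += 1
    return total > 0, steps

# Signal present (positive drift): the slow system is more accurate,
# at the cost of extra sampling steps.
fast_correct = sum(fast_decision(random.Random(i).gauss(0.5, 1.0))
                   for i in range(200))
slow_correct = sum(slow_decision(0.5, rng=random.Random(i))[0]
                   for i in range(200))
```

The speed-accuracy trade-off is explicit here: raising `bound` makes the slow system more accurate but increases the step count, which is the quantity an optimal combination of the two systems must weigh.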
Magnetic resonance imaging of optic nerve
Gala, Foram
2015-01-01
Optic nerves are the second pair of cranial nerves and are unique as they represent an extension of the central nervous system. Apart from clinical and ophthalmoscopic evaluation, imaging, especially magnetic resonance imaging (MRI), plays an important role in the complete evaluation of optic nerve and the entire visual pathway. In this pictorial essay, the authors describe segmental anatomy of the optic nerve and review the imaging findings of various conditions affecting the optic nerves. MRI allows excellent depiction of the intricate anatomy of optic nerves due to its excellent soft tissue contrast without exposure to ionizing radiation, better delineation of the entire visual pathway, and accurate evaluation of associated intracranial pathologies. PMID:26752822
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doubrawa Moreira, Paula; Annoni, Jennifer; Jonkman, Jason
FAST.Farm is a medium-fidelity wind farm modeling tool that can be used to assess power and loads contributions of wind turbines in a wind farm. The objective of this paper is to undertake a calibration procedure to set the user parameters of FAST.Farm to accurately represent results from large-eddy simulations. The results provide an in-depth analysis of the comparison of FAST.Farm and large-eddy simulations before and after calibration. The comparison of FAST.Farm and large-eddy simulation results is presented with respect to streamwise and radial velocity components as well as wake-meandering statistics (mean and standard deviation) in the lateral and vertical directions under different atmospheric and turbine operating conditions.
Penetrating cardiac injury by wire thrown from a lawn mower.
Rubio, P A; Reul, G J
1979-01-01
The first successful surgically treated case of penetrating heart injury, specifically of the right ventricle, caused by a fragment of coat-hanger wire thrown by a lawn mower is reported. Although traumatic heart injuries are rare, this case illustrates accurate surgical management and judgment, especially in the preoperative phase, which resulted in early operation and an excellent postoperative outcome. We believe that if the patient can be transferred safely to the operating room, the mortality rate is considerably lowered; however, emergency room thoracotomy, which will undoubtedly yield a greater survival rate from these dramatic injuries, should be performed in the emergency center if cardiac activity ceases or the patient's condition deteriorates considerably.
A New Framework for Quantifying Lidar Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer, F.; Clifton, Andrew; Bonin, Timothy A.
2017-03-24
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.
Voluntary modulation of anterior cingulate response to negative feedback.
Shane, Matthew S; Weywadt, Christina R
2014-01-01
Anterior cingulate and medial frontal cortex (dACC/mFC) response to negative feedback represents the actions of a generalized error-monitoring system critical for the management of goal-directed behavior. Magnitude of dACC/mFC response to negative feedback correlates with levels of post-feedback behavioral change, and with proficiency of operant learning processes. With this in mind, it follows that an ability to alter dACC/mFC response to negative feedback may lead to representative changes in operant learning proficiency. To this end, the present study investigated the extent to which healthy individuals would show modulation of their dACC/mFC response when instructed to try to either maximize or minimize their neural response to the presentation of contingent negative feedback. Participants performed multiple runs of a standard time-estimation task, during which they received feedback regarding their ability to accurately estimate a one-second duration. On Watch runs, participants were simply instructed to try to estimate as closely as possible the one second duration. On Increase and Decrease runs, participants performed the same task, but were instructed to "try to increase [decrease] their brain's response every time they received negative feedback". Results indicated that participants showed changes in dACC/mFC response under these differing instructional conditions: dACC/mFC activity following negative feedback was higher in the Increase condition, and dACC activity trended lower in the Decrease condition, compared to the Watch condition. Moreover, dACC activity correlated with post-feedback performance adjustments, and these adjustments were highest in the Increase condition. Potential implications for neuromodulation and facilitated learning are discussed.
Sana, Theodore R; Roark, Joseph C; Li, Xiangdong; Waddell, Keith; Fischer, Steven M
2008-09-01
In an effort to simplify and streamline compound identification from metabolomics data generated by liquid chromatography time-of-flight mass spectrometry, we have created software for constructing Personalized Metabolite Databases with content from over 15,000 compounds pulled from the public METLIN database (http://metlin.scripps.edu/). Moreover, we have added extra functionalities to the database that (a) permit the addition of user-defined retention times as an orthogonal searchable parameter to complement accurate mass data; and (b) allow interfacing to separate software, a Molecular Formula Generator (MFG), that facilitates reliable interpretation of any database matches from the accurate mass spectral data. To test the utility of this identification strategy, we added retention times to a subset of masses in this database, representing a mixture of 78 synthetic urine standards. The synthetic mixture was analyzed and screened against this METLIN urine database, resulting in 46 accurate mass and retention time matches. Human urine samples were subsequently analyzed under the same analytical conditions and screened against this database. A total of 1387 ions were detected in human urine; 16 of these ions matched both accurate mass and retention time parameters for the 78 urine standards in the database. Another 374 had only an accurate mass match to the database, with 163 of those masses also having the highest MFG score. Furthermore, MFG calculated a formula for a further 849 ions that had no match to the database. Taken together, these results suggest that the METLIN Personal Metabolite database and MFG software offer a robust strategy for confirming the formula of database matches. In the event of no database match, it also suggests possible formulas that may be helpful in interpreting the experimental results.
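The accurate-mass-plus-retention-time lookup described above can be sketched as a simple two-tolerance match. The database entries, tolerances, and compound values here are hypothetical placeholders, not METLIN content; the point is the matching logic, where an entry without a user-defined retention time can only ever yield a mass-only match.

```python
def match_compound(mz, rt_min, database, ppm_tol=10.0, rt_tol=0.5):
    """Match an observed ion against (name, exact_mass, rt) database entries.

    An entry matches on accurate mass alone, or on mass plus retention time
    when the entry carries a user-defined RT (tolerances are illustrative).
    """
    hits = []
    for name, exact_mass, rt in database:
        if abs(mz - exact_mass) / exact_mass * 1e6 <= ppm_tol:
            if rt is None:
                hits.append((name, "mass-only"))
            elif abs(rt_min - rt) <= rt_tol:
                hits.append((name, "mass+RT"))
    return hits

# Hypothetical personal database: two isobaric entries, one with a user RT
db = [
    ("creatinine", 113.0589, 1.2),
    ("hippuric acid", 179.0582, 4.8),
    ("unknown isomer", 179.0582, None),
]
result = match_compound(179.0585, 4.6, db)
```

The example shows why retention time is a useful orthogonal parameter: two isobaric entries are indistinguishable by accurate mass alone, but only one of them also matches on RT.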
Ferguson, William J; Louie, Richard F; Tang, Chloe S; Paw U, Kyaw Tha; Kost, Gerald J
2014-02-01
During disasters and complex emergencies, environmental conditions can adversely affect the performance of point-of-care (POC) testing. Knowledge of these conditions can help device developers and operators understand the significance of temperature and humidity limits necessary for use of POC devices. First responders will benefit from improved performance for on-site decision making. To create dynamic temperature and humidity profiles that can be used to assess the environmental robustness of POC devices, reagents, and other resources (eg, drugs), and thereby, to improve preparedness. Surface temperature and humidity data from the National Climatic Data Center (Asheville, North Carolina USA) was obtained, median hourly temperature and humidity were calculated, and then mathematically stretched profiles were created to include extreme highs and lows. Profiles were created for: (1) Banda Aceh, Indonesia at the time of the 2004 Tsunami; (2) New Orleans, Louisiana USA just before and after Hurricane Katrina made landfall in 2005; (3) Springfield, Massachusetts USA for an ambulance call during the month of January 2009; (4) Port-au-Prince, Haiti following the 2010 earthquake; (5) Sendai, Japan for the March 2011 earthquake and tsunami with comparison to the colder month of January 2011; (6) New York, New York USA after Hurricane Sandy made landfall in 2012; and (7) a 24-hour rescue from Hawaii USA to the Marshall Islands. Profiles were validated by randomly selecting 10 days and determining if (1) temperature and humidity points fell inside and (2) daily variations were encompassed. Mean kinetic temperatures (MKT) were also assessed for each profile. Profiles accurately modeled conditions during emergency and disaster events and enclosed 100% of maximum and minimum temperature and humidity points. Daily variations also were represented well with 88.6% (62/70) of temperature readings and 71.1% (54/70) of relative humidity readings falling within diurnal patterns. 
Days not represented well primarily had continuously high humidity. Mean kinetic temperature was useful for severity ranking. Simulating temperature and humidity conditions clearly reveals operational challenges encountered during disasters and emergencies. Understanding of environmental stresses and MKT leads to insights regarding operational robustness necessary for safe and accurate use of POC devices and reagents. Rescue personnel should understand these principles before performing POC testing in adverse environments.
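Mean kinetic temperature, used above for severity ranking, is the single temperature that produces the same degradation as a fluctuating profile under Arrhenius kinetics. A minimal sketch using the conventional activation energy of about 83.144 kJ/mol (the temperature profile is illustrative, not one of the study's disaster profiles):

```python
import math

def mean_kinetic_temperature(temps_c, delta_h=83.144e3, r=8.314):
    """Mean kinetic temperature in deg C via the standard Arrhenius weighting.

    delta_h: activation energy (J/mol); r: gas constant (J/mol/K).
    """
    temps_k = [t + 273.15 for t in temps_c]
    mean_exp = sum(math.exp(-delta_h / (r * t)) for t in temps_k) / len(temps_k)
    return (delta_h / r) / (-math.log(mean_exp)) - 273.15

# A hot day with a cool night: MKT exceeds the arithmetic mean because
# high-temperature excursions dominate degradation kinetics.
profile = [20, 25, 35, 40, 35, 25]
mkt = mean_kinetic_temperature(profile)
avg = sum(profile) / len(profile)
```

This is why MKT is a better severity metric for reagent stress than a simple average: two profiles with identical mean temperature can have very different MKT if one spends more time at the hot extreme.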
Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C
2011-05-01
Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grills and at various points near the source. Then, velocity and concentration fields were simulated by 3-dimensional steady-state CFD using 295K tetrahedral cells, the k-ε turbulence model, standard wall function, and convergence criteria of 10(-6) for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially emission characteristics at the contaminant source, in order to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with highest worker exposure potential-within 1.3 m of the source center-the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.
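The emission-rate step above rests on a steady-state mass balance. A simplified well-mixed sketch, not the paper's exact balance: with an effective ventilation flow Q through the near field, the source strength is E = Q * (C_near - C_upwind). All concentrations and the flow value below are hypothetical.

```python
def emission_rate(c_near_mgm3, c_upwind_mgm3, q_m3s):
    """Steady-state mass balance: E = Q * (mean C_near - C_upwind).

    c_near_mgm3: list of near-field concentration samples (mg/m^3)
    c_upwind_mgm3: single upwind concentration (mg/m^3)
    q_m3s: effective ventilation flow through the near field (m^3/s)
    Returns the emission rate in mg/s.
    """
    c_near = sum(c_near_mgm3) / len(c_near_mgm3)
    return q_m3s * (c_near - c_upwind_mgm3)

# six hypothetical near-field IAA samples, one upwind sample, 0.5 m^3/s
e = emission_rate([12.0, 15.0, 13.0, 14.0, 16.0, 14.0], 2.0, 0.5)
```

The estimated E then becomes the source boundary condition for the CFD run; as the abstract notes, how that mass is released (undiluted vapor versus a diluted, spatially extended source) matters as much as its magnitude.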
Left atrial strain: a new parameter for assessment of left ventricular filling pressure.
Cameli, Matteo; Mandoli, Giulia Elena; Loiacono, Ferdinando; Dini, Frank Lloyd; Henein, Michael; Mondillo, Sergio
2016-01-01
In order to obtain accurate diagnosis, treatment and prognostication in many cardiac conditions, there is a need for assessment of left ventricular (LV) filling pressure. While systole depends on ejection function of LV, diastole and its disturbances influence filling function and pressures. The commonest condition that represents the latter is heart failure with preserved ejection fraction in which LV ejection is maintained, but diastole is disturbed and hence filling pressures are raised. Significant diastolic dysfunction results in raised LV end-diastolic pressure, mean left atrial (LA) pressure and pulmonary capillary wedge pressure, all referred to as LV filling pressures. Left and right heart catheterization has traditionally been used as the gold standard investigation for assessing these pressures. More recently, Doppler echocardiography has taken over such application because of its noninvasive nature and for being patient friendly. A number of indices are used to achieve accurate assessment of filling pressures including: LV pulsed-wave filling velocities (E/A ratio, E wave deceleration time), pulmonary venous flow (S wave and D wave), tissue Doppler imaging (E' wave and E/E' ratio) and LA volume index. LA longitudinal strain derived from speckle tracking echocardiography (STE) is also sensitive in estimating intracavitary pressures. It is angle-independent, thus overcomes Doppler limitations and provides highly reproducible measures of LA deformation. This review examines the application of various Doppler echocardiographic techniques in assessing LV filling pressures, in particular the emerging role of STE in assessing LA pressures in various conditions, e.g., HF, arterial hypertension and atrial fibrillation.
Efficient vibration mode analysis of aircraft with multiple external store configurations
NASA Technical Reports Server (NTRS)
Karpel, M.
1988-01-01
A coupling method for efficient vibration mode analysis of aircraft with multiple external store configurations is presented. A set of low-frequency vibration modes, including rigid-body modes, represent the aircraft. Each external store is represented by its vibration modes with clamped boundary conditions, and by its rigid-body inertial properties. The aircraft modes are obtained from a finite-element model loaded by dummy rigid external stores with fictitious masses. The coupling procedure unloads the dummy stores and loads the actual stores instead. The analytical development is presented, the effects of the fictitious mass magnitudes are discussed, and a numerical example is given for a combat aircraft with external wing stores. Comparison with vibration modes obtained by a direct (full-size) eigensolution shows very accurate coupling results. Once the aircraft and stores data bases are constructed, the computer time for analyzing any external store configuration is two to three orders of magnitude less than that of a direct solution.
NASA Astrophysics Data System (ADS)
Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao
2016-12-01
This study aims to expand the application of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, i.e., the CIELAB model and CIECAM02, were used to develop algorithms to predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception.
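The per-pixel pooling idea above can be illustrated with the simpler CIELAB lightness rather than the full CIECAM02 model (which needs viewing-condition parameters). This sketch predicts image brightness as the mean L* of sRGB pixels; the pooling-by-averaging choice is an assumption, one of several plausible methods of the kind the paper compares.

```python
def srgb_to_linear(c):
    """Inverse sRGB companding for one channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def pixel_lightness(r, g, b):
    """CIE L* of one sRGB pixel, via D65 luminance weights."""
    y = (0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g)
         + 0.0722 * srgb_to_linear(b))
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y

def image_brightness(pixels):
    """Predict image brightness as the mean per-pixel lightness
    (a CIELAB-based stand-in for CIECAM02 lightness pooling)."""
    return sum(pixel_lightness(*p) for p in pixels) / len(pixels)

bright = image_brightness([(0.9, 0.9, 0.9), (0.8, 0.8, 0.8)])
dark = image_brightness([(0.2, 0.2, 0.2), (0.1, 0.1, 0.1)])
```

Image colorfulness would follow the same pattern with a chroma measure in place of lightness, which is where the choice of color appearance model starts to matter.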
Kim, Mooeung; Chung, Hoeil
2013-03-07
The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples achieved by the spectral collection under frozen conditions at low temperatures was effective for improving accuracy for the determination of the kinematic viscosity at 40 °C (KV@40). A collection of Raman spectra from samples cooled around -160 °C provided the most accurate measurement of KV@40. Components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that were deformable when these were frozen, and the different structural deformabilities of the components enhanced spectral selectivity among the samples. To study the structural variation of components according to the change of sample temperature from cryogenic to ambient condition, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin for the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 of LBO samples to be more accurately assessed.
Hart, Dennis L; Werneke, Mark W; George, Steven Z; Matheson, James W; Wang, Ying-Chih; Cook, Karon F; Mioduski, Jerome E; Choi, Seung W
2009-08-01
Screening people for elevated levels of fear-avoidance beliefs is uncommon, but elevated levels of fear could worsen outcomes. Developing short screening tools might reduce the data collection burden and facilitate screening, which could prompt further testing or management strategy modifications to improve outcomes. The purpose of this study was to develop efficient yet accurate screening methods for identifying elevated levels of fear-avoidance beliefs regarding work or physical activities in people receiving outpatient rehabilitation. A secondary analysis of data collected prospectively from people with a variety of common neuromusculoskeletal diagnoses was conducted. Intake Fear-Avoidance Beliefs Questionnaire (FABQ) data were collected from 17,804 people who had common neuromusculoskeletal conditions and were receiving outpatient rehabilitation in 121 clinics in 26 states (in the United States). Item response theory (IRT) methods were used to analyze the FABQ data, with particular emphasis on differential item functioning among clinically logical groups of subjects, and to identify screening items. The accuracy of screening items for identifying subjects with elevated levels of fear was assessed with receiver operating characteristic analyses. Three items for fear of physical activities and 10 items for fear of work activities represented unidimensional scales with adequate IRT model fit. Differential item functioning was negligible for variables known to affect functional status outcomes: sex, age, symptom acuity, surgical history, pain intensity, condition severity, and impairment. Items that provided maximum information at the median for the FABQ scales were selected as screening items to dichotomize subjects by high versus low levels of fear. The accuracy of the screening items was supported for both scales. This study represents a retrospective analysis, which should be replicated using prospective designs. 
Future prospective studies should assess the reliability and validity of using one FABQ item to screen people for high levels of fear-avoidance beliefs. The lack of differential item functioning in the FABQ scales in the sample tested in this study suggested that FABQ screening could be useful in routine clinical practice and allowed the development of single-item screening for fear-avoidance beliefs that accurately identified subjects with elevated levels of fear. Because screening was accurate and efficient, single IRT-based FABQ screening items are recommended to facilitate improved evaluation and care of heterogeneous populations of people receiving outpatient rehabilitation.
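The single-item screening accuracy evaluated above reduces, at a fixed cutoff, to a sensitivity/specificity calculation against the full-scale reference standard. A minimal sketch with invented scores and labels (the FABQ uses 0-6 Likert items; the cutoff here is arbitrary, not the IRT-derived one):

```python
def screening_accuracy(item_scores, high_fear, cutoff):
    """Sensitivity and specificity of a single-item screen at one cutoff.

    item_scores: each person's score on one screening item (0-6)
    high_fear:  reference-standard labels from the full FABQ scale
    """
    tp = sum(s >= cutoff and h for s, h in zip(item_scores, high_fear))
    fn = sum(s < cutoff and h for s, h in zip(item_scores, high_fear))
    tn = sum(s < cutoff and not h for s, h in zip(item_scores, high_fear))
    fp = sum(s >= cutoff and not h for s, h in zip(item_scores, high_fear))
    return tp / (tp + fn), tn / (tn + fp)

# Invented data: item scores track the full-scale label perfectly here
scores = [0, 1, 2, 3, 4, 5, 6, 6, 5, 1]
labels = [False, False, False, False, True, True, True, True, True, False]
sens, spec = screening_accuracy(scores, labels, cutoff=4)
```

Sweeping `cutoff` over the item's score range and plotting sensitivity against 1 - specificity yields the receiver operating characteristic curve used in the study to support the screening items.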
Zhang, Chao; Santhanagopalan, Shriram; Sprague, Michael A.; ...
2015-08-29
The safety behavior of lithium-ion batteries under external mechanical crush is a critical concern, especially during large scale deployment. We previously presented a sequentially coupled mechanical-electrical-thermal modeling approach for studying mechanical abuse induced short circuit. Here in this work, we study different mechanical test conditions and examine the interaction between mechanical failure and electrical-thermal responses, by developing a simultaneous coupled mechanical-electrical-thermal model. The present work utilizes a single representative-sandwich (RS) to model the full pouch cell with explicit representations for each individual component such as the active material, current collector, separator, etc. Anisotropic constitutive material models are presented to describe the mechanical properties of active materials and separator. The model predicts accurately the force-strain response and fracture of battery structure, simulates the local failure of separator layer, and captures the onset of short circuit for lithium-ion battery cell under sphere indentation tests with three different diameters. Electrical-thermal responses to the three different indentation tests are elaborated and discussed. Lastly, numerical studies are presented to show the potential impact of test conditions on the electrical-thermal behavior of the cell after the occurrence of short circuit.
Constitutive Soil Properties for Unwashed Sand and Kennedy Space Center
NASA Technical Reports Server (NTRS)
Thomas, Michael A.; Chitty, Daniel E.; Gildea, Martin L.; T'Kindt, Casey M.
2008-01-01
Accurate soil models are required for numerical simulations of land landings for the Orion Crew Exploration Vehicle. This report provides constitutive material models for one soil, unwashed sand, from NASA Langley's gantry drop test facility and three soils from Kennedy Space Center (KSC). The four soil models are based on mechanical and compressive behavior observed during geotechnical laboratory testing of remolded soil samples. The test specimens were reconstituted to measured in situ density and moisture content. Tests included: triaxial compression, hydrostatic compression, and uniaxial strain. A fit to the triaxial test results defines the strength envelope. Hydrostatic and uniaxial tests define the compressibility. The constitutive properties are presented in the format of LS-DYNA Material Model 5: Soil and Foam. However, the laboratory test data provided can be used to construct other material models. The four soil models are intended to be specific to the soil conditions discussed in the report. The unwashed sand model represents clayey sand at high density. The KSC models represent three distinct coastal sand conditions: low density dry sand, high density in-situ moisture sand, and high density flooded sand. It is possible to approximate other sands with these models, but the results would be unverified without geotechnical tests to confirm similar soil behavior.
NASA Astrophysics Data System (ADS)
Georgiadis, A.; Berg, S.; Makurat, A.; Maitland, G.; Ott, H.
2013-09-01
We investigated the cluster-size distribution of the residual nonwetting phase in a sintered glass-bead porous medium at two-phase flow conditions, by means of micro-computed-tomography (μCT) imaging with pore-scale resolution. Cluster-size distribution functions and cluster volumes were obtained by image analysis for a range of injected pore volumes under both imbibition and drainage conditions; the field of view was larger than the porosity-based representative elementary volume (REV). We did not attempt to define a two-phase REV but instead used the nonwetting-phase cluster-size distribution as an indicator. Most of the nonwetting-phase total volume was found to be contained in clusters that were one to two orders of magnitude larger than the porosity-based REV. The largest observed clusters in fact ranged in volume from 65% to 99% of the entire nonwetting phase in the field of view. As a consequence, the largest clusters observed were statistically not represented and were found to be smaller than the estimated maximum cluster length. The results indicate that the two-phase REV is larger than the field of view attainable by μCT scanning, at a resolution which allows for the accurate determination of cluster connectivity.
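The cluster-volume analysis above is, computationally, connected-component labeling of the segmented image. A toy 2D sketch (the real analysis is 3D, typically face-connected voxels in the segmented μCT volume):

```python
from collections import deque

def cluster_volumes(grid):
    """Volumes (cell counts) of face-connected nonwetting-phase clusters.

    grid: nested lists of 0/1 where 1 marks nonwetting phase, a 2D toy
    stand-in for the segmented 3D micro-CT image.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    volumes = []
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] and not seen[i][j]:
                queue, size = deque([(i, j)]), 0
                seen[i][j] = True
                while queue:                      # breadth-first flood fill
                    r, c = queue.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                volumes.append(size)
    return sorted(volumes, reverse=True)

phase = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
]
vols = cluster_volumes(phase)
largest_fraction = vols[0] / sum(vols)
```

The fraction of total nonwetting volume held by the largest cluster is exactly the statistic the abstract reports (65% to 99%), and its sensitivity to the field of view is what flags the two-phase REV problem.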
Mass Transport in the Warm, Dense Matter and High-Energy Density Regimes
NASA Astrophysics Data System (ADS)
Kress, J. D.; Burakovsky, L.; Ticknor, C.; Collins, L. A.; Lambert, F.
2011-10-01
Large-scale hydrodynamical simulations of fluids and plasmas under extreme conditions require knowledge of certain microscopic properties such as diffusion and viscosity in addition to the equation-of-state. To determine these dynamical properties, we employ quantum molecular dynamical (MD) simulations on large samples of atoms. The method has several advantages: 1) static, dynamical, and optical properties are produced consistently from the same simulations, and 2) mixture properties arise in a natural way since all intra- and inter-particle interactions are properly represented. We utilize two forms of density functional theory: 1) Kohn-Sham (KS-DFT) and 2) orbital-free (OF-DFT). KS-DFT is computationally intense due to its reliance on an orbital representation. As the temperature rises, the Thomas-Fermi approximation in OF-DFT begins to represent accurately the density functional, and provides an efficient and systematic means for extending the quantum simulations to very hot conditions. We have performed KS-DFT and OF-DFT calculations of the self-diffusion, mutual diffusion and shear viscosity for Al, Li, H, and LiH. We examine trends in these quantities and compare to more approximate forms such as the one-component plasma model. We also determine the validity of mixing rules that combine the properties of pure species into a composite result.
NASA Astrophysics Data System (ADS)
Ji, Songsong; Yang, Yibo; Pang, Gang; Antoine, Xavier
2018-01-01
The aim of this paper is to design accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations in rectangular domains. The Laplace transform in time and the discrete Fourier transform in space are applied to obtain the Green's functions of the semi-discretized equations in unbounded domains with a single source. An algorithm is given to compute these Green's functions accurately through recurrence relations. Furthermore, the finite-difference method is used to discretize the reduced problem with the accurate boundary conditions. Numerical simulations are presented to illustrate the accuracy of our method for the linear Schrödinger and heat equations. It is shown that the reflection at the corners is correctly eliminated.
20 Meter Solar Sail Analysis and Correlation
NASA Technical Reports Server (NTRS)
Taleghani, B.; Lively, P.; Banik, J.; Murphy, D.; Trautt, T.
2005-01-01
This presentation discusses studies conducted to determine the element type and size that best represent a 20-meter solar sail under ground-test load conditions, test/analysis correlation using a static shape optimization method for the Q4 sail, and system dynamics. TRIA3 elements represent wrinkle patterns better than QUAD4 elements do, baseline ten-inch elements are small enough to accurately represent the sail shape, and the baseline TRIA3 mesh requires a reasonable computation time of 8 min 21 sec. In the test/analysis correlation using the static shape optimization method for the Q4 sail, ten parameters were chosen and varied during optimization, and 300 sail models were created with random parameters. A response surface for each target was created based on the varied parameters, and the parameters were then optimized on the response surfaces. Deflection shape comparisons for 0 and 22.5 degrees yielded errors of 4.3% and 2.1%, respectively. For the system dynamics study, testing was done on the booms without the sails attached. The nominal boom properties produced a good correlation to test data; frequencies were within 10%. Boom-dominated analysis frequencies and modes compared well with the test results.
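The response-surface step (sample the expensive model at random parameters, fit a cheap surface, optimize on the surface) can be sketched as follows. The two-parameter error function and all names here are hypothetical stand-ins for the finite-element sail model and its ten parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the expensive finite-element sail model: the error between
# predicted and measured deflection as a function of two model parameters.
def fe_error(p1, p2):
    return (p1 - 0.3) ** 2 + 2.0 * (p2 + 0.1) ** 2 + 0.05 * p1 * p2

# Step 1: run the "expensive" model at random parameter samples
# (the study used 300 random sail models; 50 suffice for two parameters).
P = rng.uniform(-1, 1, size=(50, 2))
y = fe_error(P[:, 0], P[:, 1])

# Step 2: fit a full quadratic response surface by least squares.
X = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0] ** 2, P[:, 1] ** 2, P[:, 0] * P[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 3: optimize against the cheap surrogate instead of the FE model.
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
surf = (coef[0] + coef[1] * G1 + coef[2] * G2
        + coef[3] * G1 ** 2 + coef[4] * G2 ** 2 + coef[5] * G1 * G2)
i, j = np.unravel_index(np.argmin(surf), surf.shape)
best_p1, best_p2 = G1[i, j], G2[i, j]
print(best_p1, best_p2)
```

The payoff is that the surrogate can be evaluated millions of times for the cost of a handful of finite-element runs, which is what makes optimizing ten parameters over hundreds of sail models tractable.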
Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations
NASA Technical Reports Server (NTRS)
Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick
2017-01-01
Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into the types and levels of manual piloted control excitation that would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This included assessing the practicality of the pilot providing this excitation when cued, and understanding whether pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by five commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs alone as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as simply performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.
Colour analysis and verification of CCTV images under different lighting conditions
NASA Astrophysics Data System (ADS)
Smith, R. A.; MacLennan-Brown, K.; Tighe, J. F.; Cohen, N.; Triantaphillidou, S.; MacDonald, L. W.
2008-01-01
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects, it is beneficial to be able to account accurately for the changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are the illumination and the colour, orientation and texture of objects. The effects of illumination on the apparent colour of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.
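Characterising a stage by its response to a known input often reduces, in the simplest linear case, to fitting a colour-correction matrix between recorded and reference patch values. A minimal sketch, in which the 3x3 "device" matrix and all patch values are invented purely to generate example data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical characterisation data: reference RGB values for 24 test-chart
# patches, and the values a CCTV camera stage recorded for them.
reference = rng.uniform(0.0, 1.0, size=(24, 3))
M_device = np.array([[0.90, 0.10, 0.00],     # invented device response
                     [0.05, 0.80, 0.10],
                     [0.00, 0.15, 0.70]])
recorded = reference @ M_device.T + 0.002 * rng.standard_normal((24, 3))

# Characterise the stage: fit a 3x3 correction matrix mapping recorded values
# back to reference values by least squares. Applying it is one step of
# "working back along the imaging chain".
M_fit, *_ = np.linalg.lstsq(recorded, reference, rcond=None)
restored = recorded @ M_fit

rms_error = np.sqrt(np.mean((restored - reference) ** 2))
print(rms_error)
```

Real camera stages are nonlinear (gamma, white balance, clipping), so in practice this linear fit would be preceded by a linearisation step, but the measure-known-input-then-invert idea is the same.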
NASA Technical Reports Server (NTRS)
Jewett, M. E.; Duffy, J. F.; Czeisler, C. A.
2000-01-01
A double-stimulus experiment was conducted to evaluate the phase of the underlying circadian clock following light-induced phase shifts of the human circadian system. Circadian phase was assayed by constant routine from the rhythm in core body temperature before and after a three-cycle bright-light stimulus applied near the estimated minimum of the core body temperature rhythm. An identical, consecutive three-cycle light stimulus was then applied, and phase was reassessed. Phase shifts to these consecutive stimuli were no different from those obtained in a previous study following light stimuli applied under steady-state conditions over a range of circadian phases similar to those at which the consecutive stimuli were applied. These data suggest that circadian phase shifts of the core body temperature rhythm in response to a three-cycle stimulus occur within 24 h following the end of the 3-day light stimulus and that this poststimulus temperature rhythm accurately reflects the timing of the underlying circadian clock.
Ocean color remote sensing using polarization properties of reflected sunlight
NASA Technical Reports Server (NTRS)
Frouin, R.; Pouliquen, E.; Breon, F.-M.
1994-01-01
The effects of the atmosphere and surface on sunlight backscattered to space by the ocean may be substantially reduced by using the unpolarized component of reflectance instead of total reflectance. At 450 nm, a wavelength of interest in ocean color remote sensing, and for typical conditions, 45% of the unpolarized reflectance may originate from the water body, compared with 20% of the total reflectance, which represents a gain of a factor of 2.2 in useful signal for water composition retrieval. The best viewing geometries are adjacent to the glitter region; they correspond to scattering angles around 100 deg, but they may change slightly depending on the polarization characteristics of the aerosols. As aerosol optical thickness increases, the atmosphere becomes less efficient at polarizing sunlight, and the enhancement of the water body contribution to unpolarized reflectance is reduced. Since the perturbing effects are smaller on unpolarized reflectance, at least for some viewing geometries, they may be more easily corrected, leading to a more accurate water-leaving signal and, therefore, more accurate estimates of phytoplankton pigment concentration.
Capabilities of current wildfire models when simulating topographical flow
NASA Astrophysics Data System (ADS)
Kochanski, A.; Jenkins, M.; Krueger, S. K.; McDermott, R.; Mell, W.
2009-12-01
Accurate predictions of the growth, spread and suppression of wildfires rely heavily on the correct prediction of the local wind conditions and the interactions between the fire and the local ambient airflow. Resolving local flows, often strongly affected by topographical features like hills, canyons and ridges, is a prerequisite for accurate simulation and prediction of fire behavior. In this study, we present the results of high-resolution numerical simulations of the flow over a smooth hill, performed using (1) the NIST WFDS (WUI, or Wildland-Urban Interface, version of the FDS, the Fire Dynamics Simulator), and (2) the LES version of the NCAR Weather Research and Forecasting (WRF-LES) model. The WFDS model is in the initial stages of development for application to wind flow and fire spread over complex terrain. The focus of the talk is to assess how well simple topographical flow is represented by WRF-LES and the current version of WFDS. If sufficient progress has been made prior to the meeting, then the importance of the discrepancies between the predicted and measured winds, in terms of simulated fire behavior, will be examined.
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that the reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
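The POD construction itself is a singular value decomposition of a snapshot matrix, followed by Galerkin projection of the full-order operators onto the leading modes. A minimal sketch with synthetic snapshots (the spatial modes, placeholder operator, and dimensions are invented for illustration, not taken from the shallow water system in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical snapshot matrix: 500-dimensional model states at 40 time
# instants, built from three smooth spatial modes plus small noise so that
# a low-dimensional basis captures almost all of the energy.
x = np.linspace(0.0, 1.0, 500)
modes = np.column_stack([np.sin(np.pi * k * x) for k in (1, 2, 3)])
amps = rng.standard_normal((3, 40))
snapshots = modes @ amps + 1e-3 * rng.standard_normal((500, 40))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
Ur = U[:, :r]

# Galerkin projection of a full-order linear operator onto the POD basis
# (a placeholder operator stands in for the actual model dynamics).
A = -np.eye(500)
Ar = Ur.T @ A @ Ur          # r x r reduced operator

# Fraction of snapshot energy captured by the r-mode basis.
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(Ar.shape, energy)
```

In 4D-Var the same projection is applied to both the forward and adjoint models, which is exactly where the paper's requirement on the reduced Karush-Kuhn-Tucker conditions enters.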
Blanchette, Isabelle; Marzouki, Yousri; Claidière, Nicolas; Gullstrand, Julie; Fagot, Joël
2017-01-01
It is well established that emotion and cognition interact in humans, but such an interaction has not been extensively studied in nonhuman primates. We investigated whether emotional value can affect nonhuman primates' processing of stimuli that are only mentally represented, not visually available. In a short-term memory task, baboons memorized the location of two target squares of the same color, which were presented with a distractor of a different color. Through prior long-term conditioning, one of the two colors had acquired a negative valence. Subjects were slower and less accurate on the memory task when the targets were negative than when they were neutral. In contrast, subjects were faster and more accurate when the distractors were negative than when they were neutral. Some of these effects were modulated by individual differences in emotional disposition. Overall, the results reveal a pattern of cognitive avoidance of negative stimuli, and show that emotional value alters cognitive processing in baboons even when the stimuli are not physically present. This suggests that emotional influences on cognition are deeply rooted in evolutionary continuity.
The need of operational paradigms for frailty in older persons: the SPRINTT project.
Cesari, Matteo; Marzetti, Emanuele; Calvani, Riccardo; Vellas, Bruno; Bernabei, Roberto; Bordes, Philippe; Roubenoff, Ronenn; Landi, Francesco; Cherubini, Antonio
2017-02-01
The exploration of frailty as a pre-disability geriatric condition represents one of the most promising research arenas of modern medicine. Frailty is today indicated as a paradigmatic condition around which the traditional healthcare systems might be re-shaped and optimized in order to address the complexities and peculiarities of elders. Unfortunately, the lack of consensus around a single operational definition has limited the implementation of frailty in clinical practice. In recent years, growing attention (even beyond the traditional boundaries of geriatric medicine) has been given to physical performance measures. These instruments have been shown to be predictive of negative health-related events and able to support an accurate estimation of the "biological age" in late life. The strong construct of physical performance measures also makes them particularly suitable for the assessment of the frailty status. Furthermore, the adoption of physical performance measures may help render the frailty condition more organ-specific (i.e., centred on skeletal muscle quality) and less heterogeneous than currently perceived. The translation of the frailty concept by means of physical performance measures implicitly represents an attempt to go beyond traditional paradigms. In this context, the recently funded "Sarcopenia and Physical fRailty IN older people: multi-componenT Treatment strategies" (SPRINTT) project (largely based on such a novel approach) may indeed fill an important gap in the field and provide key insights for counteracting the disabling cascade in the elderly.
ACQUISITION OF REPRESENTATIVE GROUND WATER QUALITY SAMPLES FOR METALS
R.S. Kerr Environmental Research Laboratory (RSKERL) personnel have evaluated sampling procedures for the collection of representative, accurate, and reproducible ground water quality samples for metals for the past four years. Intensive sampling research at three different field...
L4 Milestone Report for MixEOS 2016 experiments and simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, Eric Nicholas; Bradley, Paul Andrew; Merritt, Elizabeth Catherine
2016-08-01
Accurate simulations of fluid and plasma flows require accurate thermodynamic properties of the fluids or plasmas. This thermodynamic information is represented by the equations of state of the materials. For pure materials, the equations of state may be represented by analytical models for idealized circumstances, or by tabular means, such as the Sesame tables. However, when a computational cell has a mixture of two or more fluids, the equations of state are not well understood, particularly under the circumstances of high energy densities. This is a particularly difficult issue for Eulerian codes, wherein mixed cells arise simply due to the advection process. LANL Eulerian codes typically assume an "Amagat's Law" (or Law of Partial Volumes) for the mixture, in which the pressures and temperatures of the fluids are at an equilibrium that is consistent with the fluids being segregated within the cell. However, for purposes of computing other EOS properties, e.g., bulk modulus or sound speed, the fluids are considered to be fully "mixed". LANL has also been investigating implementing instead "Dalton's Law", in which the total pressure is considered to be the sum of the partial pressures within the cell. For ideal gases, these two laws give the same result. Other possibilities are non-pressure-temperature-equilibrated approaches in which the two fluids are not assumed to "mix" at all, and the EOS properties of the cell are computed from, say, volume-weighted averages of the individual fluid properties. The assumption of the EOS properties within a mixed cell can have a pronounced effect on the behavior of the cell, resulting in, for example, different shock speeds, pressures, temperatures and densities within the cell. There is no apparent consensus as to which approach is best under HED conditions, though we note that under typical atmospheric and near-atmospheric conditions the differences may be slight.
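The claim that Amagat's and Dalton's laws coincide for ideal gases follows directly from pV = nRT, and can be checked in a few lines. The mole numbers, volume, and temperature below are arbitrary example values:

```python
# Minimal check that Amagat's and Dalton's laws give the same mixture
# pressure for ideal gases.
R = 8.314462618          # gas constant, J/(mol K)
n1, n2 = 1.5, 0.5        # moles of the two fluids in the mixed cell
V, T = 0.0400, 600.0     # cell volume (m^3) and temperature (K)

# Dalton's law: each gas fills the whole cell volume; the total pressure
# is the sum of the partial pressures.
p_dalton = n1 * R * T / V + n2 * R * T / V

# Amagat's law: each gas occupies a partial volume V_i at the common
# pressure p; requiring V1 + V2 = V gives p = (n1 + n2) R T / V.
p_amagat = (n1 + n2) * R * T / V

print(p_dalton, p_amagat)
```

For non-ideal equations of state the two closures no longer agree, which is the source of the differing shock speeds and temperatures the report describes.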
Foresight begins with FMEA. Delivering accurate risk assessments.
Passey, R D
1999-03-01
If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.
Discrete-vortex model for the symmetric-vortex flow on cones
NASA Technical Reports Server (NTRS)
Gainer, Thomas G.
1990-01-01
A relatively simple but accurate potential flow model was developed for studying the symmetric vortex flow on cones. The model is a modified version of the model first developed by Bryson, in which discrete vortices and straight-line feeding sheets were used to represent the flow field. It differs, however, in the zero-force condition used to position the vortices and determine their circulation strengths. The Bryson model imposed the condition that the net force on the feeding sheets and discrete vortices must be zero. The proposed model satisfies this zero-force condition by having the vortices move as free vortices, at a velocity equal to the local crossflow velocity at their centers. When the free-vortex assumption is made, a solution is obtained in the form of two nonlinear algebraic equations that relate the vortex center coordinates and vortex strengths to the cone angle and angle of attack. The vortex center locations calculated using the model are in good agreement with experimental values. The cone normal forces as well as center locations are in good agreement with the vortex cloud method of calculating symmetric flow fields.
NASA Astrophysics Data System (ADS)
Wittmann, René; Maggi, C.; Sharma, A.; Scacchi, A.; Brader, J. M.; Marini Bettolo Marconi, U.
2017-11-01
The equations of motion of active systems can be modeled in terms of Ornstein-Uhlenbeck processes (OUPs) with appropriate correlators. For further theoretical studies, these should be approximated to yield a Markovian picture for the dynamics and a simplified steady-state condition. We perform a comparative study of the unified colored noise approximation (UCNA) and the approximation scheme by Fox recently employed within this context. We review the approximations necessary to define effective interaction potentials in the low-density limit and study the conditions for which these represent the behavior observed in two-body simulations for the OUPs model and active Brownian particles. The demonstrated limitations of the theory for potentials with a negative slope or curvature can be qualitatively corrected by a new empirical modification. In general, we find that in the presence of translational white noise the Fox approach is more accurate. Finally, we examine an alternative way to define a force-balance condition in the limit of small activity.
Radionuclide Retention in Concrete Wasteforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wellman, Dawn M.; Jansik, Danielle P.; Golovich, Elizabeth C.
2012-09-24
Assessing long-term performance of Category 3 waste cement grouts for radionuclide encasement requires knowledge of the radionuclide-cement interactions and mechanisms of retention (i.e., sorption or precipitation); the mechanism of contaminant release; the significance of contaminant release pathways; how wasteform performance is affected by the full range of environmental conditions within the disposal facility; the process of wasteform aging under conditions that are representative of processes occurring in response to changing environmental conditions within the disposal facility; the effect of wasteform aging on chemical, physical, and radiological properties; and the associated impact on contaminant release. This knowledge will enable accurate prediction of radionuclide fate when the wasteforms come in contact with groundwater. Data collected throughout the course of this work will be used to quantify the efficacy of concrete wasteforms, similar to those used in the disposal of LLW and MLLW, for the immobilization of key radionuclides (i.e., uranium, technetium, and iodine). Data collected will also be used to quantify the physical and chemical properties of the concrete affecting radionuclide retention.
NASA Technical Reports Server (NTRS)
Kuhlman, E. A.; Baranowski, L. C.
1977-01-01
The effects of the Thermal Protection Subsystem (TPS) contamination on the space shuttle orbiter S band quad antenna due to multiple mission buildup are discussed. A test fixture was designed, fabricated and exposed to ten cycles of simulated ground and flight environments. Radiation pattern and impedance tests were performed to measure the effects of the contaminants. The degradation in antenna performance was attributed to the silicone waterproofing in the TPS tiles rather than exposure to the contaminating sources used in the test program. Validation of the accuracy of an analytical thermal model is discussed. Thermal vacuum tests with a test fixture and a representative S band quad antenna were conducted to evaluate the predictions of the analytical thermal model for two orbital heating conditions and entry from each orbit. The results show that the accuracy of predicting the test fixture thermal responses is largely dependent on the ability to define the boundary and ambient conditions. When the test conditions were accurately included in the analytical model, the predictions were in excellent agreement with measurements.
Tree structure and cavity microclimate: implications for bats and birds.
Clement, Matthew J; Castleberry, Steven B
2013-05-01
It is widely assumed that tree cavity structure and microclimate affect cavity selection and use in cavity-dwelling bats and birds. Despite the interest in tree structure and microclimate, the relationship between the two has rarely been quantified. Currently available data often come from artificial structures that may not accurately represent conditions in natural cavities. We collected data on tree cavity structure and microclimate from 45 trees in five cypress-gum swamps in the Coastal Plain of Georgia in the United States in 2008. We used hierarchical linear models to predict cavity microclimate from tree structure and ambient temperature and humidity, and used Akaike's information criterion to select the most parsimonious models. We found large differences in microclimate among trees, but tree structure variables explained <28% of the variation, while ambient conditions explained >80% of variation common to all trees. We argue that the determinants of microclimate are complex and multidimensional, and therefore cavity microclimate cannot be deduced easily from simple tree structures. Furthermore, we found that daily fluctuations in ambient conditions strongly affect microclimate, indicating that greater weather fluctuations will cause greater differences among tree cavities.
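Model selection by Akaike's information criterion, AIC = n ln(RSS/n) + 2k for least-squares fits, can be sketched on synthetic data that mirrors the study's finding (ambient conditions dominate, structure explains little). The variable names, coefficients, and candidate models are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: cavity temperature driven mostly by ambient temperature,
# with only a weak tree-structure (diameter) effect.
n = 200
ambient = rng.uniform(5, 35, n)
dbh = rng.uniform(20, 120, n)            # tree diameter, a structure variable
cavity = 2.0 + 0.9 * ambient + 0.005 * dbh + rng.standard_normal(n)

def aic(X, y):
    """AIC for an ordinary least-squares fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                   # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

ones = np.ones(n)
models = {
    "ambient only": np.column_stack([ones, ambient]),
    "structure only": np.column_stack([ones, dbh]),
    "ambient + structure": np.column_stack([ones, ambient, dbh]),
}
scores = {name: aic(X, cavity) for name, X in models.items()}
best = min(scores, key=scores.get)
print(best)
```

The lowest AIC wins; with data generated this way, any model containing the ambient term scores far better than the structure-only model, reproducing the qualitative result in the abstract.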
Structural stability of DNA origami nanostructures in the presence of chaotropic agents
NASA Astrophysics Data System (ADS)
Ramakrishnan, Saminathan; Krainer, Georg; Grundmeier, Guido; Schlierf, Michael; Keller, Adrian
2016-05-01
DNA origami represent powerful platforms for single-molecule investigations of biomolecular processes. The required structural integrity of the DNA origami may, however, pose significant limitations regarding their applicability, for instance in protein folding studies that require strongly denaturing conditions. Here, we therefore report a detailed study on the stability of 2D DNA origami triangles in the presence of the strong chaotropic denaturing agents urea and guanidinium chloride (GdmCl) and its dependence on concentration and temperature. At room temperature, the DNA origami triangles are stable up to at least 24 h in both denaturants at concentrations as high as 6 M. At elevated temperatures, however, structural stability is governed by variations in the melting temperature of the individual staple strands. Therefore, the global melting temperature of the DNA origami does not represent an accurate measure of their structural stability. Although GdmCl has a stronger effect on the global melting temperature, its attack results in less structural damage than observed for urea under equivalent conditions. This enhanced structural stability most likely originates from the ionic nature of GdmCl. By rational design of the arrangement and lengths of the individual staple strands used for the folding of a particular shape, however, the structural stability of DNA origami may be enhanced even further to meet individual experimental requirements. Overall, their high stability renders DNA origami promising platforms for biomolecular studies in the presence of chaotropic agents, including single-molecule protein folding or structural switching.
NASA Astrophysics Data System (ADS)
Haegon, Lee; Joonsang, Lee
2017-11-01
In many multi-phase fluidic systems there are three contact interfaces: liquid-vapor, liquid-solid, and solid-vapor. There is also a contact line where these three interfaces meet. The existence of these interfaces and contact lines has a considerable impact on nanoscale droplet wetting behavior. However, recent studies have shown that Young's equation does not accurately represent this behavior at the nanoscale, and have emphasized the importance of the contact line effect. We therefore performed molecular dynamics simulations to mimic the behavior of nanoscale droplets under different solid temperature conditions and determined the effect of solid temperature on contact line motion. Furthermore, we examined the effect of the contact line force on droplet wetting behavior under the different solid temperature conditions. With variation of the solid temperature, the magnitude of the contact line friction decreases significantly. We also decompose the contact line force into contributions from the bulk liquid, interfacial tension, and the solid surface. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No. 2015R1A5A1037668) and BrainKorea21plus.
Allergenic potential of novel proteins - What can we learn from animal production?
Ekmay, Ricardo D; Coon, Craig N; Ladics, Gregory S; Herman, Rod A
2017-10-01
Currently, risk assessment of the allergenic potential of novel proteins relies heavily on evaluating protein digestibility under normal conditions, based on the theory that allergens are more resistant to gastrointestinal digestion than non-allergens. There is also proposed guidance for expanded in vitro digestibility assay conditions to include vulnerable sub-populations. One of the underlying rationales for the expanded guidance is that current in vitro assays do not accurately replicate the range of physiological conditions. Animal scientists have long sought to predict protein and amino acid digestibility for precision nutrition. Monogastric production animals, especially swine, have gastrointestinal systems similar to humans, and evaluating potential allergen digestibility in this context may be beneficial. Currently, there is no compelling evidence that the mechanisms sometimes postulated to be associated with allergenic sensitization, e.g. antacid modification of stomach pH, are valid among production animals. Furthermore, examples are provided where non-biologically representative assays are better at predicting protein and amino acid digestibility than those designed to mimic in vivo conditions. Greater emphasis should be placed on aligning in vitro assessments with in vivo data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Automated classification of articular cartilage surfaces based on surface texture.
Stachowiak, G P; Stachowiak, G W; Podsiadlo, P
2006-11-01
In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.
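The texture-based classification pipeline (extract texture features from images, then assign classes from a labelled database) can be sketched with deliberately simple descriptors. The synthetic "surfaces", two-feature descriptor, and nearest-centroid classifier below are stand-ins for the authors' actual system, which is not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)

def make_surface(roughness, n=64):
    """Stand-in for an ESEM image: local averaging suppresses fine texture
    for the 'smooth' (mild-wear) class."""
    img = rng.standard_normal((n, n))
    if roughness == "smooth":
        img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)
                   + np.roll(img, -1, 0) + np.roll(img, -1, 1)) / 5.0
    return img

def texture_features(img):
    """Two simple texture descriptors: variance and mean gradient energy."""
    gx, gy = np.gradient(img)
    return np.array([img.var(), np.mean(gx ** 2 + gy ** 2)])

# Build a labelled feature database from two wear classes.
train = {c: np.array([texture_features(make_surface(c)) for _ in range(20)])
         for c in ("smooth", "rough")}
centroids = {c: f.mean(axis=0) for c, f in train.items()}

def classify(img):
    """Assign the class whose feature centroid is nearest."""
    f = texture_features(img)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

preds = [classify(make_surface("rough")) for _ in range(10)]
print(preds)
```

Production systems would use richer descriptors (e.g. co-occurrence or wavelet features) and a trained classifier, but the database-then-nearest-match structure is the same.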
Atmospheric conditions measured by a wireless sensor network on the local scale
NASA Astrophysics Data System (ADS)
Lengfeld, K.; Ament, F.
2010-09-01
Atmospheric conditions close to the surface, like temperature, wind speed and humidity, vary on small scales because of surface heterogeneities. The traditional measuring approach of using a single, highly accurate station is therefore of limited representativeness for a larger domain, because it is not able to capture these small-scale variabilities. However, both the variability and the domain averages are important information for the development and validation of atmospheric models and soil-vegetation-atmosphere-transfer (SVAT) schemes. Due to progress in microelectronics it is possible to construct networks of comparatively cheap meteorological stations with moderate accuracy. Such a network provides data at high spatial and temporal resolution. EPFL Lausanne developed such a network, called SensorScope, consisting of low-cost autonomous stations. Each station observes air and surface temperature, humidity, wind direction and speed, incoming solar radiation, precipitation, soil moisture and soil temperature, and sends the data via radio communication to a base station. This base station forwards the collected data via GSM/GPRS to a central server. The first measuring campaign took place within the FLUXPAT project in August 2009. We deployed 15 stations as a twin transect near Jülich, Germany. To test the quality of the low-cost sensors, we compared two of them to more accurate reference systems. It turned out that, although the network sensors are not highly accurate, the measurements are consistent, so that an analysis of the pattern of atmospheric conditions is feasible. The transect is 2.3 km long and covers different types of vegetation and a small river. We therefore analyse the influence of different land surfaces and of the distance to the river on meteorological conditions. For example, we found a difference in air temperature of 0.8°C between the station closest to and the station farthest from the river.
The decreasing relative humidity with increasing distance to the river meets our expectations. But there are also some unexpected anomalies in the air temperature, which will be discussed in detail using selected case studies. By analysing the correlation of fluctuations in the meteorological conditions, we want to detect clusters that depend on land surface type and distance to the river. Since April 2010, a second deployment has been in operation at Hamburg Airport. It consists of 14 stations placed along the two runways, which run northward and eastward. The aim of this project is to analyse whether the atmospheric conditions in such a uniform environment are really homogeneous. To do so, we will apply to these measurements the same analyses we used for FLUXPAT.
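The correlation-based clustering described above can be sketched with synthetic data. Everything here is a hypothetical stand-in for the network measurements, not FLUXPAT data: the shared "river" and "field" signals, the six stations, and the 0.8 correlation threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 24.0, 288)            # one day at 5-minute sampling

# Hypothetical fluctuation time series for 6 stations: two groups that
# share a common signal (e.g. near-river vs. far-from-river stations)
river = np.sin(2 * np.pi * t / 24)
field = np.cos(2 * np.pi * t / 24)
stations = np.array([river + 0.1 * rng.standard_normal(288) for _ in range(3)]
                    + [field + 0.1 * rng.standard_normal(288) for _ in range(3)])

C = np.corrcoef(stations)                  # 6 x 6 station correlation matrix

# Single-pass grouping: a station joins an existing cluster if its
# correlation with the cluster's first member exceeds the threshold
clusters = []
for i in range(len(stations)):
    for cl in clusters:
        if C[i, cl[0]] > 0.8:
            cl.append(i)
            break
    else:
        clusters.append([i])
```

With real network data, the rows of `stations` would be detrended station time series, and the threshold would be chosen from the observed correlation distribution.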
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Truster, T. J.; Cochran, K. B.
Advanced reactors designed to operate at higher temperatures than current light water reactors require structural materials with high creep strength and creep-fatigue resistance to achieve long design lives. Grade 91 is a ferritic/martensitic steel designed for long creep life at elevated temperatures. It has been selected as a candidate material for sodium fast reactor intermediate heat exchangers and other advanced reactor structural components. This report focuses on the creep deformation and rupture life of Grade 91 steel. The time required to complete an experiment limits the availability of long-life creep data for Grade 91 and other structural materials. Design methods often extrapolate the available shorter-term experimental data to longer design lives. However, extrapolation methods tacitly assume the underlying material mechanisms causing creep for long-life/low-stress conditions are the same as the mechanisms controlling creep in the short-life/high-stress experiments. A change in mechanism for long-term creep could cause design methods based on extrapolation to be non-conservative. The goal for physically-based microstructural models is to accurately predict material response in experimentally-inaccessible regions of design space. An accurate physically-based model for creep represents all the material mechanisms that contribute to creep deformation and damage and predicts the relative influence of each mechanism, which changes with loading conditions. Ideally, the individual mechanism models adhere to the material physics and not an empirical calibration to experimental data and so the model remains predictive for a wider range of loading conditions. This report describes such a physically-based microstructural model for Grade 91 at 600 °C. The model explicitly represents competing dislocation and diffusional mechanisms in both the grain bulk and grain boundaries.
The model accurately recovers the available experimental creep curves at higher stresses and the limited experimental data at lower stresses, predominantly primary creep rates. The current model considers only one temperature. However, because the model parameters are, for the most part, directly related to the physics of fundamental material processes, the temperature dependence of the properties is known. Therefore, temperature dependence can be included in the model with limited additional effort. The model predicts a mechanism shift at 600 °C at approximately 100 MPa, from a dislocation-dominated regime at higher stress to a diffusion-dominated regime at lower stress. This mechanism shift impacts the creep life, notch-sensitivity, and, likely, creep ductility of Grade 91. In particular, the model predicts that existing extrapolation methods for creep life may be non-conservative when attempting to extrapolate data for higher-stress creep tests to low-stress, long-life conditions. Furthermore, the model predicts a transition from notch-strengthening behavior at high stress to notch-weakening behavior at lower stresses. Both behaviors may affect the conservatism of existing design methods.
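The extrapolation methods the report warns about typically work through a time-temperature parameter. A minimal sketch of one common choice, the Larson-Miller parameter, is below; the rupture data points, the constant C = 20, and the linear log-stress fit are all illustrative assumptions, not Grade 91 values from this report.

```python
import math

# Larson-Miller parameter: LMP = T * (C + log10(t_r)), with T in kelvin and
# rupture time t_r in hours; C ~ 20 is a common choice for steels.
C = 20.0

def lmp(T_kelvin, hours):
    return T_kelvin * (C + math.log10(hours))

# Hypothetical short-term rupture data at 873 K (600 C): (stress MPa, hours)
data = [(200.0, 1.0e2), (170.0, 1.0e3), (140.0, 1.0e4)]

# Fit log10(stress) as a linear function of LMP (least squares over 3 points)
xs = [lmp(873.0, h) for _, h in data]
ys = [math.log10(s) for s, _ in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

def rupture_hours(stress_mpa, T_kelvin):
    """Extrapolated rupture life; trusts the fitted stress-LMP line,
    which is exactly the assumption a mechanism shift would break."""
    p = (math.log10(stress_mpa) - intercept) / slope
    return 10.0 ** (p / T_kelvin - C)

life = rupture_hours(100.0, 873.0)   # extrapolation well below the test data
```

The model's predicted mechanism shift near 100 MPa is precisely the situation in which this fitted line, and hence the extrapolated life, can become non-conservative.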
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
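The composition and rejection method implemented in STEPS is an optimized variant of Gillespie's SSA. The basic direct-method loop it builds on can be sketched in a few lines; this is an illustrative well-mixed example with arbitrary rate constants and counts, not STEPS code.

```python
import math
import random

def gillespie_direct(x, rates, stoich, t_end, seed=0):
    """Direct Gillespie SSA for a well-mixed system.

    x      -- list of species counts
    rates  -- propensity functions, one per reaction, mapping state -> a_j
    stoich -- state-change vectors, one per reaction
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 == 0.0:                              # no reaction can fire
            break
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        u, acc = rng.random() * a0, 0.0            # pick j with prob a_j / a0
        for j, a in enumerate(props):
            acc += a
            if u < acc:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
    return x

# Reversible binding A + B <-> C in a single well-mixed volume
state = gillespie_direct(
    x=[100, 100, 0],
    rates=[lambda s: 0.01 * s[0] * s[1],   # A + B -> C
           lambda s: 0.1 * s[2]],          # C -> A + B
    stoich=[[-1, -1, +1], [+1, +1, -1]],
    t_end=10.0)
```

Composition-rejection replaces the linear reaction search above with grouped propensities and rejection sampling, which is what keeps the cost low when diffusion between many tetrahedral elements adds a large number of "reaction" channels.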
2016-05-25
2016 Lincoln Laboratory demonstrates highly accurate vehicle localization under adverse weather conditions A ground-penetrating radar system...the problems limiting the development and adoption of self-driving vehicles: how can a vehicle navigate to stay within its lane when bad weather ... weather conditions, but it is challenging, even impossible, for them to work when snow covers the markings and surfaces or precipitation obscures points
40 CFR 97.51 - Establishment of accounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... belief true, accurate, and complete. I am aware that there are significant penalties for submitting false...) of this section. (3) Changing NO X authorized account representative and alternate NO X authorized account representative; changes in persons with ownership interest. (i) The NOX authorized account...
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital to improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have traditionally been used by modelers for evaluating and improving parameterization schemes.
Regional analysis of drought and heat impacts on forests: current and future science directions.
Law, Beverly E
2014-12-01
Accurate assessments of forest response to current and future climate and human actions are needed at regional scales. Predicting future impacts on forests will require improved analysis of species-level adaptation, resilience, and vulnerability to mortality. Land system models can be enhanced by creating trait-based groupings of species that better represent climate sensitivity, such as risk of hydraulic failure from drought. This emphasizes the need for more coordinated in situ and remote sensing observations to track changes in ecosystem function, and to improve model inputs, spatio-temporal diagnosis, and predictions of future conditions, including implications of actions to mitigate climate change. © 2014 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Seismo-acoustic ray model benchmarking against experimental tank data.
Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo
2012-08-01
Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked under similar conditions. The benchmarking results are important, on the one hand, as a preliminary experimental validation of the model and, on the other, as a demonstration of the reliability of the ray approach for seismo-acoustic applications.
Nordahl-Hansen, Anders; Tøndevold, Magnus; Fletcher-Watson, Sue
2018-04-01
Portrayals of characters with autism spectrum disorders (ASD) in films and TV series are subject to intense debate over whether such representations are accurate. Inaccurate portrayals are a concern as they may reinforce stereotypes about the condition. We investigate whether portrayals of characters with autism spectrum disorder in film and TV series align with DSM-5 diagnostic criteria. Our data show that characters present the full range of characteristics described in the DSM-5. The meaning of this finding is discussed in relation to the potential educational value of on-screen portrayals and the notion of authenticity in representing the autistic experience. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimization-Based Calibration of FAST.Farm Parameters Against SOWFA: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreira, Paula D; Annoni, Jennifer; Jonkman, Jason
2018-01-04
FAST.Farm is a medium-fidelity wind farm modeling tool that can be used to assess the power and loads contributions of wind turbines in a wind farm. The objective of this paper is to undertake a calibration procedure to set the user parameters of FAST.Farm to accurately represent results from large-eddy simulations. The results provide an in-depth analysis of the comparison of FAST.Farm and large-eddy simulations before and after calibration. The comparison of FAST.Farm and large-eddy simulation results is presented with respect to streamwise and radial velocity components as well as wake-meandering statistics (mean and standard deviation) in the lateral and vertical directions under different atmospheric and turbine operating conditions.
Characterisation of spectrophotometers used for spectral solar ultraviolet radiation measurements.
Gröbner, J
2001-01-01
Spectrophotometers used for spectral measurements of the solar ultraviolet radiation need to be well characterised to provide accurate and reliable data. Since the characterisation and calibration are usually performed in the laboratory, under conditions very different from those encountered during solar measurements, it is essential to address all issues concerning the representativity of the laboratory characterisation with respect to the solar measurements. These include, among others, instrument stability, linearity, responsivity, wavelength accuracy, spectral resolution, stray-light rejection and the dependence on ambient temperature fluctuations. These instrument parameters need to be determined often enough that the instrument changes only marginally between successive characterisations and therefore provides reliable data for the intervening period.
Flexural Properties of Eastern Hardwood Pallet Parts
John A. McLeod; Marshall S. White; Paul A. Ifju; Philip A. Araman
1991-01-01
Accurate estimates of the flexural properties of pallet parts are critical to the safe, yet efficient, design of wood pallets. To develop more accurate data for hardwood pallet parts, 840 stringers and 2,520 deckboards, representing 14 hardwood species, were sampled from 35 mills distributed throughout the Eastern United States. The parts were sorted by species,...
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
USDA-ARS?s Scientific Manuscript database
Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes, which is more costly and time-consuming, or with only one fluorescent dye, which is less accurate. Methods and Results: A red primary ...
ERIC Educational Resources Information Center
Szekely, George
2009-01-01
Magic shows represent an authentic children's play turned into an art lesson. School art should accurately represent the unique qualities of children's art, incorporating children's experiences as young artists. An art lesson in school should not look and feel entirely different than art made outside of school. Unlike specialized adult art…
NASA Astrophysics Data System (ADS)
Ding, Zhe; Li, Li; Hu, Yujin
2018-01-01
Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such systems therefore often contain multiple damping models and are difficult to analyze. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and exhibits inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than the alternatives, with good efficiency, and that its stability condition is easy to satisfy in practice.
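The precise integration idea can be illustrated on a single damped oscillator written in state-space form. This is a generic sketch (second-order Taylor seed, 2^20 substeps, no forcing term), not the authors' modified method, which additionally uses Gauss-Legendre quadrature to handle the load vector.

```python
import numpy as np

def expm_pim(A, h, N=20):
    """Matrix exponential exp(A*h) via the precise integration
    scaling-and-squaring recursion (2**N substeps). Storing the
    increment Ta = exp(A*dt) - I preserves precision while it is tiny."""
    n = A.shape[0]
    dt = h / 2**N
    Ta = A * dt + (A * dt) @ (A * dt) / 2.0   # Taylor seed for exp(A*dt) - I
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta               # doubling: (I + T)**2 - I
    return np.eye(n) + Ta

# Damped SDOF oscillator m*x'' + c*x' + k*x = 0 in state-space form z' = A z
m, c, k = 1.0, 0.4, 10.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
h = 0.01
Phi = expm_pim(A, h)              # transition matrix, computed once

z = np.array([1.0, 0.0])          # initial displacement 1, velocity 0
for _ in range(1000):             # integrate to t = 10 s
    z = Phi @ z
```

Because the transition matrix is computed once and reused, each step is a single matrix-vector product, and the homogeneous response is reproduced essentially to machine precision regardless of the step size.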
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact thermal models (CTMs) representing IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides complete and accurate thermal information with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom, or variables, in such computations. POD combined with Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
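Since the methodology is applied to the 1D transient heat equation, a minimal POD-Galerkin sketch for that problem is shown below. The grid size, time step, mode count, and initial profile are arbitrary illustrative choices, not values from the abstract.

```python
import numpy as np

# Full-order model: 1-D heat equation, explicit Euler on n nodes
n, alpha, dt = 50, 1.0, 1e-4
dx = 1.0 / (n - 1)
L = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) \
    + np.diag(np.ones(n - 1), -1)
L *= alpha / dx**2
L[0, :] = L[-1, :] = 0.0            # fixed ends (Dirichlet, T = 0)

x = np.linspace(0.0, 1.0, n)
T = np.sin(np.pi * x)               # initial temperature profile
snapshots = []
for step in range(500):
    T = T + dt * (L @ T)
    if step % 10 == 0:
        snapshots.append(T.copy())

# POD: left singular vectors of the snapshot matrix are the modes
U, s, _ = np.linalg.svd(np.array(snapshots).T, full_matrices=False)
Phi = U[:, :3]                      # keep the 3 dominant modes

# Galerkin projection: reduced operator and reduced initial state
Lr = Phi.T @ L @ Phi
a = Phi.T @ np.sin(np.pi * x)
for step in range(500):
    a = a + dt * (Lr @ a)

T_rom = Phi @ a                     # reconstruct the full field
err = np.linalg.norm(T_rom - T) / np.linalg.norm(T)
```

The reduced system is 3x3 instead of 50x50, which is the computational saving the abstract refers to; boundary-condition independence requires training snapshots that span the boundary conditions of interest, which this sketch does not attempt.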
A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression
Nguyen, Nha; Vo, An; Choi, Inchan
2015-01-01
Studying epigenetic landscapes is important to understand the conditions for gene regulation. Clustering is a useful approach to studying epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches that often use a representative value of the signals in a fixed-size window do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information of the epigenetic signals are necessary for better understanding gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet transform of epigenetic signals inside enriched regions for gene clustering. Interestingly, gene expression levels were highly correlated with the entropy levels of the epigenetic signals. In an assessment using gene expression, Dewer separates genes better than a window-based approach and achieved a correlation coefficient above 0.9 without any training procedure. Our results show that changes in the epigenetic signals are useful for studying gene regulation. PMID:25383910
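A simplified sketch of a wavelet-entropy feature is shown below. Note two assumptions: the "detail bands" here are plain circular differences at dyadic lags, a stand-in for a full à trous stationary wavelet transform, and the two signals are synthetic, not ChIP-seq-style epigenetic tracks.

```python
import numpy as np

def detail_band(signal, level):
    """Haar-like detail band at a dyadic lag (circular differences).
    A full stationary wavelet transform would cascade smoothing filters."""
    step = 2 ** (level - 1)
    return (np.roll(signal, -step) - signal) / np.sqrt(2.0)

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative band energies across levels."""
    energies = np.array([np.sum(detail_band(signal, j) ** 2)
                         for j in range(1, levels + 1)])
    p = energies / energies.sum()
    p = p[p > 0]                       # ignore empty bands
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
flat = np.ones(256) + 0.01 * rng.standard_normal(256)   # weak, noisy signal
peaky = np.zeros(256)
peaky[100:110] = 5.0                                    # sharp enrichment peak

e_flat, e_peaky = wavelet_entropy(flat), wavelet_entropy(peaky)
```

Here the featureless noisy signal spreads its energy evenly across bands (entropy near the maximum, log2 of the band count), while the structured peak concentrates energy at coarser scales and scores lower, so the entropy value can serve as a clustering feature.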
NASA Astrophysics Data System (ADS)
Farinha, Maria Luísa Braga; Azevedo, Nuno Monteiro; Candeias, Mariline
2017-02-01
The explicit formulation of a small displacement model for the coupled hydro-mechanical analysis of concrete gravity dam foundations based on joint finite elements is presented. The proposed coupled model requires a thorough pre-processing stage in order to ensure that the interaction between the various blocks which represent both the rock mass foundation and the dam is always edge to edge. The mechanical part of the model, though limited to small displacements, has the advantage of allowing an accurate representation of the stress distribution along the interfaces, such as rock mass joints. The hydraulic part and the mechanical part of the model are fully compatible. The coupled model is validated using a real case of a dam in operation, by comparison of the results with those obtained with a large displacement discrete model. It is shown that it is possible to assess the sliding stability of concrete gravity dams using small displacement models under both static and dynamic conditions.
Modeling and simulation of high-speed wake flows
NASA Astrophysics Data System (ADS)
Barnhardt, Michael Daniel
High-speed, unsteady flows represent a unique challenge in computational hypersonics research. They are found in nearly all applications of interest, including the wakes of reentry vehicles, RCS jet interactions, and scramjet combustors. In each of these examples, accurate modeling of the flow dynamics plays a critical role in design performance. Nevertheless, literature surveys reveal that very little modern research effort has been made toward understanding these problems. The objective of this work is to synthesize current computational methods for high-speed flows with ideas commonly used to model low-speed, turbulent flows in order to create a framework by which we may reliably predict unsteady, hypersonic flows. In particular, we wish to validate the new methodology for the case of a turbulent wake flow at reentry conditions. Currently, heat shield designs incur significant mass penalties due to the large margins applied to vehicle afterbodies in lieu of a thorough understanding of the wake aerothermodynamics. Comprehensive validation studies are required to accurately quantify these modeling uncertainties. To this end, we select three candidate experiments against which we evaluate the accuracy of our methodology. The first set of experiments concern the Mars Science Laboratory (MSL) parachute system and serve to demonstrate that our implementation produces results consistent with prior studies at supersonic conditions. Second, we use the Reentry-F flight test to expand the application envelope to realistic flight conditions. Finally, in the last set of experiments, we examine a spherical capsule wind tunnel configuration in order to perform a more detailed analysis of a realistic flight geometry. In each case, we find that current 1st order in time, 2nd order in space upwind numerical methods are sufficiently accurate to predict statistical measurements: mean, RMS, standard deviation, and so forth. 
Further potential gains in numerical accuracy are demonstrated using a new class of flux evaluation schemes in combination with second-order dual-time stepping. For cases with transitional or turbulent Reynolds numbers, we show that the detached eddy simulation (DES) method holds a clear advantage over heritage RANS methods. From this, we conclude that the current methodology is sufficient to predict heating of external, reentry-type applications within experimental uncertainty.
NASA Astrophysics Data System (ADS)
Darr, Samuel Ryan
Technologies that enable the storage and transfer of cryogenic propellants in space will be needed for the next generation vehicles that will carry humans to Mars. One of the candidate technologies is the screen channel liquid acquisition device (LAD), which uses a metal woven wire mesh to separate the liquid and vapor phases so that single-phase liquid propellant can be transferred in microgravity. The purpose of this work is to provide an accurate hydrodynamic model of the liquid flow through a screen channel LAD. Chapter 2 provides a derivation of the flow-through-screen (FTS) boundary condition. The final boundary condition more accurately represents the complex geometry of metal woven wire mesh than the current model used in the literature. The effect of thermal contraction on the screen geometry due to large temperature changes common in cryogenic systems is quantified in this chapter as well. Chapter 3 provides a two-dimensional (2-D) analytical solution of the velocity and pressure fields in a screen channel LAD. This solution, which accounts for non-uniform injection through the screen, is compared with the traditional 1-D model which assumes a constant, uniform injection velocity. Chapter 4 describes the setup and results of an experiment that measures both the velocity and pressure fields in a screen channel LAD in order to validate the 2-D model. Results show that the 2-D model performs best against the new data and historical data. With the improved FTS boundary condition and the 2-D model, the pressure drop of a screen channel LAD is described with excellent accuracy. The result of this work is a predictive tool that will instill confidence in the design of screen channel LADs for future in-space propulsion systems.
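The flow-through-screen pressure drop is commonly written as a Darcy-Forchheimer-type sum of a viscous and an inertial term. The sketch below uses that generic two-coefficient form; the screen coefficients are made up and the fluid properties are merely liquid-nitrogen-like, so this stands in for, rather than reproduces, the boundary condition derived in Chapter 2.

```python
import math

# Assumed Darcy-Forchheimer-type flow-through-screen relation:
#   dP = K_VISC * mu * v + K_INERT * rho * v**2
K_VISC = 1.0e9     # 1/m^2, viscous screen coefficient (hypothetical)
K_INERT = 1.0e4    # 1/m, inertial screen coefficient (hypothetical)
MU = 1.6e-4        # Pa*s, liquid-nitrogen-like viscosity
RHO = 807.0        # kg/m^3, liquid-nitrogen-like density

def fts_pressure_drop(v):
    """Pressure drop (Pa) across the screen at face velocity v (m/s)."""
    return K_VISC * MU * v + K_INERT * RHO * v * v

def fts_velocity(dP):
    """Invert the relation: positive root of the quadratic in v."""
    a, b = K_INERT * RHO, K_VISC * MU
    return (-b + math.sqrt(b * b + 4.0 * a * dP)) / (2.0 * a)

dp = fts_pressure_drop(0.05)
v_back = fts_velocity(dp)
```

In a channel LAD solver this relation would be imposed pointwise along the screen, so the injection velocity varies with the local pressure difference; that non-uniform injection is exactly what the 2-D model in Chapter 3 resolves and the 1-D model averages away.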
Aeromechanics Analysis of a Distortion-Tolerant Fan with Boundary Layer Ingestion
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Reddy, T. S. R.; Coroneos, Rula M.; Min, James B.; Provenza, Andrew J.; Duffy, Kirsten P.; Stefko, George L.; Heinlein, Gregory S.
2018-01-01
A propulsion system with Boundary Layer Ingestion (BLI) has the potential to significantly reduce aircraft engine fuel burn. A critical challenge, however, is to design a fan that can operate continuously under a persistent BLI distortion without aeromechanical failure -- flutter or high-cycle fatigue due to forced response. High-fidelity computational aeromechanics analysis can be very valuable to support the design of a fan that has satisfactory aeromechanical characteristics and good aerodynamic performance and operability. Detailed aeromechanics analyses, together with careful monitoring of the test article, are necessary to avoid unexpected problems or failures during testing. In the present work, an aeromechanics analysis based on a three-dimensional, time-accurate, Reynolds-averaged Navier-Stokes computational fluid dynamics code is used to study the performance and aeromechanical characteristics of the fan in both circumferentially-uniform and circumferentially-varying distorted flows. Pre-test aeromechanics analyses are used to prepare for the wind tunnel test, and comparisons are made with measured blade vibration data after the test. The analysis shows that the fan has low levels of aerodynamic damping at the various operating conditions examined. In the test, the fan remained free of flutter except at one near-stall operating condition. Analysis could not be performed at this low mass flow rate operating condition since it fell beyond the limit of numerical stability of the analysis code. The measured resonant forced response at a specific low-response crossing indicated that the analysis under-predicted this response; work is in progress to understand possible sources of the differences and to analyze other, larger resonant responses. Follow-on work is also planned with a coupled inlet-fan aeromechanics analysis that will more accurately represent the interactions between the fan and the BLI distortion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au; Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk; Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au
This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier–Stokes equations are derived, and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition, we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in the allowable time step.
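The core of BDIM is a kernel-smoothed indicator that blends the fluid equations with the prescribed body motion across a transition layer of finite half-width. The 1-D sketch below illustrates that blending with one smooth kernel choice; the actual kernel and the compressible meta-equations in the paper differ, so treat this as a conceptual sketch only.

```python
import numpy as np

def smoothed_indicator(d, eps):
    """Smoothed indicator: 1 in the fluid, 0 in the body, with a smooth
    monotonic transition of half-width eps (d = signed distance to the
    interface, positive on the fluid side). One possible kernel choice."""
    return np.where(d <= -eps, 0.0,
           np.where(d >= eps, 1.0,
                    0.5 * (1 + d / eps + np.sin(np.pi * d / eps) / np.pi)))

x = np.linspace(-1.0, 1.0, 401)
d = x                                # flat interface at x = 0
mu = smoothed_indicator(d, eps=0.1)

u_fluid = np.ones_like(x)            # would-be fluid update (illustrative)
u_body = np.zeros_like(x)            # prescribed body velocity (at rest)
u = mu * u_fluid + (1.0 - mu) * u_body   # blended field
```

Away from the interface the field reduces exactly to the fluid or body value, so the body never needs to conform to the grid, which is what makes the approach attractive for moving-body aeroacoustics.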
2014-01-01
Background Anonymous survey methods appear to promote greater disclosure of sensitive or stigmatizing information compared to non-anonymous methods. Higher disclosure rates have traditionally been interpreted as being more accurate than lower rates. We examined the impact of 3 increasingly private mailed survey conditions—ranging from potentially identifiable to completely anonymous—on survey response and on respondents’ representativeness of the underlying sampling frame, completeness in answering sensitive survey items, and disclosure of sensitive information. We also examined the impact of 2 incentives ($10 versus $20) on these outcomes. Methods A 3X2 factorial, randomized controlled trial of 324 representatively selected, male Gulf War I era veterans who had applied for United States Department of Veterans Affairs (VA) disability benefits. Men were asked about past sexual assault experiences, childhood abuse, combat, other traumas, mental health symptoms, and sexual orientation. We used a novel technique, the pre-merged questionnaire, to link anonymous responses to administrative data. Results Response rates ranged from 56.0% to 63.3% across privacy conditions (p = 0.49) and from 52.8% to 68.1% across incentives (p = 0.007). Respondents’ characteristics differed by privacy and by incentive assignments, with completely anonymous respondents and $20 respondents appearing least different from their non-respondent counterparts. Survey completeness did not differ by privacy or by incentive. No clear pattern of disclosing sensitive information by privacy condition or by incentive emerged. For example, although all respondents came from the same sampling frame, estimates of sexual abuse ranged from 13.6% to 33.3% across privacy conditions, with the highest estimate coming from the intermediate privacy condition (p = 0.007). 
Conclusion Greater privacy and larger incentives do not necessarily result in higher disclosure rates of sensitive information than lesser privacy and lower incentives. Furthermore, disclosure of sensitive or stigmatizing information under differing privacy conditions may have less to do with promoting or impeding participants’ “honesty” or “accuracy” than with selectively recruiting or attracting subpopulations that are higher or lower in such experiences. Pre-merged questionnaires bypassed many historical limitations of anonymous surveys and hold promise for exploring non-response issues in future research. PMID:25027174
How Do Vision and Hearing Impact Pedestrian Time-to-Arrival Judgments?
Roper, JulieAnne M.; Hassan, Shirin E.
2014-01-01
Purpose To determine how accurate normally-sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (i) using both habitual vision and hearing; and (ii) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject’s TTA judgments for their “internal metronome”. Results Error in TTA judgments changed significantly as a function of both the actual TTA (p<0.0001) and sensory condition (p<0.0001). While no main effect for gender was found (p=0.19), the way the TTA judgments varied within each sensory condition for each gender was different (p<0.0001). Females tended to be equally accurate under either condition (p≥0.01), with the exception of TTA judgments made when the actual TTA was two seconds or less and eight seconds or longer, during which the vision and hearing condition was more accurate (p≤0.002). Males made more accurate TTA judgments under the hearing only condition for actual TTA values five seconds or less (p<0.0001), after which there were no significant differences between the two conditions (p≥0.01). Conclusions Our data suggest that males and females use visual and auditory information differently when making TTA judgments. While the sensory condition did not affect the females’ accuracy in judgments, males initially tended to be more accurate when using their hearing only. PMID:24509543
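The error measure described above can be sketched as follows. This is a minimal illustration with invented names (tta_error, metronome_ratio); the study's "internal metronome" adjustment is modeled here as a simple rescaling of the estimate, which is an assumption.

```python
# Hypothetical sketch of the TTA judgment error computation.
# metronome_ratio models the subject's internal clock as a
# rescaling factor (perceived seconds per real second) -- an assumption.

def tta_error(actual_tta, estimated_tta, metronome_ratio=1.0):
    """Error in a time-to-arrival judgment, in seconds.

    Positive values mean the subject judged the vehicle to arrive
    sooner than it actually would.
    """
    adjusted_estimate = estimated_tta / metronome_ratio
    return actual_tta - adjusted_estimate

# A subject who overestimates a 5 s arrival as 4 s has a +1 s error:
err = tta_error(actual_tta=5.0, estimated_tta=4.0)
```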
NASA Technical Reports Server (NTRS)
Min, J. B.; Reddy, T. S. R.; Bakhle, M. A.; Coroneos, R. M.; Stefko, G. L.; Provenza, A. J.; Duffy, K. P.
2018-01-01
Accurate prediction of blade vibration stress is required to determine the overall durability of a fan blade design under Boundary Layer Ingestion (BLI) distorted flow environments. The traditional single-blade modeling technique cannot accurately represent the entire rotor blade system subject to the complex dynamic loading behaviors and vibrations of distorted flow conditions. A particular objective of our work was to develop a high-fidelity full-rotor aeromechanics analysis capability for a system subjected to a distorted inlet flow by applying cyclic symmetry finite element modeling methodology. This reduction method allows computationally very efficient analysis using a small periodic section of the full rotor blade system. Experimental testing in the 8-foot by 6-foot Supersonic Wind Tunnel Test facility at NASA Glenn Research Center was also carried out for the system designated as the Boundary Layer Ingesting Inlet/Distortion-Tolerant Fan (BLI2DTF) technology development. The results obtained from the present numerical modeling technique were evaluated against those of the wind tunnel experimental test, toward establishing a computationally efficient aeromechanics analysis tool for full rotor blade systems subjected to distorted inlet flow conditions. Fairly good correlations were achieved, demonstrating the computational modeling technique. The analysis results showed that the safety margin set in the BLI2DTF fan blade design was sufficient over the operating speed range.
A smart spirometry device for asthma diagnosis.
Kassem, A; Hamad, M; El Moucary, C
2015-08-01
In this paper an innovative prototype for a smart asthma spirometry device, to be used by doctors and asthma patients, is presented. The novelty of this prototype lies in the fact that it serves not only adults but also offers an efficient and engaging way to accommodate child patients, making it practical for doctors, patients, and parents to detect and monitor such intricate cases from as early as six years of age. Moreover, the apparatus integrates a vital parameter representing the Forced Expiratory Volume into the final diagnosis. The presented device automatically diagnoses patients, assesses their asthma condition, and schedules their medication process without excessive visits to medical centers, while providing doctors with accurate, pertinent, and comprehensive medical data in chronological fashion. Under the hood, a fully reliable digital hardware system lies alongside a flowmeter detector and a Bluetooth transmitter that interfaces with a user-friendly GUI-based smartphone application, which incorporates appealing animated graphics to encourage children to take the test. Furthermore, the device can store chronological data and provides a resourceful display for accurate tracking of patients' medical records, the evolution of their asthma condition, and the administered medication. Finally, the entire device is aligned with medical requirements per doctors' and telemedicine specialists' recommendations; the experiments carried out demonstrated the effectiveness and sustainable use of such a device.
An Exploratory Study of Selected Sexual Knowledge and Attitudes of Indiana Adults
ERIC Educational Resources Information Center
Clark, Christina A.; Baldwin, Kathleen L.; Tanner, Amanda E.
2007-01-01
Although there are numerous ways to obtain accurate information about sexuality, research suggests that many American adults do not have accurate sexuality and sexual health knowledge. This research investigated selected sexual knowledge and attitudes of adults in Indiana. A representative sample of men (n = 158) and women (n = 340) aged 18 to 89…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Cappa, Christopher D.; Fan, Jiwen
Anthropogenic emissions and land use changes have modified atmospheric aerosol concentrations and size distributions over time. Understanding preindustrial conditions and changes in organic aerosol due to anthropogenic activities is important because these features (1) influence estimates of aerosol radiative forcing and (2) can confound estimates of the historical response of climate to increases in greenhouse gases. Secondary organic aerosol (SOA), formed in the atmosphere by oxidation of organic gases, represents a major fraction of global submicron-sized atmospheric organic aerosol. Over the past decade, significant advances in understanding SOA properties and formation mechanisms have occurred through measurements, yet current climate models typically do not comprehensively include all important processes. Our review summarizes some of the important developments during the past decade in understanding SOA formation. We also highlight the importance of some processes that influence the growth of SOA particles to sizes relevant for clouds and radiative forcing, including formation of extremely low volatility organics in the gas phase, acid-catalyzed multiphase chemistry of isoprene epoxydiols, particle-phase oligomerization, and physical properties such as volatility and viscosity. Several SOA processes highlighted in this review are complex and interdependent and have nonlinear effects on the properties, formation, and evolution of SOA. Current global models neglect this complexity and nonlinearity and thus are less likely to accurately predict the climate forcing of SOA and project future climate sensitivity to greenhouse gases. Efforts are also needed to rank the most influential processes and nonlinear process-related interactions, so that these processes can be accurately represented in atmospheric chemistry-climate models.
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
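The residual-generation step described above can be sketched as follows. This is a minimal illustration assuming a mean-combination fusion rule and stand-in identifiers; the paper's actual MLP/RBF/SVM architectures and ensemble schemes are not reproduced here.

```python
import numpy as np

# Sketch of ensemble-based residual generation for fault detection.
# The fusion rule (simple mean) and the lambda stand-ins for the
# trained MLP / RBF / SVM identifiers are illustrative assumptions.

def ensemble_predict(models, x):
    """Combine the outputs of the individual dynamic identifiers."""
    return np.mean([m(x) for m in models], axis=0)

def residual(measured, models, x):
    """Residual signal: measured engine output minus ensemble prediction."""
    return measured - ensemble_predict(models, x)

# Stand-ins for trained identifiers (each slightly biased):
mlp = lambda x: 1.00 * x
rbf = lambda x: 0.98 * x
svm = lambda x: 1.02 * x

x = np.array([2.0])            # engine input/state (toy value)
healthy_output = np.array([2.0])
r = residual(healthy_output, [mlp, rbf, svm], x)
# For a healthy engine the residual stays near zero; a fault shifts it away.
```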
Ketamine-xylazine anesthesia causes hyperopic refractive shift in mice
Tkatchenko, Tatiana V.; Tkatchenko, Andrei V.
2010-01-01
Mice have increasingly been used as a model for studies of myopia. The key to successful use of mice for myopia research is the ability to obtain accurate measurements of refractive status of their eyes. In order to obtain accurate measurements of refractive errors in mice, the refraction needs to be performed along the optical axis of the eye. This represents a particular challenge, because mice are very difficult to immobilize. Recently, ketamine-xylazine anesthesia has been used to immobilize mice before measuring refractive errors, in combination with tropicamide ophthalmic solution to induce mydriasis. Although these drugs have increasingly been used while refracting mice, their effects on the refractive state of the mouse eye have not yet been investigated. Therefore, we have analyzed the effects of tropicamide eye drops and ketamine-xylazine anesthesia on refraction in P40 C57BL/6J mice. We have also explored two alternative methods to immobilize mice, i.e. the use of a restraining platform and pentobarbital anesthesia. We found that tropicamide caused a very small, but statistically significant, hyperopic shift in refraction. Pentobarbital did not have any substantial effect on refractive status, whereas ketamine-xylazine caused a large and highly significant hyperopic shift in refraction. We also found that the use of a restraining platform represents good alternative for immobilization of mice prior to refraction. Thus, our data suggest that ketamine-xylazine anesthesia should be avoided in studies of refractive development in mice and underscore the importance of providing appropriate experimental conditions when measuring refractive errors in mice. PMID:20813132
Recent advances in understanding secondary organic aerosol: Implications for global climate forcing
NASA Astrophysics Data System (ADS)
Shrivastava, Manish; Cappa, Christopher D.; Fan, Jiwen; Goldstein, Allen H.; Guenther, Alex B.; Jimenez, Jose L.; Kuang, Chongai; Laskin, Alexander; Martin, Scot T.; Ng, Nga Lee; Petaja, Tuukka; Pierce, Jeffrey R.; Rasch, Philip J.; Roldin, Pontus; Seinfeld, John H.; Shilling, John; Smith, James N.; Thornton, Joel A.; Volkamer, Rainer; Wang, Jian; Worsnop, Douglas R.; Zaveri, Rahul A.; Zelenyuk, Alla; Zhang, Qi
2017-06-01
Anthropogenic emissions and land use changes have modified atmospheric aerosol concentrations and size distributions over time. Understanding preindustrial conditions and changes in organic aerosol due to anthropogenic activities is important because these features (1) influence estimates of aerosol radiative forcing and (2) can confound estimates of the historical response of climate to increases in greenhouse gases. Secondary organic aerosol (SOA), formed in the atmosphere by oxidation of organic gases, represents a major fraction of global submicron-sized atmospheric organic aerosol. Over the past decade, significant advances in understanding SOA properties and formation mechanisms have occurred through measurements, yet current climate models typically do not comprehensively include all important processes. This review summarizes some of the important developments during the past decade in understanding SOA formation. We highlight the importance of some processes that influence the growth of SOA particles to sizes relevant for clouds and radiative forcing, including formation of extremely low volatility organics in the gas phase, acid-catalyzed multiphase chemistry of isoprene epoxydiols, particle-phase oligomerization, and physical properties such as volatility and viscosity. Several SOA processes highlighted in this review are complex and interdependent and have nonlinear effects on the properties, formation, and evolution of SOA. Current global models neglect this complexity and nonlinearity and thus are less likely to accurately predict the climate forcing of SOA and project future climate sensitivity to greenhouse gases. Efforts are also needed to rank the most influential processes and nonlinear process-related interactions, so that these processes can be accurately represented in atmospheric chemistry-climate models.
NASA Technical Reports Server (NTRS)
Hanold, Gregg T.; Hanold, David T.
2010-01-01
This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) game engine to provide the simulation environment, in which the routes taken by the human player are compared with those of a Synthetic Agent (BOT) executing the A* algorithm and the new Route Generation Algorithm. The new Route Generation Algorithm computes the BOT route from partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to proceed continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player, and environmental parameters returned by the query. The partial data query permits dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. This dynamic behavior allows the BOT to more accurately mimic the routes taken by a human under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
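For reference, the baseline A* search that the BOT executes can be sketched on a toy grid as follows. The grid, cost function, and Manhattan heuristic are invented for illustration; the paper's spatial network model of the UT2004 terrain is far richer.

```python
import heapq

# Minimal 4-connected grid A* sketch (illustrative; not the paper's
# terrain model). grid[r][c] == 1 marks a blocked cell.

def astar(grid, start, goal):
    """Return a shortest path from start to goal as a list of cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue                                          # already expanded better
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # routes around the wall in row 1
```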
Dynamic metabolic modeling of heterotrophic and mixotrophic microalgal growth on fermentative wastes
Baroukh, Caroline; Turon, Violette; Bernard, Olivier
2017-01-01
Microalgae are promising microorganisms for the production of numerous molecules of interest, such as pigments, proteins or triglycerides that can be turned into biofuels. Heterotrophic or mixotrophic growth on fermentative wastes represents an interesting approach to achieving higher biomass concentrations, while reducing cost and improving the environmental footprint. Fermentative wastes generally consist of a blend of diverse molecules and it is thus crucial to understand microalgal metabolism in such conditions, where switching between substrates might occur. Metabolic modeling has proven to be an efficient tool for understanding metabolism and guiding the optimization of biomass or target molecule production. Here, we focused on the metabolism of Chlorella sorokiniana growing heterotrophically and mixotrophically on acetate and butyrate. The metabolism was represented by 172 metabolic reactions. The DRUM modeling framework with a mildly relaxed quasi-steady-state assumption was used to account for the switching between substrates and the presence of light. Nine experiments were used to calibrate the model and nine experiments for the validation. The model efficiently predicted the experimental data, including the transient behavior during heterotrophic, autotrophic, mixotrophic and diauxic growth. It shows that an accurate model of metabolism can now be constructed, even in dynamic conditions, with the presence of several carbon substrates. It also opens new perspectives for the heterotrophic and mixotrophic use of microalgae, especially for biofuel production from wastes. PMID:28582469
A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.
Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy
2010-01-18
We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions expected to have weak, missing, or corrupt edges, or regions the user is not interested in segmenting even though they are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training we generate a map indicating the regions of the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that represent the curve, varying the influence each pixel has on the evolution of these parameters based on its confidence/interest label. When the labels indicate regions of low confidence, the regions containing accurate edges play a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions is approximated from the training data. Since our model evolves global parameters, it improves the segmentation even in regions with accurate edges, because we eliminate the influence of the low-confidence regions that might mislead the final segmentation. Similarly, when the labels indicate regions that are not of importance, we obtain a better segmentation of the object in the regions we are interested in.
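The core shape-prior representation described above, a curve given as the zero level set of a weighted combination of training signed distance functions (SDFs), can be sketched in one dimension. The toy SDFs and weights below are invented for illustration.

```python
import numpy as np

# Sketch: the segmenting contour is the zero level set of a weighted
# combination of training SDFs (shape priors). Data here are toy 1-D
# examples, not the paper's training segmentations.

def combined_sdf(sdfs, weights):
    """Weighted sum of training SDFs; its zero level set is the curve."""
    return np.tensordot(weights, sdfs, axes=1)

# Two toy 1-D "SDFs" whose zero crossings sit at x = 2 and x = 4:
x = np.arange(7, dtype=float)
sdfs = np.stack([x - 2.0, x - 4.0])

# Equal weights place the combined zero crossing midway, at x = 3:
phi = combined_sdf(sdfs, np.array([0.5, 0.5]))
```

Evolving the global weights (rather than every pixel independently) is what lets the confidence labels suppress unreliable regions without destroying the overall shape.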
How to constrain multi-objective calibrations of the SWAT model using water balance components
USDA-ARS?s Scientific Manuscript database
Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...
40 CFR 60.4151 - Establishment of accounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... information are to the best of my knowledge and belief true, accurate, and complete. I am aware that there are... certified in accordance with paragraph (b)(2)(ii) of this section. (3) Changing Hg authorized account representative and alternate Hg authorized account representative; changes in persons with ownership interest. (i...
Magnitude Knowledge: The Common Core of Numerical Development
ERIC Educational Resources Information Center
Siegler, Robert S.
2016-01-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…
surface reports in the NMC observational files. This revision represents the final update to NMC/NCEP Office Note Number 124, the format for representing meteorological surface observational data at NMC and now at NCEP. An accurate version of this Office Note is still necessary for historical
Human Judgment and Decision Making: Models and Applications.
ERIC Educational Resources Information Center
Loke, Wing Hong
This document notes that researchers study the processes involved in judgment and decision making and prescribe theories and models that reflect the behavior of the decision makers. It addresses the various models that are used to represent judgment and decision making, with particular interest in models that more accurately represent human…
22 CFR 61.3 - Certification and authentication criteria.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Section 61.3 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF... misrepresentation of the United States or other countries, or their people or institutions; (3) It is not representative, authentic, or accurate or does not represent the current state of factual knowledge of a subject...
10 CFR Appendix B to Subpart T of... - Certification Report for Certain Commercial Equipment
Code of Federal Regulations, 2011 CFR
2011-01-01
... information reported in this Certification Report(s) is true, accurate, and complete. The company is aware of... Federal Government. Name of Company Official or Third-Party Representative: Signature of Company Official or Third-Party Representative: Title: Date: Equipment Type: Manufacturer: Private Labeler (if...
Karimi, Mohammad Taghi
2015-01-01
Heart rate is an accurate and easy-to-use proxy for energy expenditure during walking, via the physiological cost index (PCI). However, in some conditions the heart rate during walking does not reach a steady state, and the PCI then cannot be used to determine energy expenditure. The total heart beat index (THBI) is a newer method intended to solve this problem. The aim of this research project was to find the sensitivity of both the physiological cost index (PCI) and the total heart beat index (THBI). Fifteen normal subjects, ten patients with flatfoot disorder, and two subjects with spinal cord injury were recruited for this research project. The PCI and THBI indexes were determined from heart beats with respect to walking speed and total distance walked, respectively. The sensitivity of the PCI was higher than that of the THBI in all three groups of subjects. Although the PCI and THBI are easy-to-use and reliable parameters for representing energy expenditure during walking, their sensitivity is not high enough to detect the influence of some orthotic interventions, such as the use of insoles or shoes, on energy expenditure during walking.
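The two indices can be sketched using their commonly cited definitions (assumed here, since the abstract does not give formulas; units and measurement protocols vary between studies):

```python
# Standard-form definitions of the two walking energy-expenditure
# indices (assumed; protocol details differ between studies).

def pci(hr_walk, hr_rest, speed_m_per_min):
    """Physiological cost index: extra heart beats per metre walked.

    hr_walk, hr_rest in beats/min; speed in m/min.
    """
    return (hr_walk - hr_rest) / speed_m_per_min

def thbi(total_beats, total_distance_m):
    """Total heart beat index: total beats per metre over the whole walk.

    Needs no steady-state heart rate, only totals.
    """
    return total_beats / total_distance_m

pci(hr_walk=110, hr_rest=80, speed_m_per_min=60)   # -> 0.5 beats/m
thbi(total_beats=550, total_distance_m=1000)       # -> 0.55 beats/m
```

Because the THBI uses only totals, it remains computable when heart rate never settles, which is exactly the case the PCI cannot handle.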
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhan, H.
2017-12-01
Horizontal drilling has become an appealing technology for water exploration and aquifer remediation in recent decades, owing to decreasing operational cost and many technical advantages over vertical wells. However, many previous studies of flow into horizontal wells were based on the uniform flux boundary condition (UFBC) for treating horizontal wells, which cannot accurately reflect the physical processes of flow inside the well. In this study, we investigated transient flow into a horizontal well in an anisotropic confined aquifer between two streams for three types of boundary conditions for treating the horizontal well: UFBC, uniform head boundary condition (UHBC), and mixed-type boundary condition (MTBC). The MTBC model considered both kinematic and frictional effects inside the horizontal well, where the kinematic effect refers to the accelerational and fluid-inflow effects. The new UFBC solution was derived by superimposing point sink/source solutions along the axis of the horizontal well with a uniform strength. The UHBC and MTBC solutions were obtained by a hybrid analytical-numerical method, and an iterative method was proposed to determine the minimum number of well segments required to yield a sufficiently accurate answer. The results showed obvious differences among the UFBC, UHBC, MTBC-friction, and MTBC solutions, where MTBC-friction denotes the solution that considers the frictional effect but ignores the kinematic effect. The MTBC-friction and MTBC solutions were sensitive to the flow rate, and the difference between these two solutions increases with the flow rate, suggesting that the kinematic effect cannot be ignored when studying flow to a horizontal well, especially when the flow rate is large.
The well specific inflow (WSI), the inflow per unit screen length at a specified location of the horizontal well, increased with distance along the wellbore for the MTBC model at the early stage, while the location of minimum WSI moved toward the well center over time, following a cubic polynomial function.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression model of the response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction from the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
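A multiple-regression fit of the kind underlying the RSE approach can be sketched as follows. The predictors and data below are invented stand-ins; the paper's model is built on the airline's actual operational variables.

```python
import numpy as np

# Illustrative multiple linear regression of landing speed on candidate
# predictors (all data synthetic; predictor names are assumptions).

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # e.g. weight, headwind, flap setting
true_coef = np.array([2.0, -1.0, 0.5])
y = 130 + X @ true_coef + rng.normal(scale=0.1, size=200)   # landing speed, kt

A = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
resid_sd = np.std(y - pred)                # std. dev. of the prediction error
```

Comparing `resid_sd` against a baseline error standard deviation is the kind of evaluation the paper reports (e.g. the 18% reduction for the full-variable RSE model).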
Meal Microstructure Characterization from Sensor-Based Food Intake Detection.
Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A; Higgins, Janine A; Sazonov, Edward
2017-01-01
To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1–30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10–30 s (p-value < 0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary.
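The resampling-and-counting analysis described above can be sketched as follows. The event definition (a maximal run of positive samples) and the any-positive downsampling rule are assumptions for illustration; the paper's exact processing may differ.

```python
# Sketch: resample a 0.1 s push-button record to a coarser resolution
# and count eating events. Event = maximal run of consecutive 1s
# (an assumed definition).

def resample(signal, factor):
    """Downsample: a coarse sample is 1 if any fine sample in its window is 1."""
    return [int(any(signal[i:i + factor])) for i in range(0, len(signal), factor)]

def count_events(signal):
    """Number of contiguous runs of 1s in a binary sequence."""
    return sum(1 for i, s in enumerate(signal) if s and (i == 0 or not signal[i - 1]))

raw = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]   # 0.1 s push-button samples (toy data)
count_events(raw)                # 3 events at the native 0.1 s resolution
count_events(resample(raw, 5))   # coarsening to 0.5 s merges nearby events
```

This is the mechanism behind the paper's finding: coarse resolutions merge distinct eating events, so event counts diverge from the fine-resolution reference.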
NASA Astrophysics Data System (ADS)
Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin
2018-01-01
Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures
NASA Astrophysics Data System (ADS)
Rowley, R. L.; Stoker, J. M.; Giles, N. F.
1991-05-01
The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of “effective” LJ parameters are not expected to accurately simulate diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.
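The abstract does not spell out which combining rules were used; the conventional choice for LJ cross interactions is the Lorentz-Berthelot rules (arithmetic mean of the size parameters, geometric mean of the well depths), sketched here with illustrative parameter values rather than the paper's fitted ones:

```python
import math

def lorentz_berthelot(sigma1, eps1, sigma2, eps2):
    """Lorentz-Berthelot combining rules for LJ cross-interaction parameters:
    sigma12 = (sigma1 + sigma2)/2, eps12 = sqrt(eps1 * eps2)."""
    sigma12 = 0.5 * (sigma1 + sigma2)
    eps12 = math.sqrt(eps1 * eps2)
    return sigma12, eps12

# Illustrative effective parameters only: sigma in nm, epsilon/kB in K.
sigma12, eps12 = lorentz_berthelot(0.59, 399.0, 0.43, 327.0)
print(f"sigma12 = {sigma12:.3f} nm, eps12/kB = {eps12:.1f} K")
```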
Multisensory Self-Motion Compensation During Object Trajectory Judgments
Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.
2015-01-01
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317
NASA Astrophysics Data System (ADS)
Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.
2013-11-01
The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that better reproduce the reduced physics of a flat plate boundary layer, for comparison with the complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers, and high-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow, and the Mach number of the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. Part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ştefănescu, R., E-mail: rstefane@vt.edu; Sandu, A., E-mail: sandu@cs.vt.edu; Navon, I.M., E-mail: inavon@fsu.edu
2015-08-15
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that the reduced order Karush–Kuhn–Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies for constructing the reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov–Galerkin projections. For the first time, POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov–Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
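The POD construction referred to above can be sketched with the method of snapshots: the POD basis consists of the leading left singular vectors of a snapshot matrix, and Galerkin projection compresses a full-order operator onto that basis. The data below is a random low-rank stand-in, not the shallow water system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy snapshot matrix: columns are full-order states sampled in time.
# The data is built to lie exactly on a 3-dimensional subspace.
n, m = 200, 40
modes = rng.normal(size=(n, 3))
snapshots = modes @ rng.normal(size=(3, m))

# POD basis from the thin SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing 99.99% of energy
Ur = U[:, :r]

# Galerkin projection of a full-order linear operator onto the POD subspace.
A = rng.normal(size=(n, n))
Ar = Ur.T @ A @ Ur                              # r x r reduced operator

# Snapshots are reconstructed essentially exactly from the r retained modes.
rec_err = (np.linalg.norm(snapshots - Ur @ (Ur.T @ snapshots))
           / np.linalg.norm(snapshots))
print(r, Ar.shape, f"{rec_err:.1e}")
```

DEIM and tensorial POD extend this idea to nonlinear terms, which plain Galerkin projection alone does not make cheap.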
Kawai, Nobuyuki; He, Hongshen
2016-01-01
Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
Modeling of Fission Gas Release in UO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
MH Krohn
2006-01-23
A two-stage gas release model was examined to determine if it could provide a physically realistic and accurate model for fission gas release under Prometheus conditions. The single-stage Booth model [1], which is often used to calculate fission gas release, is considered to be oversimplified and not representative of the mechanisms that occur during fission gas release. Two-stage gas release models require saturation at the grain boundaries before gas is released, leading to a time delay in the release of gases generated in the fuel. Two versions of a two-stage model developed by Forsberg and Massih [2] were implemented using Mathcad [3]. The original Forsberg and Massih model [2] and a modified version of the Forsberg and Massih model that is used in a commercially available fuel performance code (FRAPCON-3) [4] were examined. After an examination of these models, it is apparent that without further development and validation neither of these models should be used to calculate fission gas release under Prometheus-type conditions. There is too much uncertainty in the input parameters used in the models. In addition, the data used to tune the modified Forsberg and Massih model (FRAPCON-3) were collected under commercial reactor conditions, which have higher fission rates relative to Prometheus conditions [4].
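For contrast with the two-stage models, the single-stage Booth model mentioned above reduces, in its common short-time approximation, to a closed-form release fraction. The parameter values below are illustrative assumptions only, not Prometheus design data:

```python
import math

def booth_release_fraction(D, a, t):
    """Short-time approximation of the single-stage Booth equivalent-sphere
    model: f = 6*sqrt(D't/pi) - 3*D't with D' = D/a^2, valid for small f
    (roughly f < 0.57)."""
    tau = (D / a**2) * t
    return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau

# Illustrative numbers only: effective diffusivity D in m^2/s, equivalent
# sphere radius a in m, time t in s (about one year).
f = booth_release_fraction(D=1e-20, a=5e-6, t=3.15e7)
print(f"released fraction: {f:.4f}")
```

A two-stage model would hold this gas at the grain boundaries until saturation, so its release curve lags the Booth curve in time.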
Wetlands inform how climate extremes influence surface water expansion and contraction
NASA Astrophysics Data System (ADS)
Vanderhoof, Melanie K.; Lane, Charles R.; McManus, Michael G.; Alexander, Laurie C.; Christensen, Jay R.
2018-03-01
Effective monitoring and prediction of flood and drought events requires an improved understanding of how and why surface water expansion and contraction in response to climate varies across space. This paper sought to (1) quantify how interannual patterns of surface water expansion and contraction vary spatially across the Prairie Pothole Region (PPR) and adjacent Northern Prairie (NP) in the United States, and (2) explore how landscape characteristics influence the relationship between climate inputs and surface water dynamics. Due to differences in glacial history, the PPR and NP show distinct patterns with regard to drainage development and wetland density, together providing a diversity of conditions for examining surface water dynamics. We used Landsat imagery to characterize variability in surface water extent across 11 Landsat path/rows representing the PPR and NP (images spanned 1985-2015). The PPR not only experienced a 2.6-fold greater surface water extent under median conditions relative to the NP, but also showed a 3.4-fold greater change in surface water extent between drought and deluge conditions. The relationship between surface water extent and accumulated water availability (precipitation minus potential evapotranspiration) was quantified per watershed and statistically related to variables representing hydrology-related landscape characteristics (e.g., infiltration capacity, surface storage capacity, stream density). To investigate the influence stream connectivity has on the rate at which surface water leaves a given location, we modeled stream-connected and stream-disconnected surface water separately. Stream-connected surface water showed a greater expansion with wetter climatic conditions in landscapes with greater total wetland area, but lower total wetland density. Disconnected surface water showed a greater expansion with wetter climatic conditions in landscapes with higher wetland density, lower infiltration and less anthropogenic drainage.
From these findings, we can expect that shifts in precipitation and evaporative demand will have uneven effects on surface water quantity. Accurate predictions regarding the effect of climate change on surface water quantity will require consideration of hydrology-related landscape characteristics including wetland storage and arrangement.
Spurr, Josiah Edward; Emmons, S.F.
1896-01-01
From the base of the Wasatch Mountains on the east to that of the Sierra Nevada on the west stretches an arid region known to the early geographers as the Great American Desert, but more recently and accurately called the Great Basin, for the reason that it has no external drainage to the ocean. Geological investigation has shown that this region was once occupied by two large and distinct fresh-water seas, which have gradually disappeared by evaporation under the influence of slowly changing climatic conditions, until at the present day they are represented by relatively small saline lakes at the eastern and western extremities of the region, respectively.
NASA Technical Reports Server (NTRS)
Brune, G. W.; Weber, J. A.; Johnson, F. T.; Lu, P.; Rubbert, P. E.
1975-01-01
A method of predicting forces, moments, and detailed surface pressures on thin, sharp-edged wings with leading-edge vortex separation in incompressible flow is presented. The method employs an inviscid flow model in which the wing and the rolled-up vortex sheets are represented by piecewise, continuous quadratic doublet sheet distributions. The Kutta condition is imposed on all wing edges. Computed results are compared with experimental data and with the predictions of the leading-edge suction analogy for a selected number of wing planforms over a wide range of angle of attack. These comparisons show the method to be very promising, capable of producing not only force predictions, but also accurate predictions of detailed surface pressure distributions, loads, and moments.
Defining quality in radiology.
Blackmore, C Craig
2007-04-01
The introduction of pay for performance in medicine represents an opportunity for radiologists to define quality in radiology. Radiology quality can be defined on the basis of the production model that currently drives reimbursement, codifying the role of radiologists as being limited to the production of timely and accurate radiology reports produced in conditions of maximum patient safety and communicated in a timely manner. Alternately, quality in radiology can also encompass the professional role of radiologists as diagnostic imaging specialists responsible for the appropriate use, selection, interpretation, and application of imaging. Although potentially challenging to implement, the professional model for radiology quality is a comprehensive assessment of the ways in which radiologists add value to patient care. This essay is a discussion of the definition of radiology quality and the implications of that definition.
Load Weight Classification of The Quayside Container Crane Based On K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Bingqian; Hu, Xiong; Tang, Gang; Wang, Yide
2017-07-01
The precise knowledge of the load weight of each operation of the quayside container crane is important for accurately assessing the service life of the crane. The load weight is directly related to the vibration intensity. Through the study of the vibration of the crane's hoist motor in the radial and axial directions, we can classify the load using the K-means clustering algorithm and quantitative statistical analysis. Correlation analysis shows that vibration in the radial direction is significantly and positively correlated with that in the axial direction, which means that data from only one of the directions suffices to carry out the study, improving efficiency without degrading the accuracy of the load classification. The proposed method can well represent the real-time working condition of the crane.
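The classification step can be sketched with a minimal K-means on a single vibration feature. The feature (radial RMS), the three load classes, and the data below are assumptions for illustration, not the crane measurements:

```python
import numpy as np

def kmeans_1d(x, k, iters=100):
    """Minimal Lloyd's K-means on a 1-D feature, initialized at evenly
    spaced quantiles so this toy example converges deterministically."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Synthetic radial-vibration RMS values for three hypothetical load classes
# (light, medium, heavy loads).
rng = np.random.default_rng(42)
rms = np.concatenate([rng.normal(0.5, 0.05, 100),
                      rng.normal(1.0, 0.05, 100),
                      rng.normal(1.8, 0.05, 100)])
centers, labels = kmeans_1d(rms, k=3)
print(np.sort(centers))
```

The recovered cluster centers land near the three underlying load levels, which is the sense in which the clusters can be read as load-weight classes.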
Nonlinear Analysis and Preliminary Testing Results of a Hybrid Wing Body Center Section Test Article
NASA Technical Reports Server (NTRS)
Przekop, Adam; Jegley, Dawn C.; Rouse, Marshall; Lovejoy, Andrew E.; Wu, Hsi-Yung T.
2015-01-01
A large test article was recently designed, analyzed, fabricated, and successfully tested up to the representative design ultimate loads to demonstrate that stiffened composite panels with through-the-thickness reinforcement are a viable option for the next generation large transport category aircraft, including non-conventional configurations such as the hybrid wing body. This paper focuses on finite element analysis and test data correlation of the hybrid wing body center section test article under mechanical, pressure and combined load conditions. Good agreement between predictive nonlinear finite element analysis and test data is found. Results indicate that a geometrically nonlinear analysis is needed to accurately capture the behavior of the non-circular pressurized and highly-stressed structure when the design approach permits local buckling.
A multisensor evaluation of the asymmetric convective model, version 2, in southeast Texas.
Kolling, Jenna S; Pleim, Jonathan E; Jeffries, Harvey E; Vizuete, William
2013-01-01
There currently exist a number of planetary boundary layer (PBL) schemes that can represent the effects of turbulence in daytime convective conditions, although these schemes remain a large source of uncertainty in meteorology and air quality model simulations. This study evaluates a recently developed combined local and nonlocal closure PBL scheme, the Asymmetric Convective Model, version 2 (ACM2), against PBL observations taken from radar wind profilers, a ground-based lidar, and multiple daytime radiosonde balloon launches. These observations were compared against predictions of PBLs from the Weather Research and Forecasting (WRF) model version 3.1 with the ACM2 PBL scheme option, and the Fifth-Generation Meteorological Model (MM5) version 3.7.3 with the Eta PBL scheme option that is currently being used to develop ozone control strategies in southeast Texas. MM5 and WRF predictions during the regulatory modeling episode were evaluated on their ability to predict the rise and fall of the PBL during daytime convective conditions across southeastern Texas. The MM5-predicted PBL heights consistently underpredicted the observations and were also lower than the WRF PBL predictions. The analysis reveals that the MM5 predicted a slower rising and shallower PBL not representative of the daytime urban boundary layer. Alternatively, the WRF model predicted a more accurate PBL evolution, improving the root mean square error (RMSE) both temporally and spatially. The WRF model also more accurately predicted vertical profiles of temperature and moisture in the lowest 3 km of the atmosphere. Inspection of median surface temperature and moisture time-series plots revealed higher predicted surface temperatures in WRF and more surface moisture in MM5. These could not be attributed to surface heat fluxes, and thus the differences in performance of the WRF and MM5 models are likely due to the PBL schemes.
An accurate depiction of the diurnal evolution of the planetary boundary layer (PBL) is necessary for realistic air quality simulations and for formulating effective policy. The meteorological model used to support the southeast Texas O3 attainment demonstration made predictions of the PBL that were consistently less than those found in observations. Use of the Asymmetric Convective Model, version 2 (ACM2), produced taller predicted PBL heights and improved model performance. A lower predicted PBL height in an air quality model would increase precursor concentrations and change the chemical production of O3, and possibly the response to control strategies.
Eyeglasses Lens Contour Extraction from Facial Images Using an Efficient Shape Description
Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu
2013-01-01
This paper presents a system that automatically extracts the position of the eyeglasses and the accurate shape and size of the frame lenses in facial images. The novelty brought by this paper consists in three key contributions. The first one is an original model for representing the shape of the eyeglasses lens, using Fourier descriptors. The second one is a method for generating the search space starting from a finite, relatively small number of representative lens shapes based on Fourier morphing. Finally, we propose an accurate lens contour extraction algorithm using a multi-stage Monte Carlo sampling technique. Multiple experiments demonstrate the effectiveness of our approach. PMID:24152926
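The Fourier-descriptor representation of a closed contour can be sketched by treating contour points as complex numbers and keeping only low-order FFT coefficients. The ellipse below is a stand-in for a lens rim (the paper's actual shape set and descriptor count are not reproduced here); an ellipse is exactly captured by the DC term plus the +/-1 harmonics:

```python
import numpy as np

def fourier_descriptors(contour, n_keep):
    """Low-order Fourier descriptors of a closed contour given as complex
    points x + iy; keeps the DC term and n_keep harmonics on each side."""
    coeffs = np.fft.fft(contour) / len(contour)
    kept = np.zeros_like(coeffs)
    kept[0] = coeffs[0]
    kept[1:n_keep + 1] = coeffs[1:n_keep + 1]
    kept[-n_keep:] = coeffs[-n_keep:]
    return kept

def reconstruct(kept):
    """Invert the (masked) descriptor spectrum back to contour points."""
    return np.fft.ifft(kept * len(kept))

# Elliptical "lens" contour sampled at 256 points, offset from the origin.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
lens = 3.0 * np.cos(t) + 1j * 1.5 * np.sin(t) + (10 + 5j)
kept = fourier_descriptors(lens, n_keep=1)
rec = reconstruct(kept)
err = np.max(np.abs(rec - lens))
print(f"max reconstruction error: {err:.2e}")
```

Truncating to a few harmonics like this is what makes morphing between representative lens shapes a smooth, low-dimensional operation.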
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information, travelers prefer the best-condition route, but delayed information reflects past rather than current traffic conditions; travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
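The mechanism can be sketched with a toy two-route, day-to-day choice model under delayed feedback. The linear cost function, switching fraction, delay length, and threshold values below are all invented for illustration, not the paper's model:

```python
import numpy as np

def simulate(n_travelers=1000, steps=200, delay=5, br=0.0, seed=0):
    """Two-route choice with delayed travel-time feedback. Travel time grows
    linearly with flow; travelers shift toward the faster-looking route
    unless the reported difference is within the boundedly rational
    threshold `br`, in which case they split at random. Returns the
    flow oscillation (std of route-1 flow over the second half)."""
    rng = np.random.default_rng(seed)
    flow1 = n_travelers // 2
    times = [(flow1, n_travelers - flow1)] * (delay + 1)
    history = []
    for _ in range(steps):
        f1_old, f2_old = times[-delay - 1]            # delayed information
        t1, t2 = 10 + 0.02 * f1_old, 10 + 0.02 * f2_old
        if abs(t1 - t2) <= br:
            flow1 = rng.binomial(n_travelers, 0.5)    # indifferent: random split
        elif t1 < t2:
            flow1 = min(n_travelers, flow1 + int(0.2 * n_travelers))
        else:
            flow1 = max(0, flow1 - int(0.2 * n_travelers))
        times.append((flow1, n_travelers - flow1))
        history.append(flow1)
    return np.std(history[steps // 2:])

# Accurate-but-delayed information oscillates; a threshold damps it.
print(simulate(br=0.0), simulate(br=5.0))
```

With `br=0` the delayed bang-bang switching drives large flow swings, while a nonzero threshold keeps the system near the even split, mirroring the abstract's claim that bounded rationality reduces oscillation.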
NASA Astrophysics Data System (ADS)
Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.
2018-06-01
We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling, and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose three modifications to SAMs that will provide more accurate high-redshift simulations. These include (1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; (2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and (3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.
Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions
NASA Astrophysics Data System (ADS)
Kim, K.; Rodgers, A. J.
2015-12-01
Recent high-quality, atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically by estimating yield and the source time function. Typically, yield is estimated from measured signal features, such as the peak pressure, impulse, duration, and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustic sources during chemical explosions. A robust and accurate inversion technique for the acoustic source is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion and thus provides valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) for a series of explosions with different emplacement conditions and investigate the relationship of the acoustic sources to the yields of the explosions. The time histories of acoustic sources may refine our knowledge of the sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
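One common way to recover a source time function from a recorded waveform and a numerical Green's function is frequency-domain (water-level regularized) deconvolution. Whether the study uses this exact scheme is not stated, so the sketch below, with a synthetic Green's function and blast-like pulse, is purely illustrative:

```python
import numpy as np

def waterlevel_deconv(record, green, eps=1e-3):
    """Frequency-domain water-level deconvolution: estimate the source
    time function s from record = green * s (convolution model), with
    the spectral denominator floored at eps times its maximum."""
    n = len(record)
    G = np.fft.rfft(green, n)
    P = np.fft.rfft(record, n)
    denom = np.maximum(np.abs(G)**2, eps * np.max(np.abs(G)**2))
    return np.fft.irfft(np.conj(G) * P / denom, n)

# Toy example: a damped-oscillation Green's function and a short source
# pulse; the source is recovered from the synthetic record.
green = np.exp(-np.arange(128) / 10.0) * np.cos(np.arange(128) / 3.0)
source = np.zeros(128)
source[5:15] = np.hanning(10)
record = np.convolve(green, source)[:128]
est = waterlevel_deconv(record, green, eps=1e-6)
print(np.max(np.abs(est - source)))
```

In practice the Green's functions would come from a propagation solver including atmospheric and topographic effects, and the water level would be chosen against the noise floor rather than set near zero as here.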
Ensemble perception of color in autistic adults.
Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna
2017-05-01
Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Interstitial distribution of charged macromolecules in the dog lung: a kinetic model.
Parker, J C; Miniati, M; Pitt, R; Taylor, A E
1987-01-01
A mathematical model was constructed to investigate conflicting physiologic data concerning the charge selectivity of continuous capillaries to macromolecules in the lung. We simulated the equilibration kinetics of lactate dehydrogenase (MR 4.2 nm) isozymes LDH 1 (pI = 5.0) and LDH 5 (pI = 7.9) between plasma and lymph using previously measured permeability coefficients, lung tissue distribution volumes (VA), and plasma concentrations (CP). Our hypothesis is that the fixed anionic charges in the interstitium, basement membrane, and cell surfaces determine equilibration rather than charged membrane effects at the capillary barrier, so the same capillary permeability coefficients were used for both isozymes. Capillary filtration rates and protein fluxes were calculated using conventional flux equations. Initial conditions at baseline and increased left atrial pressures (PLA) were those measured in animal studies. Simulated equilibration of the isozymes over 30 h in the model at baseline capillary pressures accurately predicted the observed differences in lymph/plasma concentration ratios (CL/CP) between isozymes at 4 h and the equilibration of these ratios at 24 h. Quantitative prediction of isozyme CL/CP ratios was also obtained at increased PLA. However, an additional cation-selective compartment representing the surface glycocalyx was required to accurately simulate the initially higher transcapillary clearances of cationic LDH 5. Thus experimental data supporting the negative barrier, positive barrier, and no charge barrier hypotheses were accurately reproduced by the model using only the observed differences in interstitial partitioning of the isozymes without differences in capillary selectivity.
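The "conventional flux equations" in such compartment models are commonly the Starling filtration equation and a convection-diffusion solute flux; the abstract does not give its exact forms, so the sketch below uses these standard expressions with purely illustrative parameter values (not the dog-lung data):

```python
def filtration_rate(Kf, Pc, Pi, sigma, pic, pii):
    """Starling equation for capillary filtration:
    Jv = Kf * ((Pc - Pi) - sigma * (pic - pii)),
    with hydrostatic pressures Pc, Pi and oncotic pressures pic, pii."""
    return Kf * ((Pc - Pi) - sigma * (pic - pii))

def solute_flux(Jv, sigma, Cp, Ci, PS):
    """Convection-diffusion solute flux:
    Js = Jv * (1 - sigma) * Cmean + PS * (Cp - Ci),
    approximating the mean intramembrane concentration Cmean by the
    arithmetic mean of plasma and interstitial concentrations."""
    return Jv * (1 - sigma) * 0.5 * (Cp + Ci) + PS * (Cp - Ci)

# Illustrative values only: pressures in mmHg, arbitrary flow/flux units.
Jv = filtration_rate(Kf=0.2, Pc=10.0, Pi=-2.0, sigma=0.7, pic=25.0, pii=15.0)
Js = solute_flux(Jv, sigma=0.7, Cp=1.0, Ci=0.2, PS=0.05)
print(Jv, Js)
```

Running the two isozymes through identical `sigma` and `PS` values, and letting only their interstitial distribution volumes differ, is the modeling choice the abstract describes.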
NASA Astrophysics Data System (ADS)
Zhou, Bowen; Chow, Fotini
2012-11-01
This numerical study investigates the nighttime flow dynamics in a steep valley. The Owens Valley in California is highly complex, and represents a challenging terrain for large-eddy simulations (LES). To ensure a faithful representation of the nighttime atmospheric boundary layer (ABL), realistic external boundary conditions are provided through grid nesting. The model obtains initial and lateral boundary conditions from reanalysis data, and bottom boundary conditions from a land-surface model. We demonstrate the ability to extend a mesoscale model to LES resolutions through a systematic grid-nesting framework, achieving accurate simulations of the stable ABL over complex terrain. Nighttime cold-air flow was channeled through a gap on the valley sidewall. The resulting katabatic current induced a cross-valley flow. Directional shear against the down-valley flow in the lower layers of the valley led to breaking Kelvin-Helmholtz waves at the interface, which is captured only on the LES grid. Later that night, the flow transitioned from down-slope to down-valley near the western sidewall, leading to a transient warming episode. Simulation results are verified against field observations and reveal good spatial and temporal precision. Supported by NSF grant ATM-0645784.
A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale
Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; ...
2017-07-20
Here, we develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson–Nernst–Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.
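The alternating Schwarz coupling can be illustrated on a much simpler stand-in problem than the PNP/cDFT system: a 1D Laplace equation split across two overlapping subdomains, with interface values exchanged every sweep. The grid size and interface locations below are arbitrary choices, not the paper's setup.

```python
import numpy as np

def solve_poisson(u_left, u_right, n):
    """Solve u'' = 0 on a subdomain with n interior points and Dirichlet
    values u_left, u_right, using the standard 3-point stencil."""
    A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    b = np.zeros(n)
    b[0] -= u_left
    b[-1] -= u_right
    return np.linalg.solve(A, b)

# Global grid on [0, 1] with 11 points; the exact solution of u'' = 0
# with u(0) = 0, u(1) = 1 is u(x) = x.
x = np.linspace(0.0, 1.0, 11)
u = np.zeros(11)
u[-1] = 1.0

# Overlapping subdomains: left covers indices 0..6, right covers 4..10.
# Each sweep solves one subdomain using the other's current interface value.
for _ in range(20):  # alternating Schwarz sweeps
    u[1:6] = solve_poisson(u[0], u[6], 5)    # left solve, interface at x = 0.6
    u[5:10] = solve_poisson(u[4], u[10], 5)  # right solve, interface at x = 0.4
```

The interface error contracts geometrically with each sweep (by roughly the ratio of overlap to subdomain size), which is the same mechanism that lets the PNP and cDFT subdomain solves converge to a consistent global solution.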
Inaccuracy of a physical strain trainer for the monitoring of partial weight bearing.
Pauser, Johannes; Jendrissek, Andreas; Swoboda, Bernd; Gelse, Kolja; Carl, Hans-Dieter
2011-11-01
To investigate the use of a physical strain trainer for the monitoring of partial weight bearing. Case series with healthy volunteers. Orthopedic clinic. Healthy volunteers (N=10) with no history of foot complaints. Volunteers were taught to limit weight bearing to 10% body weight (BW) and 50% BW, monitored by a physical strain trainer. The parameters peak pressure, maximum force, force-time integral, and pressure-time integral were assessed by dynamic pedobarography while volunteers walked with full BW (condition 1), 50% BW (condition 2), and 10% BW (condition 3). With 10% BW (condition 3), forces were statistically significantly reduced relative to normative gait (condition 1) under the hindfoot, where the physical strain trainer is placed. All pedobarographic parameters were, however, exceeded when the total foot was measured. A limitation to 10% BW with the physical strain trainer (condition 3) was equivalent to a halving of peak pressure and maximum force for the total foot with normative gait (condition 1). Halved BW (condition 2) left a remaining mean 82% of peak pressure and mean 59% of maximum force from full BW (condition 1). Controlling partial weight bearing with the hindfoot-addressing device does not represent complete foot loading. Such devices may be preferably applied in cases where the hindfoot in particular must be off-loaded. Other training devices (eg, biofeedback soles) that monitor forces of the total foot have to be used to control partial weight bearing of the lower limb accurately. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Evaluation of Far-Field Boundary Conditions for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Kreider, Kevin L.; Heminger, John A.
2002-01-01
This paper presents a detailed study of four far-field boundary conditions used in solving the single-airfoil gust response problem. The boundary conditions examined are the partial Sommerfeld radiation condition with only radial derivatives, the full Sommerfeld radiation condition with both radial and tangential derivatives, the Bayliss-Turkel condition of order one, and the Hagstrom-Hariharan condition of order one. The main objectives of the study were to determine which far-field boundary condition was most accurate, which condition was least sensitive to changes in grid, and which condition was best overall in terms of both accuracy and efficiency. Through a systematic study of the flat plate gust response problem, it was determined that the Hagstrom-Hariharan condition was most accurate, the Bayliss-Turkel condition was least sensitive to changes in grid, and Bayliss-Turkel was best in terms of both accuracy and efficiency.
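The radiation-condition idea can be illustrated on a toy 1D problem rather than the gust solver itself: a first-order Sommerfeld condition u_t ± c u_x = 0, discretized by upwinding, lets a pulse leave the computational domain with little spurious reflection. All grid parameters below are arbitrary.

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with first-order Sommerfeld
# (radiation) conditions u_t +/- c u_x = 0 at both ends. A Gaussian
# pulse splits into two traveling waves that should exit the domain.
nx, c, dx = 101, 1.0, 0.01
dt = 0.005
r = c * dt / dx  # Courant number (0.5, stable)
x = np.linspace(0.0, 1.0, nx)

u_old = np.exp(-50.0 * (x - 0.5) ** 2)  # u(x, 0), initially at rest
u = u_old.copy()
# First step using u_t(x, 0) = 0: u^1 = u^0 + (r^2 / 2) * d2u
u[1:-1] = u_old[1:-1] + 0.5 * r**2 * (u_old[2:] - 2 * u_old[1:-1] + u_old[:-2])

for _ in range(400):  # advance to t = 2, long after the pulses exit
    u_new = np.empty(nx)
    u_new[1:-1] = 2 * u[1:-1] - u_old[1:-1] + r**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # Upwind discretization of the radiation condition at each end
    u_new[0] = u[0] + r * (u[1] - u[0])
    u_new[-1] = u[-1] - r * (u[-1] - u[-2])
    u_old, u = u, u_new

# Nearly all wave energy has radiated out of the domain
```

A reflecting (Dirichlet) boundary would instead trap the pulse, bouncing it back and forth indefinitely; the accuracy differences studied in the paper come from how well each far-field condition approximates this outgoing-wave behavior in two dimensions.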
Impact of study design on development and evaluation of an activity-type classifier.
van Hees, Vincent T; Golubic, Rajna; Ekelund, Ulf; Brage, Søren
2013-04-01
Methods to classify activity types are often evaluated with an experimental protocol involving prescribed physical activities under confined (laboratory) conditions, which may not reflect real-life conditions. The present study aims to evaluate how study design may impact classifier performance in real life. Twenty-eight healthy participants (21-53 yr) were asked to wear nine triaxial accelerometers while performing 58 activity types selected to simulate activities in real life. For each sensor location, logistic classifiers were trained in subsets of up to 8 activities to distinguish between walking and nonwalking activities and were then evaluated in all 58 activities. Different weighting factors were used to convert the resulting confusion matrices into an estimation of the confusion matrix as would apply in the real-life setting by creating four different real-life scenarios, as well as one traditional laboratory scenario. The sensitivity of a classifier estimated with a traditional laboratory protocol is within the range of estimates derived from real-life scenarios for any body location. The specificity, however, was systematically overestimated by the traditional laboratory scenario. Walking time was systematically overestimated, except for lower back sensor data (range: 7-757%). In conclusion, classifier performance under confined conditions may not accurately reflect classifier performance in real life. Future studies that aim to evaluate activity classification methods should pay special attention to the representativeness of experimental conditions for real-life conditions.
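The reweighting step can be sketched as follows; the activity set, misclassification rates, and time budgets below are invented for illustration, not the study's data. Per-activity error rates estimated in the lab are reweighted by the time each activity occupies in a given scenario:

```python
# Hypothetical per-activity lab results: probability the classifier
# labels the activity "walking", whether it truly is walking, and
# hours/day assumed in the lab vs. a real-life scenario.
activities = {
    "walking":  (0.92, True,  1.0, 1.5),
    "standing": (0.15, False, 1.0, 3.0),
    "sitting":  (0.02, False, 1.0, 8.0),
    "cycling":  (0.30, False, 1.0, 0.5),
}

def scenario_stats(scenario):
    """Sensitivity/specificity after reweighting per-activity error
    rates by the time budget of a scenario (0 = lab, 1 = real life)."""
    sens_num = sens_den = spec_num = spec_den = 0.0
    for p_walk, is_walking, *hours in activities.values():
        w = hours[scenario]
        if is_walking:
            sens_num += w * p_walk
            sens_den += w
        else:
            spec_num += w * (1.0 - p_walk)
            spec_den += w
    return sens_num / sens_den, spec_num / spec_den

lab_sens, lab_spec = scenario_stats(0)
real_sens, real_spec = scenario_stats(1)
# The specificity estimate shifts once the nonwalking activities receive
# their real-life weights, while sensitivity here depends only on walking.
```

This makes the study's point concrete: sensitivity (a property of the walking class alone) is insensitive to the scenario, while specificity depends on which confusable activities dominate the time budget.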
NASA Astrophysics Data System (ADS)
O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.
2014-05-01
Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention are among the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.
Deficits in Category Learning in Older Adults: Rule-Based Versus Clustering Accounts
2017-01-01
Memory research has long been one of the key areas of investigation for cognitive aging researchers but only in the last decade or so has categorization been used to understand age differences in cognition. Categorization tasks focus more heavily on the grouping and organization of items in memory, and often on the process of learning relationships through trial and error. Categorization studies allow researchers to more accurately characterize age differences in cognition: whether older adults show declines in the way in which they represent categories with simple rules or declines in representing categories by similarity to past examples. In the current study, young and older adults participated in a set of classic category learning problems, which allowed us to distinguish between three hypotheses: (a) rule-complexity: categories were represented exclusively with rules and older adults had differential difficulty when more complex rules were required, (b) rule-specific: categories could be represented either by rules or by similarity, and there were age deficits in using rules, and (c) clustering: similarity was mainly used and older adults constructed a less-detailed representation by lumping more items into fewer clusters. The ordinal levels of performance across different conditions argued against rule-complexity, as older adults showed greater deficits on less complex categories. The data also provided evidence against rule-specificity, as single-dimensional rules could not explain age declines. Instead, computational modeling of the data indicated that older adults utilized fewer conceptual clusters of items in memory than did young adults. PMID:28816474
Yuan, Long; Ma, Li; Dillon, Lisa; Fancher, R Marcus; Sun, Huadong; Zhu, Mingshe; Lehman-McKeeman, Lois; Aubry, Anne-Françoise; Ji, Qin C
2016-11-16
LC-MS/MS has been widely applied to the quantitative analysis of tissue samples. However, one key remaining issue is that the extraction recovery of analyte from spiked tissue calibration standard and quality control samples (QCs) may not accurately represent the "true" recovery of analyte from incurred tissue samples. This may affect the accuracy of LC-MS/MS tissue bioanalysis. Here, we investigated whether the recovery determined using tissue QCs by LC-MS/MS can accurately represent the "true" recovery from incurred tissue samples using two model compounds: BMS-986104, an S1P1 receptor modulator drug candidate, and its phosphate metabolite, BMS-986104-P. We first developed a novel acid and surfactant assisted protein precipitation method for the extraction of BMS-986104 and BMS-986104-P from rat tissues, and determined their recoveries using tissue QCs by LC-MS/MS. We then used radioactive incurred samples from rats dosed with 3H-labeled BMS-986104 to determine the absolute total radioactivity recovery in six different tissues. The recoveries determined using tissue QCs and incurred samples matched with each other very well. The results demonstrated that, in this assay, tissue QCs accurately represented the incurred tissue samples to determine the "true" recovery, and the LC-MS/MS assay was accurate for tissue bioanalysis. Another aspect we investigated is how the tissue QCs should be prepared to better represent the incurred tissue samples. We compared two different QC preparation methods (analyte spiked in tissue homogenates or in intact tissues) and demonstrated that the two methods had no significant difference when a good sample preparation was in place. The developed assay showed excellent accuracy and precision, and was successfully applied to the quantitative determination of BMS-986104 and BMS-986104-P in tissues in a rat toxicology study. Copyright © 2016 Elsevier B.V. All rights reserved.
Mental simulation of drawing actions enhances delayed recall of a complex figure.
De Lucia, Natascia; Trojano, Luigi; Senese, Vincenzo Paolo; Conson, Massimiliano
2016-10-01
Motor simulation implies that the same motor representations involved in action execution are re-enacted during observation or imagery of actions. Neurofunctional data suggested that observation of letters or abstract paintings can elicit simulation of writing or drawing gestures. We performed four behavioural experiments on right-handed healthy participants to test whether observation of a static and complex geometrical figure implies re-enactment of drawing actions. In Experiment 1, participants had to observe the stimulus without explicit instruction (observation-only condition), while performing irrelevant finger tapping (motor dual task), or while articulating irrelevant verbal material (verbal dual task). Delayed drawing of the stimulus was less accurate in the motor dual-task (interfering with simulation of hand actions) than in verbal dual-task and observation-only conditions. In Experiment 2, delayed drawing in the observation only was as accurate as when participants encoded the stimulus by copying it; in both conditions, accuracy was higher than when participants were instructed to observe the stimulus to recall it later verbally (observe to recall), thus being discouraged from engaging motor simulation. In Experiment 3, delayed drawing was as accurate in the observation-only condition as when participants imagined copying the stimulus; accuracy in both conditions was higher than in the observe-to-recall condition. In Experiment 4, in the observe-only condition participants who observed the stimulus with their right arm hidden behind their back were significantly less accurate than participants who had their left arm hidden. These findings converge in suggesting that mere observation of a geometrical stimulus can activate motor simulation and re-enactment of drawing actions.
Risch, Martin; Nydegger, Urs; Risch, Lorenz
2017-01-01
In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to establish age-related reference intervals for different laboratory parameters. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics to diagnose different circumstances, identify the prevalence of occult disease in subjectively healthy individuals, and identify the prognostic factors for the investigated outcomes, including mortality. To obtain better grounds to justify clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors because individuals without any medical condition and any medication are rare in older adulthood.
Reference intervals obtained from such individuals cannot be considered representative for seniors in a status of age-specific normal health. In addition to the established methods for determining reference intervals, this longitudinal study utilizes a unique approach, in that survival and long-term well-being are taken as indicators of health in seniors. This approach is expected to provide robust and representative reference intervals that are obtained from an adequate reference population and not a collective of highly selected individuals. The present study was registered under International Standard Randomized Controlled Trial Number registry: ISRCTN53778569.
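One established way to compute such intervals is the nonparametric central-95% estimate; the function and sample values below are a generic sketch of that method, not the SENIORLAB protocol.

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric 95% reference interval: the central 95% of values
    observed in a healthy reference population."""
    s = sorted(values)
    n = len(s)

    def percentile(p):
        # Linear interpolation between the closest ranks
        k = (n - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, n - 1)
        return s[f] + (k - f) * (s[c] - s[f])

    return percentile(lower_pct), percentile(upper_pct)

# Hypothetical analyte values from 100 healthy reference individuals
measurements = list(range(1, 101))
low, high = reference_interval(measurements)
```

Guidelines typically recommend at least 120 reference individuals per partition (e.g., per age band), which is exactly the recruitment burden that makes elderly-specific intervals scarce and motivates studies like this one.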
NASA Astrophysics Data System (ADS)
Hain, C.; Anderson, M. C.; Otkin, J.; Semmens, K. A.; Zhan, X.; Fang, L.; Li, Z.
2014-12-01
As the world's water resources come under increasing pressure due to the dual stressors of climate change and population growth, accurate knowledge of water consumption through evapotranspiration (ET) over a range of spatial scales will be critical in developing adaptation strategies. However, direct validation of ET models is challenging due to the lack of observations that are sufficiently representative at the model grid scale (10-100 km). Prognostic land-surface models require accurate information about observed precipitation, soil moisture storage, groundwater, and artificial controls on water supply (e.g., irrigation, dams, etc.) to reliably link rainfall to evaporative fluxes. In contrast, diagnostic estimates of ET can be generated, with no prior knowledge of the surface moisture state, by energy balance models using thermal-infrared remote sensing of land-surface temperature (LST) as a boundary condition. One such method, the Atmosphere Land Exchange Inverse (ALEXI) model, provides estimates of surface energy fluxes through the use of the mid-morning change in LST and radiation inputs. The LST inputs carry valuable proxy information regarding soil moisture and its effect on soil evaporation and canopy transpiration. Additionally, the Evaporative Stress Index (ESI), representing anomalies in the ratio of actual-to-potential ET, has been shown to be a reliable indicator of drought. ESI maps over the continental US show good correspondence with standard drought metrics and with patterns of precipitation, but can be generated at significantly higher spatial resolution due to a limited reliance on ground observations. Furthermore, ESI is a measure of actual stress rather than potential for stress, and has physical relevance to projected crop development.
Because precipitation is not used in construction of the ESI, it provides an independent assessment of drought conditions and has particular utility for real-time monitoring in regions with sparse rainfall data or significant delays in meteorological reporting. An initial analysis of a new prototype global ALEXI system using twice-daily observations of MODIS LST will be presented. The newly generated global ET and ESI datasets will be compared to other globally available ET and drought products during a multi-year evaluation period (2000-2013).
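For one pixel and composite period, the ESI computation described above reduces to standardized anomalies of the ET/PET ratio across years; the sketch below uses invented values, not ALEXI output.

```python
import numpy as np

def esi(et, pet):
    """Evaporative Stress Index sketch: standardized anomalies of the
    ratio of actual to potential ET across a multi-year record.
    Negative ESI indicates drier-than-normal (stressed) conditions."""
    f = np.asarray(et, dtype=float) / np.asarray(pet, dtype=float)
    return (f - f.mean()) / f.std()

# Hypothetical 10-year record for one pixel and composite period (mm/day)
et = [3.1, 2.9, 3.0, 1.5, 3.2, 2.8, 3.1, 3.0, 2.7, 2.9]
pet = [5.0] * 10
anom = esi(et, pet)
# The drought year (index 3) stands out as a strongly negative anomaly
```

Because only the ET/PET ratio and its own climatology enter the calculation, no precipitation data are needed, which is the independence property the abstract highlights.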
Noniterative accurate algorithm for the exact exchange potential of density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinal, M.; Holas, A.
2007-10-15
An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.
Modeling haplotype block variation using Markov chains.
Greenspan, G; Geiger, D
2006-04-01
Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity.
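The contrast between the independent-blocks model and the Markov-chain model can be sketched on a toy dataset (block labels and counts are invented). Under strong linkage disequilibrium the Markov model recovers the true haplotype frequency, while the independent model underestimates it:

```python
# Toy haplotypes over two blocks; each element is the allele (a short
# SNP string) a chromosome carries in that block. The two blocks are in
# perfect LD: "AA" always co-occurs with "GG", "AT" with "CC".
haplotypes = [("AA", "GG"), ("AA", "GG"), ("AT", "CC"),
              ("AT", "CC"), ("AA", "GG"), ("AT", "CC")]

def independent_prob(h, data):
    """P(h) assuming blocks are independent: product of marginal
    block-allele frequencies."""
    n = len(data)
    p = 1.0
    for i, allele in enumerate(h):
        p *= sum(1 for d in data if d[i] == allele) / n
    return p

def markov_prob(h, data):
    """P(h) under a first-order Markov chain over blocks:
    P(block_0) * prod_i P(block_i | block_{i-1})."""
    n = len(data)
    p = sum(1 for d in data if d[0] == h[0]) / n
    for i in range(1, len(h)):
        prev = sum(1 for d in data if d[i - 1] == h[i - 1])
        joint = sum(1 for d in data if d[i - 1] == h[i - 1] and d[i] == h[i])
        p *= joint / prev
    return p
```

Here ("AA", "GG") occurs in half the sample, and the Markov model reproduces that frequency exactly, whereas the independent model multiplies two 0.5 marginals and reports 0.25, illustrating why LD degrades the independent model but not the Markov one.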
Validation of satellite daily rainfall estimates in complex terrain of Bali Island, Indonesia
NASA Astrophysics Data System (ADS)
Rahmawati, Novi; Lubczynski, Maciek W.
2017-11-01
Satellite rainfall products have different performances in different geographic regions under different physical and climatological conditions. In this study, the objective was to select the most reliable and accurate satellite rainfall products for the specific environmental conditions of Bali Island. The performances of four spatio-temporal satellite rainfall products, i.e., CMORPH25, CMORPH8, TRMM, and PERSIANN, were evaluated at the island, zonation (applying elevation and climatology as constraints), and pixel scales, using (i) descriptive statistics and (ii) categorical statistics, including bias decomposition. The results showed that all the satellite products had low accuracy because of the spatial scale effect, the daily resolution, and the island's complexity. That accuracy was relatively lower in (i) dry seasons and dry climatic zones than in wet seasons and wet climatic zones; (ii) pixels jointly covered by sea and mountainous land than in pixels covered by land or by sea only; and (iii) topographically diverse than uniform terrains. CMORPH25, CMORPH8, and TRMM underestimated, and PERSIANN overestimated, rainfall when compared with gauged rain. CMORPH25 had the best relative performance and PERSIANN the worst over Bali Island. CMORPH25 had the lowest statistical errors, the lowest miss, and the highest hit rainfall events; it also had the lowest miss rainfall bias and was relatively the most accurate in detecting the ≤ 20 mm day-1 rain events that are frequent in Bali. Lastly, the CMORPH25 coarse grid better represented rainfall events from coastal to inland areas than the other satellite products, including the finer grid CMORPH8.
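The categorical statistics and bias decomposition mentioned above can be sketched as follows; the rain/no-rain threshold and the sample series are hypothetical, not the study's data.

```python
def categorical_stats(gauge, satellite, threshold=1.0):
    """Daily rain-detection skill against a rain/no-rain threshold
    (mm/day): probability of detection, false alarm ratio, and a
    decomposition of total rainfall bias into hit, miss, and
    false-alarm components."""
    hits = misses = false_alarms = 0
    hit_bias = miss_bias = fa_bias = 0.0
    for g, s in zip(gauge, satellite):
        g_rain, s_rain = g >= threshold, s >= threshold
        if g_rain and s_rain:
            hits += 1
            hit_bias += s - g        # over/underestimate on detected events
        elif g_rain:
            misses += 1
            miss_bias -= g           # rain the satellite failed to see
        elif s_rain:
            false_alarms += 1
            fa_bias += s             # rain the satellite invented
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far, hit_bias + miss_bias + fa_bias

gauge = [0.0, 5.0, 12.0, 0.0, 3.0, 0.0]
satellite = [2.0, 4.0, 0.0, 0.0, 2.5, 0.0]
pod, far, bias = categorical_stats(gauge, satellite)
```

Decomposing the bias this way is what lets the study attribute CMORPH25's advantage specifically to its low miss-rainfall bias rather than to overall totals alone.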
NASA Technical Reports Server (NTRS)
Seidel, D. A.
1994-01-01
The Program for Solving the General-Frequency Unsteady Two-Dimensional Transonic Small-Disturbance Equation, XTRAN2L, is used to calculate time-accurate, finite-difference solutions of the nonlinear, small-disturbance potential equation for two-dimensional transonic flow about airfoils. The code can treat forced harmonic, pulse, or aeroelastic transient type motions. XTRAN2L uses a transonic small-disturbance equation that incorporates a time-accurate finite-difference scheme. Airfoil flow tangency boundary conditions are defined to include airfoil contour, chord deformation, nondimensional plunge displacement, pitch, and trailing edge control surface deflection. Forced harmonic motion can be analyzed using either 1) coefficients of harmonics based on information from each quarter period of the last cycle of harmonic motion, or 2) Fourier analyses of the last cycle of motion. In pulse motion (an alternative to forced harmonic motion), the airfoil is given a small prescribed pulse in a given mode of motion and the aerodynamic transients are calculated. An aeroelastic transient capability is available within XTRAN2L, wherein the structural equations of motion are coupled with the aerodynamic solution procedure for simultaneous time-integration. The wake is represented as a slit downstream of the airfoil trailing edge. XTRAN2L includes nonreflecting farfield boundary conditions. XTRAN2L was developed on a CDC CYBER mainframe running under NOS 2.4. It is written in FORTRAN 5 and uses overlays to minimize storage requirements. The program requires 120K of memory in overlayed form. XTRAN2L was developed in 1987.
Some Supporting Evidence for Accurate Multivariate Perceptions with Chernoff Faces, Project 547.
ERIC Educational Resources Information Center
Wainer, Howard
A scheme using features of a cartoon-like human face to represent variables is tested for its ability to graphically depict multivariate data. A factor analysis of Harman's "24 Psychological Tests" was performed and yielded four orthogonal factors. Nose width represented the loading on Factor 1; eye size on Factor 2; curve of mouth…
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C#, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
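The communication pattern (one injected fault propagated to both a probabilistic and a physics model) can be caricatured in a few lines; the class names, components, and fault-tree logic below are invented stand-ins for the RWB/Dymola coupling, not its actual interfaces.

```python
class ProbabilisticModel:
    """Stand-in for the PRA model: tracks which basic events have
    occurred and evaluates a toy top event from them."""
    def __init__(self):
        self.failed = set()

    def inject_fault(self, component):
        self.failed.add(component)

    def top_event_probability(self):
        # Toy fault tree: the system fails only if pump AND valve fail
        return 1.0 if {"pump", "valve"} <= self.failed else 0.0

class PhysicsModel:
    """Stand-in for the multi-physics plant model: a failed pump
    halves coolant flow in the simulated plant state."""
    def __init__(self):
        self.flow = 100.0

    def inject_fault(self, component):
        if component == "pump":
            self.flow *= 0.5

def inject(component, models):
    """Single entry point mirroring the report's communication layer:
    one fault event is broadcast to every coupled model."""
    for m in models:
        m.inject_fault(component)

pra, plant = ProbabilisticModel(), PhysicsModel()
inject("pump", [pra, plant])
```

The essential design point is that the fault event is injected once and both model views stay consistent: the physics state degrades immediately while the risk estimate updates only when the fault-tree logic is satisfied.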
Decomposition of complex microbial behaviors into resource-based stress responses
Carlson, Ross P.
2009-01-01
Motivation: Highly redundant metabolic networks and experimental data from cultures likely adapting simultaneously to multiple stresses can complicate the analysis of cellular behaviors. It is proposed that the explicit consideration of these factors is critical to understanding the competitive basis of microbial strategies. Results: Wide-ranging, seemingly unrelated Escherichia coli physiological fluxes can be simply and accurately described as linear combinations of a few ecologically relevant stress adaptations. These strategies were identified by decomposing the central metabolism of E. coli into elementary modes (mathematically defined biochemical pathways) and assessing the resource investment cost–benefit properties of each pathway. The approach capitalizes on the inherent tradeoffs related to investing finite resources like nitrogen into different pathway enzymes when the pathways have varying metabolic efficiencies. The subset of ecologically competitive pathways represented 0.02% of the total permissible pathways. The biological relevance of the assembled strategies was tested against 10 000 randomly constructed pathway subsets. None of the randomly assembled collections were able to describe all of the considered experimental data as accurately as the cost-based subset. The results suggest these metabolic strategies are biologically significant. The current descriptions were compared with linear programming (LP)-based flux descriptions using the Euclidean distance metric. The current study's pathway subset described the experimental fluxes with better accuracy than the LP results, without having to test multiple objective functions or constraints and while providing additional ecological insight into microbial behavior.
The assembled pathways seem to represent a generalized set of strategies that can describe a wide range of microbial responses and hint at evolutionary processes where a handful of successful metabolic strategies are utilized simultaneously in different combinations to adapt to diverse conditions. Contact: rossc@biofilms.montana.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19008248
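The decomposition described above, expressing measured fluxes as linear combinations of a small set of pathway strategies, can be sketched as a non-negative least-squares problem. The mode vectors and weights below are illustrative placeholders, not E. coli data:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical "strategies": columns are elementary-mode flux vectors
# (reactions x modes); the values are invented for illustration only.
modes = np.array([
    [1.0, 0.0, 0.5],
    [0.5, 1.0, 0.0],
    [0.0, 0.5, 1.0],
    [0.2, 0.3, 0.4],
])

# A measured flux vector assumed to be a nonnegative mixture of the modes.
true_weights = np.array([0.6, 0.3, 0.1])
measured = modes @ true_weights

# Decompose: find nonnegative weights minimizing ||modes @ w - measured||.
weights, residual = nnls(modes, measured)
print(weights, residual)
```

Non-negativity matters here because a negative contribution of a metabolic strategy has no ecological interpretation, which is why `nnls` is used rather than an unconstrained solve.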
An investigation of the fluid-structure interaction of piston/cylinder interface
NASA Astrophysics Data System (ADS)
Pelosi, Matteo
The piston/cylinder lubricating interface represents one of the most critical design elements of axial piston machines. Being a pure hydrodynamic bearing, the piston/cylinder interface simultaneously fulfills a bearing and a sealing function under oscillating load conditions. Operating in an elastohydrodynamic lubrication regime, it also represents one of the main sources of power loss due to viscous friction and leakage flow. An accurate prediction of the time-changing tribological interface characteristics in terms of fluid film thickness, dynamic pressure field, load-carrying ability and energy dissipation is necessary to create more efficient interface designs. The aim of this work is to deepen the understanding of the main physical phenomena defining the piston/cylinder fluid film and to discover the impact of surface elastic deformations and heat transfer on the interface behavior. For this purpose, a unique fully coupled multi-body dynamics model has been developed to capture the complex fluid-structure interaction phenomena affecting the non-isothermal fluid film conditions. The model considers the squeeze film effect due to the piston micro-motion and the change in fluid film thickness due to the elastic deformations of the solid boundaries caused by the fluid film pressure and by the thermal strain. The model has been verified by comparing the numerical results with measurements taken on specially designed test pumps. The calculated fluid film dynamic pressure and temperature fields have been compared with the measured data. Further validation has been accomplished by comparing piston/cylinder axial viscous friction forces with measured data. The model has been used to study the piston/cylinder interface behavior of an existing axial piston unit operating at high load conditions. Numerical results are presented in this thesis.
NASA Technical Reports Server (NTRS)
Kradinov, V.; Madenci, E.; Ambur, D. R.
2004-01-01
Although two-dimensional methods provide accurate predictions of contact stresses and bolt load distribution in bolted composite joints with multiple bolts, they fail to capture the effect of thickness on the strength prediction. Typically, the plies close to the interface of laminates are expected to be the most highly loaded, due to bolt deformation, and they are usually the first to fail. This study presents an analysis method to account for the variation of stresses in the thickness direction by augmenting a two-dimensional analysis with a one-dimensional through-the-thickness analysis. The two-dimensional in-plane solution method, based on the combined complex potential and variational formulation, satisfies the equilibrium equations exactly, and satisfies the boundary conditions and constraints by minimizing the total potential. Under general loading conditions, this method addresses multiple bolt configurations without requiring symmetry conditions while accounting for the contact phenomenon and the interaction among the bolts explicitly. The through-the-thickness analysis is based on a model of a beam on an elastic foundation: the bolt, represented as a short beam that accounts for bending and shear deformations, rests on springs whose coefficients represent the resistance of the composite laminate to bolt deformation. The combined in-plane and through-the-thickness analysis produces the bolt/hole displacement in the thickness direction, as well as the stress state in each ply. The initial ply failure predicted by applying the average stress criterion is followed by a simple progressive failure analysis. Application of the model is demonstrated by considering single- and double-lap joints of metal plates bolted to composite laminates.
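The beam-on-elastic-foundation idealization has a classical closed-form solution for the simplest case, an infinite beam on a Winkler foundation under a point load. The sketch below uses that textbook (Hetenyi) formula with invented numbers, not the study's bolt or laminate parameters:

```python
import numpy as np

def winkler_deflection(x, P, EI, k):
    """Deflection of an infinite beam on a Winkler (elastic) foundation
    under a point load P applied at x = 0 (classical closed-form solution).
    EI: bending stiffness; k: foundation modulus (force per length^2)."""
    beta = (k / (4.0 * EI)) ** 0.25
    bx = beta * np.abs(x)
    return (P * beta / (2.0 * k)) * np.exp(-bx) * (np.cos(bx) + np.sin(bx))

# Illustrative values only (not bolt/laminate properties from the study).
x = np.linspace(-0.1, 0.1, 201)
y = winkler_deflection(x, P=1000.0, EI=50.0, k=1.0e8)
print(y.max())  # peak deflection occurs directly under the load
```

In the paper's setting the spring stiffness would instead vary ply by ply, so the closed form is only a sanity check for a discretized spring model.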
Taub, Mitchell E; Kristensen, Lisbeth; Frokjaer, Sven
2002-05-01
The solubility enhancing effects of various excipients, including their compatibility with in vitro permeability (P(app)) systems, were investigated using drugs representative of Biopharmaceutics Classification System (BCS) classes I-IV. Turbidimetric solubility determination using nephelometry and transport experiments using MDCK Strain I cell monolayers were employed. The highest usable concentration of each excipient [dimethyl sulfoxide (DMSO), ethanol, hydroxypropyl-beta-cyclodextrin (HPCD), and sodium taurocholate] was determined by monitoring apical (AP) to basolateral (BL) [14C]mannitol apparent permeability (P(app)) and the transepithelial electrical resistance (TEER) in transport experiments done at pH 6.0 and 7.4. The excipients were used in conjunction with compounds demonstrating relatively low aqueous solubility (amphotericin B, danazol, mefenamic acid, and phenytoin) in order to obtain a drug concentration >50 microM in the donor compartment. The addition of at least one of the selected excipients enhanced the solubility of the inherently poorly soluble compounds to >50 microM as determined via turbidimetric evaluation at pH 6.0 and 7.4. Ethanol and DMSO were found to be generally disruptive to the MDCK monolayer and were not nearly as useful as HPCD and sodium taurocholate. Sodium taurocholate (5 mM) was compatible with MDCK monolayers under all conditions investigated. Additionally, a novel in vitro system aimed at more accurately simulating in vivo conditions, i.e., a pH gradient (6.0 AP/7.4 BL), sodium taurocholate (5 mM, AP), and bovine serum albumin (0.25%, BL), was shown to generate more reliable P(app) values for compounds that are poorly soluble and/or highly protein bound.
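The apparent permeability reported by such transport experiments is conventionally computed as P(app) = (dQ/dt)/(A·C0); a minimal sketch with invented example values (not data from this study):

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Apparent permeability P_app (cm/s) from the linear rate of compound
    appearing in the receiver compartment.
    dq_dt: transport rate (nmol/s); area_cm2: monolayer area (cm^2);
    c0: initial donor concentration (nmol/mL = nmol/cm^3)."""
    return dq_dt / (area_cm2 * c0)

# Illustrative values: 0.05 nmol/s across a 1.13 cm^2 monolayer from a
# 100 uM (= 100 nmol/mL) donor solution.
papp = apparent_permeability(dq_dt=0.05, area_cm2=1.13, c0=100.0)
print(f"{papp:.2e} cm/s")
```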
NASA Astrophysics Data System (ADS)
Couasnon, Anaïs; Sebastian, Antonia; Morales-Nápoles, Oswaldo
2017-04-01
Recent research has highlighted the increased risk of compound flooding in the U.S. In coastal catchments, an elevated downstream water level, resulting from high tide and/or storm surge, impedes drainage, creating a backwater effect that may exacerbate flooding in the riverine environment. Catchments exposed to tropical cyclone activity along the Gulf of Mexico and Atlantic coasts are particularly vulnerable. However, conventional flood hazard models focus mainly on precipitation-induced flooding and few studies accurately represent the hazard associated with the interaction between discharge and elevated downstream water levels. This study presents a method to derive stochastic boundary conditions for a coastal watershed. Mean daily discharge and maximum daily residual water levels are used to build a non-parametric Bayesian network (BN) based on copulas. Stochastic boundary conditions for the watershed are extracted from the BN and input into a 1-D process-based hydraulic model to obtain water surface elevations in the main channel of the catchment. The method is applied to a section of the Houston Ship Channel (Buffalo Bayou) in Southeast Texas. Data at six stream gages and two tidal stations are used to build the BN and 100-year joint return period events are modeled. We find that the dependence relationship between the daily residual water level and the mean daily discharge in the catchment can be represented by a Gumbel copula (Spearman's rank correlation coefficient of 0.31) and that they result in higher water levels in the mid- to upstream reaches of the watershed than when modeled independently. This indicates that conventional (deterministic) methods may underestimate the flood hazard associated with compound flooding in the riverine environment and that such interactions should not be neglected in future coastal flood hazard studies.
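A Gumbel copula of the kind fitted in this study can be sampled with the standard Marshall-Olkin construction using Kanter's positive-stable sampler. The parameter below is an assumed illustration chosen so that Kendall's tau = 1 - 1/theta = 0.2; it is not the value fitted to the Buffalo Bayou data (characterized there by a Spearman rho of 0.31):

```python
import numpy as np

def sample_gumbel_copula(n, theta, rng):
    """Draw n pairs (u, v) from a Gumbel copula with parameter theta >= 1:
    apply the Archimedean generator inverse to exponentials divided by a
    positive-stable frailty (Marshall-Olkin algorithm)."""
    alpha = 1.0 / theta
    # Kanter's sampler for a positive stable(alpha) variate with
    # Laplace transform exp(-s**alpha).
    t = rng.uniform(0.0, np.pi, n)
    w = rng.exponential(1.0, n)
    s = (np.sin(alpha * t) / np.sin(t) ** (1.0 / alpha)) * \
        (np.sin((1.0 - alpha) * t) / w) ** ((1.0 - alpha) / alpha)
    e = rng.exponential(1.0, (2, n))
    return np.exp(-(e / s) ** alpha)  # rows: u, v

# theta = 1.25 gives Kendall's tau = 0.2 (illustrative choice).
rng = np.random.default_rng(0)
u, v = sample_gumbel_copula(50_000, theta=1.25, rng=rng)
```

Sampled (u, v) pairs would then be mapped through the fitted marginal distributions of discharge and residual water level to produce stochastic boundary-condition pairs.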
Williams, Paige L; Chernoff, Miriam; Angelidou, Konstantia; Brouwers, Pim; Kacanek, Deborah; Deygoo, Nagamah S; Nachman, Sharon; Gadow, Kenneth D
2013-07-01
Obtaining accurate estimates of mental health problems among youth perinatally infected with HIV (PHIV) helps clinicians develop targeted interventions but requires enrollment and retention of representative youth into research studies. The study design for IMPAACT P1055, a US-based, multisite prospective study of psychiatric symptoms among PHIV youth and uninfected controls aged 6 to 17 years old, is described. Participants were compared with nonparticipants by demographic characteristics and reasons were summarized for study refusal. Adjusted logistic regression models were used to evaluate the association of psychiatric symptoms and other factors with loss to follow-up (LTFU). Among 2281 youth screened between 2005 and 2006 at 29 IMPAACT research sites, 580 (25%) refused to participate, primarily because of time constraints. Among 1162 eligible youth approached, 582 (50%) enrolled (323 PHIV and 259 Control), with higher participation rates for Hispanic youth. Retention at 2 years was significantly higher for PHIV than Controls (84% vs 77%, P = 0.03). In logistic regression models adjusting for sociodemographic characteristics and HIV status, youth with any self-assessed psychiatric condition had higher odds of LTFU compared with those with no disorder (adjusted odds ratio = 1.56, 95% confidence interval: 1.00 to 2.43). Among PHIV youth, those with any psychiatric condition had 3-fold higher odds of LTFU (adjusted odds ratio = 3.11, 95% confidence interval: 1.61 to 6.01). Enrollment and retention of PHIV youth into mental health research studies is challenging for those with psychiatric conditions and may lead to underestimated risks for mental health problems. Creative approaches for engaging HIV-infected youth and their families are required for ensuring representative study populations.
An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors
NASA Astrophysics Data System (ADS)
Shen, Yanfei; Cui, Jie; Mohammadi, Saeed
2017-05-01
A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.
Giordano, Bruno L; Visell, Yon; Yao, Hsin-Yun; Hayward, Vincent; Cooperstock, Jeremy R; McAdams, Stephen
2012-05-01
Locomotion generates multisensory information about walked-upon objects. How perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic or audio-haptic conditions, and in a kinesthetic condition where tactile information was perturbed with a vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions, both for solids and for the better-identified aggregates. Despite large mechanical differences between the response of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality, kinesthesia. When walking on loose materials such as gravel, individuals do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.
2016-09-01
Refinement of Out of Circularity and Thickness Measurements of a Cylinder for Finite Element Analysis
... significant effect on the collapse strength and must be accurately represented in finite element analysis to obtain accurate results. Often it is necessary ... to interpolate measurements from a relatively coarse grid to a refined finite element model and methods that have wide general acceptance are ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Dudu; Yang, Sichun; Lu, Lanyuan
2016-06-20
Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
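The scattering calculation that CG form factors feed into is typically the Debye formula, I(q) = sum over bead pairs of f_i f_j sin(q r_ij)/(q r_ij). The sketch below uses constant (q-independent) bead form factors for brevity, a simplification relative to the q-dependent EDM-derived factors:

```python
import numpy as np

def debye_intensity(q, coords, form_factors):
    """Debye-formula scattering intensity for a set of coarse-grained beads.
    q: array of scattering-vector magnitudes; coords: (n, 3) bead positions;
    form_factors: (n,) q-independent bead form factors (a simplification)."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))      # pairwise distances, (n, n)
    qr = q[:, None, None] * r[None, :, :]
    sinc = np.sinc(qr / np.pi)            # sin(qr)/(qr), equal to 1 at r = 0
    ff = form_factors[:, None] * form_factors[None, :]
    return (ff[None, :, :] * sinc).sum((1, 2))

# Tiny illustrative system: three beads with unit form factors.
q = np.linspace(0.0, 0.5, 6)
coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
iq = debye_intensity(q, coords, np.ones(3))
print(iq[0])  # forward scattering: I(0) = (sum of form factors)^2 = 9
```

The O(n^2) pairwise sum is exactly the cost that motivates coarse-graining: fewer beads with accurate form factors give near-all-atom profiles at a fraction of the work.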
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
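A bootstrap particle filter of the kind proposed can be sketched on a simple 1-D nonlinear system; the dynamics and noise levels below are invented for illustration and are not orbit dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x):  # nonlinear state transition (illustrative, not orbital motion)
    return 0.9 * x + 2.0 * np.sin(x)

n, steps = 2000, 40
q_std, r_std = 0.3, 0.5          # process / measurement noise std devs

# Simulate a truth trajectory and noisy direct measurements of the state.
x_true, truths, meas = 1.0, [], []
for _ in range(steps):
    x_true = step(x_true) + rng.normal(0, q_std)
    truths.append(x_true)
    meas.append(x_true + rng.normal(0, r_std))

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(1.0, 1.0, n)
estimates = []
for z in meas:
    particles = step(particles) + rng.normal(0, q_std, n)
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)  # Gaussian likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))            # posterior mean
    particles = rng.choice(particles, size=n, p=w)     # multinomial resampling
```

Unlike a Kalman filter, the weighted particle cloud here carries the full (possibly skewed or heavy-tailed) posterior, not just its first two moments.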
Accurate thermoelastic tensor and acoustic velocities of NaCl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
A physical-based gas-surface interaction model for rarefied gas flow simulation
NASA Astrophysics Data System (ADS)
Liang, Tengfei; Li, Qi; Ye, Wenjing
2018-01-01
Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulation can accurately resolve the gas-surface interaction process at the atomic scale and hence predict accurate macroscopic behavior. It is, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations required of a boundary condition, is developed within the framework of the washboard model. By virtue of its physical basis, the new model captures some important relations and trends that the classic empirical models fail to reproduce. As such, the new model is much more accurate than the classic models while remaining far more efficient than MD simulation. It can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
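For reference, the classical Maxwell kernel that the new model improves upon can be sketched directly: a molecule reflects specularly with probability 1 - alpha, and otherwise is re-emitted from a Maxwellian at the wall temperature (flux-weighted, i.e. Rayleigh-distributed, in the normal direction). The gas properties below are argon-like illustrative values:

```python
import numpy as np

def maxwell_reflect(v_in, accommodation, t_wall, mass, kb, rng):
    """Maxwell gas-surface scattering kernel for a wall whose inward
    normal is +z: specular reflection with probability (1 - accommodation),
    otherwise diffuse re-emission from a wall-temperature Maxwellian."""
    n = v_in.shape[0]
    v_out = v_in.copy()
    v_out[:, 2] *= -1.0                      # specular: flip normal component
    diffuse = rng.random(n) < accommodation
    s = np.sqrt(kb * t_wall / mass)          # thermal speed scale
    nd = diffuse.sum()
    v_out[diffuse, 0] = rng.normal(0.0, s, nd)
    v_out[diffuse, 1] = rng.normal(0.0, s, nd)
    # Normal speed of diffusely emitted molecules follows a Rayleigh law.
    v_out[diffuse, 2] = s * np.sqrt(-2.0 * np.log1p(-rng.random(nd)))
    return v_out

# Illustrative: argon-like molecules striking a 300 K wall at an angle.
rng = np.random.default_rng(2)
kb, mass, t_wall = 1.380649e-23, 6.63e-26, 300.0
v_in = np.tile([200.0, 0.0, -400.0], (100_000, 1))
v_out = maxwell_reflect(v_in, accommodation=0.8, t_wall=t_wall,
                        mass=mass, kb=kb, rng=rng)
```

The washboard-based model replaces this two-branch mixture with a scattering distribution derived from the surface physics, which is what lets it reproduce trends the Maxwell kernel misses.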
Structural stability of DNA origami nanostructures in the presence of chaotropic agents.
Ramakrishnan, Saminathan; Krainer, Georg; Grundmeier, Guido; Schlierf, Michael; Keller, Adrian
2016-05-21
DNA origami represent powerful platforms for single-molecule investigations of biomolecular processes. The required structural integrity of the DNA origami may, however, pose significant limitations regarding their applicability, for instance in protein folding studies that require strongly denaturing conditions. Here, we therefore report a detailed study on the stability of 2D DNA origami triangles in the presence of the strong chaotropic denaturing agents urea and guanidinium chloride (GdmCl) and its dependence on concentration and temperature. At room temperature, the DNA origami triangles are stable up to at least 24 h in both denaturants at concentrations as high as 6 M. At elevated temperatures, however, structural stability is governed by variations in the melting temperature of the individual staple strands. Therefore, the global melting temperature of the DNA origami does not represent an accurate measure of their structural stability. Although GdmCl has a stronger effect on the global melting temperature, its attack results in less structural damage than observed for urea under equivalent conditions. This enhanced structural stability most likely originates from the ionic nature of GdmCl. By rational design of the arrangement and lengths of the individual staple strands used for the folding of a particular shape, however, the structural stability of DNA origami may be enhanced even further to meet individual experimental requirements. Overall, their high stability renders DNA origami promising platforms for biomolecular studies in the presence of chaotropic agents, including single-molecule protein folding or structural switching.
Multimodal representation of limb endpoint position in the posterior parietal cortex.
Shi, Ying; Apker, Gregory; Buneo, Christopher A
2013-04-01
Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
Proposal and Evaluation of Subordinate Standard Solar Irradiance Spectra: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, Aron M; Wilbert, Stefan; Jessen, Wilko
This paper introduces a concept for global tilted irradiance (GTI) subordinate standard spectra to supplement the current standard spectra used in solar photovoltaic applications as defined in ASTM G173 and IEC 60904. The proposed subordinate standard spectra correspond to atmospheric conditions and tilt angles that depart significantly from the main standard spectrum, and they can be used to more accurately represent various local conditions. For the definition of subordinate standard spectra for cases with an elevation of 1.5 km above sea level, the question arises whether the air mass should be calculated with or without a pressure correction. This study focuses on the impact of the air mass used in standard spectra, and it uses data from 29 locations to examine which air mass is most appropriate for GTI and direct normal irradiance (DNI) spectra. Overall, it is found that the pressure-corrected air mass of 1.5 is most appropriate for DNI spectra. For GTI, a non-pressure-corrected air mass of 1.5 was found to be more appropriate.
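The air-mass question can be made concrete: the relative air mass depends only on the solar zenith angle (the widely used Kasten-Young approximation is assumed here), while the pressure-corrected (absolute) air mass scales it by station pressure. The 84.6 kPa station pressure below is an assumed value for roughly 1.5 km elevation, not a figure from the paper:

```python
import numpy as np

def relative_air_mass(zenith_deg):
    """Kasten-Young (1989) relative air mass for a solar zenith angle in degrees."""
    z = zenith_deg
    return 1.0 / (np.cos(np.radians(z)) + 0.50572 * (96.07995 - z) ** -1.6364)

def pressure_corrected_air_mass(zenith_deg, pressure_pa, p0=101325.0):
    """Absolute (pressure-corrected) air mass: scale by station pressure."""
    return relative_air_mass(zenith_deg) * pressure_pa / p0

# At a zenith angle of about 48.24 deg the relative air mass is roughly 1.5;
# at ~1.5 km elevation the absolute air mass for the same geometry is smaller.
z = 48.236
am_rel = relative_air_mass(z)
am_abs = pressure_corrected_air_mass(z, 84600.0)
print(am_rel, am_abs)
```

This gap between relative and absolute air mass at elevated sites is exactly why the choice of definition shifts the resulting standard spectrum.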
Weber, Daniela; Davies, Michael J.; Grune, Tilman
2015-01-01
Protein oxidation is involved in regulatory physiological events as well as in damage to tissues and is thought to play a key role in the pathophysiology of diseases and in the aging process. Protein-bound carbonyls represent a marker of global protein oxidation, as they are generated by multiple different reactive oxygen species in blood, tissues and cells. Sample preparation and stabilization are key steps in the accurate quantification of oxidation-related products and examination of physiological/pathological processes. This review therefore focuses on the sample preparation processes used in the most relevant methods to detect protein carbonyls after derivatization with 2,4-dinitrophenylhydrazine with an emphasis on measurement in plasma, cells, organ homogenates, isolated proteins and organelles. Sample preparation, derivatization conditions and protein handling are presented for the spectrophotometric and HPLC method as well as for immunoblotting and ELISA. An extensive overview covering these methods in previously published articles is given for researchers who plan to measure protein carbonyls in different samples. PMID:26141921
Quantitative analysis of soil chromatography. I. Water and radionuclide transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeves, M.; Francis, C.W.; Duguid, J.O.
Soil chromatography has been used successfully to evaluate relative mobilities of pesticides and nuclides in soils. Its major advantage over the commonly used suspension technique is that it more accurately simulates field conditions. Under such conditions the number of potential exchange sites is limited both by the structure of the soil matrix and by the manner in which the carrier fluid moves through this structure. The major limitation of the chromatographic method, however, has been its qualitative nature. This document represents an effort to counter this objection. A theoretical basis is specified for the transport both of the carrier eluting fluid and of the dissolved constituent. A computer program based on this theory is developed which optimizes the fit of theoretical data to experimental data by automatically adjusting the transport parameters, one of which is the distribution coefficient k_d. This analysis procedure thus constitutes an integral part of the soil chromatographic method, by means of which mobilities of nuclides and other dissolved constituents in soils may be quantified.
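The quantification step, fitting transport parameters such as k_d to a measured breakthrough curve, can be sketched with the classical 1-D advection-dispersion solution under linear equilibrium sorption (retardation R = 1 + rho_b k_d / theta). All column and soil values below are invented illustrations, not data from the report:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Illustrative column and soil parameters (assumed values).
x, v, D = 0.30, 1.0e-5, 2.0e-8       # column length (m), pore velocity, dispersion
rho_b, theta = 1500.0, 0.40          # bulk density (kg/m^3), water content

def breakthrough(t, kd):
    """Relative effluent concentration for 1-D advection-dispersion
    with linear equilibrium sorption (retardation factor R)."""
    R = 1.0 + rho_b * kd / theta
    return 0.5 * erfc((R * x - v * t) / (2.0 * np.sqrt(D * R * t)))

# Synthetic "experiment": generate a curve with a known kd, add noise,
# then recover kd by least squares, mirroring the automatic adjustment
# of transport parameters described above.
rng = np.random.default_rng(3)
t = np.linspace(1.0, 2.0e5, 400)
obs = breakthrough(t, kd=2.0e-4) + rng.normal(0.0, 0.01, t.size)
(kd_fit,), _ = curve_fit(breakthrough, t, obs, p0=[1.0e-4],
                         bounds=(0.0, 1.0e-2))
print(kd_fit)
```

A larger fitted k_d delays the breakthrough front, which is the physical signature the optimization exploits.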
NASA Technical Reports Server (NTRS)
Greenhagen, B. T.; Donaldson-Hanna, K. L.; Thomas, I. R.; Bowles, N. E.; Allen, C. C.; Pieters, C. M.; Paige, D. A.
2014-01-01
The Diviner Lunar Radiometer, onboard NASA's Lunar Reconnaissance Orbiter, has produced the first global, high resolution, thermal infrared observations of an airless body. The Moon, which is the most accessible member of this most abundant class of solar system objects, is also the only body for which we have extraterrestrial samples with known spatial context. Here we present the results of a comprehensive study to reproduce an accurate simulated lunar environment (SLE), evaluate the most appropriate sample and measurement conditions, collect thermal infrared spectra of a representative suite of Apollo soils, and correlate them with Diviner observations of the lunar surface. We find that analyses of Diviner observations of individual sampling stations and SLE measurements of returned Apollo soils show good agreement, while comparisons to thermal infrared reflectance under terrestrial conditions do not agree well, which underscores the need for SLE measurements and validates the Diviner compositional dataset. Future work includes measurement of additional soils in SLE and cross comparisons with measurements in the JPL Simulated Airless Body Emission Laboratory (SABEL).
Vertical root fractures and their management
Khasnis, Sandhya Anand; Kidiyoor, Krishnamurthy Haridas; Patil, Anand Basavaraj; Kenganal, Smita Basavaraj
2014-01-01
Vertical root fractures, associated with endodontically treated teeth and less commonly with vital teeth, represent one of the most difficult clinical problems to diagnose and treat. Inasmuch as there are no specific symptoms, diagnosis can be difficult. Clinical detection of this condition by endodontists is becoming more frequent, whereas it is rather underestimated by general practitioners. Since vertical root fractures almost exclusively involve endodontically treated teeth, it often becomes difficult to differentiate a tooth with this condition from an endodontically failed one or one with concomitant periodontal involvement. Also, a tooth diagnosed with a vertical root fracture is usually extracted, though attempts to reunite fractured roots have been made in various studies with varying success rates. Early detection of a fractured root and extraction of the tooth maintain the integrity of the alveolar bone for placement of an implant. Cone beam computed tomography has been shown to be very accurate in this regard. This article focuses on diagnostic and treatment strategies, and discusses predisposing factors that can be useful in the prevention of vertical root fractures. PMID:24778502
Role of beach morphology in wave overtopping hazard assessment
NASA Astrophysics Data System (ADS)
Phillips, Benjamin; Brown, Jennifer; Bidlot, Jean-Raymond; Plater, Andrew
2017-04-01
Understanding the role of beach morphology in controlling wave overtopping volume will further minimise uncertainties in flood risk assessments at coastal locations defended by engineered structures worldwide. XBeach is used to model wave overtopping volume for a 1:200 yr joint probability distribution of waves and water levels with measured, pre- and post-storm beach profiles. The simulation with measured bathymetry is repeated with and without morphological evolution enabled during the modelled storm event. This research assesses the role of morphology in controlling wave overtopping volumes for hazardous events that meet the typical design level of coastal defence structures. Results show disabling storm-driven morphology under-represents modelled wave overtopping volumes by up to 39% under high Hs conditions, and has a greater impact on the wave overtopping rate than the variability applied within the boundary conditions due to the range of wave-water level combinations that meet the 1:200 yr joint probability criterion. Accounting for morphology in flood modelling is therefore critical for accurately predicting wave overtopping volumes and the resulting flood hazard and to assess economic losses.
Effects of spatial training on transitive inference performance in humans and rhesus monkeys
Gazes, Regina Paxton; Lazareva, Olga F.; Bergene, Clara N.; Hampton, Robert R.
2015-01-01
It is often suggested that transitive inference (TI; if A>B and B>C then A>C) involves mentally representing overlapping pairs of stimuli in a spatial series. However, there is little direct evidence to unequivocally determine the role of spatial representation in TI. We tested whether humans and rhesus monkeys use spatial representations in TI by training them to organize seven images in a vertical spatial array. Then, we presented subjects with a TI task using these same images. The implied TI order was either congruent or incongruent with the order of the trained spatial array. Humans in the congruent condition learned premise pairs more quickly, and were faster and more accurate in critical probe tests, suggesting that the spatial arrangement of images learned during spatial training influenced subsequent TI performance. Monkeys first trained in the congruent condition also showed higher test trial accuracy when the spatial and inferred orders were congruent. These results directly support the hypothesis that humans solve TI problems by spatial organization, and suggest that this cognitive mechanism for inference may have ancient evolutionary roots. PMID:25546105
Additional extensions to the NASCAP computer code, volume 2
NASA Technical Reports Server (NTRS)
Stannard, P. R.; Katz, I.; Mandell, M. J.
1982-01-01
Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented including charge emission. A simple code based upon the model is described along with code test results.
Characterization of normality of chaotic systems including prediction and detection of anomalies
NASA Astrophysics Data System (ADS)
Engler, Joseph John
Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random models. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short term predictions, given the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed.
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
An equation of state for high pressure-temperature liquids (RTpress) with application to MgSiO3 melt
NASA Astrophysics Data System (ADS)
Wolf, Aaron S.; Bower, Dan J.
2018-05-01
The thermophysical properties of molten silicates at extreme conditions are crucial for understanding the early evolution of Earth and other massive rocky planets, which is marked by giant impacts capable of producing deep magma oceans. Cooling and crystallization of molten mantles are sensitive to the densities and adiabatic profiles of high-pressure molten silicates, demanding accurate Equation of State (EOS) models to predict the early evolution of planetary interiors. Unfortunately, EOS modeling for liquids at high P-T conditions is difficult due to constantly evolving liquid structure. The Rosenfeld-Tarazona (RT) model provides a physically sensible and accurate description of liquids but is limited to constant volume heating paths (Rosenfeld and Tarazona, 1998). We develop a high P-T EOS for liquids, called RTpress, which uses a generalized Rosenfeld-Tarazona model as a thermal perturbation to isothermal and adiabatic reference compression curves. This approach provides a thermodynamically consistent EOS which remains accurate over a large P-T range and depends on a limited number of physically meaningful parameters that can be determined empirically from either simulated or experimental datasets. As a first application, we model MgSiO3 melt representing a simplified rocky mantle chemistry. The model parameters are fitted to the MD simulations of both Spera et al. (2011) and de Koker and Stixrude (2009), recovering pressures, volumes, and internal energies to within 0.6 GPa, 0.1 Å³, and 6 meV per atom on average (for the higher resolution data set), as well as accurately predicting liquid densities and temperatures from shock-wave experiments on MgSiO3 glass. The fitted EOS is used to determine adiabatic thermal profiles, revealing the approximate thermal structure of a fully molten magma ocean like that of the early Earth.
These adiabats, which are in strong agreement for both fitted models, are shown to be sufficiently steep to produce either a center-outwards or bottom-up style of crystallization, depending on the curvature of the mantle melting curve (liquidus), with a high-curvature model yielding crystallization at pressures of roughly 80 GPa (Stixrude et al., 2009), whereas a nearly-flat experimentally determined liquidus implies bottom-up crystallization (Andrault et al., 2011).
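The Rosenfeld-Tarazona scaling at the core of the RTpress approach posits that, along an isochore, the thermal deviation of the liquid's energy grows as T^(3/5). As a minimal sketch (not the paper's parameterization; the synthetic energies and coefficients below are invented for illustration), the isochoric coefficients can be recovered by linear least squares:

```python
import numpy as np

def fit_rt_coeffs(T, E):
    """Fit E(T) = E0 + b * T**(3/5) along an isochore (Rosenfeld-Tarazona
    scaling) by linear least squares; returns (E0, b)."""
    A = np.stack([np.ones_like(T), T ** 0.6], axis=1)
    (E0, b), *_ = np.linalg.lstsq(A, E, rcond=None)
    return E0, b

# Synthetic isochoric energies (eV/atom) generated with the assumed power law
T = np.linspace(2000.0, 6000.0, 9)
E = -6.0 + 1.5e-3 * T ** 0.6
E0, b = fit_rt_coeffs(T, E)
print(round(float(E0), 3), round(float(b) * 1e3, 3))  # recovers -6.0 and 1.5
```

Repeating this fit at several volumes would give the volume dependence b(V) that a generalized RT thermal perturbation requires.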
Representativeness Uncertainty in Chemical Data Assimilation Highlights Mixing Barriers
NASA Technical Reports Server (NTRS)
Lary, David John
2003-01-01
When performing chemical data assimilation the observational, representativeness, and theoretical uncertainties have very different characteristics. In this study we have accurately characterized the representativeness uncertainty by studying the probability distribution function (PDF) of the observations. The average deviation has been used as a measure of the width of the PDF and of the variability (representativeness uncertainty) for the grid cell. It turns out that for long-lived tracers such as N2O and CH4 the representativeness uncertainty is markedly different from the observational uncertainty and clearly delineates mixing barriers such as the polar vortex edge, the tropical pipe and the tropopause.
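The statistic described above, the average (mean absolute) deviation as a measure of the PDF width within a grid cell, can be sketched in a few lines. The observation values below are invented for illustration and are not from the study:

```python
import numpy as np

def representativeness_uncertainty(obs):
    """Average (mean absolute) deviation of the observations falling in one
    grid cell: a measure of the PDF width, i.e. sub-grid variability."""
    obs = np.asarray(obs, dtype=float)
    return float(np.mean(np.abs(obs - obs.mean())))

# Hypothetical N2O-like mixing ratios (ppb) in a cell straddling a mixing
# barrier: the bimodal spread yields a large representativeness uncertainty.
cell_obs = [310.0, 305.0, 180.0, 175.0, 300.0]
print(round(representativeness_uncertainty(cell_obs), 1))
```

A cell entirely inside (or outside) the barrier would give a much smaller value, which is why the field of this statistic delineates the vortex edge, tropical pipe, and tropopause.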
NASA Astrophysics Data System (ADS)
Ni, Fang; Nakatsukasa, Takashi
2018-04-01
To describe quantal collective phenomena, it is useful to requantize the time-dependent mean-field dynamics. We study the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory for the two-level pairing Hamiltonian, and compare results of different quantization methods. The one constructing microscopic wave functions, using the TDHFB trajectories fulfilling the Einstein-Brillouin-Keller quantization condition, turns out to be the most accurate. The method is based on the stationary-phase approximation to the path integral. We also examine the performance of the collective model which assumes that the pairing gap parameter is the collective coordinate. The applicability of the collective model is limited for the nuclear pairing with a small number of single-particle levels, because the pairing gap parameter represents only half of the pairing collective space.
Microfluidic Organ/Body-on-a-Chip Devices at the Convergence of Biology and Microengineering.
Perestrelo, Ana Rubina; Águas, Ana C P; Rainer, Alberto; Forte, Giancarlo
2015-12-10
Recent advances in biomedical technologies are mostly related to the convergence of biology with microengineering. For instance, microfluidic devices are now commonly found in most research centers, clinics and hospitals, contributing to more accurate studies and therapies as powerful tools for drug delivery, monitoring of specific analytes, and medical diagnostics. Most remarkably, integration of cellularized constructs within microengineered platforms has enabled the recapitulation of the physiological and pathological conditions of complex tissues and organs. The so-called "organ-on-a-chip" technology represents a new avenue in the field of advanced in vitro models, with the potential to revolutionize current approaches to drug screening and toxicology studies. This review aims to highlight recent advances of microfluidic-based devices towards a body-on-a-chip concept, exploring their technology and broad applications in the biomedical field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prince, K.R.; Schneider, B.J.
This study obtained estimates of the hydraulic properties of the upper glacial and Magothy aquifers in the East Meadow area for use in analyzing the movement of reclaimed waste water through the aquifer system. This report presents drawdown and recovery data from the two aquifer tests of 1978 and 1985, describes the six methods of analysis used, and summarizes the results of the analyses in tables and graphs. The drawdown and recovery data were analyzed through three simple analytical equations, two curve-matching techniques, and a finite-element radial-flow model. The resulting estimates of hydraulic conductivity, anisotropy, and storage characteristics were used as initial input values to the finite-element radial-flow model (Reilly, 1984). The flow model was then used to refine the estimates of the aquifer properties by more accurately representing the aquifer geometry and field conditions of the pumping tests.
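The abstract does not name the three analytical equations used, but the Theis solution is the canonical example of this kind of drawdown analysis. The sketch below is a hedged illustration with hypothetical parameter values, not the report's actual method:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) = E1(u), computed from its series expansion
    (accurate for the small u values typical of pumping tests)."""
    s = -0.5772156649015329 - math.log(u)  # Euler-Mascheroni constant
    for n in range(1, terms + 1):
        s += (-1) ** (n + 1) * u ** n / (n * math.factorial(n))
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at radius r (m) and time t (s), for pumping rate Q (m^3/s),
    transmissivity T (m^2/s), and storativity S (dimensionless)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Hypothetical aquifer-test values (not from the report)
print(round(theis_drawdown(Q=0.05, T=0.01, S=1e-4, r=30.0, t=3600.0), 3))
```

In curve-matching, the same W(u) curve is fitted graphically to observed drawdown versus time to back out T and S, which is the inverse of the forward calculation shown here.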
Soil erosion assessment - Mind the gap
NASA Astrophysics Data System (ADS)
Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone
2016-12-01
Accurate assessment of erosion rates remains an elusive problem because soil loss is strongly nonunique with respect to the main drivers. In addressing the mechanistic causes of erosion responses, we discriminate between macroscale effects of external factors - long studied and referred to as "geomorphic external variability", and microscale effects, introduced as "geomorphic internal variability." The latter source of erosion variations represents the knowledge gap, an overlooked but vital element of geomorphic response, significantly impacting the low predictability skill of deterministic models at field-catchment scales. This is corroborated with experiments using a comprehensive physical model that dynamically updates the soil mass and particle composition. As complete knowledge of microscale conditions for arbitrary location and time is infeasible, we propose that new predictive frameworks of soil erosion should embed stochastic components in deterministic assessments of external and internal types of geomorphic variability.
NASA Technical Reports Server (NTRS)
Packard, James D.; Griffith, Wayland C.; Yates, Leslie A.; Strawa, Anthony W.
1992-01-01
Missions to Mars require the successful development of aerobraking technology, and therefore a blunt cone representative of aerobrake shapes is investigated. Ballistic tests of the Pioneer Venus configuration are conducted in carbon dioxide and air at Mach numbers from 7 to 20 and Reynolds numbers from 0.1 × 10^5 to 4 × 10^6. Experimental results show that for defined conditions aerodynamic research can be conducted in air rather than carbon dioxide, providing savings in time and money. In addition, the results offer a prediction of flight aerodynamics during entry into the Martian atmosphere. Also discussed is a comparison of results from two data-reduction techniques showing that a five-degree-of-freedom routine employing weighted least-squares with differential corrections analyzes ballistic data more accurately.
Thermal margin protection system for a nuclear reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musick, C.R.
1974-02-12
A thermal margin protection system for a nuclear reactor is described in which the coolant flow trip point and the calculated thermal margin trip point are switched simultaneously and the thermal limit locus is made more restrictive as the allowable flow rate is decreased. The invention is characterized by calculation of the thermal limit locus in response to applied signals which accurately represent reactor cold leg temperature and core power; the cold leg temperature is corrected for stratification before being utilized, and reactor power signals commensurate with power as a function of measured neutron flux and thermal energy added to the coolant are auctioneered to select the more conservative measure of power. The invention further comprises the compensation of the selected core power signal for the effects of core radial peaking factor under maximum coolant flow conditions. (Official Gazette)
Debris flows: behavior and hazard assessment
Iverson, Richard M.
2014-01-01
Debris flows are water-laden masses of soil and fragmented rock that rush down mountainsides, funnel into stream channels, entrain objects in their paths, and form lobate deposits when they spill onto valley floors. Because they have volumetric sediment concentrations that exceed 40 percent, maximum speeds that surpass 10 m/s, and sizes that can range up to ~10^9 m^3, debris flows can denude slopes, bury floodplains, and devastate people and property. Computational models can accurately represent the physics of debris-flow initiation, motion and deposition by simulating evolution of flow mass and momentum while accounting for interactions of debris' solid and fluid constituents. The use of physically based models for hazard forecasting can be limited by imprecise knowledge of initial and boundary conditions and material properties, however. Therefore, empirical methods continue to play an important role in debris-flow hazard assessment.
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1981-01-01
The crop identification procedures performed were for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinement prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data were not available for the high density barley of primary importance in foreign regions, and the low density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis to a variety of agricultural and meteorological conditions representative of the global environment.
Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen
2012-01-01
Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants but none in papaya. In the present work, we selected 21 candidate reference genes, and evaluated their expression stability in 246 papaya fruit samples using three algorithms, geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were not suitable under many experimental conditions. In addition, two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
PMID:22952972
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
Atmospheric Science Data Center
2013-08-06
... from the GEWEX Radiation Panel (now the GEWEX Data and Assessment Panel - GDAP) received at the 2011 meeting. The panel stressed the need for algorithm names to more accurately represent the parameterized ...
Method for Determination of the Wind Velocity and Direction
NASA Technical Reports Server (NTRS)
Dahlin, Goesta Johan
1988-01-01
Accurate determination of the position of an artillery piece, for example, using sound measurement systems based on measurement of the muzzle noise requires access to wind data that is representative of the portion of the air through which the sound wave propagates to the microphone base of the system. The invention provides a system for determining such representative wind data.
NASA Astrophysics Data System (ADS)
Cheng, Rita W. T.; Habib, Ayman F.; Frayne, Richard; Ronsky, Janet L.
2006-03-01
In-vivo quantitative assessments of joint conditions and health status can help to increase understanding of the pathology of osteoarthritis, a degenerative joint disease that affects a large population each year. Magnetic resonance imaging (MRI) provides a non-invasive and accurate means to assess and monitor joint properties, and has become widely used for diagnosis and biomechanics studies. Quantitative analyses and comparisons of MR datasets require accurate alignment of anatomical structures, thus image registration becomes a necessary procedure for these applications. This research focuses on developing a registration technique for MR knee joint surfaces to allow quantitative study of joint injuries and health status. It introduces a novel idea of translating techniques originally developed for geographic data in the field of photogrammetry and remote sensing to register 3D MR data. The proposed algorithm works with surfaces that are represented by randomly distributed points with no requirement of known correspondences. The algorithm performs matching locally by identifying corresponding surface elements, and solves for the transformation parameters relating the surfaces by minimizing normal distances between them. This technique was used in three applications to: 1) register temporal MR data to verify the feasibility of the algorithm to help monitor diseases, 2) quantify patellar movement with respect to the femur based on the transformation parameters, and 3) quantify changes in contact area locations between the patellar and femoral cartilage at different knee flexion angles. The results indicate accurate registration and the proposed algorithm can be applied for in-vivo study of joint injuries with MRI.
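The matching step described, solving for transformation parameters by minimizing normal distances between surfaces, resembles a point-to-plane least-squares alignment. Below is a minimal linearized sketch assuming small rotations and known point correspondences, both simplifications of the surface-element algorithm described:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized least-squares step aligning src points to the tangent
    planes (dst point + unit normal) of a target surface, minimizing normal
    distances. Returns a small-angle rotation vector w and translation t
    such that p' = p + cross(w, p) + t."""
    A = np.hstack([np.cross(src, normals), normals])  # N x 6 design matrix
    b = np.einsum('ij,ij->i', dst - src, normals)     # signed normal distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Toy example: source points on z = 0, target tangent planes on z = 0.5
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
normals = np.tile([0., 0., 1.], (4, 1))
dst = src + [0., 0., 0.5]
w, t = point_to_plane_step(src, dst, normals)
print(np.round(t, 3))  # translation recovers the 0.5 offset along z
```

In practice this step is iterated, with correspondences re-established at each iteration, until the normal-distance residual converges.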
Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.
2014-01-01
The extraction of information from images acquired under low light conditions represents a common task in diverse disciplines. In single molecule microscopy, for example, techniques for superresolution image reconstruction depend on the accurate estimation of the locations of individual particles from generally low light images. In order to estimate a quantity of interest with high accuracy, however, an appropriate model for the image data is needed. To this end, we previously introduced a data model for an image that is acquired using the electron-multiplying charge-coupled device (EMCCD) detector, a technology of choice for low light imaging due to its ability to amplify weak signals significantly above its readout noise floor. Specifically, we proposed the use of a geometrically multiplied branching process to model the EMCCD detector’s stochastic signal amplification. Geometric multiplication, however, can be computationally expensive and challenging to work with analytically. We therefore describe here two approximations for geometric multiplication that can be used instead. The high gain approximation is appropriate when a high level of signal amplification is used, a scenario which corresponds to the typical usage of an EMCCD detector. It is an accurate approximation that is computationally more efficient, and can be used to perform maximum likelihood estimation on EMCCD image data. In contrast, the Gaussian approximation is applicable at all levels of signal amplification, but is only accurate when the initial signal to be amplified is relatively large. As we demonstrate, it can importantly facilitate the analysis of an information-theoretic quantity called the noise coefficient. PMID:25075263
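The geometrically multiplied branching process and its high-gain limit can be sketched with a short Monte Carlo simulation. The stage count and per-stage duplication probability below are illustrative values typical of EMCCD registers, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def em_multiply(n_in, stages=536, p=0.01, rng=rng):
    """Geometrically multiplied branching process: at each register stage,
    every electron independently spawns one extra electron with probability p.
    n_in is an integer array of input electron counts (one per trial)."""
    n = np.array(n_in, dtype=np.int64)
    for _ in range(stages):
        n = n + rng.binomial(n, p)
    return n

gain = (1 + 0.01) ** 536                  # analytic mean gain, about 208
out = em_multiply(np.ones(2000, dtype=np.int64))
# High-gain approximation: the output for a single input electron is
# approximately exponential with mean equal to the gain, so the ratio of
# the empirical mean to the analytic gain should be close to 1.
print(round(float(out.mean()) / gain, 2))
```

This kind of exact branching simulation is what the high-gain and Gaussian approximations are meant to replace with cheaper, analytically tractable forms.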
NASA Astrophysics Data System (ADS)
Heiselman, Jon S.; Collins, Jarrod A.; Clements, Logan W.; Weis, Jared A.; Simpson, Amber L.; Geevarghese, Sunil K.; Jarnagin, William R.; Miga, Michael I.
2017-03-01
In order to rigorously validate techniques for image-guided liver surgery (IGLS), an accurate mock representation of the intraoperative surgical scene with quantifiable localization of subsurface targets would be highly desirable. However, many attempts to reproduce the laparoscopic environment have encountered limited success due to neglect of several crucial design aspects. The laparoscopic setting is complicated by factors such as gas insufflation of the abdomen, changes in patient orientation, incomplete organ mobilization from ligaments, and limited access to organ surface data. The ability to accurately represent the influences of anatomical changes and procedural limitations is critical for appropriate evaluation of IGLS methodologies such as registration and deformation correction. However, these influences have not yet been comprehensively integrated into a platform usable for assessment of methods in laparoscopic IGLS. In this work, a mock laparoscopic liver simulator was created with realistic ligamenture to emulate the complexities of this constrained surgical environment for the realization of laparoscopic IGLS. The mock surgical system reproduces an insufflated abdominal cavity with dissectible ligaments, variable levels of incline matching intraoperative patient positioning, and port locations in accordance with surgical protocol. True positions of targets embedded in a tissue-mimicking phantom are measured from CT images. Using this setup, image-to-physical registration accuracy was evaluated for simulations of laparoscopic right and left lobe mobilization to assess rigid registration performance under more realistic laparoscopic conditions. Preliminary results suggest that non-rigid organ deformations and the region of organ surface data collected affect the ability to attain highly accurate registrations in laparoscopic applications.
NASA Astrophysics Data System (ADS)
Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.
2016-04-01
Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows watershed response to be identified, forecast and explained. For that purpose, the Natural Resources Conservation Service Curve Number method (NRCS CN) is the most widely recognized conceptual lumped model in the field of rainfall-runoff estimation. However, there is still an ongoing discussion about the procedure to determine the portion of rainfall retained in the watershed before runoff is generated, referred to as the initial abstraction. This quantity is computed as a ratio (λ) of the potential maximum soil retention S of the watershed. Initially, this ratio was assumed to be 0.2, but it has later been proposed to modify it to 0.05. However, the existing procedures to convert NRCS CN model parameters obtained under a different hypothesis about λ do not incorporate any adaptation to the climatic conditions of each watershed. For this reason, we propose a new simple method for computing model parameters that is adapted to local conditions, taking into account regional patterns of climate. After checking the goodness of this procedure against the existing ones in 34 different watersheds located in Ohio and Texas (United States), we conclude that this novel methodology represents the most accurate and efficient alternative to refit the initial abstraction ratio.
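The CN runoff relation and the role of the initial abstraction ratio λ can be illustrated with the standard textbook form of the method; the storm depth and curve number below are made-up example values:

```python
def scs_runoff(P_mm, CN, lam=0.2):
    """NRCS Curve Number direct runoff Q (mm) for storm rainfall P (mm).
    S is the potential maximum retention and Ia = lam * S is the initial
    abstraction (lam = 0.2 classically; 0.05 proposed later).
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    S = 25400.0 / CN - 254.0   # retention in mm from the dimensionless CN
    Ia = lam * S
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# Same storm and curve number under the two initial-abstraction ratios
print(round(scs_runoff(60.0, 75.0), 1), round(scs_runoff(60.0, 75.0, lam=0.05), 1))
# -> 14.5 22.1
```

The gap between the two outputs for an identical storm is exactly why converting CN values fitted under one λ for use with another, the problem the abstract addresses, matters in practice.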
Sun, Qi; Liu, Jinlong; Qian, Yi; Hong, Haifa; Liu, Jinfen
2013-01-01
In this study, we performed computational fluid dynamic (CFD) simulations in a patient-specific three-dimensional extracardiac conduit Fontan connection. The pulmonary resistance was incorporated in the CFD model by connecting porous portions in the left and right pulmonary arteries. The pressure in the common atrium was set as the boundary condition at the outlets of the pulmonary arteries. The flow rates in the innominate veins and the inferior vena cava (IVC) were set as inflow boundary conditions. Furthermore, the inflow rate of the IVC was increased to 2 and 3 times the measured value to perform two additional simulations, and the resistance provided by the porous portions was compared among these three conditions. We found that the pulmonary resistance set as a porous portion in the CFD models remains relatively steady despite the change of the inflow rate. We concluded that, in CFD simulations of Fontan connections, a porous portion can be used to represent pulmonary resistance in a stable manner. The pulmonary resistance and the pressure in the common atrium can be acquired directly by clinical examination. The use of a porous portion together with the common atrial pressure in the CFD model could therefore facilitate, and improve the accuracy of, the setting of outlet boundary conditions, especially when the actual pulmonary flow split is unpredictable, such as in CFD simulations for virtual operative designs.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Relationship of physiography and snow area to stream discharge. [Kings River Watershed, California]
NASA Technical Reports Server (NTRS)
Mccuen, R. H. (Principal Investigator)
1979-01-01
The author has identified the following significant results. A comparison of snowmelt runoff models shows that the accuracy of the Tangborn model and regression models is greater if the test data falls within the range of calibration than if the test data lies outside the range of calibration data. The regression models are significantly more accurate for forecasts of 60 days or more than for shorter prediction periods. The Tangborn model is more accurate for forecasts of 90 days or more than for shorter prediction periods. The Martinec model is more accurate for forecasts of one or two days than for periods of 3, 5, 10, or 15 days. Accuracy of the long-term models seems to be independent of forecast data. The sufficiency of the calibration data base is a function not only of the number of years of record but also of the accuracy with which the calibration years represent the total population of data years. Twelve years appears to be a sufficient length of record for each of the models considered, as long as the twelve years are representative of the population.
2015-03-26
[Fragmentary abstract; only partial content is recoverable.] The work concerns the difficulty of obtaining and using spectral BRDFs, a comparison of Lambertian and measured (MERL nickel) BRDF models (Butler, 2014), and the research objective of accurately representing the spectral reflected radiance of a material in BRDF-based radiative transfer models.
Farkas, Gary M.; Tharp, Roland G.
1980-01-01
Several factors thought to influence the representativeness of behavioral assessment data were examined in an analogue study using a multifactorial design. Systematic and unsystematic methods of observing group behavior were investigated using 18 male and 18 female observers. Additionally, valence properties of the observed behaviors were inspected. Observers' assessments of a videotape were compared to a criterion code that defined the population of behaviors. Results indicated that systematic observation procedures were more accurate than unsystematic procedures, though this factor interacted with gender of observer and valence of behavior. Additionally, males tended to sample more representatively than females. A third finding indicated that the negatively valenced behavior was overestimated, whereas the neutral and positively valenced behaviors were accurately assessed. PMID:16795631
Protein Structure Determination using Metagenome sequence data
Ovchinnikov, Sergey; Park, Hahnbeom; Varghese, Neha; Huang, Po-Ssu; Pavlopoulos, Georgios A.; Kim, David E.; Kamisetty, Hetunandan; Kyrpides, Nikos C.; Baker, David
2017-01-01
Despite decades of work by structural biologists, there are still ~5200 protein families with unknown structure outside the range of comparative modeling. We show that Rosetta structure prediction guided by residue-residue contacts inferred from evolutionary information can accurately model proteins that belong to large families, and that metagenome sequence data more than triples the number of protein families with sufficient sequences for accurate modeling. We then integrate metagenome data, contact-based structure matching and Rosetta structure calculations to generate models for 614 protein families with currently unknown structures; 206 are membrane proteins and 137 have folds not represented in the PDB. This approach provides the representative models for large protein families originally envisioned as the goal of the Protein Structure Initiative at a fraction of the cost. PMID:28104891
Anoxic denitrification of BTEX: Biodegradation kinetics and pollutant interactions.
Carvajal, Andrea; Akmirza, Ilker; Navia, Daniel; Pérez, Rebeca; Muñoz, Raúl; Lebrero, Raquel
2018-05-15
Anoxic mineralization of BTEX represents a promising alternative for their abatement from O2-deprived emissions. However, the kinetics of anoxic BTEX biodegradation and the interactions underlying the treatment of BTEX mixtures are still unknown. An activated sludge inoculum, acclimated prior to the biodegradation kinetic tests, was used for the anoxic abatement of single, dual and quaternary BTEX mixtures. The Monod model and a modified Gompertz model were then used for the estimation of the biodegradation kinetic parameters. Results showed that both toluene and ethylbenzene are readily biodegradable under anoxic conditions, whereas the accumulation of toxic metabolites resulted in partial xylene and benzene degradation, whether present as single components or in mixtures. Moreover, the supplementation of an additional pollutant always resulted in inhibitory competition, with xylene inducing the highest degree of inhibition. The modified Gompertz model provided an accurate fit to the experimental data for the single- and dual-substrate experiments, satisfactorily representing the antagonistic pollutant interactions. Finally, microbial analysis suggested that the degradation of the most biodegradable compounds required a lower microbial specialization and diversity, while the presence of the recalcitrant compounds resulted in the selection of a specific group of microorganisms. Copyright © 2018 Elsevier Ltd. All rights reserved.
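A commonly used form of the modified Gompertz curve is the Zwietering parameterization, y(t) = A·exp(−exp(μm·e/A·(λ−t)+1)), with A the maximum degradation, μm the maximum degradation rate and λ the lag time. This parameterization is an assumption on our part, since the abstract does not spell out the exact variant used:

```python
import math

def modified_gompertz(t, A, mu_m, lag):
    """Modified Gompertz value at time t (Zwietering-style form):
    sigmoidal rise from ~0 during the lag phase toward the asymptote A,
    with maximum slope mu_m at the inflection point."""
    return A * math.exp(-math.exp(mu_m * math.e / A * (lag - t) + 1.0))
```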
[Suture simulator - Cleft palate surgery].
Devinck, F; Riot, S; Qassemyar, A; Belkhou, A; Wolber, A; Martinot Duquennoy, V; Guerreschi, P
2017-04-01
Cleft palate requires surgery in the first years of life, and anatomical repair of the soft and hard palate is surgically complex because of the fine tissues and the local intraoral configuration. It is valuable to train first on simulators before going to the operating room. However, there is no material dedicated to learning how to perform intraoral sutures in cleft palate surgery. We therefore built one by hand, in order to practice before the real surgical procedure. The simulator was designed based on precise anatomical data. A steel pipe fixed on a rigid base represented the oral cavity. An adapted split spoon represented the palate. All pieces could be removed in order to apply a hydrocellular dressing before training for sutures. Our simulator was tested by three senior surgeons in our department under close-to-real-life conditions in order to evaluate its anatomical accuracy. It is valuable to have a simulator for training on cleft palate sutures within the teaching university hospitals that manage this pathology. Our simulator has a very low cost, is easy to make, and is anatomically accurate. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Gray, W H; Hallissy, J M, Jr
1950-01-01
Data on the aerodynamic excitation of first-order vibration occurring in a representative three-blade propeller having its thrust axis inclined to the air stream at angles of 0 degrees, 4.55 degrees, and 9.8 degrees are included in this paper. For several representative conditions the aerodynamic excitation has been computed and compared with the measured values. Blade stresses also were measured to permit the evaluation of the blade stress resulting from a given blade aerodynamic excitation. It was concluded that the section aerodynamic exciting force of a pitched propeller may be computed accurately at low rotational speeds. As section velocities approach the speed of sound, the accuracy of computation of section aerodynamic exciting force is not always so satisfactory. First-order blade vibratory stresses were computed with satisfactory accuracy from untilted-propeller loading data. A stress prediction which assumes a linear relation between first-order vibratory stress and the product of pitch angle and dynamic pressure and which is based on stresses at low rotational speeds will be conservative when the outer portions of the blade are in the transonic and low supersonic speed range.
NASA Astrophysics Data System (ADS)
Shellnutt, J. G.; Pham, Thuy T.
2018-05-01
The Late Permian Emeishan large igneous province (ELIP) is considered to be one of the best examples of a mantle plume derived large igneous province. One of the primary observations that favour a mantle plume regime is the presence of ultramafic volcanic rocks. The picrites suggest that primary mantle melts erupted and that mantle potential temperatures (TP) of the ELIP were > 200 °C above ambient mantle conditions. However, they may represent a mixture of liquid and cumulus olivine and pyroxene rather than primary liquids. Consequently, temperature estimates based on the picrite compositions may not be accurate. Here we calculate TP estimates and primary liquid compositions using PRIMELT3 for the low-Ti (Ti/Y < 500) Emeishan basalts, as they represent definite liquid compositions. The calculated TP values range from 1400 °C to 1550 °C, which is consistent with variability across a mantle plume axis. The primary melt compositions of the basalts are mostly picritic. The results of this study indicate that the Emeishan basalt was produced by a high-temperature regime and that a few of the ultramafic volcanic rocks may be indicative of primary liquids.
NASA Technical Reports Server (NTRS)
Kopasakis, George
2010-01-01
Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying integrated couplings between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have previously been utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms. Then a generalized frequency-domain formulation is developed for these scaled models that approximates the fractional order with products of first-order transfer functions. Given the parameters describing the conditions of atmospheric disturbances and utilizing the derived formulations, the objective is to directly compute the transfer functions that describe these disturbances for acoustic velocity, temperature, pressure and density. Utilizing these computed transfer functions and choosing the disturbance frequencies of interest, time-domain simulations of these representative atmospheric turbulences can be developed. These disturbance representations are then used first to develop considerations for disturbance rejection specifications for the design of the propulsion control system, and then to evaluate the closed-loop performance.
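The finite-energy von Karman form referred to above rolls off with the Kolmogorov −5/3 power law at high frequency, which is the fractional-order behavior that products of first-order transfer functions are fitted to approximate. A sketch of the standard longitudinal von Karman PSD (the length-scale and airspeed values are illustrative assumptions, not from the paper):

```python
import math

def von_karman_psd(omega, sigma2=1.0, L=762.0, V=250.0):
    """Longitudinal von Karman turbulence power spectral density;
    sigma2 is the turbulence variance, L the length scale (m),
    V the airspeed (m/s), omega the frequency (rad/s)."""
    a = 1.339 * L * omega / V
    return sigma2 * (2.0 * L / (math.pi * V)) / (1.0 + a * a) ** (5.0 / 6.0)
```

At high frequency the log-log slope of this spectrum approaches −5/3, the Kolmogorov inertial-range exponent.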
Boone, D.D.; Sauer, J.R.; Thomas, I.; Handley, Lawrence R.; D'Erchia, Frank J.; Charron, Tammy M.
2000-01-01
The North American Breeding Bird Survey (BBS) has received criticism that the bird habitat sampled along its 24.5-mile-long roadside transects may not be proportional to regional totals. If true, trends in bird populations recorded by the BBS may not be sensitive predictors of regional or continental change in songbird abundance. To test whether the approximately 60 BBS routes in Maryland representatively sample the state's habitat, a geographic information system (GIS) database was compiled of significant bird habitat identified from remotely sensed landcover and land-use information (e.g., Multi-Resolution Land Characteristics Consortium-classified Landsat Thematic Mapper imagery). These GIS data layers were analyzed to determine the statewide acreage of identified habitats as well as the acreage in each of the major physiographic regions of Maryland. Regional and statewide totals were also extracted for the subsample of habitat within 30 m of the BBS transects. The comparison of regional and statewide habitat totals with the BBS sample showed very low proportional difference for nearly all of the identified habitat parameters. For Maryland, and perhaps other urbanizing states, the BBS provides an accurate sample of available songbird habitats.
Constitutive Soil Properties for Cuddeback Lake, California and Carson Sink, Nevada
NASA Technical Reports Server (NTRS)
Thomas, Michael A.; Chitty, Daniel E.; Gildea, Martin L.; T'Kindt, Casey M.
2008-01-01
Accurate soil models are required for numerical simulations of land landings for the Orion Crew Exploration Vehicle. This report provides constitutive material modeling properties for four soil models from two dry lakebeds in the western United States. The four soil models are based on mechanical and compressive behavior observed during geotechnical laboratory testing of remolded soil samples from the lakebeds. The test specimens were reconstituted to measured in situ density and moisture content. Tests included: triaxial compression, hydrostatic compression, and uniaxial strain. A fit to the triaxial test results defines the strength envelope. Hydrostatic and uniaxial tests define the compressibility. The constitutive properties are presented in the format of LS-DYNA Material Model 5: Soil and Foam. However, the laboratory test data provided can be used to construct other material models. The four soil models are intended to be specific only to the two lakebeds discussed in the report. The Cuddeback A and B models represent the softest and hardest soils at Cuddeback Lake. The Carson Sink Wet and Dry models represent different seasonal conditions. It is possible to approximate other clay soils with these models, but the results would be unverified without geotechnical tests to confirm similar soil behavior.
Future directions of meteorology related to air-quality research.
Seaman, Nelson L
2003-06-01
Meteorology is one of the major factors contributing to air-pollution episodes. More accurate representation of meteorological fields has been possible in recent years through the use of remote sensing systems, high-speed computers and fine-mesh meteorological models. Over the next 5-20 years, better meteorological inputs for air quality studies will depend on making better use of a wealth of new remotely sensed observations in more advanced data assimilation systems. However, for fine-mesh models to be successful, the parameterizations used to represent physical processes must be redesigned to be more precise and better adapted to the scales at which they will be applied. Candidates for significant overhaul include schemes to represent turbulence, deep convection, shallow clouds, and land-surface processes. Improvements in the meteorological observing systems, data assimilation and modeling, coupled with advancements in air-chemistry modeling, will soon lead to operational forecasting of air quality in the US. Predictive capabilities can be expected to grow rapidly over the next decade. This will open the way for a number of valuable new services and strategies, including better warnings of unhealthy atmospheric conditions, event-dependent emissions restrictions, and nowcasting support for homeland security in the event of toxic releases into the atmosphere.
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shutdown.
Queries over Unstructured Data: Probabilistic Methods to the Rescue
NASA Astrophysics Data System (ADS)
Sarawagi, Sunita
Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.
Can temporal fine structure represent the fundamental frequency of unresolved harmonics?
Oxenham, Andrew J; Micheyl, Christophe; Keebler, Michael V
2009-04-01
At least two modes of pitch perception exist: in one, the fundamental frequency (F0) of harmonic complex tones is estimated using the temporal fine structure (TFS) of individual low-order resolved harmonics; in the other, F0 is derived from the temporal envelope of high-order unresolved harmonics that interact in the auditory periphery. Pitch is typically more accurate in the former than in the latter mode. Another possibility is that pitch can sometimes be coded via the TFS from unresolved harmonics. A recent study supporting this third possibility [Moore et al. (2006a). J. Acoust. Soc. Am. 119, 480-490] based its conclusion on a condition where phase interaction effects (implying unresolved harmonics) accompanied accurate F0 discrimination (implying TFS processing). The present study tests whether these results were influenced by audible distortion products. Experiment 1 replicated the original results, obtained using a low-level background noise. However, experiments 2-4 found no evidence for the use of TFS cues with unresolved harmonics when the background noise level was raised, or the stimulus level was lowered, to render distortion inaudible. Experiment 5 measured the presence and phase dependence of audible distortion products. The results provide no evidence that TFS cues are used to code the F0 of unresolved harmonics.
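Whether a harmonic counts as "resolved" is often judged by how many harmonics fall within one auditory filter bandwidth. A rough sketch using the Glasberg and Moore ERB formula; this is a standard rule of thumb for illustration, not the criterion used in the study:

```python
def harmonics_per_erb(f0, n):
    """Approximate number of harmonics of fundamental f0 (Hz) falling
    within one ERB centred on the nth harmonic; more than roughly two
    per ERB is a common informal threshold for 'unresolved'."""
    f = n * f0                               # frequency of the nth harmonic
    erb = 24.7 * (4.37 * f / 1000.0 + 1.0)   # Glasberg & Moore (1990) ERB
    return erb / f0
```

For a 200 Hz fundamental, low-order harmonics fall well under one per ERB (resolved), while harmonics around the 20th exceed two per ERB (unresolved).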
Quasi-steady aerodynamic model of clap-and-fling flapping MAV and validation using free-flight data.
Armanini, S F; Caetano, J V; Croon, G C H E de; Visser, C C de; Mulder, M
2016-06-30
Flapping-wing aerodynamic models that are accurate, computationally efficient and physically meaningful are challenging to obtain. Such models are essential to design flapping-wing micro air vehicles and to develop advanced controllers enhancing the autonomy of such vehicles. In this work, a phenomenological model is developed for the time-resolved aerodynamic forces on clap-and-fling ornithopters. The model is based on quasi-steady theory and accounts for inertial, circulatory, added mass and viscous forces. It extends existing quasi-steady approaches by including a fling circulation factor to account for unsteady wing-wing interaction, by considering real platform-specific wing kinematics, and by covering different flight regimes. The model parameters are estimated from wind tunnel measurements conducted on a real test platform. Comparison to wind tunnel data shows that the model predicts the lift forces on the test platform accurately and accounts for wing-wing interaction effectively. Additionally, validation tests with real free-flight data show that lift forces can be predicted with considerable accuracy in different flight regimes. The complete parameter-varying model represents a wide range of flight conditions, is computationally simple, physically meaningful and requires few measurements. It is therefore potentially useful both for control design and for preliminary conceptual studies for developing new platforms.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe
NASA Astrophysics Data System (ADS)
Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David
2018-06-01
An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful to investigate space weather influences on the ionosphere and technical applications like satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are compared to maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, which are identified with the Dst index, are excluded from the validation. When the TEC values are calculated from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence of observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real-time by the MediMod time-series approach.
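For a single grid cell and a fixed hour of day, the 27-day median idea behind MediMod can be sketched in a few lines. This is a simplification; the operational model's handling of full maps and data gaps is more involved:

```python
import statistics

def median_model_forecast(tec_history, window_days=27):
    """Forecast the next day's TEC value as the median of the most recent
    `window_days` daily values at the same hour; taking the median makes
    the forecast robust to isolated disturbed or missing days."""
    recent = [v for v in tec_history[-window_days:] if v is not None]
    return statistics.median(recent)
```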
Sartori, Massimo; Yavuz, Utku Ş; Farina, Dario
2017-10-18
Human motor function emerges from the interaction between the neuromuscular and the musculoskeletal systems. Despite the knowledge of the mechanisms underlying neural and mechanical functions, there is no relevant understanding of the neuro-mechanical interplay in the neuro-musculo-skeletal system. This currently represents the major challenge to the understanding of human movement. We address this challenge by proposing a paradigm for investigating spinal motor neuron contribution to skeletal joint mechanical function in the intact human in vivo. We employ multi-muscle spatial sampling and deconvolution of high-density fiber electrical activity to decode accurate α-motor neuron discharges across five lumbosacral segments in the human spinal cord. We use complete α-motor neuron discharge series to drive forward subject-specific models of the musculoskeletal system in open-loop with no corrective feedback. We perform validation tests where mechanical moments are estimated with no knowledge of reference data over unseen conditions. This enables accurate blinded estimation of ankle function purely from motor neuron information. Remarkably, this enables observing causal associations between spinal motor neuron activity and joint moment control. We provide a new class of neural data-driven musculoskeletal modeling formulations for bridging between movement neural and mechanical levels in vivo with implications for understanding motor physiology, pathology, and recovery.
A new adjustable gains for second order sliding mode control of saturated DFIG-based wind turbine
NASA Astrophysics Data System (ADS)
Bounadja, E.; Djahbar, A.; Taleb, R.; Boudjema, Z.
2017-02-01
The control of the doubly-fed induction generator (DFIG), used in wind energy conversion, has received a great deal of interest. Frequently, this control has been designed ignoring the magnetic saturation effect in the DFIG model. The aim of the present work is twofold: firstly, the magnetic saturation effect is accounted for in the control design model; secondly, a new second order sliding mode control scheme using adjustable gains (AG-SOSMC) is proposed to control the DFIG via its rotor-side converter. This scheme allows the independent control of the generated active and reactive power. Conventionally, the second order sliding mode control (SOSMC) applied to the DFIG utilizes the super-twisting algorithm with fixed gains. In the proposed AG-SOSMC, a simple means by which the controller can adjust its behavior is used: a linear function represents the variation of the gain as a function of the absolute value of the discrepancy between the reference rotor current and its measured value. The transient DFIG speed response using the aforementioned characteristic is compared with the one obtained using the conventional SOSMC controller with fixed gains. Simulation results show that accurate dynamic performance, a quicker transient response and more accurate control are achieved for different operating conditions.
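The super-twisting law and the linear gain schedule described above can be sketched as follows; the gain-schedule coefficients and sample values are hypothetical placeholders, not the paper's tuning:

```python
import math

def adjusted_gain(k_min, slope, current_error):
    """Linear gain schedule: the gain grows with the absolute discrepancy
    between the reference rotor current and its measured value."""
    return k_min + slope * abs(current_error)

def super_twisting_step(s, v, dt, k1, k2):
    """One Euler step of the super-twisting algorithm on sliding
    variable s:  u = -k1*|s|**0.5*sign(s) + v,  dv/dt = -k2*sign(s)."""
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v
    return u, v - k2 * sgn * dt
```

In the adjustable-gain variant, k1 and k2 would be recomputed each step from `adjusted_gain` before calling `super_twisting_step`.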
Mechanisms and chemical induction of aneuploidy in rodent germ cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mailhes, J B; Marchetti, F
The objective of this review is to suggest that the advances being made in our understanding of the molecular events surrounding chromosome segregation in non-mammalian and somatic cell models be considered when designing experiments for studying aneuploidy in mammalian germ cells. Accurate chromosome segregation requires the temporal control of, and unique interactions among, a vast array of proteins and cellular organelles. Abnormal function and temporal disarray among these, and other yet to be identified, biochemical reactions and cellular organelles have the potential to predispose cells to aneuploidy. Although numerous studies have demonstrated that certain chemicals (mainly those that alter microtubule function) can induce aneuploidy in mammalian germ cells, it seems relevant to point out that such data can be influenced by gender, meiotic stage, and time of cell fixation post-treatment. Additionally, a consensus has not been reached regarding which of several germ cell aneuploidy assays most accurately reflects the human condition. More recent studies have shown that certain kinase, phosphatase, proteasome, and topoisomerase inhibitors can also induce aneuploidy in rodent germ cells. We suggest that molecular approaches be prudently incorporated into mammalian germ cell aneuploidy research in order to eventually understand the causes and mechanisms of human aneuploidy. Such an enormous undertaking would benefit from collaboration among scientists representing several disciplines.
Misquitta, Alston J; Stone, Anthony J; Price, Sarah L
2008-01-01
In part 1 of this two-part investigation we set out the theoretical basis for constructing accurate models of the induction energy of clusters of moderately sized organic molecules. In this paper we use these techniques to develop a variety of accurate distributed polarizability models for a set of representative molecules that include formamide, N-methyl propanamide, benzene, and 3-azabicyclo[3.3.1]nonane-2,4-dione. We have also explored damping, penetration, and basis set effects. In particular, we have provided a way to treat the damping of the induction expansion. Different approximations to the induction energy are evaluated against accurate SAPT(DFT) energies, and we demonstrate the accuracy of our induction models on the formamide-water dimer.
Reliable and accurate extraction of Hamaker constants from surface force measurements.
Miklavcic, S J
2018-08-15
A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming that independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
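For context, the "standard" estimator the comparison refers to can be written in closed form for a sphere-plate van der Waals force F(D) = −A·R/(6·D²): minimizing the squared force residuals over A, while treating the separations D as exact, gives the formula below. This is a sketch of the baseline approach the paper argues against, not the paper's improved expression:

```python
def hamaker_least_squares(D, F, R):
    """Least-squares Hamaker constant A from force-separation pairs,
    assuming F = -A*R/(6*D**2) with sphere radius R and error-free
    separations D.  Setting d/dA of sum((F + A*R/(6*D**2))**2) to zero
    yields A = -6/R * sum(F/D**2) / sum(1/D**4)."""
    num = sum(f / d ** 2 for d, f in zip(D, F))
    den = sum(1.0 / d ** 4 for d in D)
    return -6.0 * num / (R * den)
```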
1995-06-08
A rugged, highly accurate low-temperature sensor was developed by NASA researchers. The new sensor allows accurate, quick low-temperature measurements in rugged environments, which is especially useful in piping carrying very cold liquids under high-pressure, high-flow-rate conditions.
Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study
NASA Astrophysics Data System (ADS)
Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash
2018-02-01
Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, like phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, like bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor, it is shown that a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.
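The current signature analysis mentioned in the case study commonly looks for sidebands at (1 ± 2s)·f around the supply frequency f, where s is the per-unit slip, a signature widely associated with broken rotor bars. A sketch on synthetic data (the 550 kW motor itself is not modeled; supply frequency, slip, and sideband amplitudes are assumed):

```python
import numpy as np

# Illustrative motor current signature analysis (MCSA): a broken rotor bar
# typically produces sidebands at (1 +/- 2s)*f_supply around the supply
# line, where s is the per-unit slip. All values below are synthetic.

fs = 5000.0                     # sampling rate, Hz
t = np.arange(0, 10.0, 1.0/fs)  # 10 s record
f_supply, slip = 50.0, 0.02     # assumed supply frequency and slip

current = (np.sin(2*np.pi*f_supply*t)
           + 0.01*np.sin(2*np.pi*f_supply*(1 - 2*slip)*t)   # lower sideband
           + 0.01*np.sin(2*np.pi*f_supply*(1 + 2*slip)*t))  # upper sideband

spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1.0/fs)

def amplitude_at(f):
    """Spectrum magnitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

ratio_db = 20*np.log10(amplitude_at(f_supply*(1 - 2*slip)) / amplitude_at(f_supply))
print(f"lower sideband is {ratio_db:.1f} dB below the supply line")
```

In practice the sideband-to-carrier ratio in dB is trended over time; a rising trend flags rotor bar degradation.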
Anaka, Matthew; Freyer, Claudia; Gedye, Craig; Caballero, Otavia; Davis, Ian D; Behren, Andreas; Cebon, Jonathan
2012-02-01
The ability of cell lines to accurately represent cancer is a major concern in preclinical research. Culture of glioma cells as neurospheres in stem cell media (SCM) has been shown to better represent the genotype and phenotype of primary glioblastoma in comparison to serum cell lines. Despite the use of neurosphere-like models of many malignancies, there has been no robust analysis of whether other cancers benefit from a more representative phenotype and genotype when cultured in SCM. We analyzed the growth properties, transcriptional profile, and genotype of melanoma cells grown de novo in SCM because, while melanocytes share a common precursor with neural cells, melanoma frequently demonstrates divergent behavior in cancer stem cell assays. SCM culture of melanoma cells induced a neural lineage gene expression profile that was not representative of matched patient tissue samples and which could be induced in serum cell lines by switching them into SCM. There was no enrichment for expression of putative melanoma stem cell markers, but the SCM expression profile did overlap significantly with that of SCM cultures of glioma, suggesting that the observed phenotype is media-specific rather than melanoma-specific. Xenografts derived from either culture condition provided the best representation of melanoma in situ. Finally, SCM culture of melanoma did not prevent ongoing acquisition of DNA copy number abnormalities. In conclusion, SCM culture of melanoma does not provide a better representation of the phenotype or genotype of metastatic melanoma, and the resulting neural bias could potentially confound therapeutic target identification. Copyright © 2011 AlphaMed Press.
NASA Astrophysics Data System (ADS)
Alfieri, Joseph G.; Kustas, William P.; Prueger, John H.; Hipps, Lawrence E.; Evett, Steven R.; Basara, Jeffrey B.; Neale, Christopher M. U.; French, Andrew N.; Colaizzi, Paul; Agam, Nurit; Cosh, Michael H.; Chavez, José L.; Howell, Terry A.
2012-12-01
Discrepancies can arise among surface flux measurements collected using disparate techniques due to differences in both the instrumentation and theoretical underpinnings of the different measurement methods. Using data collected primarily within a pair of irrigated cotton fields as a part of the 2008 Bushland Evapotranspiration and Remote Sensing Experiment (BEAREX08), flux measurements collected with two commonly-used methods, eddy covariance (EC) and lysimetry (LY), were compared and substantial differences were found. Daytime mean differences in the flux measurements from the two techniques could be in excess of 200 W m-2 under strongly advective conditions. Three causes for this disparity were found: (i) the failure of the eddy covariance systems to fully balance the surface energy budget, (ii) flux divergence due to the local advection of warm, dry air over the irrigated cotton fields, and (iii) the failure of lysimeters to accurately represent the surface properties of the cotton fields as a whole. Regardless of the underlying cause, the discrepancy among the flux measurements underscores the difficulty in collecting these measurements under strongly advective conditions. It also raises awareness of the uncertainty associated with in situ micrometeorological measurements and the need for caution when using such data for model validation or as observational evidence to definitively support or refute scientific hypotheses.
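The eddy covariance closure failure cited as cause (i) is commonly quantified with the energy balance ratio, EBR = Σ(H + LE)/Σ(Rn − G). A minimal sketch with synthetic half-hourly fluxes (not BEAREX08 data):

```python
# Energy balance ratio (EBR), a common diagnostic of how well eddy
# covariance fluxes close the surface energy budget:
#   EBR = sum(H + LE) / sum(Rn - G)
# Values below are synthetic half-hourly fluxes in W m^-2.

H  = [45.0, 80.0, 120.0, 150.0]    # sensible heat flux
LE = [200.0, 310.0, 420.0, 480.0]  # latent heat flux
Rn = [350.0, 520.0, 690.0, 780.0]  # net radiation
G  = [30.0, 45.0, 60.0, 70.0]      # soil heat flux

ebr = sum(h + le for h, le in zip(H, LE)) / sum(rn - g for rn, g in zip(Rn, G))
print(f"energy balance ratio = {ebr:.2f}")
```

Values around 0.8, as in this synthetic example, are typical of the closure gap reported for many eddy covariance sites.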
LIBS analysis of artificial calcified tissues matrices.
Kasem, M A; Gonzalez, J J; Russo, R E; Harith, M A
2013-04-15
In most laser-based analytical methods, the reproducibility of quantitative measurements strongly depends on maintaining uniform and stable experimental conditions. For LIBS analysis this means that for accurate estimation of elemental concentration, using the calibration curves obtained from reference samples, the plasma parameters have to be kept as constant as possible. In addition, calcified tissues such as bone are normally less "tough" in their texture than many samples, especially metals. Thus, the ablation process could change the sample morphological features rapidly, and result in poor reproducibility statistics. In the present work, three artificial reference sample sets have been fabricated. These samples represent three different calcium based matrices, CaCO3 matrix, bone ash matrix and Ca hydroxyapatite matrix. A comparative study of UV (266 nm) and IR (1064 nm) LIBS for these three sets of samples has been performed under similar experimental conditions for the two systems (laser energy, spot size, repetition rate, irradiance, etc.) to examine the wavelength effect. The analytical results demonstrated that UV-LIBS has improved reproducibility, precision, stable plasma conditions, better linear fitting, and the reduction of matrix effects. Bone ash could be used as a suitable standard reference material for calcified tissue calibration using LIBS with a 266 nm excitation wavelength. Copyright © 2013 Elsevier B.V. All rights reserved.
Church, Peter E.; Granato, Gregory E.; Owens, David W.
1999-01-01
Accurate and representative precipitation and stormwater-flow data are crucial for use of highway- or urban-runoff study results, either individually or in a regional or national synthesis of stormwater-runoff data. Equally important is information on the level of accuracy and representativeness of this precipitation and stormwater-flow data. Accurate and representative measurements of precipitation and stormwater flow, however, are difficult to obtain because of the rapidly changing spatial and temporal distribution of precipitation and flows during a storm. Many hydrologic and hydraulic factors must be considered in performing the following: selecting sites for measuring precipitation and stormwater flow that will provide data that adequately meet the objectives and goals of the study, determining frequencies and durations of data collection to fully characterize the storm and the rapidly changing stormwater flows, and selecting methods that will yield accurate data over the full range of both rainfall intensities and stormwater flows. To ensure that the accuracy and representativeness of precipitation and stormwater-flow data can be evaluated, decisions as to (1) where in the drainage system precipitation and stormwater flows are measured, (2) how frequently precipitation and stormwater flows are measured, (3) what methods are used to measure precipitation and stormwater flows, and (4) on what basis these decisions were made must all be documented and communicated in an accessible format, such as a project description report, a data report or an appendix to a technical report, and (or) archived in a State or national records center. A quality assurance/quality control program must be established to ensure that this information is documented and reported, and that decisions made in the design phase of a study are continually reviewed, internally and externally, throughout the study.
Without the supporting data needed to evaluate the accuracy and representativeness of the precipitation and stormwater-flow measurements, the data collected and interpretations made may have little meaning.
NASA Astrophysics Data System (ADS)
Cortinez, J. M.; Valocchi, A. J.; Herrera, P. A.
2013-12-01
Because of the finite size of numerical grids, it is very difficult to correctly account for processes that occur at different spatial scales to accurately simulate the migration of conservative and reactive compounds dissolved in groundwater. On the one hand, transport processes in heterogeneous porous media are controlled by local-scale dispersion associated with transport processes at the pore-scale. On the other hand, variations of velocity at the continuum- or Darcy-scale produce spreading of the contaminant plume, which is referred to as macro-dispersion. Furthermore, under some conditions both effects interact, so that spreading may enhance the action of local-scale dispersion resulting in higher mixing, dilution and reaction rates. Traditionally, transport processes at different spatial scales have been included in numerical simulations by using a single dispersion coefficient. This approach implicitly assumes that the separate effects of local-dispersion and macro-dispersion can be added and represented by a unique effective dispersion coefficient. Moreover, the selection of the effective dispersion coefficient for numerical simulations usually does not consider the filtering effect of the grid size over the small-scale flow features. We have developed a multi-scale Lagrangian numerical method that allows using two different dispersion coefficients to represent local- and macro-scale dispersion. This technique considers fluid particles that carry solute mass and whose locations evolve according to a deterministic component given by the grid-scale velocity and a stochastic component that corresponds to a block-effective macro-dispersion coefficient. Mass transfer between particles due to local-scale dispersion is approximated by a meshless method. We use our model to test under which transport conditions the combined effect of local- and macro-dispersion are additive and can be represented by a single effective dispersion coefficient.
We also demonstrate that for the situations where both processes are additive, an effective grid-dependent dispersion coefficient can be derived based on the concept of block-effective dispersion. We show that the proposed effective dispersion coefficient is able to reproduce dilution, mixing and reaction rates for a wide range of transport conditions similar to the ones found in many practical applications.
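The particle scheme described above (deterministic grid-scale advection plus a stochastic step scaled by a block-effective macro-dispersion coefficient) can be sketched as a 1-D random walk; the meshless local-scale mass transfer between particles is omitted, and all parameter values are illustrative:

```python
import numpy as np

# 1-D random-walk particle tracking: each particle advects with the
# resolved grid-scale velocity v and takes a stochastic step whose
# variance is set by a block-effective macro-dispersion coefficient
# D_macro. Local-scale inter-particle mass transfer is not reproduced.

rng = np.random.default_rng(42)
n_particles, n_steps = 5000, 200
dt, v, D_macro = 0.1, 1.0, 0.05   # time step, velocity, macro-dispersion

x = np.zeros(n_particles)
for _ in range(n_steps):
    x += v * dt + np.sqrt(2.0 * D_macro * dt) * rng.standard_normal(n_particles)

t_total = n_steps * dt
print(f"mean = {x.mean():.2f} (theory {v*t_total:.2f}), "
      f"var = {x.var():.2f} (theory {2*D_macro*t_total:.2f})")
```

The ensemble mean and variance recover v·t and 2·D_macro·t, the plume spreading that a grid-scale dispersion term would otherwise have to represent.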
Sandberg, W S; Carlos, R; Sandberg, E H; Roizen, M F
1997-10-01
To assess the influence of pharmaceutical advertising (in the form of books) directed at medical students and also to examine students' attitudes toward pharmaceutical representatives after interacting with them. Two groups of fourth-year medical students were surveyed: 166 residency applicants to the Department of Anesthesia and Critical Care between 1991 and 1993, who were questioned during their personal interviews with the department chair, and 39 fourth-year students from the University of Chicago Pritzker School of Medicine in 1994-95, who were surveyed by telephone. The students were asked if they had ever received a book from a pharmaceutical representative and, if so, to name the book. Then they were asked to name the book-giving company or a product associated with the company. Responses were compared using chi-square analysis. In all, 90% of the students had received one or more books and accurately recalled titles for 89% of them. However, only 25% of the named books were accurately associated with a pharmaceutical company or product. The Pritzker students, asked to recall interactions with pharmaceutical representatives, reported being skeptical of representatives who ignored them because they were students, but they rated as helpful and informative those who conversed with them or gave them gifts. Although gifts to medical students do not necessarily engender company or product recall, attention paid to medical students by pharmaceutical representatives engenders goodwill toward the representatives and their messages.
Lamb, Juliet S.; O'Reilly, Kathleen M.; Jodice, Patrick G.R.
2016-01-01
The effects of acute environmental stressors on reproduction in wildlife are often difficult to measure because of the labour and disturbance involved in collecting accurate reproductive data. Stress hormones represent a promising option for assessing the effects of environmental perturbations on altricial young; however, it is necessary first to establish how stress levels are affected by environmental conditions during development and whether elevated stress results in reduced survival and recruitment rates. In birds, the stress hormone corticosterone is deposited in feathers during the entire period of feather growth, making it an integrated measure of background stress levels during development. We tested the utility of feather corticosterone levels in 3- to 4-week-old nestling brown pelicans (Pelecanus occidentalis) for predicting survival rates at both the individual and colony levels. We also assessed the relationship of feather corticosterone to nestling body condition and rates of energy delivery to nestlings. Chicks with higher body condition and lower corticosterone levels were more likely to fledge and to be resighted after fledging, whereas those with lower body condition and higher corticosterone levels were less likely to fledge or be resighted after fledging. Feather corticosterone was also associated with intracolony differences in survival between ground and elevated nest sites. Colony-wide, mean feather corticosterone predicted nest productivity, chick survival and post-fledging dispersal more effectively than did body condition, although these relationships were strongest before fledglings dispersed away from the colony. Both reproductive success and nestling corticosterone were strongly related to nutritional conditions, particularly meal delivery rates. 
We conclude that feather corticosterone is a powerful predictor of reproductive success and could provide a useful metric for rapidly assessing the effects of changes in environmental conditions, provided pre-existing baseline variation is monitored and understood.
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
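The paper's variable-coefficient operator is not given in the abstract. For orientation only, a standard first-order local absorbing condition of Bayliss-Turkel type at a circular artificial boundary r = R in two dimensions reads
\[
\left(\frac{1}{c}\frac{\partial}{\partial t} \;+\; \frac{\partial}{\partial r} \;+\; \frac{1}{2r}\right) u \,\Big|_{r=R} \;=\; 0,
\]
which annihilates only the leading term of the outgoing-wave expansion; the uniform-in-time accuracy limitation proved in the paper applies to constant-coefficient local operators of this kind.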
Representativeness uncertainty in chemical data assimilation highlights mixing barriers
NASA Astrophysics Data System (ADS)
Lary, David John
2004-02-01
When performing chemical data assimilation the observational, representativeness, and theoretical uncertainties have very different characteristics. In this study, we have accurately characterized the representativeness uncertainty by studying the probability distribution function (PDF) of the observations. The average deviation has been used as a measure of the width of the PDF and of the variability (representativeness uncertainty) for the grid cell. It turns out that for long-lived tracers such as N2O and CH4 the representativeness uncertainty is markedly different from the observational uncertainty and clearly delineates mixing barriers such as the polar vortex edge, the tropical pipe and the tropopause. Published by Elsevier Ltd on behalf of Royal Meteorological Society.
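The per-grid-cell estimate described above pools the observations falling in a cell and takes the average (mean absolute) deviation of the pooled values as that cell's representativeness uncertainty. A sketch with synthetic values (not N2O/CH4 data; the cell labels are hypothetical):

```python
import numpy as np

# Representativeness uncertainty as the average deviation (width) of the
# per-grid-cell PDF of observations. Data below are synthetic.

def average_deviation(values):
    """Mean absolute deviation about the mean."""
    values = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(values - values.mean())))

# observations pooled by hypothetical grid cells
cells = {
    "inside_vortex": [300.0, 302.0, 298.0, 301.0],  # well mixed: small spread
    "vortex_edge":   [250.0, 310.0, 270.0, 330.0],  # mixing barrier: large spread
}
for name, obs in cells.items():
    print(f"{name}: representativeness uncertainty = {average_deviation(obs):.1f}")
```

A sharp jump in this statistic between adjacent cells is what delineates mixing barriers such as the vortex edge.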
76 FR 65221 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
... (formerly, ``Pink Sheets''), operated by OTC Markets Group Inc. (``OTC Link''). According to representatives...-dealers would have been unable to accurately determine the market depth of, and demand for, securities in...
Sara A. Goeking; Paul L. Patterson
2015-01-01
Users of Forest Inventory and Analysis (FIA) data sometimes compare historic and current forest inventory estimates, despite warnings that such comparisons may be tenuous. The purpose of this study was to demonstrate a method for obtaining a more accurate and representative reference dataset using data collected at co-located plots (i.e., plots that were measured...
Comparison of OH Reactivity Instruments in the Atmosphere Simulation Chamber SAPHIR.
NASA Astrophysics Data System (ADS)
Fuchs, H.; Novelli, A.; Rolletter, M.; Hofzumahaus, A.; Pfannerstill, E.; Edtbauer, A.; Kessel, S.; Williams, J.; Michoud, V.; Dusanter, S.; Locoge, N.; Zannoni, N.; Gros, V.; Truong, F.; Sarda Esteve, R.; Cryer, D. R.; Brumby, C.; Whalley, L.; Stone, D. J.; Seakins, P. W.; Heard, D. E.; Schoemaecker, C.; Blocquet, M.; Fittschen, C. M.; Thames, A. B.; Coudert, S.; Brune, W. H.; Batut, S.; Tatum Ernest, C.; Harder, H.; Elste, T.; Bohn, B.; Hohaus, T.; Holland, F.; Muller, J. B. A.; Li, X.; Rohrer, F.; Kubistin, D.; Kiendler-Scharr, A.; Tillmann, R.; Andres, S.; Wegener, R.; Yu, Z.; Zou, Q.; Wahner, A.
2017-12-01
Two campaigns were conducted performing experiments in the atmospheric simulation chamber SAPHIR at Forschungszentrum Jülich in October 2015 and April 2016 to compare hydroxyl (OH) radical reactivity (kOH) measurements. Chemical conditions were chosen either to be representative of the atmosphere or to test potential limitations of instruments. The results of these campaigns demonstrate that OH reactivity can be accurately measured for a wide range of atmospherically relevant chemical conditions (e.g. water vapor, nitrogen oxides, various organic compounds) by all instruments. The precision of the measurements is higher for instruments directly detecting hydroxyl radicals (OH), whereas the indirect Comparative Reactivity Method (CRM) has a higher limit of detection of 2 s-1 at a time resolution of 10 to 15 min. The performances of the instruments were systematically tested by stepwise increasing, for example, the concentrations of carbon monoxide (CO), water vapor or nitric oxide (NO). In further experiments, mixtures of organic reactants were injected into the chamber to simulate urban and forested environments. Overall, the results show that instruments are capable of measuring OH reactivity in the presence of CO, alkanes, alkenes and aromatic compounds. The transmission efficiency in Teflon inlet lines could have introduced systematic errors in measurements for low-volatility organic compounds in some instruments. CRM instruments exhibited a larger scatter in the data compared to the other instruments. The largest differences from the reference were observed by CRM instruments in the presence of terpenes and oxygenated organic compounds. In some of these experiments, only a small fraction of the reactivity is detected. The accuracy of CRM measurements is most likely limited by the corrections that need to be applied in order to account for known effects of, for example, deviations from pseudo-first order conditions, nitrogen oxides or water vapor on the measurement. 
Methods to derive these corrections vary among the different CRM instruments. Measurements by a flow-tube instrument combined with the direct detection of OH by chemical ionization mass spectrometry (CIMS) show limitations, but were accurate for low reactivity (< 15 s-1) and low NO (< 5 ppbv) conditions.
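The quantity these instruments measure, total OH reactivity, is the pseudo-first-order loss rate kOH = Σ kᵢ[Xᵢ]. A sketch using approximate room-temperature rate constants and illustrative mixing ratios (all values assumed, not campaign data):

```python
# Total OH reactivity k_OH = sum_i k_i * [X_i], in s^-1.
# Rate constants are approximate room-temperature literature values;
# mixing ratios are illustrative, not SAPHIR campaign conditions.

N_AIR = 2.46e19          # molecules cm^-3 at ~1 atm, 298 K

def ppbv_to_conc(ppbv):
    """Convert a mixing ratio in ppbv to molecules cm^-3."""
    return ppbv * 1e-9 * N_AIR

species = {              # (rate constant cm^3 molec^-1 s^-1, mixing ratio ppbv)
    "CO":       (2.4e-13, 200.0),
    "CH4":      (6.4e-15, 1900.0),
    "isoprene": (1.0e-10, 1.0),
}

k_oh = sum(k * ppbv_to_conc(x) for k, x in species.values())
print(f"total OH reactivity = {k_oh:.2f} s^-1")
```

Even at 1 ppbv, a fast reactant like isoprene can dominate kOH, which is why terpene-rich experiments are a demanding test for the instruments.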
2012-01-01
Background It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results We here obtain, by rigorous means and in closed-form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics and in some cases even miss the prediction of noise-induced oscillations. Conclusions A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions. 
The ssLNA provides a simple and accurate means of performing stochastic model reduction and hence it is expected to be of widespread utility in studying the dynamics of large noisy reaction networks, as is common in computational and systems biology. PMID:22583770
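The heuristic reduced SSA discussed above replaces elementary propensities with effective macroscopic rate laws. A minimal Gillespie direct-method sketch for a single effective Michaelis-Menten degradation reaction (parameters illustrative; the point of the paper is that this heuristic can misestimate the noise):

```python
import math, random

# Minimal Gillespie direct-method simulation of the "heuristic SSA":
# substrate S is consumed via one effective reaction whose propensity is
# the Michaelis-Menten rate law a(n) = Vmax*n/(Km + n), instead of the
# elementary enzyme-substrate reactions. Parameters are illustrative.

random.seed(1)

def heuristic_ssa(n0, vmax, km, t_end):
    t, n, history = 0.0, n0, [(0.0, n0)]
    while t < t_end and n > 0:
        a = vmax * n / (km + n)      # effective (non-elementary) propensity
        t += random.expovariate(a)   # exponentially distributed waiting time
        n -= 1                       # one S molecule degraded
        history.append((t, n))
    return history

traj = heuristic_ssa(n0=100, vmax=5.0, km=20.0, t_end=50.0)
print(f"{len(traj)-1} reaction events; final copy number = {traj[-1][1]}")
```

The ssLNA of the paper gives the rigorous reduced description against which the noise statistics of such heuristic trajectories can be checked.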
NASA Astrophysics Data System (ADS)
Zheng, N.
2017-12-01
Sensible heat flux (H) is one of the driving factors of surface turbulent motion and energy exchange. It is therefore particularly important to measure sensible heat flux accurately at the regional scale. However, due to the heterogeneity of the underlying surface, the hydrothermal regime, and varying weather conditions, it is difficult to estimate a representative flux at the kilometer scale. Since the 1980s, the scintillometer has developed into an effective and widely used instrument for deriving heat flux at the regional scale, based on the turbulence-induced fluctuation of light intensity in the atmosphere. The parameter directly obtained by the scintillometer is the structure parameter of the refractive index of air, derived from the fluctuations of light intensity. Combined with parameters such as the temperature structure parameter, zero-plane displacement, surface roughness, wind velocity, air temperature and other meteorological data, heat fluxes can be derived. These additional parameters increase the uncertainty of the flux because of the difference between the actual features of turbulent motion and the conditions under which turbulence theory applies. Most previous studies focused on constant-flux layers above the rough sub-layers of homogeneous, flat underlying surfaces under suitable weather conditions, so the criteria and modified forms of the key parameters were invariable. In this study, we conduct measurements over the hilly area of northern China with different plant covers, such as cork oak, cedar, and black locust. On the basis of key research on the saturation threshold and its modified forms under different turbulence intensities, modified forms of the Bowen ratio under different drying and wetting conditions, and the universal function for the temperature structure parameter under different atmospheric stabilities, the dominant sources of uncertainty will be determined. 
This study is significant for revealing the mechanisms by which these uncertainties arise and for quantifying their influence. It can provide a theoretical basis and technical support for accurately measuring the sensible heat flux of forest ecosystems with the scintillometer method, and a foundation for further study of the role of forest ecosystems in energy balance and climate change.
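As one concrete example of the parameter chain described above (a sketch under standard assumptions, not the relations used in this study): ignoring humidity effects, the measured refractive-index structure parameter \(C_n^2\) is converted to the temperature structure parameter, and under free-convection conditions H follows as
\[
C_T^2 \;=\; C_n^2 \left(\frac{T^2}{0.78\times 10^{-6}\, p}\right)^{2},
\qquad
H \;\approx\; b\,\rho c_p\, z \left(\frac{g}{T}\right)^{1/2} \left(C_T^2\right)^{3/4},
\]
with p the air pressure, z the effective beam height and \(b \approx 0.48\) an empirical constant; outside free convection, stability corrections based on Monin-Obukhov similarity replace the second relation, which is where the study's universal-function and Bowen-ratio corrections enter.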
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
2005-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
1999-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25% of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
The use of messages in altering risky gambling behavior in experienced gamblers.
Jardin, Bianca F; Wulfert, Edelgard
2012-03-01
The present study was an experimental analogue that examined the relationship between gambling-related irrational beliefs and risky gambling behavior. Eighty high-frequency gamblers were randomly assigned to four conditions and played a chance-based computer game in a laboratory setting. Depending on the condition, during the game a pop-up screen repeatedly displayed either accurate or inaccurate messages concerning the game, neutral messages, or no messages. Consistent with a cognitive-behavioral model of gambling, accurate messages that correctly described the random contingencies governing the game decreased risky gambling behavior. Contrary to predictions, inaccurate messages designed to mimic gamblers' irrational beliefs about their abilities to influence chance events did not lead to more risky gambling behavior than exposure to neutral or no messages. Participants in the latter three conditions did not differ significantly from one another and all showed riskier gambling behavior than participants in the accurate message condition. The results suggest that harm minimization strategies that help individuals maintain a rational perspective while gambling may protect them from unreasonable risk-taking. PsycINFO Database Record (c) 2012 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Sacuto, S.; Jorissen, A.; Cruzalèbes, P.; Pasquato, E.; Chiavassa, A.; Spang, A.; Rabbia, Y.; Chesneau, O.
2011-09-01
A monitoring of surface brightness asymmetries in evolved giants and supergiants is necessary to estimate the threat that they represent to accurate Gaia parallaxes. Closure-phase measurements obtained with AMBER/VISA in a 3-telescope configuration are fitted by a simple model to constrain the photocenter displacement. The results for the C-type star TX Psc show a large deviation of the photocenter displacement that could bias the Gaia parallax.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiskoot, R.J.J.
Accurate and reliable sampling systems are imperative when confirming natural gas' commercial value. Buyers and sellers need accurate hydrocarbon-composition information to conduct fair sale transactions. Because poor sample extraction, preparation or analysis can invalidate a sale, more attention should be directed toward improving representative sampling. All sampling components must be considered, i.e., gas types, line pressure and temperature, equipment maintenance and service needs, etc. The paper discusses gas sampling, design considerations (location, probe type, extraction devices, controller, and receivers), operating requirements, and system integration.
Mechanical Chevrons and Fluidics for Advanced Military Aircraft Noise Reduction
2011-03-01
…at or near the nozzle lip. Therefore, for the problem at hand, the simulations will need to accurately capture shock waves, unsteady large-scale… simulations could accurately capture the flow field and near-field noise from representative jet engine nozzles, and indeed this was a go/no-go… mixing noise. The first two types of noise are related to the shock waves that are present in the high-speed jet flow. While the mixing noise…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Bonin, Timothy
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions.
The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict errors in lidar-measured wind speed. The results show how uncertainty varies over time and can be used to help select data with different levels of uncertainty for different applications, for example, low uncertainty data for power performance testing versus all data for plant performance monitoring.
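As a toy illustration of why a condition-dependent uncertainty beats a fixed climatic value, the sketch below combines three uncertainty sources in quadrature; the combination rule and all numeric values are assumptions for illustration, not the framework's actual formulation:

```python
import math

def lidar_uncertainty(u_los, u_reconstruction, u_atmos):
    """Combine line-of-sight, wind-field-reconstruction, and
    condition-dependent atmospheric uncertainty terms in quadrature
    (an assumed combination rule, not the framework's actual one)."""
    return math.sqrt(u_los ** 2 + u_reconstruction ** 2 + u_atmos ** 2)

# Same instrument, different atmospheric conditions (values are made up):
calm = lidar_uncertainty(0.10, 0.05, 0.02)       # m/s
turbulent = lidar_uncertainty(0.10, 0.05, 0.30)  # m/s, strong shear/turbulence
```

A fixed climatic value would report the same number in both cases; letting the atmospheric term vary is what makes the estimate track actual conditions.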
Wetlands inform how climate extremes influence surface water expansion and contraction
Vanderhoof, Melanie; Lane, Charles R.; McManus, Michael L.; Alexander, Laurie C.; Christensen, Jay R.
2018-01-01
Effective monitoring and prediction of flood and drought events require an improved understanding of how and why surface water expansion and contraction in response to climate varies across space. This paper sought to (1) quantify how interannual patterns of surface water expansion and contraction vary spatially across the Prairie Pothole Region (PPR) and adjacent Northern Prairie (NP) in the United States, and (2) explore how landscape characteristics influence the relationship between climate inputs and surface water dynamics. Due to differences in glacial history, the PPR and NP show distinct patterns with regard to drainage development and wetland density, together providing a diversity of conditions to examine surface water dynamics. We used Landsat imagery to characterize variability in surface water extent across 11 Landsat path/rows representing the PPR and NP (images spanned 1985–2015). The PPR not only experienced a 2.6-fold greater surface water extent under median conditions relative to the NP, but also showed a 3.4-fold greater change in surface water extent between drought and deluge conditions. The relationship between surface water extent and accumulated water availability (precipitation minus potential evapotranspiration) was quantified per watershed and statistically related to variables representing hydrology-related landscape characteristics (e.g., infiltration capacity, surface storage capacity, stream density). To investigate the influence stream connectivity has on the rate at which surface water leaves a given location, we modeled stream-connected and stream-disconnected surface water separately. Stream-connected surface water showed a greater expansion with wetter climatic conditions in landscapes with greater total wetland area, but lower total wetland density. Disconnected surface water showed a greater expansion with wetter climatic conditions in landscapes with higher wetland density, lower infiltration and less anthropogenic drainage.
From these findings, we can expect that shifts in precipitation and evaporative demand will have uneven effects on surface water quantity. Accurate predictions regarding the effect of climate change on surface water quantity will require consideration of hydrology-related landscape characteristics including wetland storage and arrangement.
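The per-watershed quantification described above amounts to fitting surface-water extent against accumulated water availability; a minimal ordinary-least-squares sketch with invented data (not the study's measurements):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of surface-water extent against
    accumulated water availability (precipitation minus potential
    evapotranspiration), as fitted per watershed in the study."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Hypothetical watershed record: wetter conditions, larger water extent.
availability = [-200.0, -100.0, 0.0, 100.0, 200.0]   # mm
extent = [12.0, 15.0, 19.0, 24.0, 28.0]              # km^2
slope = ols_slope(availability, extent)              # km^2 of extent per mm
```

Comparing this slope across watersheds, and relating it to landscape variables, is the statistical step the abstract describes.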
Randolph, R.B.; Krause, R.E.
1984-01-01
A two-dimensional finite-difference model of the principal artesian aquifer in the Savannah, Georgia, area, originally developed by Counts and Krause (1976), has been expanded and refined. The model was updated and the grid redesigned to provide more current and accurate detail for ground-water resources management alternatives. Improvements in the definition of the flow system were made possible by the acquisition of additional data in the area and by recently completed regional models that include the area. The model was initially calibrated by using the estimated predevelopment potentiometric surface of 1880. The flow system under predevelopment conditions was sluggish, with only about 100 cubic feet per second (65 million gallons per day) flowing through the model area. It was then tested for acceptance by using the May 1980 potentiometric surface and corresponding pumping stress of approximately 85 million gallons per day in the Savannah, Georgia-Hilton Head Island, South Carolina, area. The flow through the system under 1980 conditions was about 390 cubic feet per second (250 million gallons per day), and the vertical inflow from the overlying surficial aquifer more than doubled due to formerly rejected recharge that now flows vertically into the aquifer. Calibration was accurate to within ±10 feet; the absolute error per node was 3.4 feet. A hypothetical 25-percent increase in pumpage over the entire area was used to represent a gradual growth in commercial and municipal pumpage over the next 20 to 30 years. The increase produced a maximum decline of 30 feet below the existing water level of 135 feet below sea level at the center of the cone of depression in Savannah, and a 5-foot decline at a radius of 20 miles from the center of the cone of depression. (USGS)
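The paired discharge figures in the abstract can be checked with a simple unit conversion (1 ft³ ≈ 7.48052 US gallons):

```python
GALLONS_PER_CUBIC_FOOT = 7.48052   # US gallons per cubic foot
SECONDS_PER_DAY = 86400

def cfs_to_mgd(cfs):
    """Convert a discharge in cubic feet per second to million gallons per day."""
    return cfs * GALLONS_PER_CUBIC_FOOT * SECONDS_PER_DAY / 1.0e6

predevelopment = cfs_to_mgd(100)   # ~64.6, reported as 65 Mgal/d
modern = cfs_to_mgd(390)           # ~252, reported as 250 Mgal/d
```

Both reported pairs are consistent once rounded to two significant figures.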
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursin, M.; Koeberl, O.; Perret, G.
2012-07-01
High Conversion Light Water Reactors (HCLWRs) allow better use of fuel resources thanks to a higher breeding ratio than standard LWRs. Their use together with the current fleet of LWRs constitutes a fuel cycle thoroughly studied in Japan and the US today. However, one of the issues related to HCLWRs is their void reactivity coefficient (VRC), which can be positive. Accurate predictions of the void reactivity coefficient in HCLWR conditions and their comparison with representative experiments are therefore required. In this paper an intercomparison of modern codes and cross-section libraries is performed for a former Benchmark on Void Reactivity Effect in PWRs conducted by the OECD/NEA. It gives an overview of the k-inf values and their associated VRCs obtained for infinite lattice calculations with UO2 and highly enriched MOX fuel cells. The codes MCNPX2.5, TRIPOLI4.4 and CASMO-5 in conjunction with the libraries ENDF/B-VI.8, ENDF/B-VII.0, JEF-2.2 and JEFF-3.1 are used. A non-negligible spread of results for voided conditions is found for the high-content MOX fuel. The spreads of eigenvalues for the moderated and voided UO2 fuel are about 200 pcm and 700 pcm, respectively. The standard deviation of the VRCs for the UO2 fuel is about 0.7%, while that for the MOX fuel is about 13%. This work shows that an appropriate treatment of the unresolved resonance energy range is an important issue for the accurate determination of the void reactivity effect for HCLWRs. A comparison to experimental results is needed to resolve the presented discrepancies. (authors)
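The void reactivity effect follows directly from the lattice eigenvalues; a sketch using the standard reactivity definition ρ = (k − 1)/k, with hypothetical k-inf values (not the benchmark's results):

```python
def void_reactivity_pcm(k_moderated, k_voided):
    """Void reactivity effect, rho_void - rho_mod, in pcm (1 pcm = 1e-5),
    with reactivity rho = (k - 1)/k computed from lattice eigenvalues."""
    rho = lambda k: (k - 1.0) / k
    return (rho(k_voided) - rho(k_moderated)) * 1.0e5

# Hypothetical k-inf pair: a voided eigenvalue above the moderated one
# yields the positive VRC that is the safety concern for HCLWRs.
vrc = void_reactivity_pcm(1.1000, 1.1200)
```

Because the effect is a small difference of large numbers, the ~700 pcm eigenvalue spread quoted for voided conditions translates directly into a large VRC spread, which is why the library comparison matters.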
Systematic XAS study on the reduction and uptake of Tc by magnetite and mackinawite.
Yalçıntaş, Ezgi; Scheinost, Andreas C; Gaona, Xavier; Altmaier, Marcus
2016-11-28
The mechanisms for the reduction and uptake of Tc by magnetite (Fe3O4) and mackinawite (FeS) are investigated using X-ray absorption spectroscopy (XANES and EXAFS), in combination with thermodynamic calculations of the Tc/Fe systems and accurate characterization of the solution properties (pHm, pe, [Tc]). Batch sorption experiments were performed under strictly anoxic conditions using freshly prepared magnetite and mackinawite in 0.1 M NaCl solutions with varying initial Tc(VII) concentrations (2 × 10⁻⁵ and 2 × 10⁻⁴ M) and Tc loadings (400-900 ppm). XANES confirms the complete reduction of Tc(VII) to Tc(IV) in all investigated systems, as predicted from experimental (pHm + pe) measurements and thermodynamic calculations. Two Tc endmember species are identified by EXAFS in the magnetite system: Tc substituting for Fe in the magnetite structure, and Tc-Tc dimers sorbed to the magnetite {111} faces through a triple bond. The sorption endmember is favoured at higher [Tc], whereas incorporation prevails at low [Tc] and less alkaline pH conditions. The key role of pH in the uptake mechanism is interpreted in terms of magnetite solubility, with higher [Fe] and greater recrystallization rates occurring at lower pH values. A TcSx-like phase is predominant in all investigated mackinawite systems, although a contribution of up to 20% TcO2·xH2O(s) (likely as a surface precipitate) is observed for the highest investigated loadings (900 ppm). These results provide key inputs for an accurate mechanistic interpretation of the Tc uptake by magnetite and mackinawite, so far controversially discussed in the literature, and represent a highly relevant contribution to the investigation of Tc retention processes in the context of nuclear waste disposal.
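Endmember identification of this kind is commonly backed by linear-combination fitting of spectra; a minimal two-endmember least-squares sketch with invented spectra (not the authors' data):

```python
def lcf_fraction(sample, end_a, end_b):
    """One-parameter linear-combination fit, sample ~ f*end_a + (1-f)*end_b:
    closed-form least-squares fraction f, clamped to [0, 1]. Real XANES/EXAFS
    LCF uses full spectra and often more components; this is a sketch."""
    num = sum((s - b) * (a - b) for s, a, b in zip(sample, end_a, end_b))
    den = sum((a - b) ** 2 for a, b in zip(end_a, end_b))
    return max(0.0, min(1.0, num / den))

# Hypothetical normalized spectra on a shared energy grid.
incorporated = [0.1, 0.5, 0.9, 0.4]   # e.g. Tc substituting for Fe
sorbed       = [0.3, 0.2, 0.6, 0.8]   # e.g. surface-sorbed Tc-Tc dimer
mixture = [0.7 * a + 0.3 * b for a, b in zip(incorporated, sorbed)]
frac = lcf_fraction(mixture, incorporated, sorbed)   # recovers 0.7
```

The fitted fraction is what lets the authors say, for example, that up to 20% of a TcO2·xH2O-like component contributes at the highest loadings.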
Model-based framework for multi-axial real-time hybrid simulation testing
NASA Astrophysics Data System (ADS)
Fermandois, Gaston A.; Spencer, Billie F.
2017-10-01
Real-time hybrid simulation is an efficient and cost-effective dynamic testing technique for performance evaluation of structural systems subjected to earthquake loading with rate-dependent behavior. A loading assembly with multiple actuators is required to impose realistic boundary conditions on physical specimens. However, such a testing system is expected to exhibit significant dynamic coupling of the actuators and suffer from time lags that are associated with the dynamics of the servo-hydraulic system, as well as control-structure interaction (CSI). One approach to reducing experimental errors considers a multi-input, multi-output (MIMO) controller design, yielding accurate reference tracking and noise rejection. In this paper, a framework for multi-axial real-time hybrid simulation (maRTHS) testing is presented. The methodology employs a real-time feedback-feedforward controller for multiple actuators commanded in Cartesian coordinates. Kinematic transformations between actuator space and Cartesian space are derived for all six degrees of freedom of the moving platform. Then, a frequency domain identification technique is used to develop an accurate MIMO transfer function of the system. Further, a Cartesian-domain model-based feedforward-feedback controller is implemented for time lag compensation and to increase the robustness of the reference tracking for given model uncertainty. The framework is implemented using the 1/5th-scale Load and Boundary Condition Box (LBCB) located at the University of Illinois at Urbana-Champaign. To demonstrate the efficacy of the proposed methodology, a single-story frame subjected to earthquake loading is tested. One of the columns in the frame is represented physically in the laboratory as a cantilevered steel column. For real-time execution, the numerical substructure, kinematic transformations, and controllers are implemented on a digital signal processor.
Results show excellent performance of the maRTHS framework when all six degrees of freedom are controlled at the interface between substructures.
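The kinematic transformation from Cartesian commands to actuator space can be sketched in a reduced planar setting; the geometry below is hypothetical and far simpler than the LBCB's six-degree-of-freedom case:

```python
import math

def actuator_lengths(pose, base_pts, platform_pts):
    """Planar stand-in for the paper's kinematic transformation: map a
    platform pose (x, y, theta) to required actuator lengths by
    rigid-body-transforming each attachment point and measuring its
    distance to the fixed base anchor."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    lengths = []
    for (bx, by), (px, py) in zip(base_pts, platform_pts):
        wx = x + c * px - s * py   # attachment point in world coordinates
        wy = y + s * px + c * py
        lengths.append(math.hypot(wx - bx, wy - by))
    return lengths

# Hypothetical geometry: two horizontal actuators and one vertical one.
base = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0)]
plat = [(-0.5, 0.0), (0.5, 0.0), (0.0, -0.5)]
rest = actuator_lengths((0.0, 0.0, 0.0), base, plat)
moved = actuator_lengths((0.01, 0.0, 0.0), base, plat)  # small +x command
```

A single Cartesian command moves several actuators at once, which is exactly the coupling that motivates the MIMO controller design in the paper.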
Application of neuro-fuzzy methods to gamma spectroscopy
NASA Astrophysics Data System (ADS)
Grelle, Austin L.
Nuclear non-proliferation activities are an essential part of national security activities both domestic and abroad. The safety of the public in densely populated environments such as urban areas or large events can be compromised if devices using special nuclear materials are present. Therefore, the prompt and accurate detection of these materials is an important topic of research, in which the identification of normal conditions is also of importance. With gamma-ray spectroscopy, these conditions are identified as the radiation background, which, though affected by a multitude of factors, is ever present. Therefore, in nuclear non-proliferation activities the accurate identification of background is important. With this in mind, a method has been developed to utilize aggregate background data to predict the background of a location through the use of an Artificial Neural Network (ANN). After being trained on background data, the ANN is presented with nearby relevant gamma-ray spectroscopy data, as identified by a Fuzzy Inference System, to create a predicted background spectrum to compare to a measured spectrum. If a significant deviation exists between the predicted and measured data, the method alerts the user so that a more thorough investigation can take place. Research herein focused on data from an urban setting in which the number of false positives was observed to be 28 out of a total of 987, an error rate of about 2.8%. The method therefore currently shows a high rate of false positives given the current configuration; however, there are promising steps that can be taken to further minimize this error. With this in mind, the method stands as a potentially significant tool in urban nuclear non-proliferation activities.
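The deviation test and the reported false-positive count can be sketched as follows; the deviation metric and threshold here are illustrative assumptions, not the thesis's actual test:

```python
def deviation_alert(predicted, measured, threshold=3.0):
    """Alert when the summed squared channel-wise deviation between the
    ANN-predicted and measured background spectra exceeds a threshold.
    (Metric and threshold are illustrative, not the thesis's choices.)"""
    score = sum((m - p) ** 2 for p, m in zip(predicted, measured))
    return score > threshold

# Reported counts: 28 false positives out of 987 spectra.
rate = 100.0 * 28 / 987   # ~2.8 % false-positive rate
```

Tightening the threshold trades missed detections for fewer false alerts, which is the tuning direction the author identifies for reducing the error.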
Distributed adaptive diagnosis of sensor faults using structural response data
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-10-01
The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
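The correlation check on Fourier amplitudes at eigenfrequency peaks can be sketched as a simple ratio comparison; note the paper's embedded ANNs learn these relations from data rather than hard-coding them:

```python
def amplitude_ratios(peaks):
    """Normalize the Fourier amplitudes at the structure's eigenfrequency
    peaks by the first peak; for a healthy sensor at a given structural
    condition these ratios stay roughly constant."""
    ref = peaks[0]
    return [a / ref for a in peaks]

def is_faulty(observed, expected, tol=0.25):
    """Flag a sensor whose peak-amplitude ratios drift beyond tolerance."""
    return any(abs(o - e) > tol for o, e in zip(observed, expected))

expected = amplitude_ratios([2.0, 1.0, 0.5])    # learned healthy pattern
healthy  = amplitude_ratios([4.0, 2.0, 1.0])    # same shape, scaled gain
drifted  = amplitude_ratios([2.0, 1.8, 0.5])    # distorted second peak
```

Working with ratios of a few spectral peaks rather than raw time series is what keeps the on-board data traffic and power cost low.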
Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium
NASA Astrophysics Data System (ADS)
Zhang, Mingkan; Zhang, Ye
2015-03-01
A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate an equivalent hydraulic conductivity for each connectivity (or unit) of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical. Data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales. When the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems.
This article was corrected on 7 MAY 2015. See the end of the full text for details.
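Flow upscaling of the kind used for the HSMs assigns each unit an equivalent hydraulic conductivity; for an idealized layered unit that value is bracketed by the arithmetic and harmonic means of the layer conductivities. A quick sketch with hypothetical values:

```python
def arithmetic_mean(ks):
    """Upper bound on equivalent K: flow parallel to perfect layering."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Lower bound on equivalent K: flow perpendicular to perfect layering."""
    return len(ks) / sum(1.0 / k for k in ks)

# Hypothetical layer conductivities (m/day) within one hydrostratigraphic unit.
layers = [10.0, 1.0, 0.1]
k_upper = arithmetic_mean(layers)   # 3.7
k_lower = harmonic_mean(layers)     # ~0.27
```

The wide gap between the two bounds for variable layers hints at why upscaling errors grow with the system lnK variance in the study.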
NASA Astrophysics Data System (ADS)
Woolf, D.; Lehmann, J.
2016-12-01
The exchange of carbon between soils and the atmosphere represents an important uncertainty in climate predictions. Current Earth system models apply soil organic matter (SOM) models based on independent carbon pools with 1st-order decomposition dynamics. It has been widely argued over the last decade that such models do not accurately describe soil processes and mechanisms. For example, the long-term persistence of soil organic carbon (SOC) is only adequately described by such models through the post hoc assumption of passive or inert carbon pools. Further, such 1st-order models also fail to account for microbially mediated dynamics such as priming interactions. These shortcomings may limit their applicability to long-term predictions under conditions of global environmental change. In addition to incorporating recent conceptual advances in the mechanisms of SOM decomposition and protection, next-generation SOM models intended for use in Earth system models need to meet further quality criteria. Namely, they should (a) accurately describe historical data from long-term trials and the current global distribution of soil organic carbon, (b) be computationally efficient for the large number of iterations involved in climate modeling, and (c) have sufficiently simple parameterization that they can be run on spatially explicit data available at global scale under varying conditions of global change over long time scales. Here we show that linking fundamental ecological principles and microbial population dynamics to SOC turnover rates results in a dynamic model that meets all of these quality criteria. This approach simultaneously eliminates the need to postulate biogeochemically implausible passive or inert pools, instead showing how SOM persistence emerges from ecological principles, while also reproducing observed priming interactions.
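The contrast between 1st-order pools and microbially mediated dynamics can be made concrete with two toy update rules; the functional forms and parameter values below are invented for illustration, not the authors' model:

```python
def first_order_step(c, k, dt):
    """Classic 1st-order pool model: decay rate depends on pool size only."""
    return c - k * c * dt

def microbial_step(c, b, vmax, km, cue, kb, dt):
    """Microbially explicit step: decomposition saturates with substrate (c)
    and scales with microbial biomass (b), so fresh inputs grow biomass and
    accelerate later decay, a priming-type feedback that independent
    1st-order pools cannot represent."""
    uptake = vmax * b * c / (km + c)            # Michaelis-Menten uptake
    c_next = c - uptake * dt + kb * b * dt      # microbial necromass returns to SOC
    b_next = b + (cue * uptake - kb * b) * dt   # growth minus mortality
    return c_next, b_next

# Hypothetical parameters, chosen only to make the contrast visible.
c1 = first_order_step(100.0, 0.01, 1.0)        # always -1% per step
c2, b2 = microbial_step(100.0, 2.0, 0.5, 50.0, 0.4, 0.05, 1.0)
```

In the first rule the turnover rate is fixed; in the second it co-evolves with the microbial population, which is the mechanism the abstract argues makes passive-pool assumptions unnecessary.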
Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.
2012-01-01
This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of the attitude estimation error and despite the need for its accurate estimation in many applications, this problem that can be critical for the attitude estimation has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of the attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in the estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared, i.e., threshold-based switching approach versus acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Contrariwise, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy. PMID:22977288
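The accelerometer-only attitude computation, and why external acceleration corrupts it, can be sketched as follows; the complementary filter shown is a crude stand-in for the paper's Kalman filter, not its actual algorithm:

```python
import math

def accel_attitude(ax, ay, az):
    """Roll and pitch (rad) from a triaxial accelerometer; only valid when
    the measurement is dominated by gravity (static or quasi-static), since
    external acceleration adds directly to (ax, ay, az)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend short-term gyro integration with the long-term accelerometer
    angle. External acceleration corrupts accel_angle, which is why the
    paper's Kalman filter estimates the external acceleration explicitly
    instead of simply blending."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

roll, pitch = accel_attitude(0.0, 0.0, 9.81)   # level sensor: both angles 0
```

During prolonged high external acceleration the accelerometer term is biased for many consecutive updates, matching the gradually growing error the paper reports before conditions return to quasi-static.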
Modeling central metabolism and energy biosynthesis across microbial life
Edirisinghe, Janaka N.; Weisenhorn, Pamela; Conrad, Neal; ...
2016-08-08
Here, automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in predicting unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools (http://coremodels.mcs.anl.gov) to build high-quality core metabolic models (CMM) representing accurate energy biosynthesis based on a well-studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80%) of our models were found to have some type of aerobic ETC, whereas 5,100 (62%) have an anaerobic ETC, and 1,279 (15%) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for nearly 5,586 (70%) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that result in 2,495 (30%) CMMs being unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. In conclusion, we predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life.
We highlighted missing annotations that were essential to energy biosynthesis in our models. We examine the diversity of these pathways across all microbial life and enable the scientific community to explore the analyses generated from this large-scale analysis of over 8000 microbial genomes.
Harper, Annie; Rowe, Michael
2017-01-01
The Social Security Administration (SSA) recently completed an evaluation of the process by which representative payees are assigned. The SSA report is welcome, particularly for its focus on developing more accurate, real-world assessments of a person's financial capability and its recognition of the need for more flexible options for people with disabilities. Crucially, the report discusses the impact of the broader environment, specifically conditions related to living in poverty. However, it provides no guidance about environmental interventions that could enable more beneficiaries to manage their funds without a payee. Innovative financial products could be offered to beneficiaries, and the retail industry could develop processes to support responsible financial management by people with mental illness. Changes to SSA benefits systems, including raising benefit levels and asset limits, could enable more beneficiaries to manage their funds independently.
Clinical decision-making by midwives: managing case complexity.
Cioffi, J; Markham, R
1997-02-01
In making clinical judgements, it is argued that midwives use 'shortcuts' or heuristics based on estimated probabilities to simplify the decision-making task. Midwives (n = 30) were given simulated patient assessment situations of high and low complexity and were required to think aloud. Analysis of verbal protocols showed that subjective probability judgements (heuristics) were used more frequently in the high-complexity than in the low-complexity case and predominated in the last quarter of the assessment period for the high-complexity case. 'Representativeness' was identified more frequently in the high- than in the low-complexity case, but was the dominant heuristic in both. Reports completed after each simulation suggest that heuristics based on memory for particular conditions affect decisions. It is concluded that midwives use heuristics, derived mainly from their clinical experience, in an attempt to save cognitive effort and to make reasonably accurate decisions.
Bearing-Load Modeling and Analysis Study for Mechanically Connected Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2006-01-01
Bearing-load response for a pin-loaded hole is studied within the context of two-dimensional finite element analyses. Pin-loaded-hole configurations are representative of mechanically connected structures, such as a stiffener fastened to a rib of an isogrid panel, that are idealized as part of a larger structural component. Within this context, the larger structural component may be idealized as a two-dimensional shell finite element model to identify load paths and high-stress regions. Finite element modeling and analysis aspects of a pin-loaded hole are considered in the present paper, including the use of linear and nonlinear springs to simulate the pin-bearing contact condition. Simulating pin-connected structures within a two-dimensional finite element model using nonlinear spring or gap elements provides an effective way to accurately predict the local effective stress state and peak forces.
Microfluidic Organ/Body-on-a-Chip Devices at the Convergence of Biology and Microengineering
Perestrelo, Ana Rubina; Águas, Ana C. P.; Rainer, Alberto; Forte, Giancarlo
2015-01-01
Recent advances in biomedical technologies are mostly related to the convergence of biology with microengineering. For instance, microfluidic devices are now commonly found in most research centers, clinics and hospitals, contributing to more accurate studies and therapies as powerful tools for drug delivery, monitoring of specific analytes, and medical diagnostics. Most remarkably, integration of cellularized constructs within microengineered platforms has enabled the recapitulation of the physiological and pathological conditions of complex tissues and organs. This so-called “organ-on-a-chip” technology represents a new avenue in the field of advanced in vitro models, with the potential to revolutionize current approaches to drug screening and toxicology studies. This review aims to highlight recent advances of microfluidic-based devices towards a body-on-a-chip concept, exploring their technology and broad applications in the biomedical field. PMID:26690442
Photodermatoses in skin of color.
Gutierrez, Daniel; Gaulding, Jewell V; Beltran, Adriana F Motta; Lim, Henry W; Pritchett, Ellen N
2018-06-10
Photodermatoses represent a heterogeneous collection of disorders unified by the characteristic of being provoked by exposure to ultraviolet radiation. Generally, these conditions are classified into the following categories: immunologically mediated photodermatoses, chemical- and drug-induced photosensitivity, photoaggravated dermatoses, and photosensitivity associated with defective DNA repair mechanisms or chromosomal instabilities. The list of photodermatoses is extensive, and each individual photodermatosis is understood to a different extent. Regardless, there is a paucity of information with regard to the clinical presentation among those with skin of color. With ever-changing global demographics, recognition of photosensitive disorders in a diverse population is essential for accurate diagnosis and therapeutic guidance. This article reviews the epidemiology and clinical variability in presentation of such photodermatoses in patients with skin of color. This article is protected by copyright. All rights reserved.
Bilateral Glaucomatous Optic Neuropathy Caused by Eye Rubbing.
Savastano, Alfonso; Savastano, Maria Cristina; Carlomusto, Laura; Savastano, Silvio
2015-01-01
In this report, we describe the particular case of a 52-year-old man who showed advanced bilateral glaucomatous-like optic disc damage, even though intraocular pressure was found to be normal at every examination performed. Visual field testing, steady-state pattern electroretinography, and retinal nerve fiber layer and retinal tomographic evaluations were performed to assess the optic disc damage. Over a 4-year observational period, his visual acuity decreased to 12/20 in the right eye and counting fingers in the left eye. Visual fields were severely compromised, and intraocular pressure never exceeded 14 mm Hg during routine examinations. An accurate anamnesis and clinical suspicion of this condition are crucial to establishing the correct diagnosis. In fact, our patient rubbed his eyes vigorously for more than 10 h per day. Recurrent and continuous eye rubbing can induce a progressive optic neuropathy, causing severe visual field damage similar to that of advanced glaucoma.
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses a time-marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, in which the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot-fire test data and has been found to be in good agreement.
Numerical modeling and preliminary validation of drag-based vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Krysiński, Tomasz; Buliński, Zbigniew; Nowak, Andrzej J.
2015-03-01
The main purpose of this article is to verify and validate a mathematical description of the airflow around a wind turbine with a vertical axis of rotation, which could be considered representative for this type of device. Mathematical modeling of the airflow around wind turbines, in particular those with a vertical axis, is problematic due to the complex nature of this highly swirled flow. Moreover, the flow is turbulent and accompanied by rotation of the rotor and dynamic boundary-layer separation. In such conditions, the key aspects of the mathematical model are an accurate turbulence description, the definition of circular motion together with accompanying effects such as the centrifugal and Coriolis forces, and the parameters of spatial and temporal discretization. The paper presents the impact of different simulation parameters on the results of the wind turbine simulation. The analysed models have been validated against experimental data published in the literature.
Constraints on Generality (COG): A Proposed Addition to All Empirical Papers.
Simons, Daniel J; Shoda, Yuichi; Lindsay, D Stephen
2017-11-01
Psychological scientists draw inferences about populations based on samples (of people, situations, and stimuli) from those populations. Yet few papers identify their target populations, and even fewer justify how or why the tested samples are representative of broader populations. A cumulative science depends on accurately characterizing the generality of findings, but current publishing standards do not require authors to constrain their inferences, leaving readers to assume the broadest possible generalizations. We propose that the discussion section of all primary research articles specify Constraints on Generality (i.e., a "COG" statement) that identify and justify target populations for the reported findings. Explicitly defining the target populations will help other researchers to sample from the same populations when conducting a direct replication, and it could encourage follow-up studies that test the boundary conditions of the original finding. Universal adoption of COG statements would change publishing incentives to favor a more cumulative science.
Decoding spike timing: the differential reverse correlation method
Tkačik, Gašper; Magnasco, Marcelo O.
2009-01-01
It is widely acknowledged that detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met, but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that can separate the analysis of what causes a neuron to spike from what controls its timing. We analyze the leaky integrate-and-fire neuron with this method and show that it accurately reconstructs the model's kernel. PMID:18597928
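The quantity the abstract says gets smoothed is the classical spike-triggered average. A minimal sketch of that baseline computation (not the paper's differential method; the threshold trigger and window length below are illustrative assumptions):

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Average the stimulus segments immediately preceding each spike.

    stimulus:    1D array sampled on a uniform time grid
    spike_times: sample indices at which spikes occurred
    window:      number of samples preceding each spike to average
    """
    # Skip spikes too early in the record to have a full preceding window.
    segments = [stimulus[t - window:t] for t in spike_times if t >= window]
    return np.mean(segments, axis=0)

# Illustrative usage: spikes triggered when a random stimulus crosses a threshold.
rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)
spikes = [i for i in range(len(stim)) if stim[i] > 2.0]
sta = spike_triggered_average(stim, spikes, window=20)
```

If spike timing jitters with other stimulus features, as in the paper's example, this average blurs together misaligned segments, which is exactly the smoothing the differential method is designed to undo.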
Scaling behaviour for the water transport in nanoconfined geometries
Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro; Decuzzi, Paolo
2014-01-01
The transport of water in nanoconfined geometries is different from that in the bulk phase and has tremendous implications in nanotechnology and biotechnology. Here molecular dynamics is used to compute the self-diffusion coefficient D of water within nanopores, around nanoparticles, carbon nanotubes and proteins. For almost 60 different cases, D is found to scale linearly with the sole parameter θ as D(θ) = D_B[1 + (D_C/D_B − 1)θ], with D_B and D_C the bulk and totally confined diffusion coefficients of water, respectively. The parameter θ is primarily influenced by geometry and represents the ratio between the confined and total water volumes. The D(θ) relationship is interpreted within the thermodynamics of supercooled water. As an example, the relationship is shown to accurately predict the relaxometric response of contrast agents for magnetic resonance imaging. The D(θ) relationship can help in interpreting the transport of water molecules under nanoconfined conditions and in tailoring nanostructures with precise modulation of water mobility. PMID:24699509
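The scaling law quoted above is a one-line interpolation between the bulk and fully confined limits. A minimal sketch (the numerical values in the usage comment are illustrative assumptions, not data from the paper):

```python
def confined_diffusion(theta, d_bulk, d_confined):
    """Self-diffusion coefficient of nanoconfined water via the linear
    scaling law D(theta) = D_B * [1 + (D_C/D_B - 1) * theta].

    theta:      ratio of confined to total water volume, in [0, 1]
    d_bulk:     D_B, bulk diffusion coefficient
    d_confined: D_C, totally confined diffusion coefficient
    """
    if not 0.0 <= theta <= 1.0:
        raise ValueError("theta must lie in [0, 1]")
    return d_bulk * (1.0 + (d_confined / d_bulk - 1.0) * theta)

# Illustrative values only: bulk water near room temperature has
# D_B ~ 2.6e-9 m^2/s; d_confined here is an assumed confined limit.
d_half = confined_diffusion(0.5, d_bulk=2.6e-9, d_confined=0.4e-9)
```

By construction the law returns D_B at θ = 0 (no confinement) and D_C at θ = 1 (total confinement), with a straight line in between.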
Kim, Kyung Lock; Sung, Gihyun; Sim, Jaehwan; Murray, James; Li, Meng; Lee, Ara; Shrinidhi, Annadka; Park, Kyeng Min; Kim, Kimoon
2018-04-27
Here we report ultrastable synthetic binding pairs between cucurbit[7]uril (CB[7]) and adamantyl- (AdA) or ferrocenyl-ammonium (FcA) as a supramolecular latching system for protein imaging, overcoming the limitations of protein-based binding pairs. Cyanine 3-conjugated CB[7] (Cy3-CB[7]) can visualize AdA- or FcA-labeled proteins to provide clear fluorescence images for accurate and precise analysis of proteins. Furthermore, controllability of the system is demonstrated by treating with a stronger competitor guest. At low temperature, this allows us to selectively detach Cy3-CB[7] from guest-labeled proteins on the cell surface, while leaving Cy3-CB[7] latched to the cytosolic proteins for spatially conditional visualization of target proteins. This work represents a non-protein-based bioimaging tool which has inherent advantages over the widely used protein-based techniques, thereby demonstrating the great potential of this synthetic system.
Relative motion of orbiting satellites
NASA Technical Reports Server (NTRS)
Eades, J. B., Jr.
1972-01-01
The relative motion problem is analyzed, both as a linearized case and as a numerically determined solution, to provide a time history of the geometries representing the motion state. The displacement history and the hodographs for families of solutions are provided, analytically and graphically, to serve as an aid to understanding this problem area. Linearized solutions to relative motion problems of orbiting particles are presented for the impulsive and fixed-thrust cases. Second-order solutions are described to enhance the accuracy of prediction. A method was developed to obtain accurate numerical solutions to the intercept and rendezvous problem, and special situations are examined. A particular problem related to relative motions, in which the motion traces develop a cusp, is examined in detail. This phenomenon is found to depend on a particular relationship between orbital eccentricity and the inclination between orbital planes. These conditions are determined, and example situations are presented and discussed.
Far-field analysis of coupled bulk and boundary layer diffusion toward an ion channel entrance.
Schumaker, M F; Kentler, C J
1998-01-01
We present a far-field analysis of ion diffusion toward a channel embedded in a membrane with a fixed charge density. The Smoluchowski equation, which represents the 3D problem, is approximated by a system of coupled three- and two-dimensional diffusions. The 2D diffusion models the quasi-two-dimensional diffusion of ions in a boundary layer in which the electrical potential interaction with the membrane surface charge is important. The 3D diffusion models ion transport in the bulk region outside the boundary layer. Analytical expressions for concentration and flux are developed that are accurate far from the channel entrance. These provide boundary conditions for a numerical solution of the problem. Our results are used to calculate far-field ion flows corresponding to experiments of Bell and Miller (Biophys. J. 45:279, 1984). PMID:9591651
PB-AM: An open-source, fully analytical linear poisson-boltzmann solver.
Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa
2017-06-05
We present the open-source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using Visual Molecular Dynamics (VMD), a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to the larger group of scientists, educators, and students who are familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Lerch, Bradley A.; Saleeb, Atef F.; Kasemer, Matthew P.
2013-01-01
Time-dependent deformation and damage behavior can significantly affect the life of aerospace propulsion components. Consequently, one needs an accurate constitutive model that can represent both reversible and irreversible behavior under multiaxial loading conditions. This paper details the characterization and utilization of a multi-mechanism constitutive model of the GVIPS class (Generalized Viscoplastic with Potential Structure) that has been extended to describe the viscoelastoplastic deformation and damage of the titanium alloy Ti-6Al-4V. Associated material constants were characterized at five elevated temperatures where viscoelastoplastic behavior was observed, and at three elevated temperatures where damage (of both the stiffness reduction and strength reduction type) was incurred. Experimental data from a wide variety of uniaxial load cases were used to correlate and validate the proposed GVIPS model. Presented are the optimized material parameters, and the viscoelastoplastic deformation and damage responses at the various temperatures.
Nuclear Thermal Rocket Element Environmental Simulator (NTREES) Upgrade Activities
NASA Technical Reports Server (NTRS)
Emrich, William J. Jr.; Moran, Robert P.; Pearson, J. Boise
2012-01-01
To support the on-going nuclear thermal propulsion effort, a state-of-the-art non-nuclear experimental test setup has been constructed to evaluate the performance characteristics of candidate fuel element materials and geometries in representative environments. The facility that performs this testing is referred to as the Nuclear Thermal Rocket Element Environmental Simulator (NTREES). This device can simulate the environmental conditions (minus the radiation) to which nuclear rocket fuel components will be subjected during reactor operation. Test articles mounted in the simulator are inductively heated in such a manner as to accurately reproduce the temperatures and heat fluxes which would normally occur as a result of nuclear fission, and are exposed to flowing hydrogen. Initial testing of a somewhat prototypical fuel element has been successfully performed in NTREES, and the facility has now been shut down to allow for an extensive reconfiguration that will result in a significant upgrade of its capabilities.
Automatization of hydrodynamic modelling in a Floreon+ system
NASA Astrophysics Data System (ADS)
Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David
2017-07-01
The paper describes fully automatized hydrodynamic modelling as part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automatization of the process as a web service can provide immediate data on extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as fire-fighter operators or representatives of a military service organizing evacuation during floods or river dam breaks. The paper describes the whole process, beginning with the definition of a schematization necessary for the hydrodynamic model, through the gathering of the necessary data and its processing for a simulation, the model itself, and post-processing and visualization of the results on a web service. The process is demonstrated on real data collected during the 2010 floods in the Moravian-Silesian region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099
2015-03-15
A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models, as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second invariant of the rate-of-deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate-of-deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study.
The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study in which different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing-layer growth, of the cross-stream variation of homogeneous-plane averages and second-order correlations, and of visualizations indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will perform considerably more accurately than the standard ones.
Coupled thermal-fluid analysis with flowpath-cavity interaction in a gas turbine engine
NASA Astrophysics Data System (ADS)
Fitzpatrick, John Nathan
This study seeks to improve the understanding of the inlet conditions of a large rotor-stator cavity in a turbofan engine, often referred to as the drive cone cavity (DCC). The inlet flow is better understood through higher-fidelity computational fluid dynamics (CFD) modeling of the inlet to the cavity, and through a coupled finite element (FE) thermal and CFD fluid analysis of the cavity, in order to accurately predict engine component temperatures. Accurately predicting the temperature distribution in the cavity is important because temperatures directly affect material properties, including Young's modulus, yield strength, fatigue strength, and creep properties. All of these properties directly affect the life of critical engine components. In addition, temperatures cause thermal expansion, which changes clearances and in turn affects engine efficiency. The DCC is fed from the last stage of the high-pressure compressor. One of its primary functions is to purge air over the rotor wall to prevent it from overheating. Aero-thermal conditions within the DCC are particularly challenging to predict due to the complex air flow and high heat transfer in the rotating component. Thus, in order to accurately predict metal temperatures, a two-way coupled CFD-FE analysis is needed. Historically, when the cavity airflow is modeled for engine design purposes, the inlet condition has been over-simplified for the CFD analysis, which impacts the results, particularly in the region around the compressor disc rim. The inlet is typically simplified by circumferentially averaging the velocity field at the inlet to the cavity, which removes the effect of pressure wakes from the upstream rotor blades. The way in which these non-axisymmetric flow characteristics affect metal temperatures is not well understood. In addition, a constant air temperature scaled from a previous analysis is used as the simplified cavity inlet air temperature.
Therefore, the objectives of this study are: (a) model the DCC with a more physically representative inlet condition while coupling the solid thermal analysis and compressible air flow analysis that includes the fluid velocity, pressure, and temperature fields; (b) run a coupled analysis whose boundary conditions come from computational models rather than thermocouple data; (c) validate the model using available experimental data; and (d) based on the validation, determine whether the model can be used to predict air inlet and metal temperatures for new engine geometries. Verification with experimental results showed that the coupled analysis with the 3D no-bolt CFD model with predictive boundary conditions over-predicted the HP6 offtake temperature by 16 K. The maximum error was an over-prediction of 50 K, while the average error was 17 K. The predictive model with 3D bolts also predicted cavity temperatures with an average error of 17 K. Of the two CFD models with predicted boundary conditions, the case without bolts performed better than the case with bolts. This is due to the flow errors caused by placing stationary bolts in a rotating reference frame. It is therefore recommended that this type of analysis only be attempted for drive cone cavities with no bolts or shielded bolts.
NASA Astrophysics Data System (ADS)
Wu, Ming; Wu, Jianfeng; Wu, Jichun
2017-10-01
When a dense nonaqueous phase liquid (DNAPL) enters the subsurface environment, its migration behavior is crucially affected by the permeability and entry pressure of the subsurface porous media. A prerequisite for accurately simulating DNAPL migration in aquifers is therefore the determination of the permeability, entry pressure and corresponding representative elementary volumes (REV) of the porous media; however, these quantities are difficult to determine. This study utilizes the light transmission micro-tomography (LTM) method to determine the permeability and entry pressure of two-dimensional (2D) translucent porous media and integrates the LTM with a criterion of relative gradient error to quantify the corresponding REV of the porous media. As a result, DNAPL migration in porous media can be accurately simulated by discretizing the model at the REV dimension. To validate the quantification methods, an experiment of perchloroethylene (PCE) migration was conducted in a two-dimensional heterogeneous bench-scale aquifer cell. Based on the permeability, entry pressure and REV scales of the 2D porous media determined by the LTM and the relative gradient error, models with different discretization grid sizes were used to simulate the PCE migration. The model based on the REV size agrees well with the experimental results over the entire migration period, including the calibration, verification and validation processes. This helps to better understand the microstructures of porous media and to accurately simulate DNAPL migration in aquifers based on the REV estimation.
Kane, J.S.
1988-01-01
A study is described that identifies the optimum operating conditions for the accurate determination of Co, Cu, Mn, Ni, Pb, Zn, Ag, Bi and Cd using simultaneous multi-element atomic absorption spectrometry. Accuracy was measured in terms of the percentage recoveries of the analytes based on certified values in nine standard reference materials. In addition to identifying optimum operating conditions for accurate analysis, conditions resulting in serious matrix interferences, and the magnitude of those interferences, were determined. The listed elements can be measured with acceptable accuracy in a lean to stoichiometric flame at measurement heights of approximately 5-10 mm above the burner.
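The accuracy criterion in the study above is percentage recovery against certified reference values. A minimal sketch of that metric (the example numbers in the usage comment are hypothetical, not taken from the study):

```python
def percent_recovery(measured, certified):
    """Analyte recovery relative to its certified reference value,
    expressed as a percentage; 100% indicates perfect agreement."""
    if certified == 0:
        raise ValueError("certified value must be non-zero")
    return 100.0 * measured / certified

# Hypothetical example: a measured concentration of 9.8 against a
# certified value of 10.0 gives a 98% recovery.
recovery = percent_recovery(9.8, 10.0)
```

A laboratory would typically judge conditions acceptable when recoveries fall within a predefined band around 100%, with the band width depending on the analyte and matrix.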
NASA Technical Reports Server (NTRS)
Sarracino, Marcello
1941-01-01
The present article deals with what is considered to be a simpler and more accurate method of determining, from the results of bench tests under approved rating conditions, the power at altitude of a supercharged aircraft engine, without application of correction formulas. The method of calculating the characteristics at altitude, of supercharged engines, based on the consumption of air, is a more satisfactory and accurate procedure, especially at low boost pressures.
Wave propagation in equivalent continuums representing truss lattice materials
Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...
2015-07-29
Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small-deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability through the elimination of vertices in the effective yield surface.
Magnitude knowledge: the common core of numerical development.
Siegler, Robert S
2016-05-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. © 2016 John Wiley & Sons Ltd.
MacDonald, P M; Kirkpatrick, S W; Sullivan, L A
1996-11-01
Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two administration conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.
NASA Astrophysics Data System (ADS)
Campana, Claudia; Fidelibus, Maria Dolores
2015-11-01
The gypsum coastal aquifer of Lesina Marina (Puglia, southern Italy) has been affected by sinkhole formation in recent decades. Previous studies based on geomorphologic and hydrogeological data ascribed the onset of collapse phenomena to the erosion of material that fills palaeo-cavities (suffosion sinkholes). The change in the hydrodynamic conditions of groundwater induced by the excavation of a canal within the evaporite formation nearly 100 years ago was identified as the major factor in triggering the erosion, while the contribution of gypsum dissolution was considered negligible. A combined reactive-transport/density-dependent flow model was applied to the gypsum aquifer to evaluate whether gypsum dissolution rate is a dominant or insignificant factor in recent sinkhole formation under current hydrodynamic conditions. The conceptual model was first defined with a set of assumptions based on field and laboratory data along a two-dimensional transect of the aquifer, and then a density-dependent, tide-influenced flow model was set up and solved using the numerical code SEAWAT. Finally, the resulting transient flow field was used by the reactive multicomponent transport model PHT3D to estimate the gypsum dissolution rate. The validation tests show that the model accurately represents the real system, and the multi-disciplinary approach provides consistent information about the causes and evolution time of dissolution processes. The modelled porosity development rate is too low to represent a significant contribution to the recent sinkhole formation in the Lesina Marina area, although it justifies cavity formation and cavity position over geological time.
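The abstract's conclusion hinges on the dissolution rate being too low to matter under nearly saturated, sluggish flow conditions. As a hedged illustration (not the PHT3D kinetics from the study), a transport-limited gypsum dissolution rate law of the common form r = k (1 - C/C_eq)^n makes the point: the rate vanishes as groundwater approaches saturation. All parameter values below are illustrative.

```python
# Hedged sketch: a gypsum dissolution rate law of the form commonly used in
# karst/evaporite modelling, r = k * (1 - C/C_eq)^n. The rate constant,
# equilibrium concentration, and exponent are illustrative values, not
# parameters from the Lesina Marina study.

def gypsum_dissolution_rate(c, c_eq=15.0, k=1.3e-7, n=1.0):
    """Surface-normalized dissolution rate (mol m^-2 s^-1).

    c    : local dissolved CaSO4 concentration (mol m^-3)
    c_eq : equilibrium (saturation) concentration (mol m^-3)
    k    : rate constant (mol m^-2 s^-1)
    n    : reaction order
    """
    undersaturation = max(0.0, 1.0 - c / c_eq)  # clamp: no rate when saturated
    return k * undersaturation ** n

# Far from saturation the rate approaches k; at (or beyond) saturation it is
# zero, which is why dissolution is slow in a nearly saturated aquifer.
print(gypsum_dissolution_rate(0.0))    # -> 1.3e-07
print(gypsum_dissolution_rate(15.0))   # -> 0.0
```

The near-saturation limit is the quantitative reason a density-dependent flow model matters here: it controls how much undersaturated water actually reaches the gypsum surface.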
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitrani, J
Bayesian networks (BNs) are an excellent tool for modeling uncertainties in systems with several interdependent variables. A BN is a directed acyclic graph consisting of a structure, i.e., the set of directional links between variables that depend on other variables, and conditional probabilities (CPs) for each variable. In this project, we apply BNs to understand uncertainties in National Ignition Facility (NIF) ignition experiments. Various physical properties of NIF capsule implosions can be represented as variables in a BN. A dataset containing simulations of NIF capsule implosions was provided. The dataset was generated from a radiation hydrodynamics code, and it contained 120 simulations of 16 variables. Relevant knowledge about the physics of NIF capsule implosions and greedy search algorithms were used to search for hypothetical structures for a BN. Our preliminary results found 6 links between variables in the dataset. However, we expected more links between the dataset variables based on the physics of NIF capsule implosions. Important reasons for the paucity of links are the relatively small size of the dataset and the sampling of the values for dataset variables. Another possible factor is that in the dataset, 20% of the simulations represented successful fusion and 80% did not (simulations of unsuccessful fusion are useful for measuring certain diagnostics), which skewed the distributions of several variables and possibly reduced the number of links. Nevertheless, by illustrating the interdependencies and conditional probabilities of several parameters and diagnostics, an accurate and complete BN built from an appropriate simulation set would provide uncertainty quantification for NIF capsule implosions.
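The greedy structure search mentioned above can be sketched in a few lines. This is a hedged toy version: the scoring function (absolute Pearson correlation with a threshold) stands in for the BIC/BDeu scores typically used in BN structure learning, and the variable names are illustrative, not from the NIF dataset. The acyclicity check is the essential BN-specific ingredient.

```python
import itertools
import math

# Hedged sketch of a greedy, score-based Bayesian-network structure search.
# Edges are added in order of decreasing score, skipping any edge that would
# create a directed cycle (a BN must be a directed acyclic graph).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def creates_cycle(edges, new_edge):
    # DFS from the head of new_edge; a cycle exists iff we can reach its tail.
    children = {}
    for a, b in edges | {new_edge}:
        children.setdefault(a, []).append(b)
    stack, seen = [new_edge[1]], set()
    while stack:
        node = stack.pop()
        if node == new_edge[0]:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(children.get(node, []))
    return False

def greedy_structure(data, threshold=0.6):
    """data: dict mapping variable name -> list of samples."""
    names = list(data)
    candidates = sorted(
        ((abs(pearson(data[a], data[b])), a, b)
         for a, b in itertools.combinations(names, 2)),
        reverse=True)
    edges = set()
    for score, a, b in candidates:
        if score >= threshold and not creates_cycle(edges, (a, b)):
            edges.add((a, b))
    return edges

edges = greedy_structure({'x': [1, 2, 3, 4], 'y': [2, 4, 6, 8], 'z': [1, -1, 1, -1]})
print(edges)  # {('x', 'y')} -- only the strongly correlated pair survives
```

With only 120 samples, correlation (or score) estimates are noisy, which is one concrete way a small dataset yields fewer links than physics would suggest.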
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deichmann, Gregor; Marcon, Valentina; Vegt, Nico F. A. van der, E-mail: vandervegt@csi.tu-darmstadt.de
Molecular simulations of soft matter systems have been performed in recent years using a variety of systematically coarse-grained models. With these models, structural or thermodynamic properties can be quite accurately represented while the prediction of dynamic properties remains difficult, especially for multi-component systems. In this work, we use constraint molecular dynamics simulations for calculating dissipative pair forces which are used together with conditional reversible work (CRW) conservative forces in dissipative particle dynamics (DPD) simulations. The combined CRW-DPD approach aims to extend the representability of CRW models to dynamic properties and uses a bottom-up approach. Dissipative pair forces are derived from fluctuations of the direct atomistic forces between mapped groups. The conservative CRW potential is obtained from a similar series of constraint dynamics simulations and represents the reversible work performed to couple the direct atomistic interactions between the mapped atom groups. Neopentane, tetrachloromethane, cyclohexane, and n-hexane have been considered as model systems. These molecular liquids are simulated with atomistic molecular dynamics, coarse-grained molecular dynamics, and DPD. We find that the CRW-DPD models reproduce the liquid structure and diffusive dynamics of the liquid systems in reasonable agreement with the atomistic models when using single-site mapping schemes with beads containing five or six heavy atoms. For a two-site representation of n-hexane (3 carbons per bead), time scale separation can no longer be assumed and the DPD approach consequently fails to reproduce the atomistic dynamics.
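For readers unfamiliar with DPD, the standard pair-force decomposition the abstract builds on can be sketched directly. This is a hedged generic sketch (Groot–Warren-style soft repulsion and weight functions), not the CRW-derived forces of the paper: in CRW-DPD the conservative and dissipative terms are instead obtained from constrained atomistic simulations. All parameter values are placeholders in reduced units.

```python
import math
import random

# Hedged sketch of the standard DPD pair-force decomposition: conservative +
# dissipative + random, with the fluctuation-dissipation relation
# sigma^2 = 2 * gamma * kB*T tying the random-force amplitude to the friction.

def dpd_pair_force(r_vec, v_rel, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01):
    """Total DPD force on particle i from particle j (reduced units)."""
    r = math.sqrt(sum(c * c for c in r_vec))
    if r >= rc or r == 0.0:
        return (0.0,) * len(r_vec)              # pair outside the cutoff
    e = tuple(c / r for c in r_vec)             # unit vector from j to i
    w = 1.0 - r / rc                            # linear weight function w(r)
    f_c = a * w                                 # conservative: soft repulsion
    v_along = sum(v * c for v, c in zip(v_rel, e))
    f_d = -gamma * w * w * v_along              # dissipative: pairwise friction
    sigma = math.sqrt(2.0 * gamma * kT)         # fluctuation-dissipation theorem
    f_r = sigma * w * random.gauss(0.0, 1.0) / math.sqrt(dt)  # random kicks
    total = f_c + f_d + f_r
    return tuple(total * c for c in e)

# With kT = 0 the force is deterministic: pure soft repulsion plus friction.
print(dpd_pair_force((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), kT=0.0))  # (12.5, 0.0, 0.0)
```

The bottom-up step in the paper amounts to replacing the generic `f_c` and `gamma * w * w` terms with functions fitted to constrained atomistic force data.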
NASA Astrophysics Data System (ADS)
Womack, Caroline C.; Neuman, J. Andrew; Veres, Patrick R.; Eilerman, Scott J.; Brock, Charles A.; Decker, Zachary C. J.; Zarzana, Kyle J.; Dube, William P.; Wild, Robert J.; Wooldridge, Paul J.; Cohen, Ronald C.; Brown, Steven S.
2017-05-01
The sum of all reactive nitrogen species (NOy) includes NOx (NO2 + NO) and all of its oxidized forms, and the accurate detection of NOy is critical to understanding atmospheric nitrogen chemistry. Thermal dissociation (TD) inlets, which convert NOy to NO2 followed by NO2 detection, are frequently used in conjunction with techniques such as laser-induced fluorescence (LIF) and cavity ring-down spectroscopy (CRDS) to measure total NOy when set at > 600 °C or speciated NOy when set at intermediate temperatures. We report the conversion efficiency of known amounts of several representative NOy species to NO2 in our TD-CRDS instrument, under a variety of experimental conditions. We find that the conversion efficiency of HNO3 is highly sensitive to the flow rate and the residence time through the TD inlet as well as the presence of other species that may be present during ambient sampling, such as ozone (O3). Conversion of HNO3 at 400 °C, nominally the set point used to selectively convert organic nitrates, can range from 2 to 6 % and may represent an interference in measurement of organic nitrates under some conditions. The conversion efficiency is strongly dependent on the operating characteristics of individual quartz ovens and should be well calibrated prior to use in field sampling. We demonstrate quantitative conversion of both gas-phase N2O5 and particulate ammonium nitrate in the TD inlet at 650 °C, which is the temperature normally used for conversion of HNO3. N2O5 has two thermal dissociation steps, one at low temperature representing dissociation to NO2 and NO3 and one at high temperature representing dissociation of NO3, which produces exclusively NO2 and not NO. We also find a significant interference from partial conversion (5-10 %) of NH3 to NO at 650 °C in the presence of representative (50 ppbv) levels of O3 in dry zero air. 
Although this interference appears to be suppressed when sampling ambient air, we nevertheless recommend regular characterization of this interference using standard additions of NH3 to TD instruments that convert reactive nitrogen to NO or NO2.
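The two quantities at the heart of the abstract, conversion efficiency from a standard addition and the resulting interference correction, are simple ratios. The sketch below is hedged: function names and the mixing-ratio numbers are illustrative, and the assumed 4 % HNO3 conversion sits in the 2-6 % range the abstract reports for the 400 °C channel.

```python
# Hedged sketch: conversion efficiency from a standard addition, and a
# first-order correction of an organic-nitrate channel for partial HNO3
# conversion at 400 C. Mixing ratios are illustrative (e.g. pptv), not data
# from the paper.

def conversion_efficiency(no2_measured, standard_delivered):
    """Fraction of a delivered NOy standard detected as NO2."""
    return no2_measured / standard_delivered

def correct_organic_nitrates(channel_400C, hno3, ce_hno3=0.04):
    """Subtract the HNO3 interference (2-6 % conversion; 4 % assumed here)."""
    return channel_400C - ce_hno3 * hno3

print(conversion_efficiency(98.0, 100.0))      # -> 0.98
print(correct_organic_nitrates(120.0, 500.0))  # -> 100.0
```

Because `ce_hno3` varies between ovens and with flow rate, the abstract's recommendation amounts to re-measuring this coefficient regularly rather than assuming a literature value.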
Field Demonstration of Condition Assessment Technologies for Wastewater Collection Systems
Reliable information on pipe condition is needed to accurately estimate the remaining service life of wastewater collection system assets. Although inspections with conventional closed-circuit television (CCTV) have been the mainstay of pipeline condition assessment for decades,...
Terrestrial laser scanning-based bridge structural condition assessment : InTrans project reports.
DOT National Transportation Integrated Search
2016-05-01
Objective, accurate, and fast assessment of a bridge's structural condition is critical to the timely assessment of safety risks. Current practices for bridge condition assessment rely on visual observations and manual interpretation of reports a...
NASA Technical Reports Server (NTRS)
Craidon, C. B.
1983-01-01
A computer program was developed to extend the geometry input capabilities of previous versions of a supersonic zero lift wave drag computer program. The arbitrary geometry input description is flexible enough to describe almost any complex aircraft concept, so that highly accurate wave drag analysis can now be performed because complex geometries can be represented accurately and do not have to be modified to meet the requirements of a restricted input format.
2015-10-30
accurately follow the development of the Black Hawk helicopters, a single main rotor model in NDARC that accurately represented the UH-60A is required. NDARC...Weight changes were based on results from Nixon's paper, which focused on modeling the structure of a composite rotor blade and using optimization to...conclude that improved composite design to further reduce weight needs to be achieved. An additionally interesting effect is how the rotor technology
Effective and Accurate Colormap Selection
NASA Astrophysics Data System (ADS)
Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.
2016-12-01
Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually-appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
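A concrete way to see the "false gradient exaggeration" the abstract warns about is to measure CIE L* (lightness), which is constructed so that equal L* differences are perceived as roughly equal. This hedged sketch (standard CIE formula, D65 white) shows that a gray ramp with equal steps in physical luminance is far from perceptually uniform: dark-end steps look about nine times bigger than bright-end steps.

```python
# Hedged sketch: CIE L* as a proxy for perceived lightness. Equal steps in
# *physical* luminance are not equal perceptual steps, which is one reason
# colormaps must be designed in a perceptual space rather than in raw
# intensity. (Standard CIE lightness formula; illustrative gray ramp.)

def luminance_to_lightness(y):
    """CIE L* from relative luminance y in [0, 1]."""
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

ramp = [i / 10 for i in range(11)]                    # equal luminance steps
lightness = [luminance_to_lightness(y) for y in ramp]
steps = [round(b - a, 1) for a, b in zip(lightness, lightness[1:])]
print(steps[0], steps[-1])  # 37.8 4.0 -- perceived step sizes differ ~9x
```

Perceptually uniform colormap design inverts this: colors are spaced so that the perceptual distance (in practice a full color-difference metric like CIEDE2000, not L* alone) is constant along the map.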
A hybrid method for accurate star tracking using star sensor and gyros.
Lu, Jiazhen; Yang, Lie; Zhang, Hao
2017-10-01
Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
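The core of the hybrid scheme above is a mode switch driven by the estimated angular acceleration. The sketch below is a hedged, scalar simplification: real implementations work with three-axis quantities, and the threshold value is illustrative, not from the paper.

```python
# Hedged sketch of the mode-switching logic described in the abstract: under
# low dynamics, angular velocity comes from the star-vector difference method;
# under high dynamics, the calibrated gyros are trusted instead. Scalar
# stand-in for the three-axis case; threshold is illustrative.

def estimate_angular_velocity(accel_est, omega_star, omega_gyro,
                              accel_threshold=0.5):
    """Select the angular-velocity source from the estimated angular
    acceleration (rad/s^2)."""
    if abs(accel_est) < accel_threshold:
        return omega_star   # low dynamics: star-vector difference method
    return omega_gyro       # high dynamics: calibrated gyros

print(estimate_angular_velocity(0.1, 0.010, 0.012))  # -> 0.01
print(estimate_angular_velocity(2.0, 0.010, 0.012))  # -> 0.012
```

The selected angular velocity then propagates the attitude forward to predict each star's position on the detector for the next tracking frame.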
Analysis of the electromagnetic scattering from an inlet geometry with lossy walls
NASA Technical Reports Server (NTRS)
Myung, N. H.; Pathak, P. H.; Chunang, C. D.
1985-01-01
One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.
Realistic Analytical Polyhedral MRI Phantoms
Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.
2015-01-01
Purpose: Analytical phantoms have closed-form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed-form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory: The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high-frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID: 26479724
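The key property exploited above, that simple shapes have closed-form Fourier transforms evaluable at arbitrary k-space locations, can be demonstrated on the simplest case. This hedged sketch uses a centered 2-D rectangle (FT = product of sinc terms) checked against a brute-force integral; it is only an illustration of the principle, as the paper's polygon/polyhedron results generalize this via surface-integral identities.

```python
import cmath
import math

# Hedged sketch: closed-form Fourier transform of a centered wx-by-wy
# rectangle, F(kx, ky) = sinc-like product, verified against a midpoint-rule
# numerical integral. Analytical phantoms evaluate such expressions exactly
# at each sampled k-space point, avoiding rasterization artifacts.

def rect_ft(kx, ky, wx=1.0, wy=1.0):
    """Closed-form FT of a wx-by-wy rectangle centered at the origin."""
    def sinc_term(k, w):
        return w if k == 0 else math.sin(math.pi * k * w) / (math.pi * k)
    return sinc_term(kx, wx) * sinc_term(ky, wy)

def rect_ft_numeric(kx, ky, wx=1.0, wy=1.0, n=200):
    """Midpoint-rule approximation of the same transform."""
    dx, dy = wx / n, wy / n
    total = 0.0
    for i in range(n):
        x = -wx / 2 + (i + 0.5) * dx
        for j in range(n):
            y = -wy / 2 + (j + 0.5) * dy
            total += cmath.exp(-2j * math.pi * (kx * x + ky * y)).real * dx * dy
    return total

print(abs(rect_ft(0.3, 0.7) - rect_ft_numeric(0.3, 0.7)) < 1e-4)  # True
```

Because the closed form holds for any (kx, ky), simulated acquisitions can sample arbitrary non-Cartesian trajectories without interpolation error.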
Borojeni, Azadeh A.T.; Frank-Ito, Dennis O.; Kimbell, Julia S.; Rhee, John S.; Garcia, Guilherme J. M.
2016-01-01
Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction (NAO) patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CBCT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the CBCT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from thirty CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the averages in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 NAO patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs. patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect on nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx. PMID: 27525807
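The geometric construction described above, an ellipse matching a target cross-sectional area and aspect ratio, follows directly from A = pi*a*b and r = a/b. A minimal sketch (function name illustrative):

```python
import math

# Hedged sketch of the cross-section construction in the abstract: each slice
# of the idealized nasopharynx is an ellipse whose area A and aspect ratio
# r = a/b match the average of the patient-specific models. Solving
# A = pi*a*b with a = r*b gives b = sqrt(A / (pi*r)), a = r*b.

def ellipse_semi_axes(area, aspect_ratio):
    """Return (a, b): semi-major and semi-minor axes from area and a/b ratio."""
    b = math.sqrt(area / (math.pi * aspect_ratio))
    a = aspect_ratio * b
    return a, b

a, b = ellipse_semi_axes(math.pi * 6.0, 1.5)  # area = pi*a*b with a*b = 6
print(round(a, 3), round(b, 3))               # -> 3.0 2.0
```

Stacking such ellipses along the averaged centerline yields the idealized outlet geometry used as the CFD boundary condition.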
Kefal, Adnan; Yildiz, Mehmet
2017-11-30
This paper investigated the effect of sensor density and alignment for three-dimensional shape sensing of an airplane-wing-shaped thick panel subjected to three different loading conditions, i.e., bending, torsion, and membrane loads. For shape sensing analysis of the panel, the Inverse Finite Element Method (iFEM) was used together with the Refined Zigzag Theory (RZT), in order to enable accurate predictions for transverse deflection and through-the-thickness variation of interfacial displacements. In this study, the iFEM-RZT algorithm is implemented by utilizing a novel three-node C0-continuous inverse-shell element, known as i3-RZT. The discrete strain data is generated numerically by performing a high-fidelity finite element analysis on the wing-shaped panel. This numerical strain data represents experimental strain readings obtained from surface-patched strain gauges or embedded fiber Bragg grating (FBG) sensors. Three different sensor placement configurations with varying density and alignment of strain data were examined and their corresponding displacement contours were compared with those of reference solutions. The results indicate that a sparse distribution of FBG sensors (uniaxial strain measurements), aligned in only the longitudinal direction, is sufficient for predicting accurate full-field membrane and bending responses (deformed shapes) of the panel, including a true zigzag representation of interfacial displacements. On the other hand, a sparse deployment of strain rosettes (triaxial strain measurements) is sufficient to produce torsion shapes as accurate as those predicted by a dense sensor placement configuration. Hence, the potential applicability and practical aspects of the i3-RZT/iFEM methodology are demonstrated for three-dimensional shape sensing of future aerospace structures.
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-03-15
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on separating the superimposed abrasive-debris signals of an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating the debris and computing its morphology and amount, the RMF-based abrasive sensor can provide wear-trend and particle-size estimates for the system. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.
Liu, Xiang -Yang; Cooper, Michael William D.; McClellan, Kenneth James; ...
2016-10-25
Uranium dioxide (UO2) is the most commonly used fuel in light-water nuclear reactors and thermal conductivity controls the removal of heat produced by fission, thereby governing fuel temperature during normal and accident conditions. The use of fuel performance codes by the industry to predict operational behavior is widespread. A primary source of uncertainty in these codes is thermal conductivity, and optimized fuel utilization may be possible if existing empirical models are replaced with models that incorporate explicit thermal-conductivity-degradation mechanisms during fuel burn up. This approach is able to represent the degradation of thermal conductivity due to each individual defect type, rather than the overall burn-up measure typically used, which is not an accurate representation of the chemical or microstructure state of the fuel that actually governs thermal conductivity and other properties. To generate a mechanistic thermal conductivity model, molecular dynamics (MD) simulations of UO2 thermal conductivity including representative uranium and oxygen defects and fission products are carried out. These calculations employ a standard Buckingham-type interatomic potential and a potential that combines the many-body embedded-atom-method potential with Morse-Buckingham pair potentials. Potential parameters for UO2+x and ZrO2 are developed for the latter potential. Physical insights from the resonant phonon-spin-scattering mechanism due to spins on the magnetic uranium ions are introduced into the treatment of the MD results, with the corresponding relaxation time derived from existing experimental data. High defect scattering is predicted for Xe atoms compared to that of La and Zr ions. Uranium defects reduce the thermal conductivity more than oxygen defects.
For each defect and fission product, scattering parameters are derived for application in both a Callaway model and the corresponding high-temperature model typically used in fuel-performance codes. The model is validated by comparison to low-temperature experimental measurements on single-crystal hyperstoichiometric UO2+x samples and high-temperature literature data. Furthermore, this work will enable more accurate fuel-performance simulations and will extend to new fuel types and operating conditions, all of which improve the fuel economics of nuclear energy and maintain high fuel reliability and safety.
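The high-temperature model mentioned above is typically of the form kappa = 1/(A + B*T), where each defect species contributes additively to the phonon-scattering term A in proportion to its concentration. The sketch below is hedged: the baseline coefficients are representative of fresh UO2, but the per-defect scattering coefficients are illustrative placeholders, not the fitted parameters from the study.

```python
# Hedged sketch of the high-temperature lattice thermal-conductivity form
# used in fuel-performance codes, kappa = 1/(A + B*T). Defects add to the
# temperature-independent scattering term A. Coefficients are illustrative.

def thermal_conductivity(T, defects, A0=0.0375, B=2.165e-4):
    """kappa (W/m/K) at temperature T (K).

    defects: {name: (concentration, scattering coefficient per unit conc.)}
    """
    A = A0 + sum(conc * a_i for conc, a_i in defects.values())
    return 1.0 / (A + B * T)

fresh = thermal_conductivity(1000.0, {})
# Per the abstract, Xe atoms scatter phonons more strongly than Zr ions, so
# Xe gets the larger (illustrative) coefficient here.
irradiated = thermal_conductivity(1000.0, {"Xe": (0.01, 2.0), "Zr": (0.01, 0.5)})
print(round(fresh, 2), round(irradiated, 2))  # -> 3.94 3.58
```

Replacing a single burn-up correlation with per-defect terms like these is what lets the model track the actual chemical and microstructural state of the fuel.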
Dyer, Joseph J.; Brewer, Shannon K.; Worthington, Thomas A.; Bergey, Elizabeth A.
2013-01-01
1. A major limitation to effective management of narrow-range crayfish populations is the paucity of information on the spatial distribution of crayfish species and a general understanding of the interacting environmental variables that drive current and future potential distributional patterns. 2. Maximum Entropy Species Distribution Modeling Software (MaxEnt) was used to predict the current and future potential distributions of four endemic crayfish species in the Ouachita Mountains. Current distributions were modelled using climate, geology, soils, land use, landform and flow variables thought to be important to lotic crayfish. Potential changes in the distribution were forecast by using models trained on current conditions and projecting onto the landscape predicted under climate-change scenarios. 3. The modelled distribution of the four species closely resembled the perceived distribution of each species but also predicted populations in streams and catchments where they had not previously been collected. Soils, elevation and winter precipitation and temperature most strongly related to current distributions and represented 65-87% of the predictive power of the models. Model accuracy was high for all models, and model predictions of new populations were verified through additional field sampling. 4. Current models created using two spatial resolutions (1 and 4.5 km²) showed that fine-resolution data more accurately represented current distributions. For three of the four species, the 1-km² resolution models resulted in more conservative predictions. However, the modelled distributional extent of Orconectes leptogonopodus was similar regardless of data resolution. Field validations indicated 1-km² resolution models were more accurate than 4.5-km² resolution models.
5. Future projected model distributions (4.5-km² resolution) indicated three of the four endemic species would have truncated ranges with low occurrence probabilities under the low-emission scenario, whereas two of four species would be severely restricted in range under moderate-high emissions. Discrepancies in the two emission scenarios probably relate to the exclusion of behavioural adaptations from species-distribution models. 6. These model predictions illustrate possible impacts of climate change on narrow-range endemic crayfish populations. The predictions do not account for biotic interactions, migration, local habitat conditions or species adaptation. However, we identified the constraining landscape features acting on these populations that provide a framework for addressing habitat needs at a fine scale and developing targeted and systematic monitoring programmes.
2017-09-01
ERDC/CHL TR-17-15. Strategic Environmental Research and Development Program. Develop Accurate Methods for Characterizing and...current environments. This research will provide more accurate methods for assessing contaminated sediment stability for many DoD and Environmental...Executive Summary Objective: The proposed research goal is to develop laboratory methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Li; Tunega, Daniel; Xu, Lai
2013-08-29
In a previous study (J. Phys. Chem. C 2011, 115, 12403) cluster models for the TiO2 rutile (110) surface and MP2 calculations were used to develop an analytic potential energy function for dimethyl methylphosphonate (DMMP) interacting with this surface. In the work presented here, this analytic potential and MP2 cluster models are compared with DFT "slab" calculations for DMMP interacting with the TiO2 (110) surface and with DFT cluster models for the TiO2 (110) surface. The DFT slab calculations were performed with the PW91 and PBE functionals. The analytic potential gives DMMP/TiO2 (110) potential energy curves in excellent agreement with those obtained from the slab calculations. The cluster models for the TiO2 (110) surface, used for the MP2 calculations, were extended to DFT calculations with the B3LYP, PW91, and PBE functionals. These DFT calculations do not give DMMP/TiO2 (110) interaction energies that agree with those from the DFT slab calculations. Analyses of the wave functions for these cluster models show that they do not accurately represent the HOMO and LUMO for the surface, which should be 2p and 3d orbitals, respectively, and the models also do not give an accurate band gap. The MP2 cluster models likewise do not accurately represent the LUMO; that they nevertheless give accurate DMMP/TiO2 (110) interaction energies is apparently fortuitous, arising from their highly inaccurate band gaps. Accurate cluster models, consisting of 7, 10, and 15 Ti atoms and having the correct HOMO and LUMO properties, are proposed. The work presented here illustrates the care that must be taken in constructing cluster models that accurately represent surfaces.
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation.
Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
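The coefficient-parameterization idea can be illustrated with a minimal Stuart-Landau-type sketch in Python; the linear growth-rate law, the critical Reynolds number, and all constants below are invented for illustration and are not the paper's actual POD-derived coefficients:

```python
import numpy as np
from scipy.integrate import solve_ivp

RE_CRIT = 50.0  # invented critical Reynolds number (Hopf point)

def growth_rate(Re):
    """Toy Re-parameterization of the linear coefficient: the mode's
    growth rate crosses zero at RE_CRIT, producing a Hopf bifurcation."""
    return 0.1 * (Re - RE_CRIT) / RE_CRIT

def rom_rhs(t, a, Re):
    """Two-mode Galerkin-style ROM in Stuart-Landau normal form; the
    cubic terms stand in for kinetic-energy transfer to neglected modes."""
    sigma, omega = growth_rate(Re), 1.0
    a1, a2 = a
    r2 = a1 ** 2 + a2 ** 2
    return [sigma * a1 - omega * a2 - r2 * a1,
            omega * a1 + sigma * a2 - r2 * a2]

def limit_cycle_amplitude(Re):
    """Integrate past transients and return the late-time amplitude."""
    sol = solve_ivp(rom_rhs, (0.0, 500.0), [0.01, 0.0], args=(Re,),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    tail = sol.sol(np.linspace(450.0, 500.0, 200))
    return float(np.sqrt(tail[0] ** 2 + tail[1] ** 2).mean())
```

Below `RE_CRIT` the amplitude decays to zero; above it the same system, evaluated at a different Re, settles on a limit cycle of amplitude about `sqrt(sigma)`. Capturing this kind of Re-dependence with a single parameterized model is the behavior the method aims at.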
NASA Astrophysics Data System (ADS)
Purger, David; McNutt, Todd; Achanta, Pragathi; Quiñones-Hinojosa, Alfredo; Wong, John; Ford, Eric
2009-12-01
The C57BL/6J laboratory mouse is commonly used in neurobiological research. Digital atlases of the C57BL/6J brain have been used for visualization, genetic phenotyping and morphometry, but currently lack the ability to accurately calculate deviations between individual mice. We developed a fully three-dimensional digital atlas of the C57BL/6J brain based on the histology atlas of Paxinos and Franklin (2001 The Mouse Brain in Stereotaxic Coordinates 2nd edn (San Diego, CA: Academic)). The atlas uses triangular meshes to represent the various structures. The atlas structures can be overlaid and deformed to individual mouse MR images. For this study, we selected 18 structures from the histological atlas. Average atlases can be created for any group of mice of interest by calculating the mean three-dimensional positions of corresponding individual mesh vertices. As a validation of the atlas' accuracy, we performed deformable registration of the lateral ventricles to 13 MR brain scans of mice in three age groups: 5, 8 and 9 weeks old. Lateral ventricle structures from individual mice were compared to the corresponding average structures and the original histology structures. We found that the average structures created using our method more accurately represent individual anatomy than histology-based atlases alone, with mean vertex deviations of 0.044 mm versus 0.082 mm for the left lateral ventricle and 0.045 mm versus 0.068 mm for the right lateral ventricle. Our atlas representation gives direct spatial deviations for structures of interest. Our results indicate that MR-deformable histology-based atlases represent a reliable means of obtaining accurate morphometric measurements of a population of mice, and that this method may be applied to future phenotyping experiments as well as to precision targeting of surgical procedures or radiation treatment.
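The vertex-averaging and deviation steps described above can be sketched as follows, assuming corresponding mesh vertices stored as `(n_mice, n_vertices, 3)` arrays; the data here are placeholders, not the study's meshes:

```python
import numpy as np

def average_structure(meshes):
    """Average atlas: mean 3-D position of each corresponding vertex
    across mice; `meshes` has shape (n_mice, n_vertices, 3)."""
    return np.mean(meshes, axis=0)

def mean_vertex_deviation(mesh_a, mesh_b):
    """Mean Euclidean distance (e.g., in mm) between corresponding
    vertices of two structures, as used to compare individual anatomy
    against an average or histology structure."""
    diff = np.asarray(mesh_a) - np.asarray(mesh_b)
    return float(np.linalg.norm(diff, axis=1).mean())

# Placeholder data: two "mice", four corresponding vertices each.
meshes = np.zeros((2, 4, 3))
meshes[1] += 2.0
avg = average_structure(meshes)  # each vertex averages to (1, 1, 1)
```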
Social learning spreads knowledge about dangerous humans among American crows.
Cornell, Heather N; Marzluff, John M; Pecoraro, Shannon
2012-02-07
Individuals face evolutionary trade-offs between the acquisition of costly but accurate information gained firsthand and the use of inexpensive but possibly less reliable social information. American crows (Corvus brachyrhynchos) use both sources of information to learn the facial features of a dangerous person. We exposed wild crows to a novel 'dangerous face' by wearing a unique mask as we trapped, banded and released 7-15 birds at five study sites near Seattle, WA, USA. An immediate scolding response to the dangerous mask after trapping by previously captured crows demonstrates individual learning, while an immediate response by crows that were not captured probably represents conditioning to the trapping scene by the mob of birds that assembled during the capture. Later recognition of dangerous masks by lone crows that were never captured is consistent with horizontal social learning. Independent scolding by young crows, whose parents had conditioned them to scold the dangerous mask, demonstrates vertical social learning. Crows that directly experienced trapping later discriminated between dangerous and neutral masks more precisely than did crows that learned through social means. Learning enabled scolding to double in frequency and spread at least 1.2 km from the place of origin over a 5-year period at one site.
ASSESSMENT OF EYE LENS DOSES IN INTERVENTIONAL RADIOLOGY: A SIMULATION IN LABORATORY CONDITIONS.
Čemusová, Z; Ekendahl, D; Judas, L
2016-09-01
As workers in interventional radiology belong to one of the most occupationally exposed groups, methods for sufficiently accurate quantification of their external exposure are sought. The objective of the authors' experiment was to investigate the relations between eye lens dose and Hp(10), Hp(3) or Hp(0.07) values measured with a conventional whole-body personal thermoluminescence dosemeter (TLD). Conditions of occupational exposure during common interventional procedures were simulated in the laboratory. An anthropomorphic phantom represented a physician. The TLDs were fixed to the phantom in different locations that are common for purposes of personal dosimetry. In order to monitor the dose at the eye lens level during the exposures, a special thermoluminescence eye dosemeter was fixed to the phantom's temple. Correlations between doses measured with the whole-body and the eye dosemeters were found. There are indications that personnel in interventional radiology do not need to be unconditionally equipped with additional eye dosemeters, especially if an appropriate whole-body dosimetry system has already been put into practice.
Lactate Dehydrogenase in Hepatocellular Carcinoma: Something Old, Something New.
Faloppi, Luca; Bianconi, Maristella; Memeo, Riccardo; Casadei Gardini, Andrea; Giampieri, Riccardo; Bittoni, Alessandro; Andrikou, Kalliopi; Del Prete, Michela; Cascinu, Stefano; Scartozzi, Mario
2016-01-01
Hepatocellular carcinoma (HCC) is the most common primary liver tumour (80-90%) and represents more than 5.7% of all cancers. Although in recent years the therapeutic options for these patients have increased, clinical results are as yet unsatisfactory and the prognosis remains dismal. Clinical or molecular criteria allowing a more accurate selection of patients are in fact largely lacking. Lactate dehydrogenase (LDH) is a key glycolytic enzyme in the conversion of pyruvate to lactate under anaerobic conditions. In preclinical models, upregulation of LDH has been suggested to ensure both an efficient anaerobic/glycolytic metabolism and a reduced dependence on oxygen under hypoxic conditions in tumour cells. Data from several analyses on different tumour types suggest that LDH levels may be a significant prognostic factor. The role of LDH in HCC has been investigated by different authors in heterogeneous populations of patients. It has been tested as a potential biomarker in retrospective, small, and nonfocused studies of patients undergoing surgery, transarterial chemoembolization (TACE), and systemic therapy. In most of these studies, high LDH serum levels seem to predict a poorer outcome. We have reviewed the literature in this setting with the aim of providing a basis for future studies validating the role of LDH in this disease.
Biofiltration of high concentration of H2S in waste air under extreme acidic conditions.
Ben Jaber, Mouna; Couvert, Annabelle; Amrane, Abdeltif; Rouxel, Franck; Le Cloirec, Pierre; Dumont, Eric
2016-01-25
Removal of high concentrations of hydrogen sulfide was performed using a biofilter packed with expanded schist under extreme acidic conditions. The impact of various parameters such as H2S concentration, pH changes and sulfate accumulation on the performance of the process was evaluated. Elimination efficiency decreased when the pH was lower than 1 and the sulfate accumulation exceeded 12 mg S-SO4(2-)/g dry media, due to continuous overloading by high H2S concentrations. The influence of these parameters on the degradation of H2S was clearly underlined, showing the need for their control, performed through an increase of the watering flow rate. A maximum elimination capacity (ECmax) of 24.7 g m(-3) h(-1) was recorded. As a result, expanded schist represents an interesting packing material for removing high H2S concentrations up to 360 ppmv with low pressure drops. In addition, experimental data were fitted using both Michaelis-Menten and Haldane models; the Haldane model described the experimental data more accurately since it takes the inhibitory effect of H2S into account.
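A minimal sketch of the two kinetic models compared above; `EC_MAX` is the maximum elimination capacity reported in the abstract, while `KS` and `KI` are illustrative constants, not the paper's fitted values:

```python
EC_MAX = 24.7   # maximum elimination capacity, g m^-3 h^-1 (from abstract)
KS = 40.0       # half-saturation constant, ppmv (assumed)
KI = 400.0      # substrate-inhibition constant, ppmv (assumed)

def michaelis_menten(S):
    """Monotonic saturation kinetics: no substrate inhibition."""
    return EC_MAX * S / (KS + S)

def haldane(S):
    """Adds an S^2/KI term, so elimination capacity declines again at
    high H2S loads, capturing the inhibitory effect of H2S."""
    return EC_MAX * S / (KS + S + S ** 2 / KI)
```

With these constants the Haldane curve peaks near `S = sqrt(KS * KI)` and then falls, which is the qualitative behavior that lets it fit inhibited high-load data better than the monotonic Michaelis-Menten form.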
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiguet Jiglaire, Carine, E-mail: carine.jiguet-jiglaire@univ-amu.fr; CRO2, UMR 911, Faculté de Médecine de la Timone, 27 boulevard Jean Moulin, 13284 Marseille Cedex; INSERM, U911, 13005 Marseille
Identification of new drugs and predicting drug response are major challenges in oncology, especially for brain tumors, because total surgical resection is difficult and radiation therapy or chemotherapy is often ineffective. With the aim of developing a culture system close to in vivo conditions for testing new drugs, we characterized an ex vivo three-dimensional culture system based on a hyaluronic acid-rich hydrogel and compared it with classical two-dimensional culture conditions. U87-MG glioblastoma cells and seven primary cell cultures of human glioblastomas were subjected to radiation therapy and chemotherapy drugs. It appears that the 3D hydrogel preserves the original cancer growth behavior and enables assessment of the sensitivity of malignant gliomas to radiation and drugs with regard to inter-tumoral heterogeneity of therapeutic response. It could be used for preclinical assessment of new therapies. - Highlights: • We have compared primary glioblastoma cell culture in a 2D versus 3D-matrix system. • In 3D, morphology, organization and markers better recapitulate the original tumor. • 3D-matrix culture might represent a relevant system for more accurate drug screening.
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
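The BGK relaxation model underlying the scheme can be sketched in isolation. This toy update is the exact integral solution of the relaxation equation for a frozen equilibrium, valid for any ratio of time step to collision time (the regime the improved scheme is designed to handle); it is not the gas-kinetic flux solver itself:

```python
import math

def bgk_relax(f, f_eq, dt, tau):
    """One time step of the BGK collision model df/dt = (f_eq - f)/tau
    with f_eq held fixed over the step: f(t+dt) = f_eq + (f - f_eq)e^(-dt/tau).
    Well-behaved whether dt is smaller or much larger than tau."""
    return f_eq + (f - f_eq) * math.exp(-dt / tau)
```

For `dt << tau` the distribution barely changes; for `dt >> tau` it collapses onto the local equilibrium, and the exact exponential remains stable in between.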
LaBombard, B; Lyons, L
2007-07-01
A new method for the real-time evaluation of the conditions in a magnetized plasma is described. The technique employs an electronic "mirror Langmuir probe" (MLP), constructed from bipolar rf transistors and associated high-bandwidth electronics. Utilizing a three-state bias waveform and active feedback control, the mirror probe's I-V characteristic is continuously adjusted to be a scaled replica of the "actual" Langmuir electrode immersed in a plasma. Real-time high-bandwidth measurements of the plasma's electron temperature, ion saturation current, and floating potential can thereby be obtained using only a single electrode. Initial tests of a prototype MLP system are reported, proving the concept. Fast-switching metal-oxide-semiconductor field-effect transistors produce the required three-state voltage bias waveform, completing a full cycle in under 1 μs. Real-time outputs of electron temperature, ion saturation current, and floating potential are demonstrated, which accurately track an independent computation of these values from digitally stored I-V characteristics. The MLP technique represents a significant improvement over existing real-time methods, eliminating the need for multiple electrodes and sampling all three plasma parameters at a single spatial location.
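The single-electrode recovery of three plasma parameters rests on the standard Langmuir-probe characteristic: three bias states give three (V, I) samples, enough to solve for the three unknowns. A hedged numerical sketch with illustrative bias voltages and plasma values, not the instrument's actual analog implementation:

```python
import numpy as np
from scipy.optimize import fsolve

def iv(V, Isat, Vf, Te):
    """Single Langmuir-probe characteristic:
    I(V) = Isat * (exp((V - Vf)/Te) - 1), Te in eV."""
    return Isat * (np.exp((V - Vf) / Te) - 1.0)

def solve_plasma_params(V3, I3, guess=(1.0, 0.0, 10.0)):
    """Recover (Isat, Vf, Te) from three bias/current samples, mimicking
    the three-state bias of the MLP (done here offline with a solver)."""
    def residual(p):
        Isat, Vf, Te = p
        return [iv(V, Isat, Vf, Te) - I for V, I in zip(V3, I3)]
    return fsolve(residual, guess)

# Synthetic electrode: Isat = 0.5 A, Vf = -3 V, Te = 15 eV (illustrative)
V3 = np.array([-30.0, -3.0, 10.0])
I3 = iv(V3, 0.5, -3.0, 15.0)
Isat, Vf, Te = solve_plasma_params(V3, I3)
```

The MLP does this continuously in analog hardware; the offline solve above just shows why three well-spaced bias states suffice.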
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myint, Philip C.; Benedict, Lorin X.; Belof, Jonathan L.
Here, we present equations of state relevant to conditions encountered in ramp and multiple-shock compression experiments of water. These experiments compress water from ambient conditions to pressures as high as about 14 GPa and temperatures of up to several hundreds of Kelvin. Water may freeze into ice VII during this process. Although there are several studies on the thermodynamic properties of ice VII, an accurate and analytic free energy model from which all other properties may be derived in a thermodynamically consistent manner has not been previously determined. We have developed such a free energy model for ice VII that is calibrated with pressure-volume-temperature measurements and melt curve data. Furthermore, we show that liquid water in the pressure and temperature range of interest is well-represented by a simple Mie-Grüneisen equation of state. Our liquid water and ice VII equations of state are validated by comparing to sound speed and Hugoniot data. Although they are targeted towards ramp and multiple-shock compression experiments, we demonstrate that our equations of state also behave reasonably well at pressures and temperatures that lie somewhat beyond those found in the experiments.
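The Mie-Grüneisen form mentioned above writes pressure as a cold reference curve plus a thermal term proportional to the energy offset. A minimal sketch; the reference curve and constants are illustrative placeholders, not the paper's calibration for liquid water:

```python
GAMMA0 = 0.5    # Grüneisen parameter, assumed constant (illustrative)
V0 = 1.0e-3     # reference specific volume, m^3/kg (illustrative)
K0 = 2.2e9      # ambient bulk modulus of water, Pa (approximate)

def p_reference(v):
    """Toy cold/reference compression curve: P_ref = K0 * (V0/V - 1)."""
    return K0 * (V0 / v - 1.0)

def mie_gruneisen_pressure(v, e, e_ref=0.0):
    """Mie-Grüneisen form: P(V, E) = P_ref(V) + (Gamma/V) * (E - E_ref)."""
    return p_reference(v) + (GAMMA0 / v) * (e - e_ref)
```

Compression raises the reference term; heating at fixed volume raises the thermal term, which is the split that makes the form convenient to calibrate against P-V-T and Hugoniot data.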
NASA Technical Reports Server (NTRS)
Strode, Sarah A.; Douglass, Anne R.; Ziemke, Jerald R.; Manyin, Michael; Nielsen, J. Eric; Oman, Luke D.
2017-01-01
Satellite observations of in-cloud ozone concentrations from the Ozone Monitoring Instrument and Microwave Limb Sounder instruments show substantial differences from background ozone concentrations. We develop a method for comparing a free-running chemistry-climate model (CCM) to in-cloud and background ozone observations using a simple criterion based on cloud fraction to separate cloudy and clear-sky days. We demonstrate that the CCM simulates key features of the in-cloud versus background ozone differences and of the geographic distribution of in-cloud ozone. Since the agreement is not dependent on matching the meteorological conditions of a specific day, this is a promising method for diagnosing how accurately CCMs represent the relationships between ozone and clouds, including the lower ozone concentrations shown by in-cloud satellite observations. Since clouds are associated with convection as well as changes in chemistry, we diagnose the tendency of tropical ozone at 400 hPa due to chemistry, convection and turbulence, and large-scale dynamics. While convection acts to reduce ozone concentrations at 400 hPa throughout much of the tropics, it has the opposite effect over highly polluted regions of South and East Asia.
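The cloud-fraction criterion for separating cloudy and clear-sky days might be sketched like this; the 0.9/0.1 thresholds are assumptions for illustration, not necessarily the paper's exact cutoffs:

```python
import numpy as np

def classify_days(cloud_fraction, cloudy_min=0.9, clear_max=0.1):
    """Label each day 'cloudy', 'clear', or 'mixed' by its cloud fraction."""
    cf = np.asarray(cloud_fraction, dtype=float)
    labels = np.full(cf.shape, "mixed", dtype=object)
    labels[cf >= cloudy_min] = "cloudy"
    labels[cf <= clear_max] = "clear"
    return labels

def composite_mean(ozone, labels, which):
    """Mean ozone over all days in one class, e.g. in-cloud vs background."""
    return float(np.asarray(ozone, dtype=float)[labels == which].mean())
```

Comparing `composite_mean(..., "cloudy")` against `composite_mean(..., "clear")` gives the in-cloud versus background contrast without having to match the meteorology of individual days.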
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped-volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped-volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable to the modeling of any axial flow compressor design for accurate time-domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time-domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
Creep fatigue life prediction for engine hot section materials (isotropic)
NASA Technical Reports Server (NTRS)
Moreno, V.
1983-01-01
The Hot Section Technology (HOST) program, creep fatigue life prediction for engine hot section materials (isotropic), is reviewed. The program is aimed at improving the high temperature crack initiation life prediction technology for gas turbine hot section components. Significant results include: (1) cast B1900 and wrought IN 718 were selected as the base and alternative materials, respectively; (2) fatigue tests indicated that measurable surface cracks appear early in the specimen lives, i.e., at 15% of total life at 871 C and 50% of life at 538 C; (3) observed crack initiation sites are all surface initiated and are associated with either grain boundary carbides or local porosity, and transgranular cracking is observed at the initiation site for all conditions tested; and (4) an initial evaluation of two life prediction models, representative of macroscopic (Coffin-Manson) and more microscopic (damage rate) approaches, was conducted using limited data generated at 871 C and 538 C. It is found that the microscopic approach provides a more accurate regression of the data used to determine crack initiation model constants, but overpredicts the effect of strain rate on crack initiation life for the conditions tested.
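The macroscopic (Coffin-Manson) model evaluated in item (4) relates the plastic strain range to crack-initiation life. A sketch with invented constants, not the B1900 or IN 718 fits from the program:

```python
# Coffin-Manson relation: delta_eps_p / 2 = eps_f * (2 * N_f)**c
EPS_F = 0.3    # fatigue ductility coefficient (assumed)
C_EXP = -0.6   # fatigue ductility exponent (assumed)

def cycles_to_initiation(delta_eps_p):
    """Invert the Coffin-Manson relation to get crack-initiation life N_f
    from the plastic strain range delta_eps_p."""
    return 0.5 * (delta_eps_p / (2.0 * EPS_F)) ** (1.0 / C_EXP)
```

Because the exponent is negative, larger plastic strain ranges give shorter predicted initiation lives, the qualitative trend any regression of the HOST data must reproduce.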
Alabama-Mississippi Coastal Classification Maps - Perdido Pass to Cat Island
Morton, Robert A.; Peterson, Russell L.
2005-01-01
The primary purpose of the USGS National Assessment of Coastal Change Project is to provide accurate representations of pre-storm ground conditions for areas that are designated high-priority because they have dense populations or valuable resources that are at risk from storm waves. Another purpose of the project is to develop a geomorphic (land feature) coastal classification that, with only minor modification, can be applied to most coastal regions in the United States. A Coastal Classification Map describing local geomorphic features is the first step toward determining the hazard vulnerability of an area. The Coastal Classification Maps of the National Assessment of Coastal Change Project present ground conditions such as beach width, dune elevations, overwash potential, and density of development. In order to complete a hazard vulnerability assessment, that information must be integrated with other information, such as prior storm impacts and beach stability. The Coastal Classification Maps provide much of the basic information for such an assessment and represent a critical component of a storm-impact forecasting capability.
Benchmark for Numerical Models of Stented Coronary Bifurcation Flow.
García Carrascal, P; García García, J; Sierra Pallares, J; Castro Ruiz, F; Manuel Martín, F J
2018-09-01
In-stent restenosis affects many patients who have undergone stenting. When the stented artery is a bifurcation, the intervention is particularly critical because of the complex stent geometry involved in these structures. Computational fluid dynamics (CFD) has been shown to be an effective approach for modeling blood flow behavior and understanding the mechanisms that underlie in-stent restenosis. However, these CFD models require validation against experimental data in order to be reliable. It is with this purpose in mind that we performed particle image velocimetry (PIV) measurements of velocity fields within flows through a simplified coronary bifurcation. Although the flow in this simplified bifurcation differs from the actual blood flow, it emulates the main fluid dynamic mechanisms found in hemodynamic flow. Experimental measurements were performed for several stenting techniques in both steady and unsteady flow conditions. The test conditions were strictly controlled, and uncertainty was accurately quantified. The results obtained in this research represent readily accessible, easy-to-emulate, detailed velocity fields and geometry, and they have been successfully used to validate our numerical model. These data can be used as a benchmark for further development of numerical CFD modeling in terms of comparison of the main flow pattern characteristics.
Joachimsthal, Eva L; Ivanov, Volodymyr; Tay, Joo-Hwa; Tay, Stephen T-L
2003-03-01
Conventional methods for bacteriological testing of water quality take long periods of time to complete. This makes them inappropriate for a shipping industry that is attempting to comply with the International Maritime Organization's anticipated regulations for ballast water discharge. Flow cytometry for the analysis of marine and ship's ballast water is a comparatively fast and accurate method. Compared to a 5% standard error for flow cytometry analysis, the standard methods of culturing and epifluorescence analysis have errors of 2-58% and 10-30%, respectively. Also, unlike culturing methods, flow cytometry is capable of detecting both non-viable and viable but non-culturable microorganisms, which can still pose health risks. The great variability in both cell concentrations and microbial content for the samples tested is an indication of the difficulties facing microbial monitoring programmes. The concentration of microorganisms in the ballast tank was generally lower than in local seawater. The proportion of aerobic, microaerophilic, and facultative anaerobic microorganisms present appeared to be influenced by conditions in the ballast tank. The gradual creation of anaerobic conditions in a ballast tank could lead to the accumulation of facultative anaerobic microorganisms, which might represent a potential source of pathogenic species.
Force estimation from ensembles of Golgi tendon organs
NASA Astrophysics Data System (ADS)
Mileusnic, M. P.; Loeb, G. E.
2009-06-01
Golgi tendon organs (GTOs) located in the skeletal muscles provide the central nervous system with information about muscle tension. The ensemble firing of all GTO receptors in the muscle has been hypothesized to represent a reliable measure of the whole muscle force but the precision and accuracy of that information are largely unknown because it is impossible to record activity simultaneously from all GTOs in a muscle. In this study, we combined a new mathematical model of force sampling and transduction in individual GTOs with various models of motor unit (MU) organization and recruitment simulating various normal, pathological and neural prosthetic conditions. Our study suggests that in the intact muscle the ensemble GTO activity accurately encodes force information according to a nonlinear, monotonic relationship that has its steepest slope for low force levels and tends to saturate at the highest force levels. The relationship between the aggregate GTO activity and whole muscle tension under some pathological conditions is similar to one seen in the intact muscle during rapidly modulated, phasic excitation of the motor pool (typical for many natural movements) but quite different when the muscle is activated slowly or held at a given force level. Substantial deviations were also observed during simulated functional electrical stimulation.
NASA Astrophysics Data System (ADS)
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over a 10 km by 12 km area is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky imager based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer used as representative for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions than for overcast or clear sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
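Forecast skill against the persistence reference used in this validation is conventionally `skill = 1 - RMSE_forecast / RMSE_persistence`, positive when the imager-based forecast beats persistence. A minimal sketch:

```python
import numpy as np

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def forecast_skill(forecast, obs, horizon=1):
    """Skill = 1 - RMSE_forecast / RMSE_persistence, where the persistence
    reference predicts GHI at t+horizon to equal GHI at t. `forecast` is
    aligned with `obs` (forecast[t] targets obs[t])."""
    obs = np.asarray(obs, float)
    forecast = np.asarray(forecast, float)
    truth = obs[horizon:]
    persistence = obs[:-horizon]
    return 1.0 - rmse(forecast[horizon:], truth) / rmse(persistence, truth)
```

Low-variability overcast or clear-sky periods make persistence hard to beat, so skill naturally concentrates in the variable cumulus conditions described above.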
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1, participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset.
Impact of aerosols on ice crystal size
NASA Astrophysics Data System (ADS)
Zhao, Bin; Liou, Kuo-Nan; Gu, Yu; Jiang, Jonathan H.; Li, Qinbin; Fu, Rong; Huang, Lei; Liu, Xiaohong; Shi, Xiangjun; Su, Hui; He, Cenlin
2018-01-01
The interactions between aerosols and ice clouds represent one of the largest uncertainties in global radiative forcing from pre-industrial time to the present. In particular, the impact of aerosols on ice crystal effective radius (Rei), which is a key parameter determining ice clouds' net radiative effect, is highly uncertain due to limited and conflicting observational evidence. Here we investigate the effects of aerosols on Rei under different meteorological conditions using 9-year satellite observations. We find that the responses of Rei to aerosol loadings are modulated by water vapor amount in conjunction with several other meteorological parameters. While there is a significant negative correlation between Rei and aerosol loading in moist conditions, consistent with the "Twomey effect" for liquid clouds, a strong positive correlation between the two occurs in dry conditions. Simulations based on a cloud parcel model suggest that water vapor modulates the relative importance of different ice nucleation modes, leading to the opposite aerosol impacts between moist and dry conditions. When ice clouds are decomposed into those generated from deep convection and formed in situ, the water vapor modulation remains in effect for both ice cloud types, although the sensitivities of Rei to aerosols differ noticeably between them due to distinct formation mechanisms. The water vapor modulation can largely explain the difference in the responses of Rei to aerosol loadings in various seasons. A proper representation of the water vapor modulation is essential for an accurate estimate of aerosol-cloud radiative forcing produced by ice clouds.
Peters-Hall, Jennifer Ruth; Coquelin, Melissa L; Torres, Michael J; LaRanger, Ryan; Alabi, Busola Ruth; Sho, Sei; Calva-Moreno, Jose Francisco; Thomas, Philip J; Shay, Jerry William
2018-05-03
While primary cystic fibrosis (CF) and non-CF human bronchial epithelial basal cells (HBECs) accurately represent in vivo phenotypes, one barrier to their wider use has been a limited ability to clone and expand cells in sufficient numbers to produce rare genotypes using genome editing tools. Recently, conditional reprogramming of cells (CRC) with a ROCK inhibitor and culture on an irradiated fibroblast feeder layer resulted in extension of the lifespan of HBECs, but differentiation capacity and CF transmembrane conductance regulator (CFTR) function decreased as a function of passage. This report details modifications to the standard HBEC CRC protocol (Mod CRC), including the use of bronchial epithelial growth medium instead of F-medium and 2% oxygen instead of 21% oxygen, that extend HBEC lifespan while preserving multipotent differentiation capacity and CFTR function. Critically, Mod CRC conditions support clonal growth of primary HBECs from a single cell and the resulting clonal HBEC population maintains multipotent differentiation capacity, including CFTR function, permitting gene editing of these cells. As a proof of concept, CRISPR/Cas9 genome editing and cloning was used to introduce insertions/deletions in CFTR exon 11. Mod CRC conditions overcome many barriers to the expanded use of HBECs for basic research and drug screens. Importantly, Mod CRC conditions support the creation of isogenic cell lines in which CFTR is mutant or wild-type in the same genetic background with no history of CF to enable determination of the primary defects of mutant CFTR.
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground-state energies and dipole moments of 57 different states of atoms and molecules, and to the ground-state energies and 2-RDM elements of the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2' conditions; the T2' condition was recently rederived and implies the T2 condition. Using these N-representability conditions, we typically recover 100% to 101% of the correlation energy, an accuracy similar to that of CCSD(T) and even better for high-spin states or anionic systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and by developing multiple-precision arithmetic in the semidefinite programming (SDP) solver. The results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree, and replacing the T2 condition with T2' typically gains a further 0.1-0.5 mhartree. The newly developed multiple-precision version of the SDP solver yields extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom: at least 16 significant digits, where double-precision calculations give only two to eight. It also provides physically meaningful results for the Hubbard model in the high-correlation limit.
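The P, Q, G (and T1, T2) conditions all require that particular matrices built from the 2-RDM be positive semidefinite. A minimal numpy sketch of such a positivity test (a generic illustration, not the authors' SDP solver):

```python
import numpy as np

def is_psd(mat, tol=1e-10):
    """Positive-semidefinite test via the eigenvalues of the symmetric part."""
    sym = 0.5 * (mat + mat.T)
    return bool(np.linalg.eigvalsh(sym).min() >= -tol)

# Any Gram matrix A @ A.T is PSD by construction, so it passes a "P-like" test.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
D2 = A @ A.T
p_condition_ok = is_psd(D2)
```

In the actual method these positivity constraints enter the semidefinite program itself rather than being checked after the fact.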
Methods for characterizing convective cryoprobe heat transfer in ultrasound gel phantoms.
Etheridge, Michael L; Choi, Jeunghwan; Ramadhyani, Satish; Bischof, John C
2013-02-01
While cryosurgery has proven capable of treating a variety of conditions, it has met with some resistance among physicians, in part due to shortcomings in the ability to predict treatment outcomes. Here we address several key issues related to predictive modeling by demonstrating methods for accurately characterizing heat transfer from cryoprobes, reporting temperature-dependent thermal properties of ultrasound gel (a convenient tissue phantom) down to cryogenic temperatures, and demonstrating the ability of convective exchange heat transfer boundary conditions to accurately describe freezing around single and multiple interacting cryoprobes. Temperature-dependent changes in the specific heat and thermal conductivity of ultrasound gel are reported down to -150 °C for the first time, and these data were used to accurately describe freezing in ultrasound gel in subsequent modeling. Freezing around one and two interacting cryoprobes was characterized in the ultrasound gel phantom by mapping the temperature in and around the "iceball" with carefully placed thermocouple arrays. These experimental data were fit with finite-element models in COMSOL Multiphysics, which were used to investigate the sensitivity and effectiveness of convective boundary conditions in describing heat transfer from the cryoprobes. Heat transfer at the probe tip was described in terms of a convective coefficient and the cryogen temperature. While model accuracy depended strongly on spatial (i.e., along the exchange surface) variation in the convective coefficient, it was much less sensitive to spatial and transient variations in the cryogen temperature parameter.
The optimized fit, convective exchange conditions for the single-probe case also provided close agreement with the experimental data for the case of two interacting cryoprobes, suggesting that this basic characterization and modeling approach can be extended to accurately describe more complicated, multiprobe freezing geometries. Accurately characterizing cryoprobe behavior in phantoms requires detailed knowledge of the freezing medium's properties throughout the range of expected temperatures and an appropriate description of the heat transfer across the probe's exchange surfaces. Here we demonstrate that convective exchange boundary conditions provide an accurate and versatile description of heat transfer from cryoprobes, offering potential advantages over the traditional constant surface heat flux and constant surface temperature descriptions. In addition, although this study was conducted on Joule-Thomson type cryoprobes, the general methodologies should extend to any probe that is based on convective exchange with a cryogenic fluid.
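To illustrate how a convective (Robin) exchange boundary enters such a freezing calculation, here is a 1-D explicit finite-difference cooling sketch. The material properties, film coefficient, and geometry are invented gel-like values (not the study's fitted parameters), and latent heat is neglected for brevity:

```python
import numpy as np

# Robin boundary at the probe surface: -k dT/dx = h (T_surf - T_cryogen)
k, rho, cp = 0.6, 1000.0, 4000.0            # W/m/K, kg/m^3, J/kg/K (assumed)
alpha = k / (rho * cp)                      # thermal diffusivity, m^2/s
h, T_cryogen, T_init = 500.0, -150.0, 20.0  # W/m^2/K, deg C, deg C (assumed)

nx, dx, dt, steps = 50, 1e-3, 0.5, 2000     # grid and explicit time stepping
T = np.full(nx, T_init)
for _ in range(steps):
    Tn = T.copy()
    # interior nodes: standard explicit diffusion update
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # probe-surface node: half-cell energy balance with convective exchange
    T[0] = (Tn[0] + 2 * alpha * dt / dx**2 * (Tn[1] - Tn[0])
            + 2 * h * dt / (rho * cp * dx) * (T_cryogen - Tn[0]))
    T[-1] = T_init                          # far-field boundary held warm

surface_temp = T[0]
```

Spatial variation of the coefficient, as emphasized in the study, would replace the single `h` with a position-dependent profile along the exchange surface.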
An approach for delineating drinking water wellhead protection areas at the Nile Delta, Egypt.
Fadlelmawla, Amr A; Dawoud, Mohamed A
2006-04-01
In Egypt, groundwater production has a high priority. Protecting groundwater quality, specifically where it is used for drinking water, and delineating protection areas around drinking water wellheads for strict land-use restrictions are therefore essential. Delineation methods are numerous; nonetheless, the unique hydrogeological, institutional, and social conditions of the Nile Delta region dictate a customized approach. Analysis of the hydrological conditions and land ownership at the Nile Delta indicates the need for an accurate methodology. On the other hand, calculating the wellhead protection areas around each of the more than 1,500 drinking water wells requires data, human resources, and time that exceed the capabilities of the groundwater management agency. Accordingly, a combination of two methods (simplified variable shapes and numerical modeling) was adopted. Sensitivity analyses carried out under hypothetical modeling conditions identified the pumping rate, clay thickness, hydraulic gradient, vertical conductivity of the clay, and the hydraulic conductivity as the most significant parameters determining the dimensions of the wellhead protection areas (WHPAs). Tables of WHPA dimensions were calculated using synthetic modeling conditions representing the most common ranges of the significant parameters. Specific WHPA dimensions can then be obtained by interpolation, combining the produced tables with the operational and hydrogeological conditions of the well under consideration. To simplify this interpolation, an interactive computer program was written that accepts real-time data for the significant parameters as input and returns the appropriate WHPA dimensions as output.
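The table-lookup step can be sketched as bilinear interpolation in two of the significant parameters; the parameter ranges and table values below are invented placeholders, not those computed in the study:

```python
import numpy as np

# Hypothetical lookup: WHPA radius (m) tabulated against pumping rate (m^3/day)
# and clay thickness (m). All numbers are illustrative.
rates = np.array([500.0, 1000.0, 2000.0])
clays = np.array([5.0, 10.0, 20.0])
radius_table = np.array([[120.0, 100.0,  80.0],
                         [180.0, 150.0, 120.0],
                         [260.0, 220.0, 170.0]])   # rows: rate, cols: clay

def whpa_radius(rate, clay):
    """Bilinear interpolation in the precomputed table."""
    i = int(np.clip(np.searchsorted(rates, rate) - 1, 0, len(rates) - 2))
    j = int(np.clip(np.searchsorted(clays, clay) - 1, 0, len(clays) - 2))
    tx = (rate - rates[i]) / (rates[i + 1] - rates[i])
    ty = (clay - clays[j]) / (clays[j + 1] - clays[j])
    return ((1 - tx) * (1 - ty) * radius_table[i, j]
            + tx * (1 - ty) * radius_table[i + 1, j]
            + (1 - tx) * ty * radius_table[i, j + 1]
            + tx * ty * radius_table[i + 1, j + 1])
```

The real program interpolates over all five significant parameters; two dimensions keep the sketch short.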
Hassan, Shirin E
2012-05-04
The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times.
Happy but overconfident: positive affect leads to inaccurate metacomprehension.
Prinz, Anja; Bergmann, Viktoria; Wittwer, Jörg
2018-05-14
When learning from text, it is important that learners not only comprehend the information provided but also accurately monitor and judge their comprehension, which is known as metacomprehension accuracy. To investigate the role of a learner's affective state for text comprehension and metacomprehension accuracy, we conducted an experiment with N = 103 university students in whom we induced positive, negative, or neutral affect. Positive affect resulted in poorer text comprehension than neutral affect. Positive affect also led to overconfident predictions, whereas negative and neutral affect were both associated with quite accurate predictions. Independent of affect, postdictions were rather underconfident. The results suggest that positive affect bears processing disadvantages for achieving deep comprehension and adequate prediction accuracy. Given that postdictions were more accurate, practice tests might represent an effective instructional method to help learners in a positive affective state to accurately judge their text comprehension.
Particles, Feynman Diagrams and All That
ERIC Educational Resources Information Center
Daniel, Michael
2006-01-01
Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.
ENHANCING HSPF MODEL CHANNEL HYDRAULIC REPRESENTATION
The Hydrological Simulation Program - FORTRAN (HSPF) is a comprehensive watershed model, which employs depth-area-volume-flow relationships known as hydraulic function table (FTABLE) to represent stream channel cross-sections and reservoirs. An accurate FTABLE determination for a...
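For context, one FTABLE row pairs a depth with the corresponding surface area, volume, and outflow for a reach. A sketch for a trapezoidal channel using Manning's equation, with assumed geometry and roughness (not values from any HSPF dataset):

```python
import math

bottom_width = 10.0    # ft (assumed)
side_slope = 2.0       # horizontal run per unit rise (assumed)
reach_len = 5280.0     # ft, one mile
bed_slope = 0.001
n = 0.035              # Manning roughness (assumed)

def ftable_row(depth):
    """One FTABLE-style row: depth (ft), surface area (acres), volume (acre-ft), outflow (cfs)."""
    area = depth * (bottom_width + side_slope * depth)
    wetted = bottom_width + 2.0 * depth * math.sqrt(1.0 + side_slope ** 2)
    hyd_radius = area / wetted
    surface_area = (bottom_width + 2.0 * side_slope * depth) * reach_len / 43560.0
    volume = area * reach_len / 43560.0
    outflow = 1.49 / n * area * hyd_radius ** (2.0 / 3.0) * math.sqrt(bed_slope)
    return depth, surface_area, volume, outflow

rows = [ftable_row(d) for d in (0.5, 1.0, 2.0, 4.0)]
```

An inaccurate cross-section feeds directly into every row of such a table, which is why FTABLE determination matters for the simulated hydraulics.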
Thermodynamic properties of non-conformal soft-sphere fluids with effective hard-sphere diameters.
Rodríguez-López, Tonalli; del Río, Fernando
2012-01-28
In this work we study a set of soft-sphere systems characterised by a well-defined variation of their softness. These systems represent an extension of the repulsive Lennard-Jones potential widely used in statistical mechanics of fluids. This type of soft spheres is of interest because they represent quite accurately the effective intermolecular repulsion in fluid substances and also because they exhibit interesting properties. The thermodynamics of the soft-sphere fluids is obtained via an effective hard-sphere diameter approach that leads to a compact and accurate equation of state. The virial coefficients of soft spheres are shown to follow quite simple relationships that are incorporated into the equation of state. The approach followed exhibits the rescaling of the density that produces a unique equation for all systems and temperatures. The scaling is carried through to the level of the structure of the fluids.
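One common way to define an effective hard-sphere diameter is the Barker-Henderson integral d(T) = ∫ [1 − exp(−u(r)/kT)] dr. The sketch below applies it to a generic inverse-power repulsion; the paper's non-conformal potential family differs, so this is only illustrative of the approach:

```python
import numpy as np

def effective_diameter(n_exp, kT, sigma=1.0, eps=1.0):
    """Barker-Henderson diameter for u(r) = eps*(sigma/r)**n_exp, reduced units.
    Below 0.5*sigma the repulsion is effectively infinite, contributing 0.5*sigma exactly."""
    r = np.linspace(0.5 * sigma, 3.0 * sigma, 20001)
    integrand = 1.0 - np.exp(-eps * (sigma / r) ** n_exp / kT)
    # trapezoidal quadrature
    return 0.5 * sigma + float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

d_soft = effective_diameter(12, 1.0)     # repulsive-LJ-like softness
d_stiff = effective_diameter(200, 1.0)   # much steeper repulsion, closer to a true hard sphere
```

Softer repulsions yield larger effective diameters at the same temperature, which is the handle the equation of state uses to absorb softness into a single rescaled density.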
Ballistics and anatomical modelling - A review.
Humphrey, Caitlin; Kumaratilake, Jaliya
2016-11-01
Ballistics is the study of a projectile's motion and can be broken down into four stages: internal, intermediate, external and terminal ballistics. The study of the effects a projectile has on living tissue is referred to as wound ballistics and falls within terminal ballistics. Understanding these effects requires understanding the mechanisms of wounding: the permanent and temporary cavities, energy transfer, yawing, tumbling and fragmenting. Much ballistics research has been conducted using cadavers, animal models and simulants such as ballistic ordnance gelatine, and further work is developing anatomical, 3D, experimental and computational models. These models, however, need to accurately represent the human body and its heterogeneous nature, which requires understanding the biomechanical properties of the different tissues and organs. Research toward simulants that accurately represent human tissues is needed and is slowly progressing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A review of virtual cutting methods and technology in deformable objects.
Wang, Monan; Ma, Yuzheng
2018-06-05
Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. The virtual cutting method is introduced, and we discuss the benefits and limitations of these methods and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model that is caused by cutting. Virtual cutting is widely used in surgery simulation, and research of the cutting method is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.
Jaccard, James; Dodge, Tonya; Guilamo-Ramos, Vincent
2005-03-01
The present study explores 2 key variables in social metacognition: perceived intelligence and perceived levels of knowledge about a specific content domain. The former represents a judgment of one's knowledge at an abstract level, whereas the latter represents a judgment of one's knowledge in a specific content domain. Data from interviews of approximately 8,411 female adolescents from a national sample were analyzed in a 2-wave panel design with a year between assessments. Higher levels of perceived intelligence at Wave 1 were associated with a lower probability of the occurrence of a pregnancy over the ensuing year independent of actual IQ, self-esteem, and academic aspirations. Higher levels of perceived knowledge about the accurate use of birth control were associated with a higher probability of the occurrence of a pregnancy independent of actual knowledge about accurate use, perceived intelligence, self-esteem, and academic aspirations.
Reliable information on pipe condition is needed to accurately estimate the remaining service life of wastewater collection system assets. Although inspections with conventional closed-circuit television (CCTV) have been the mainstay of pipeline condition assessment for decades,...
Coded aperture imaging with self-supporting uniformly redundant arrays
Fenimore, Edward E.
1983-01-01
A self-supporting uniformly redundant array (URA) pattern for coded aperture imaging. The present invention utilizes holes that are an integer factor smaller in each direction than the holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, non-holes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput. The balanced correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
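The balanced-correlation idea can be illustrated in one dimension with a quadratic-residue (Legendre) aperture: decoding with +1 for holes and -1 for non-holes gives a sharp correlation peak over perfectly flat sidelobes. This is a generic sketch of the decoding principle, not the patented self-supporting pattern:

```python
import numpy as np

p = 11                                        # prime length with p % 4 == 3
residues = {(i * i) % p for i in range(1, p)}
aperture = np.array([1 if i in residues else 0 for i in range(p)])
decoder = np.where(aperture == 1, 1, -1)      # holes -> +1, non-holes -> -1

# circular cross-correlation of aperture with decoder
corr = np.array([np.sum(aperture * np.roll(decoder, -k)) for k in range(p)])
```

Because the quadratic residues mod 11 form an (11, 5, 2) difference set, every off-peak lag lands on exactly two holes, so all sidelobes collapse to the same constant value.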
Perturbation theory corrections to the two-particle reduced density matrix variational method.
Juhasz, Tamas; Mazziotti, David A
2004-07-15
In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.
Simultaneous point-of-care detection of anemia and sickle cell disease in Tanzania: the RAPID study.
Smart, Luke R; Ambrose, Emmanuela E; Raphael, Kevin C; Hokororo, Adolfine; Kamugisha, Erasmus; Tyburski, Erika A; Lam, Wilbur A; Ware, Russell E; McGann, Patrick T
2018-02-01
Both anemia and sickle cell disease (SCD) are highly prevalent across sub-Saharan Africa, and limited resources exist to diagnose these conditions quickly and accurately. The development of simple, inexpensive, and accurate point-of-care (POC) assays represents an important advance for global hematology, one that could facilitate timely and life-saving medical interventions. In this prospective study, Robust Assays for Point-of-care Identification of Disease (RAPID), we simultaneously evaluated a POC immunoassay (Sickle SCAN™) to diagnose SCD and a first-generation POC color-based assay to detect anemia. Performed at Bugando Medical Center in Mwanza, Tanzania, RAPID tested 752 participants (age 1 day to 20 years) in four busy clinical locations. With minimally trained medical staff, the SCD POC assay diagnosed SCD with 98.1% sensitivity and 91.1% specificity. The hemoglobin POC assay had 83.2% sensitivity and 74.5% specificity for detection of severe anemia (Hb ≤ 7 g/dL). Interobserver agreement was excellent for both POC assays (r = 0.95-0.96). Results for the hemoglobin POC assay have informed the second-generation assay design to be more suitable for low-resource settings. RAPID provides practical feasibility data regarding two novel POC assays for the diagnosis of anemia and SCD in real-world field evaluations and documents the utility and potential impact of these POC assays for sub-Saharan Africa.
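Sensitivity and specificity follow directly from confusion-matrix counts. The counts below are invented to roughly reproduce the reported SCD figures, not the study's actual tallies:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts chosen so the ratios land near 98.1% and 91.1%.
sens, spec = sens_spec(tp=104, fn=2, tn=584, fp=57)
```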
Temperature-Dependent Kinetic Model for Nitrogen-Limited Wine Fermentations
Coleman, Matthew C.; Fish, Russell; Block, David E.
2007-01-01
A physical and mathematical model for wine fermentation kinetics was adapted to include the influence of temperature, perhaps the most critical factor influencing fermentation kinetics. The model was based on flask-scale white wine fermentations at different temperatures (11 to 35°C) and different initial concentrations of sugar (265 to 300 g/liter) and nitrogen (70 to 350 mg N/liter). The results show that fermentation temperature and inadequate levels of nitrogen will cause stuck or sluggish fermentations. Model parameters representing cell growth rate, sugar utilization rate, and the inactivation rate of cells in the presence of ethanol are highly temperature dependent. All other variables (yield coefficient of cell mass to utilized nitrogen, yield coefficient of ethanol to utilized sugar, Monod constant for nitrogen-limited growth, and Michaelis-Menten-type constant for sugar transport) were determined to vary insignificantly with temperature. The resulting mathematical model accurately predicts the observed wine fermentation kinetics with respect to different temperatures and different initial conditions, including data from fermentations not used for model development. This is the first wine fermentation model that accurately predicts a transition from sluggish to normal to stuck fermentations as temperature increases from 11 to 35°C. Furthermore, this comprehensive model provides insight into combined effects of time, temperature, and ethanol concentration on yeast (Saccharomyces cerevisiae) activity and physiology. PMID:17616615
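The structure of such a model (Monod nitrogen-limited growth, sugar-driven ethanol production, and ethanol-dependent cell inactivation) can be sketched with a simple Euler integration. All parameter values here are illustrative guesses, not the fitted temperature-dependent values from the paper:

```python
# parameters (assumed): growth, nitrogen yield, sugar uptake, ethanol yield, inactivation
mu_max, K_N, Y_XN = 0.3, 10.0, 25.0             # 1/h, mg N/L, g cells per g N
q_S, K_S, Y_ES, k_d = 2.0, 10.0, 0.48, 0.0002   # g sugar/g cells/h, g/L, g EtOH/g sugar, L/(g EtOH)/h

dt, hours = 0.1, 200.0
X, N, S, E = 0.2, 200.0, 250.0, 0.0             # biomass g/L, N mg/L, sugar g/L, ethanol g/L
for _ in range(int(hours / dt)):
    mu = mu_max * N / (K_N + N)                 # Monod, nitrogen-limited growth rate
    growth = mu * X
    death = k_d * E * X                         # ethanol-driven inactivation
    dS = q_S * S / (K_S + S) * X                # Michaelis-Menten-type sugar uptake
    X = max(X + dt * (growth - death), 0.0)
    N = max(N - dt * growth / Y_XN * 1000.0, 0.0)   # mg N consumed per g cells grown
    S = max(S - dt * dS, 0.0)
    E = E + dt * dS * Y_ES
```

With nitrogen exhausted well before the sugar, the run finishes on residual (non-growing) cell activity, which is exactly the regime where the temperature dependence of the inactivation rate decides between normal and stuck fermentations.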
Fan, Rong; Ebrahimi, Mehrdad; Quitmann, Hendrich; Aden, Matthias; Czermak, Peter
2016-01-01
Accurate real-time process control is necessary to increase process efficiency, and optical sensors offer a competitive solution because they provide diverse system information in a noninvasive manner. We used an innovative scattered light sensor for the online monitoring of biomass during lactic acid production in a membrane bioreactor system because biomass determines productivity in this type of process. The upper limit of the measurement range in fermentation broth containing Bacillus coagulans was ~2.2 g·L−1. The specific cell growth rate (µ) during the exponential phase was calculated using data representing the linear range (cell density ≤ 0.5 g·L−1). The results were consistently and reproducibly more accurate than offline measurements of optical density and cell dry weight, because more data were gathered in real-time over a shorter duration. Furthermore, µmax was measured under different filtration conditions (transmembrane pressure 0.3–1.2 bar, crossflow velocity 0.5–1.5 m·s−1), showing that energy input had no significant impact on cell growth. Cell density was monitored using the sensor during filtration and was maintained at a constant level by feeding with glucose according to the fermentation kinetics. Our novel sensor is therefore suitable for integration into control strategies for continuous fermentation in membrane bioreactor systems. PMID:27007380
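Estimating µ from exponential-phase readings amounts to a linear fit of ln(biomass) against time; a minimal sketch on noise-free synthetic data (the sensor values in the study would replace `X`):

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # h
mu_true = 0.40                               # 1/h, synthetic value
X = 0.05 * np.exp(mu_true * t)               # g/L, clean exponential growth

slope, intercept = np.polyfit(t, np.log(X), 1)
mu_estimate = slope                          # recovered specific growth rate, 1/h
```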
Fast and robust segmentation of white blood cell images by self-supervised learning.
Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo
2018-04-01
A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is further used to classify each pixel of the image and achieve a more accurate segmentation result. To improve its segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundary are introduced. To further reduce its time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experiment results show that our approach has a superior performance of accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
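The unsupervised first stage rests on K-means clustering of pixel values. A minimal two-cluster version on synthetic grayscale intensities (the paper clusters color features; this only sketches the idea):

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Two-cluster K-means on scalar intensities."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

rng = np.random.default_rng(1)
dark = rng.normal(60.0, 5.0, 200)      # stained-nucleus-like intensities
bright = rng.normal(200.0, 10.0, 300)  # background-like intensities
pixels = np.concatenate([dark, bright])
labels, centers = kmeans_1d(pixels)
```

In the paper this coarse labeling then serves as free training data for the SVM refinement stage, which is what makes the pipeline self-supervised.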
Robichaud, Alain; Ménard, Richard; Zaïtseva, Yulia; Anselmo, David
2016-01-01
Air quality, like weather, can affect everyone, but responses differ depending on the sensitivity and health condition of a given individual. To help protect exposed populations, many countries have put in place real-time air quality nowcasting and forecasting capabilities. We present in this paper an optimal combination of air quality measurements and model outputs and show that it leads to significant improvements in the spatial representativeness of air quality. The product is referred to as multi-pollutant surface objective analyses (MPSOAs). Moreover, based on MPSOA, a geographical mapping of the Canadian Air Quality Health Index (AQHI) is also presented which provides users (policy makers, public, air quality forecasters, and epidemiologists) with a more accurate picture of the health risk anytime and anywhere in Canada and the USA. Since pollutants can also behave as passive atmospheric tracers, they provide information about transport and dispersion and, hence, reveal synoptic and regional meteorological phenomena. MPSOA could also be used to build air pollution climatology, compute local and national trends in air quality, and detect systematic biases in numerical air quality (AQ) models. Finally, initializing AQ models at regular time intervals with MPSOA can produce more accurate air quality forecasts. It is for these reasons that the Canadian Meteorological Centre (CMC) in collaboration with the Air Quality Research Division (AQRD) of Environment Canada has recently implemented MPSOA in their daily operations.
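At its core, an objective analysis blends a model background with observations weighted by their error variances. A scalar sketch of that blending (the values are invented, and the operational MPSOA works on full spatial fields):

```python
def optimal_blend(background, obs, var_bg, var_obs):
    """Variance-weighted combination of a model value and an observation."""
    gain = var_bg / (var_bg + var_obs)        # Kalman-style weight
    analysis = background + gain * (obs - background)
    var_analysis = (1.0 - gain) * var_bg      # analysis error variance shrinks
    return analysis, var_analysis

# Model says 35 ppb ozone (uncertain); a monitor reads 50 ppb (more trusted).
analysis, var_a = optimal_blend(35.0, 50.0, var_bg=16.0, var_obs=4.0)
```

The analysis lands closer to whichever input has the smaller error variance, which is why combining the two beats either the model or the sparse observations alone.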
Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter
2016-01-01
Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and export to downstream ecosystems is not well understood. In particular, a better knowledge of processes governing their environmental behavior is needed. Although a lot of data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme, but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutants' reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate, that removal processes are highly variable in time and space and this has to be considered for future studies. The high precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties. Copyright © 2015. Published by Elsevier B.V.
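Under a Lagrangian scheme, a first-order removal rate follows directly from the upstream and downstream loads of the same water parcel and its travel time; a minimal sketch with invented numbers:

```python
import math

def removal_rate(load_in_g, load_out_g, travel_time_h):
    """First-order rate k from ln(load_out/load_in) = -k * tau along the reach."""
    return -math.log(load_out_g / load_in_g) / travel_time_h

# Hypothetical: 12 g of a pharmaceutical enters the reach, 9 g leaves 6 h later.
k = removal_rate(load_in_g=12.0, load_out_g=9.0, travel_time_h=6.0)  # 1/h
half_life_h = math.log(2.0) / k
```

Precise travel velocities matter because any error in tau propagates directly into k, which is the point of the extended sampling scheme.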
NASA Astrophysics Data System (ADS)
McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.
2013-12-01
Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess the risk of local socio-economic disruption. This study focuses on developing robust, physics-based, non-dimensional equations that accurately predict the maximum significant wave height at different locations near a given hurricane track. These equations (denoted Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints correcting for fully developed sea state were used to limit the wind-wave growth. When applied to the region near Gulfport, MS, back-prediction of maximum significant wave height yielded root mean square errors of 0.22-0.42 m at open-coast stations and 0.07-0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back-prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme-value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.
Evaluation of the New B-REX Fatigue Testing System for Multi-Megawatt Wind Turbine Blades: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D.; Musial, W.; Engberg, S.
2004-12-01
The National Renewable Energy Laboratory (NREL) recently developed a new hybrid fatigue testing system called the Blade Resonance Excitation (B-REX) test system. The new system uses 65% less energy to test large wind turbine blades in half the time of NREL's dual-axis forced-displacement test method, with lower equipment and operating costs. The B-REX is a dual-axis test system that combines resonance excitation with forced hydraulic loading to reduce the total test time while representing the operating strains on the critical inboard blade stations more accurately than a single-axis test system. The analysis and testing required to fully implement the B-REX were significant. To control unanticipated blade motion and vibrations caused by dynamic coupling between the flap, lead-lag, and torsional directions, we needed to incorporate additional test hardware and control software. We evaluated the B-REX test system under stable operating conditions using a combination of various sensors. We then compared our results with results from the same blade, tested previously using NREL's dual-axis forced-displacement test method. Experimental results indicate that strain levels produced by the B-REX system accurately replicated the forced-displacement method. This paper describes the challenges we encountered while developing the new blade fatigue test system and the experimental results that validate its accuracy.
Mechanosensation and the Primary Cilium
NASA Astrophysics Data System (ADS)
Glaser, Joseph; Resnick, Andrew
2010-10-01
The primary cilium has come under increased scrutiny as a site for mechano- and chemosensation by cells. We have undertaken a program of study using mouse renal cell lines from the cortical collecting duct to quantify how mechanical forces arising from fluid shear are transduced into cellular responses. Fluid flow through a model nephron has been analyzed to determine the in vivo forces. A novel tissue culture flow chamber permitting accurate reproduction of physiologically relevant conditions has been calibrated. We have determined that in vivo conditions can be accurately modeled in our flow chamber.
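The fluid-shear forces such a flow chamber must reproduce can be estimated with the standard laminar result for flow between parallel plates, tau = 6*mu*Q/(w*h^2). The medium viscosity, flow rate, and chamber dimensions below are illustrative assumptions, not the authors' calibration values.

```python
# Wall shear stress on the floor of a parallel-plate flow chamber,
# the standard laminar result tau = 6*mu*Q / (w*h**2).
def wall_shear_stress(mu, Q, w, h):
    """mu: dynamic viscosity (Pa*s), Q: volumetric flow rate (m^3/s),
    w: chamber width (m), h: chamber height (m). Returns tau in Pa."""
    return 6.0 * mu * Q / (w * h ** 2)

# Water-like culture medium at ~37 C through a shallow chamber
# (illustrative geometry and flow rate)
tau = wall_shear_stress(mu=7e-4, Q=1e-9, w=0.01, h=2.5e-4)
print(f"wall shear stress: {tau:.4f} Pa")
```

Shear stresses of this order (hundredths of a pascal, i.e. tenths of a dyn/cm^2) are in the range commonly cited as physiologically relevant for renal tubular flow.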
Okada, Toshiyuki; Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki; Sato, Yoshinobu
2015-12-01
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. 
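The Dice coefficient used above to report segmentation accuracy is defined as Dice = 2|A ∩ B| / (|A| + |B|) for a predicted and a reference mask. A minimal sketch with illustrative toy masks:

```python
import numpy as np

# Dice coefficient between a predicted and a reference binary segmentation:
# Dice = 2|A ∩ B| / (|A| + |B|). The toy masks below are illustrative.
def dice(pred, ref):
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 "voxels"
ref = np.zeros((8, 8), dtype=bool);  ref[3:7, 3:7] = True   # 16 "voxels"
print(dice(pred, ref))  # overlap is 3x3 = 9, so 2*9/(16+16) = 0.5625
```

In the study, a Dice above 92% (liver, spleen, kidneys) indicates near-complete voxel overlap, while ~67-73% (gallbladder, pancreas) reflects the harder, more variable organs.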
Petroze, Robin T; Joharifard, Shahrzad; Groen, Reinou S; Niyonkuru, Francine; Ntaganda, Edmond; Kushner, Adam L; Guterbock, Thomas M; Kyamanywa, Patrick; Calland, J Forrest
2015-01-01
Disparities in access to quality injury care are a growing concern worldwide, with over 90% of global injury-related morbidity and mortality occurring in low-income countries. We describe the use of a survey tool that evaluates the prevalence of surgical conditions at the population level, with a focus on the burden of traumatic injuries, subsequent disabilities, and barriers to injury care in Rwanda. The Surgeons OverSeas Assessment of Surgical Need (SOSAS) tool is a cross-sectional, cluster-based population survey designed to measure conditions that may necessitate surgical consultation or intervention. Questions are structured anatomically and designed around a representative spectrum of surgical conditions. Households in Rwanda were sampled using two-stage cluster sampling, and interviews were conducted over a one-month period in 52 villages nationwide, with representation of all 30 administrative districts. Injury-related results were descriptively analyzed and population-weighted by age and gender. A total of 1,627 households (3,175 individuals) were sampled; 1,185 lifetime injury-related surgical conditions were reported, with 38% resulting in some form of perceived disability. Of the population, 27.4% had ever had a serious injury-related condition, with 2.8% having an injury-related condition at the time of interview. Over 30% of household deaths in the previous year may have been surgically treatable, but only 4% were injury-related. Determining accurate injury and disability burden is crucial to health system planning in low-income countries. SOSAS is a useful survey for determining injury epidemiology at the community level, which can in turn help to plan prevention efforts and optimize provision of care.
Necrotizing Fasciitis and The Diabetic Foot.
Iacopi, Elisabetta; Coppelli, Alberto; Goretti, Chiara; Piaggesi, Alberto
2015-12-01
Necrotizing fasciitis (NF) is a rapidly progressive, life-threatening infection involving skin, soft tissue, and deep fascia. An early diagnosis is crucial to treat NF effectively. The disease is generally due to an external trauma occurring in predisposed patients: the most important risk factor is diabetes mellitus. NF is classified into 3 subtypes according to the bacterial strains responsible: type 1, associated with polymicrobial infection; type 2, generally associated with Streptococcus species, often together with Staphylococcus aureus; and type 3, due to Gram-negative strains, such as Clostridium difficile or Vibrio. NF is usually characterized by the classic triad of symptoms: local pain, swelling, and erythema. In daily clinical practice, however, immune-compromised or neuropathic diabetic patients present with atypical symptomatology. This explains the high percentage of cases misdiagnosed in the emergency department and, consequently, the worse outcomes in these patients. Prompt aggressive surgical debridement and systemic antibiotic therapy are the cornerstones of treatment. These must be combined with accurate systemic management, consisting of nutritional support, glycemic compensation, and hemodynamic stabilization. Once the acute condition has resolved, innovative methods such as negative pressure therapy can help hasten surgical wound closure. Prompt management can improve the prognosis of patients affected by NF, reducing limb loss and saving lives.
Recommendations for ICT use in Alzheimer's disease assessment: Monaco CTAD Expert Meeting.
Robert, P H; Konig, A; Andrieu, S; Bremond, F; Chemin, I; Chung, P C; Dartigues, J F; Dubois, B; Feutren, G; Guillemaud, R; Kenisberg, P A; Nave, S; Vellas, B; Verhey, F; Yesavage, J; Mallea, P
2013-01-01
Alzheimer's disease (AD) and related dementias represent a major challenge for health care systems within the aging population. It is therefore important to develop better instruments for assessing disease severity and disease progression, to optimize patients' care and support to care providers, and also to provide better tools for clinical research. In this area, Information and Communication Technologies (ICT) are of particular interest. Such techniques enable accurate and standardized assessments of patients' performance and actions in real time and in real-life situations. The aim of this article is to provide basic recommendations concerning the development and use of ICT for Alzheimer's disease and related disorders. During the ICT and Mental Health workshop (CTAD meeting held in Monaco on 30 October 2012), an expert panel was set up to prepare the first recommendations for the use of ICT in dementia research. The expert panel included geriatricians, epidemiologists, neurologists, psychiatrists, psychologists, ICT engineers, and representatives from industry and patient associations. The recommendations are divided into three sections corresponding to: (1) the clinical targets of interest for the use of ICT; (2) the conditions, the types of sensors, and the outputs (scores) that could be used and obtained; and (3) the use of ICT within clinical trials.
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
A novel Gaussian-Sinc mixed basis set for electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.
2015-08-14
A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the "localized" and "delocalized" regions. A basis set for each region is combined to make a new basis methodology: a lattice of orthonormal sinc functions is used to represent the "delocalized" regions, and atom-centered Gaussian functions are used to represent the "localized" regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensionally separated methodology. Additionally, the sinc basis is translationally invariant, which allows the Coulomb singularity to be placed anywhere, including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground-state Hartree–Fock energies for atoms up to neon, the diatomic systems H₂, O₂, and N₂, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.
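The orthonormality of the sinc lattice can be checked numerically: phi_n(x) = sinc((x - n*a)/a) / sqrt(a) forms an orthonormal set under the L2 inner product. The lattice spacing and quadrature range below are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Lattice sinc functions phi_n(x) = sinc((x - n*a)/a) / sqrt(a),
# using numpy's normalized sinc, sinc(u) = sin(pi*u)/(pi*u)
a = 0.5                              # illustrative lattice spacing
x = np.linspace(-60.0, 60.0, 240001) # wide quadrature grid
dx = x[1] - x[0]

def phi(n):
    return np.sinc((x - n * a) / a) / np.sqrt(a)

# Riemann-sum approximations to the L2 inner products
overlap_00 = np.sum(phi(0) * phi(0)) * dx  # expect ~1 (normalization)
overlap_01 = np.sum(phi(0) * phi(1)) * dx  # expect ~0 (orthogonality)
print(overlap_00, overlap_01)
```

The small deviations from exactly 1 and 0 come from truncating the slowly decaying sinc tails at the edge of the quadrature grid.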
NASA Astrophysics Data System (ADS)
Sun, Zhiyong; Hao, Lina; Song, Bo; Yang, Ruiguo; Cao, Ruimin; Cheng, Yu
2016-10-01
Micro/nano positioning technologies have been attractive for decades for their various applications in both industrial and scientific fields. The actuators employed in these technologies are typically smart material actuators, which possess inherent hysteresis that may cause systems to behave unexpectedly. Periodic reference tracking capability is fundamental for apparatuses such as the scanning probe microscope, which employs smart material actuators to generate periodic scanning motion. However, traditional controllers such as the PID method cannot guarantee accurate fast periodic scanning motion. To tackle this problem and to allow practical implementation in digital devices, this paper proposes a novel control method named the discrete extended unparallel Prandtl-Ishlinskii model based internal model (d-EUPI-IM) control approach. To tackle modeling uncertainties, the robust d-EUPI-IM control approach is investigated, and the associated sufficient stabilizing conditions are derived. The advantages of the proposed controller are: it is designed and represented in discrete form, and thus practical for implementation in digital devices; the extended unparallel Prandtl-Ishlinskii model can precisely represent forward/inverse complex hysteretic characteristics, reducing modeling uncertainties and benefiting controller design; and the internal model principle based control module can be utilized as a natural oscillator for tackling the periodic reference tracking problem. The proposed controller was verified through comparative experiments on a piezoelectric actuator platform, and convincing results were achieved.
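The classical Prandtl-Ishlinskii model underlying the d-EUPI-IM approach builds hysteresis from weighted play (backlash) operators. A minimal discrete sketch is below; the thresholds and weights are illustrative, and the paper's extended unparallel PI model is more general than this classical form.

```python
import numpy as np

# Discrete play (backlash) operator with threshold r applied to sequence u
def play_operator(u, r, y0=0.0):
    y = np.empty_like(u)
    prev = y0
    for k, uk in enumerate(u):
        # Standard discrete play update: hold prev inside the band [uk-r, uk+r]
        prev = min(uk + r, max(uk - r, prev))
        y[k] = prev
    return y

# Classical PI model: a weighted superposition of play operators
def pi_model(u, thresholds, weights):
    return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

t = np.linspace(0.0, 2.0 * np.pi, 400)
u = np.sin(t)                                  # periodic scanning reference
y = pi_model(u, thresholds=[0.0, 0.1, 0.3], weights=[0.5, 0.3, 0.2])

# At nearly equal input values, the rising branch (index 50) and the
# falling branch (index 150) produce different outputs: hysteresis
print(u[50], y[50], u[150], y[150])
```

The gap between the two branches at the same input is exactly the kind of history dependence that makes plain PID tracking of fast periodic references inaccurate.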
NASA Astrophysics Data System (ADS)
Keatings, K. W.; Heaton, T. H. E.; Holmes, J. A.
2002-05-01
Carbon and oxygen isotope analysis of ostracods living in the near-constant conditions of spring-fed ponds in southern England allowed accurate determination of the ostracods' calcite-water ¹³C/¹²C and ¹⁸O/¹⁶O fractionations. The ¹³C/¹²C fractionations of two species, Candona candida and Pseudocandona rostrata, correspond to values expected for isotopic equilibrium with the pond's dissolved inorganic carbon at the measured temperature (11°C) and pH (6.9), whilst those of a third species, Herpetocypris reptans, would represent equilibrium at a slightly higher pH (7.1). The ¹⁸O/¹⁶O fractionations confirm two previous studies in being larger, by up to 3‰, than those 'traditionally' regarded as representing equilibrium. When the measured fractionations are considered in the context of more recent work, however, they can be explained in terms of equilibrium if the process of calcite formation at the ostracod lamella occurs at a relatively low pH (≤7) irrespective of the pH of the surrounding water. The pH of calcite formation, and therefore the calcite-water ¹⁸O/¹⁶O fractionation, may be species- and stage- (adult versus juvenile) specific, and related to the rate of calcification.
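Fractionations of this kind are conventionally computed from delta values: the fractionation factor is alpha = (1000 + delta_a) / (1000 + delta_b), and the enrichment is epsilon ≈ 1000 ln(alpha), in permil. The example delta values below are illustrative, not the paper's measurements.

```python
import math

# Fractionation factor between two phases expressed in delta notation (permil)
def alpha(delta_a, delta_b):
    return (1000.0 + delta_a) / (1000.0 + delta_b)

# Enrichment epsilon ~= 1000 * ln(alpha), also in permil
def epsilon_permil(delta_a, delta_b):
    return 1000.0 * math.log(alpha(delta_a, delta_b))

# Two phases about 3 permil apart, the size of the offset the abstract
# reports relative to 'traditionally' accepted equilibrium values
print(round(epsilon_permil(-4.0, -7.0), 2))
```

A 3‰ offset in ¹⁸O/¹⁶O is large in paleoclimate terms: converted through a typical calcite-water temperature dependence, it would correspond to a temperature misestimate of several degrees, which is why resolving whether the offset is a vital effect or a pH effect matters.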