Incremental principal component pursuit for video background modeling
Rodriguez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
We present an incremental Principal Component Pursuit (PCP) algorithm for video background modeling that processes one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
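The low-rank-plus-sparse idea behind PCP can be sketched with a simple per-frame update: the sparse foreground is the soft-thresholded residual against a running background estimate. This is only an illustrative sketch, not the authors' incremental algorithm; the step size `alpha` and threshold `lam` are assumed values.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def process_frame(frame, background, alpha=0.05, lam=0.25):
    """One incremental step: the sparse foreground is the soft-thresholded
    residual; the background then takes a small step toward the frame."""
    foreground = soft_threshold(frame - background, lam)
    background = background + alpha * (frame - background - foreground)
    return foreground, background

# Toy sequence: a static 0.5 background with one bright moving pixel.
rng = np.random.default_rng(0)
bg_true = 0.5 * np.ones((8, 8))
background = bg_true.copy()
for t in range(20):
    frame = bg_true + 0.01 * rng.standard_normal((8, 8))
    frame[t % 8, 3] += 1.0  # transient foreground "object"
    foreground, background = process_frame(frame, background)
```

The l1 threshold suppresses sensor noise while passing the large residual of the moving object, which is the mechanism PCP formalizes as a convex low-rank-plus-sparse program.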
Gas leak detection in infrared video with background modeling
NASA Astrophysics Data System (ADS)
Zeng, Xiaoxia; Huang, Likun
2018-03-01
Background modeling plays an important role in gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years; however, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional ViBe algorithm, we propose a fast foreground model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
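For reference, the sample-based classification rule at the core of ViBe (of which the entry above proposes an accelerated variant) can be sketched as follows; the radius and match count are typical values from the ViBe literature, and the sample intensities are made up:

```python
import numpy as np

def vibe_classify(pixel, samples, radius=20.0, min_matches=2):
    """ViBe-style rule: a pixel is background if it lies within `radius`
    of at least `min_matches` of its stored background samples."""
    matches = int(np.sum(np.abs(samples - pixel) <= radius))
    return matches >= min_matches  # True means background

# Per-pixel sample set collected from past observations of that pixel.
samples = np.array([100.0, 102.0, 99.0, 101.0, 98.0, 103.0])
is_bg_stable = vibe_classify(100.0, samples)  # typical background intensity
is_bg_object = vibe_classify(160.0, samples)  # e.g. a gas-plume pixel
```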
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.
1979-01-01
A model of a central processor (CPU) that services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study that supports this application of queueing models are presented.
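The flavor of such a model can be illustrated with the textbook M/M/1 mean response time, treating the periodic time-critical task crudely as a capacity reduction; the paper's Laplace-transform analysis is far more precise, and the rates below are made-up values.

```python
def mm1_mean_response(lam, mu):
    """Mean time in system for an M/M/1 queue with arrival rate lam
    and service rate mu (stability requires lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    return 1.0 / (mu - lam)

# Crude capacity-reduction view of the interrupting time-critical task:
# if it consumes a fraction f of the CPU, background work effectively
# sees service rate mu * (1 - f).
mu, lam, f = 10.0, 4.0, 0.3
t_plain = mm1_mean_response(lam, mu)             # no time-critical load
t_loaded = mm1_mean_response(lam, mu * (1 - f))  # with time-critical load
```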
Omi, Takahiro; Hirata, Yoshito; Aihara, Kazuyuki
2017-07-01
A Hawkes process model with a time-varying background rate is developed for analyzing high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only the slow time variation, such as intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant, especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation.
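A Hawkes process with a time-varying background rate can be simulated by Ogata's thinning algorithm, as in the sketch below. The sinusoidal mu(t) is an illustrative stand-in for intraday seasonality, not the paper's basis-function model; with this exponential kernel the branching ratio equals alpha.

```python
import math
import random

def simulate_hawkes(mu, mu_max, alpha=0.5, beta=2.0, t_end=50.0, seed=1):
    """Ogata thinning for a Hawkes process with time-varying background rate
    mu(t) <= mu_max and excitation kernel alpha * beta * exp(-beta * dt),
    whose integral (the branching ratio) is alpha."""
    random.seed(seed)
    events, t, excitation = [], 0.0, 0.0  # excitation: kernel sum at time t
    while t < t_end:
        rate_bound = mu_max + excitation  # valid: both parts only decay until the next event
        w = random.expovariate(rate_bound)
        excitation *= math.exp(-beta * w)  # kernel decay over the waiting time
        t += w
        if t >= t_end:
            break
        if random.random() <= (mu(t) + excitation) / rate_bound:
            events.append(t)               # accepted point
            excitation += alpha * beta     # intensity jump at the event
    return events

# Sinusoidal background rate standing in for intraday seasonality.
mu = lambda t: 1.0 + 0.5 * math.sin(0.3 * t)
events = simulate_hawkes(mu, mu_max=1.5)
```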
ERIC Educational Resources Information Center
Singaram, Veena S.; van der Vleuten, Cees P. M; Muijtjens, Arno M. M.; Dolmans, Diana H. J. M
2012-01-01
Little is known about the influence of language background in problem-based learning (PBL) tutorial groups on group processes and students' academic achievement. This study investigated the relationship between language background, secondary school score, tutorial group processes, and students' academic achievement in PBL. A validated tutorial…
ERIC Educational Resources Information Center
Jeong, So-Hee; Eamon, Mary Keegan
2009-01-01
Using data from a national sample of two-parent families with 11- and 12-year-old youths (N = 591), we tested a structural model of family background, family process (marital conflict and parenting), youth self-control, and delinquency four years later. Consistent with the conceptual model, marital conflict and youth self-control are directly…
Implementation of a GPS-RO data processing system for the KIAPS-LETKF data assimilation system
NASA Astrophysics Data System (ADS)
Kwon, H.; Kang, J.-S.; Jo, Y.; Kang, J. H.
2014-11-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing a new global numerical weather prediction model and an advanced data assimilation system. As part of the KIAPS Package for Observation Processing (KPOP) system for data assimilation, preprocessing and quality control modules for bending-angle measurements of global positioning system radio occultation (GPS-RO) data have been implemented and examined. The GPS-RO data processing system is composed of several steps for checking observation locations, missing values, physical values for Earth radius of curvature, and geoid undulation. An observation-minus-background check is implemented by use of a one-dimensional observational bending-angle operator, and tangent point drift is also considered in the quality control process. We have tested GPS-RO observations utilized by the Korean Meteorological Administration (KMA) within KPOP, based on both the KMA global model and the National Center for Atmospheric Research (NCAR) Community Atmosphere Model-Spectral Element (CAM-SE) as a model background. Background fields from the CAM-SE model are incorporated for the preparation of assimilation experiments with the KIAPS-LETKF data assimilation system, which has been successfully implemented for a cubed-sphere model with fully unstructured quadrilateral meshes. As a result of data processing, the bending-angle departure statistics between observation and background show significant improvement. Also, the first experiment in assimilating GPS-RO bending angles from KPOP within KIAPS-LETKF shows encouraging results.
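The observation-minus-background (O-B) check described above amounts to a simple departure test; a minimal sketch (with made-up bending-angle values and an assumed error bound) is:

```python
import numpy as np

def omb_check(obs, background, sigma, k=3.0):
    """Flag observations whose observation-minus-background departure
    exceeds k times the expected departure standard deviation."""
    departure = obs - background
    return np.abs(departure) <= k * sigma  # True means the observation passes QC

# Bending angles in radians: the third observation is a gross outlier.
obs = np.array([1.02e-2, 1.05e-2, 2.50e-2, 0.98e-2])
background = np.array([1.00e-2, 1.00e-2, 1.00e-2, 1.00e-2])
passed = omb_check(obs, background, sigma=2.0e-3)
```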
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
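The linear noise model is straightforward to state; in the sketch below (illustrative coefficients, not values from the paper) the variance floor a plays the role of dark/read noise and the slope b the Poisson shot-noise term:

```python
import numpy as np

def linear_noise_variance(signal, a, b):
    """Signal-dependent noise model: variance grows linearly with the
    measured signal, as for Poisson (shot) noise in photon-counting sensors."""
    return a + b * signal

signal = np.array([10.0, 100.0, 1000.0])
var = linear_noise_variance(signal, a=4.0, b=0.5)
snr = signal / np.sqrt(var)  # SNR still improves with signal, but sub-linearly
```

A detector built on this model would whiten each pixel's residual by sqrt(a + b * signal) rather than assuming a constant noise floor.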
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method and local gray-level thresholding techniques such as edge-based segmentation or adaptive thresholding are powerful for extracting character objects from simple or slowly varying backgrounds. However, they prove insufficient when the backgrounds include sharply varying contours or fonts of different sizes. A stroke model is proposed that depicts the local features of character objects as double edges of a predefined size. This model enables thin connected components to be detected selectively while relatively large backgrounds that appear complex are ignored. Moreover, since the stroke-width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using a measure of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate method. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images, demonstrate the effectiveness of the proposed model.
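For comparison with the stroke model, the global Otsu threshold mentioned above can be computed in a few lines (a standard textbook implementation, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class
    variance over a 256-bin gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

# Bimodal toy image: dark strokes (~30) on a bright background (~200).
rng = np.random.default_rng(0)
img = rng.normal(200, 10, (64, 64))
img[20:40, 20:40] = rng.normal(30, 10, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
```

On this well-separated bimodal image the chosen threshold falls in the gap between the stroke and background modes, which is exactly the regime where the abstract notes global thresholding works well.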
Implementation of a GPS-RO data processing system for the KIAPS-LETKF data assimilation system
NASA Astrophysics Data System (ADS)
Kwon, H.; Kang, J.-S.; Jo, Y.; Kang, J. H.
2015-03-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing a new global numerical weather prediction model and an advanced data assimilation system. As part of the KIAPS package for observation processing (KPOP) system for data assimilation, preprocessing, and quality control modules for bending-angle measurements of global positioning system radio occultation (GPS-RO) data have been implemented and examined. The GPS-RO data processing system is composed of several steps for checking observation locations, missing values, physical values for Earth radius of curvature, and geoid undulation. An observation-minus-background check is implemented by use of a one-dimensional observational bending-angle operator, and tangent point drift is also considered in the quality control process. We have tested GPS-RO observations utilized by the Korean Meteorological Administration (KMA) within KPOP, based on both the KMA global model and the National Center for Atmospheric Research Community Atmosphere Model with Spectral Element dynamical core (CAM-SE) as a model background. Background fields from the CAM-SE model are incorporated for the preparation of assimilation experiments with the KIAPS local ensemble transform Kalman filter (LETKF) data assimilation system, which has been successfully implemented for a cubed-sphere model with unstructured quadrilateral meshes. As a result of data processing, the bending-angle departure statistics between observation and background show significant improvement. Also, the first experiment in assimilating GPS-RO bending angle from KPOP within KIAPS-LETKF shows encouraging results.
Measurement of Neutrino-Induced Coherent Pion Production and the Diffractive Background in MINERvA
NASA Astrophysics Data System (ADS)
Gomez, Alicia; Minerva Collaboration
2015-04-01
Neutrino-induced coherent charged pion production is a unique neutrino-nucleus scattering process in which a muon and pion are produced while the nucleus is left in its ground state. The MINERvA experiment has made a model-independent differential cross section measurement of this process on carbon by selecting events with a muon and a pion, no evidence of nuclear break-up, and small momentum transfer to the nucleus |t|. A background to the carbon measurement is a similar process, diffractive pion production off the free protons in MINERvA's scintillator, which is not modeled in the neutrino event generator GENIE; at low |t| these events have a final state similar to the coherent signal. The diffractive contribution to the background is quantified by emulating diffractive events: all other GENIE-generated background events are reweighted to the predicted |t| distribution of diffractive events and then scaled to the diffractive cross section.
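The reweighting step can be sketched generically: give each simulated event a weight equal to the ratio of target to observed density in its |t| bin. The sample and target shapes below are made up for illustration; they are not MINERvA or GENIE distributions.

```python
import numpy as np

def reweight_to_target(t_values, bin_edges, target_density):
    """Per-event weights that reshape the sample's |t| histogram into a
    target shape: weight = target density / observed density in each bin."""
    counts, _ = np.histogram(t_values, bins=bin_edges, density=True)
    idx = np.clip(np.digitize(t_values, bin_edges) - 1, 0, len(counts) - 1)
    with np.errstate(divide="ignore"):
        w = target_density[idx] / counts[idx]
    w[~np.isfinite(w)] = 0.0  # guard against empty bins
    return w

rng = np.random.default_rng(2)
t_sample = rng.exponential(0.2, 5000)       # stand-in for a simulated |t| sample
edges = np.linspace(0.0, 1.0, 11)
target = np.exp(-edges[:-1] / 0.05)         # steeper, diffractive-like |t| shape
target /= (target * np.diff(edges)).sum()   # normalise to a density
weights = reweight_to_target(t_sample, edges, target)
```

The weighted histogram of the sample then reproduces the target density exactly (up to floating-point error) in every populated bin.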
ERIC Educational Resources Information Center
Shen, Jianping; Leslie, Jeffrey M.; Spybrook, Jessaca K.; Ma, Xin
2012-01-01
Using nationally representative samples for public school teachers and principals, the authors inquired into whether principal background and school processes are related to teacher job satisfaction. Employing hierarchical linear modeling (HLM), the authors were able to control for background characteristics at both the teacher and school levels.…
Processing, Properties and Arc Jet Testing of HfB2/SiC
NASA Technical Reports Server (NTRS)
Johnson, Sylvia M.; Beckman, Sarah; Irby, Edward; Ellerby, Don; Gasch, Matt; Gusman, Michael
2004-01-01
Contents include the following: Background on Ultra High Temperature Ceramics (UHTCs). Summary of UHTC processing: powder processing, scale-up. Preliminary material properties: mechanical, thermal. Arc jet testing: flat-face models, cone models. Summary.
Stacked Multilayer Self-Organizing Map for Background Modeling.
Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun
2015-09-01
In this paper, a new background modeling method called the stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which offers strong representative ability for complex scenarios and is easy to use. In order to enhance the representative ability of the background model and to learn the parameters automatically, the recently developed idea of representation learning (deep learning) is employed to extend the existing single-layer self-organizing map background model to a multilayer one, the proposed SMSOM-BM. As a consequence, the SMSOM-BM can learn background models of challenging scenarios and determines most network parameters automatically. More specifically, every pixel is modeled by an SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance, we have implemented the proposed method on the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach.
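A single-layer, single-pixel SOM background model (the building block that SMSOM-BM stacks into layers) can be sketched as follows; the node count, learning rate, and match radius are assumed values, not the paper's learned parameters:

```python
import numpy as np

class PixelSOM:
    """Minimal single-layer SOM over one pixel's intensity history, a toy
    stand-in for one unit of the multilayer model described above."""

    def __init__(self, n_nodes=5, lr=0.3, eps=15.0):
        self.w = np.linspace(0.0, 255.0, n_nodes)  # codebook initialisation
        self.lr, self.eps = lr, eps

    def update(self, x):
        bmu = int(np.argmin(np.abs(self.w - x)))    # best-matching unit
        self.w[bmu] += self.lr * (x - self.w[bmu])  # pull BMU toward the sample
        for nb in (bmu - 1, bmu + 1):               # weaker neighbour update
            if 0 <= nb < len(self.w):
                self.w[nb] += 0.5 * self.lr * (x - self.w[nb])
        return bmu

    def is_background(self, x):
        return bool(np.min(np.abs(self.w - x)) <= self.eps)

som = PixelSOM()
for _ in range(50):
    som.update(120.0)              # train on a stable background intensity
bg_ok = som.is_background(122.0)   # near the learned mode
fg_hit = som.is_background(230.0)  # bright foreground object
```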
Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan
2017-08-01
We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.
ERIC Educational Resources Information Center
Luguetti, Carla; Oliver, Kimberly L.; Dantas, Luiz E. P. B. T.; Kirk, David
2017-01-01
Purpose: This study discusses the process of co-constructing a prototype pedagogical model for working with youth from socially vulnerable backgrounds. Participants and settings: This six-month activist research project was conducted in a soccer program in a socially vulnerable area of Brazil in 2013. The study included 17 youths, 4 coaches, a…
Forest forming process and dynamic vegetation models under global change
A. Shvidenko; E. Gustafson
2009-01-01
The paper analyzes mathematical models that are used to project the dynamics of forest ecosystems on different spatial and temporal scales. Landscape disturbance and succession models (LDSMs) are of a particular interest for studying the forest forming process in Northern Eurasia. They have a solid empirical background and are able to model ecological processes under...
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models
Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertainty…
Reference analysis of the signal + background model in counting experiments
NASA Astrophysics Data System (ADS)
Casadei, D.
2012-01-01
The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. Under the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
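The counting setup can be illustrated numerically. The sketch below uses a flat prior on the signal for brevity, not the reference prior the paper derives, and computes a Bayesian credible upper limit on a grid:

```python
import math

def posterior_grid(n, b, s_max=30.0, n_grid=3000):
    """Normalised posterior for the signal s given n observed counts and a
    known background b, with a FLAT prior on s (the paper derives a
    reference prior instead; flat keeps this sketch short)."""
    ds = s_max / n_grid
    s = [(i + 0.5) * ds for i in range(n_grid)]
    logpost = [n * math.log(si + b) - (si + b) for si in s]
    m = max(logpost)
    post = [math.exp(lp - m) for lp in logpost]  # subtract max for stability
    z = sum(post) * ds
    return s, [p / z for p in post]

def upper_limit(n, b, cl=0.95):
    """Smallest s_up with posterior probability cl below it."""
    s, post = posterior_grid(n, b)
    ds = s[1] - s[0]
    acc = 0.0
    for si, pi in zip(s, post):
        acc += pi * ds
        if acc >= cl:
            return si
    return s[-1]

limit = upper_limit(n=3, b=1.0)  # e.g. 3 events observed over b = 1 expected
```

With b = 0 the flat-prior posterior is a Gamma(n+1) density, so the 95% limit for n = 3 lands near 7.75; a larger known background pulls the limit down, as expected.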
Generative electronic background music system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazurowski, Lukasz
In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is located between other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described in the paper. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models: a host model. The general mechanism is presented, including examples of synthesized output compositions.
Advanced GaAs Process Modeling. Volume 1
1989-05-01
Subject terms: Gallium Arsenide, MESFET, Process… Contents include: Background; Model Calculations; Conclusions; Ion-Implantation into GaAs Profile Determination (Background; Experimental Measurements; Results: Ion-Energy Dependence, Tilt and Rotation).
Poisson mixture model for measurements using counting.
Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz
2010-03-01
Starting with the basic Poisson statistical model of a counting measurement process, extra-Poisson variance ('overdispersion') is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included and is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo simulation. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
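The gross-count alarm point mentioned above can be computed directly from the Poisson tail, as in this sketch (standard counting statistics, not the paper's correlated-background formulas):

```python
import math

def poisson_sf(k, mean):
    """P(N >= k) for N ~ Poisson(mean), via the complementary CDF."""
    term = math.exp(-mean)  # P(N = 0)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mean / (i + 1)  # recurrence P(N = i+1) = P(N = i) * mean/(i+1)
    return 1.0 - cdf

def decision_level(background_mean, alpha=0.05):
    """Smallest gross-count alarm point whose false-positive rate under
    the background-only hypothesis does not exceed alpha."""
    k = 0
    while poisson_sf(k, background_mean) > alpha:
        k += 1
    return k

dl = decision_level(10.0, alpha=0.05)  # alarm point for a mean background of 10 counts
```

Because counts are discrete, the achieved false-positive rate is generally below the requested alpha rather than equal to it.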
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images for related research.
RenderView: physics-based multi- and hyperspectral rendering using measured background panoramics
NASA Astrophysics Data System (ADS)
Talcott, Denise M.; Brown, Wade W.; Thomas, David J.
2003-09-01
As part of the survivability engineering process it is necessary to accurately model and visualize vehicle signatures in the multi- or hyperspectral bands of interest. The signature at a given wavelength is a function of the surface optical properties, reflection of the background and, in the thermal region, the emission of thermal radiation. Currently, it is difficult to obtain and utilize background models of sufficient fidelity relative to the vehicle models; in addition, background models introduce an additional layer of uncertainty in estimating the vehicle's signature. Therefore, to meet exacting rendering requirements we have developed RenderView, which incorporates the full bidirectional reflectance distribution function (BRDF). Instead of using a modeled background, we incorporate a measured, calibrated background panoramic image to provide high-fidelity background interaction. Uncertainty in the background signature is reduced to the measurement error, which is considerably smaller than the uncertainty inherent in a modeled background. RenderView supports a number of different descriptions of the BRDF, including the Sandford-Robertson model. In addition, it provides complete conservation of energy with off-axis sampling. A description of RenderView is presented along with a methodology developed for collecting background panoramics. Examples of the RenderView output and the background panoramics are presented along with our approach to handling the solar irradiance problem.
Physics at a 100 TeV pp Collider: Standard Model Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mangano, M. L.; Zanderighi, G.; Aguilar Saavedra, J. A.
This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.
Synthetic aperture radar and digital processing: An introduction
NASA Technical Reports Server (NTRS)
Dicenzo, A.
1981-01-01
A tutorial on synthetic aperture radar (SAR) is presented, with emphasis on digital data collection and processing. Background information on waveform frequency and phase notation, mixing, I/Q conversion, sampling, and cross-correlation operations is included for clarity. The fate of a SAR signal from transmission to processed image is traced in detail, using the model of a single bright point target against a dark background. Some of the principal problems connected with SAR processing are also discussed.
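The single-point-target model lends itself to a compact demonstration of the key processing step, pulse compression by cross-correlation with the transmitted chirp (illustrative parameters, complex baseband):

```python
import numpy as np

def chirp(n, bandwidth, duration):
    """Baseband linear-FM (chirp) pulse sampled at n points."""
    t = np.linspace(-duration / 2, duration / 2, n, endpoint=False)
    k = bandwidth / duration  # chirp rate (Hz per second)
    return np.exp(1j * np.pi * k * t ** 2)

# A single bright point target against a dark background: the received
# signal is a delayed copy of the transmitted chirp.
n, delay = 128, 300
tx = chirp(n, bandwidth=1.0e6, duration=1.0e-4)
rx = np.zeros(1024, dtype=complex)
rx[delay:delay + n] = tx  # echo from the point target

# Pulse compression: cross-correlate with the transmitted replica
# (np.correlate conjugates its second argument, i.e. a matched filter).
compressed = np.abs(np.correlate(rx, tx, mode="valid"))
peak = int(np.argmax(compressed))
```

The compressed output peaks exactly at the target delay, turning the long chirp echo back into a sharp point response.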
Rocks in the River: The Challenge of Piloting the Inquiry Process in Today's Learning Environment
ERIC Educational Resources Information Center
Lambusta, Patrice; Graham, Sandy; Letteri-Walker, Barbara
2014-01-01
School librarians in Newport News, Virginia, are meeting the challenges of integrating an Inquiry Process Model into instruction. In the original model the process began by asking students to develop questions to start their inquiry journey. As this model was taught it was discovered that students often did not have enough background knowledge to…
Parametric models to relate spike train and LFP dynamics with neural information processing.
Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan
2012-01-01
Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. 
We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.
ERIC Educational Resources Information Center
Scheiter, Katharina; Schubert, Carina; Schüler, Anne
2018-01-01
Background: When learning with text and pictures, learners often fail to adequately process the materials, which can be explained as a failure to self-regulate one's learning by choosing adequate cognitive learning processes. Eye movement modelling examples (EMME) showing how to process multimedia instruction have improved elementary school…
Evaluation of a Theory of Instructional Sequences for Physics Instruction
ERIC Educational Resources Information Center
Wackermann, Rainer; Trendel, Georg; Fischer, Hans E.
2010-01-01
The background of the study is the theory of "basis models of teaching and learning", a comprehensive set of models of learning processes which includes, for example, learning through experience and problem-solving. The combined use of different models of learning processes has not been fully investigated and it is frequently not clear…
NASA Astrophysics Data System (ADS)
Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.
2018-06-01
A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES) based on an analytical model is presented. The calculation model considers spurious radiation originating from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), analysed energy and radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the backdiffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π / 2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. The present results show that under usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, in either IXS or XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
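The background-estimation and difference-image step described in this abstract can be sketched as follows. A box mean filter stands in for the unspecified linear filter, and the filter size and synthetic ramp image are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def estimate_background(image, size=31):
    """Estimate a slowly varying background with a local box-mean filter.
    A box filter stands in for the paper's unspecified linear filter;
    the window size is an illustrative choice."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# synthetic inhomogeneous image: a bright object on an illumination ramp
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
image = 0.5 * xx / w                      # inhomogeneous background
image[24:40, 24:40] += 1.0                # the object

background = estimate_background(image)
difference = image - background           # input to the global fitting term
# the object stands out in the difference image despite the ramp
print(difference[32, 32] > difference[5, 5])
```

The global region term then operates on `difference`, which is far less sensitive to the ramp than the raw image.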
In search of lonely top quarks at the Fermilab Tevatron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowen, Matthew T.; Ellis, Stephen D.; Strassler, Matthew J.
2005-10-01
Single top-quark production, via weak-interaction processes, is an important test of the standard model, potentially sensitive to new physics. However, this measurement is proving much more challenging at the Tevatron than originally expected. We reexamine this process and suggest new methods, using shape variables, that can supplement the methods that have been discussed previously. In particular, by focusing on correlations and asymmetries, we can reduce backgrounds substantially without low acceptance for the signal. Our method also allows for a self-consistency check on the modeling of the backgrounds. However, at the present time, serious systematic problems remain, especially concerning the background from W-plus-jets; these must be studied further by experimentalists and theorists alike.
The Genealogical Consequences of Fecundity Variance Polymorphism
Taylor, Jesse E.
2009-01-01
The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628
ERIC Educational Resources Information Center
van Nieuwenhuijzen, M.; de Castro, B. O.; van der Valk, I.; Wijnroks, L.; Vermeer, A.; Matthys, W.
2006-01-01
Background: This study aimed to examine whether the social information-processing model (SIP model) applies to aggressive behaviour by children with mild intellectual disabilities (MID). The response-decision element of SIP was expected to be unnecessary to explain aggressive behaviour in these children, and SIP was expected to mediate the…
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
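The sequential test at the heart of this patent can be sketched for Poisson count data. The rates, error bounds, and count streams below are illustrative assumptions, not values from the patent:

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for Poisson counts.
    lam0: expected background counts per interval (H0);
    lam1: elevated-source alternative (H1)."""
    upper = math.log((1 - beta) / alpha)   # decide "source present"
    lower = math.log(beta / (1 - alpha))   # decide "background only"
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Poisson log-likelihood-ratio increment for k counts
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)

# a steady background stream and an elevated stream: 10 vs 22 counts/interval
decision_bg, n_bg = sprt_poisson([10] * 50, lam0=10.0, lam1=20.0)
decision_src, n_src = sprt_poisson([22] * 50, lam0=10.0, lam1=20.0)
print(decision_bg, n_bg)    # -> background 2
print(decision_src, n_src)  # -> source 1
```

The test terminates as soon as the evidence crosses either threshold, which is what maximizes detection range for a given false-alarm rate.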
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on this 2-D STFT spectrum modeling is proposed. Experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
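A minimal sketch of the block-wise spectral analysis: a windowed 2-D FFT per sub-block stands in for the paper's 2-D STFT, and the synthetic sea/ship scene, block size, and energy statistic are assumptions made here for illustration:

```python
import numpy as np

def block_spectra(image, block=16):
    """Windowed 2-D FFT magnitude per non-overlapping sub-block,
    a simple stand-in for the paper's 2-D STFT (window choice and
    zero overlap are simplifying assumptions)."""
    win = np.hanning(block)[:, None] * np.hanning(block)[None, :]
    specs = {}
    for r in range(0, image.shape[0] - block + 1, block):
        for c in range(0, image.shape[1] - block + 1, block):
            tile = image[r:r + block, c:c + block]
            specs[(r, c)] = np.abs(np.fft.fft2(tile * win))
    return specs

rng = np.random.default_rng(42)
scene = rng.normal(0.0, 0.05, size=(64, 64))   # sea clutter
scene[20:26, 36:42] += 2.0                     # a bright ship-like target
specs = block_spectra(scene)

def nondc_energy(spectrum):
    s = spectrum.copy()
    s[0, 0] = 0.0                              # discard the DC term
    return s.sum()

ship_energy = nondc_energy(specs[(16, 32)])    # sub-block containing the target
sea_energy = nondc_energy(specs[(0, 0)])       # pure sea sub-block
print(ship_energy > sea_energy)                # the spectra separate the two
```

A statistical model of `nondc_energy` (or of individual frequency points) over known sea blocks then yields a detection threshold for the remaining blocks.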
A biological hierarchical model based underwater moving object detection.
Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen
2014-01-01
Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the exceptional visual sensing abilities of underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adoption in underwater applications. To solve the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method performs well: compared with the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
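The traditional per-pixel Gaussian background model used as the baseline in this comparison can be sketched as follows; the learning rate, threshold, and synthetic frames are illustrative choices:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel Gaussian background model, the classical baseline the
    paper compares against. Learning rate and threshold are illustrative."""
    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.alpha = alpha          # exponential learning rate
        self.k = k                  # threshold in standard deviations

    def apply(self, frame):
        d = frame - self.mean
        foreground = d ** 2 > (self.k ** 2) * self.var
        # adapt the model only where the scene looks like background
        self.mean += self.alpha * d * ~foreground
        self.var += self.alpha * (d ** 2 - self.var) * ~foreground
        return foreground

rng = np.random.default_rng(1)
model = GaussianBackground((32, 32))
for _ in range(100):                            # train on static, noisy frames
    model.apply(0.5 + rng.normal(0.0, 0.02, (32, 32)))

frame = 0.5 + rng.normal(0.0, 0.02, (32, 32))   # new frame with a bright object
frame[10:16, 10:16] = 1.0
mask = model.apply(frame)
print(bool(mask[12, 12]), bool(mask[0, 0]))     # -> True False
```

The paper's hierarchical model replaces this single intensity test with a block-level intensity stage followed by a pixel-level texture stage.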
Real-time understanding of lignocellulosic bioethanol fermentation by Raman spectroscopy
2013-01-01
Background: A substantial barrier to commercialization of lignocellulosic ethanol production is a lack of process-specific sensors and associated control strategies that are essential for economic viability. Current sensors and analytical techniques require lengthy offline analysis or are easily fouled in situ. Raman spectroscopy has the potential to continuously monitor fermentation reactants and products, maximizing efficiency and allowing for improved process control. Results: In this paper we show that glucose and ethanol in a lignocellulosic fermentation can be accurately monitored by a 785 nm Raman spectroscopy instrument and novel immersion probe, even in the presence of an elevated background thought to be caused by lignin-derived compounds. Chemometric techniques were used to reduce the background before generating calibration models for glucose and ethanol concentration. The models show very good correlation between the real-time Raman spectra and the offline HPLC validation. Conclusions: Our results show that the changing ethanol and glucose concentrations during lignocellulosic fermentation processes can be monitored in real-time, allowing for optimization and control of large scale bioconversion processes. PMID:23425590
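The background-reduction-plus-calibration pipeline can be illustrated on a toy spectrum. Polynomial baseline fitting and band integration are common chemometric stand-ins (the paper's exact method is not specified here), and all band positions, backgrounds, and concentrations below are synthetic:

```python
import numpy as np

def subtract_baseline(spectrum, degree=3):
    """Remove a smooth fluorescence-like background by polynomial
    fitting, a common chemometric baseline method."""
    x = np.linspace(0.0, 1.0, spectrum.size)
    coeffs = np.polyfit(x, spectrum, degree)
    return spectrum - np.polyval(coeffs, x)

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 200)
band = np.exp(-0.5 * ((x - 0.4) / 0.01) ** 2)    # toy "ethanol" Raman band
concentrations = np.array([1.0, 2.0, 5.0, 8.0, 10.0])

areas = []
for c in concentrations:
    broad_bg = 3.0 + 2.0 * x - 1.5 * x ** 2      # lignin-like broad background
    spectrum = c * band + broad_bg + rng.normal(0.0, 0.01, x.size)
    areas.append(subtract_baseline(spectrum)[72:88].sum())  # band integration

# linear calibration: corrected band area -> concentration
slope, intercept = np.polyfit(areas, concentrations, 1)
pred = slope * areas[2] + intercept              # should recover roughly 5.0
print(round(pred, 2))
```

Because the broad background is captured by the low-order polynomial, the corrected band area stays linear in concentration, which is what makes the calibration model work.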
A new level set model for cell image segmentation
NASA Astrophysics Data System (ADS)
Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun
2011-02-01
In this paper we first determine three phases of cell images, background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
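Otsu's algorithm, used above to initialize the level set function, can be sketched on a synthetic two-phase cell-like image; the image and the signed initialization are illustrative:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: pick the threshold maximizing between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# synthetic "cell" image: dark background, brighter cytoplasm-like blob
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.02, (64, 64))
img[20:44, 20:44] = rng.normal(0.8, 0.02, (24, 24))

t = otsu_threshold(img)
phi0 = np.where(img > t, 1.0, -1.0)       # signed initial level set function
print(0.2 < t < 0.8, phi0[32, 32], phi0[0, 0])
```

The thresholded result gives the level set evolution a starting contour already close to the phase boundaries, which is what speeds up convergence.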
Reduction of Background Noise in the NASA Ames 40- by 80-Foot Wind Tunnel
NASA Technical Reports Server (NTRS)
Jaeger, Stephen M.; Allen, Christopher S.; Soderman, Paul T.; Olson, Larry E. (Technical Monitor)
1995-01-01
Background noise in both open-jet and closed wind tunnels adversely affects the signal-to-noise ratio of acoustic measurements. To measure the noise of increasingly quieter aircraft models, the background noise will have to be reduced by physical means or through signal processing. In a closed wind tunnel, such as the NASA Ames 40- by 80-Foot Wind Tunnel, the principal background noise sources can be classified as: (1) fan drive noise; (2) microphone self-noise; (3) aerodynamically induced noise from test-dependent hardware such as model struts and junctions; and (4) noise from the test section walls and vane set. This paper describes the steps taken to minimize the influence of each of these background noise sources in the 40 x 80.
Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J
2018-01-01
Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
Heterogeneity effects in visual search predicted from the group scanning model.
Macquistan, A D
1994-12-01
The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.
ERIC Educational Resources Information Center
Pettersson, Rune
This paper discusses a mental model of learning based on the processes of attention, perception, processing, and application. The learning process starts with attention, such as curiosity, excitement, expectation, or fear; in pedagogy this is called motivation. New impressions are dependent on and interpreted against the background of previous…
Evidence for the associated production of a W boson and a top quark at ATLAS
NASA Astrophysics Data System (ADS)
Koll, James
This thesis discusses a search for the Standard Model single top Wt-channel process. An analysis has been performed searching for the Wt-channel process using 4.7 fb⁻¹ of integrated luminosity collected with the ATLAS detector at the Large Hadron Collider. A boosted decision tree is trained using machine learning techniques to increase the separation between signal and background. A profile likelihood fit is used to measure the cross-section of the Wt-channel process as σ(pp → Wt + X) = 16.8 ± 2.9 (stat) ± 4.9 (syst) pb, consistent with the Standard Model prediction. This fit is also used to generate pseudoexperiments to calculate the significance, finding an observed (expected) 3.3σ (3.4σ) excess over background.
ERIC Educational Resources Information Center
De Corte, Erik; Verschaffel, Lieven
Design and results of an investigation attempting to analyze and improve children's solution processes in elementary addition and subtraction problems are described. As background for the study, a conceptual model was developed based on previous research. One dimension of the model relates to the characteristics of the tasks (numerical versus word…
ERIC Educational Resources Information Center
Tichnor-Wagner, Ariel; Allen, Danielle; Socol, Allison Rose; Cohen-Vogel, Lora; Rutledge, Stacey A.; Xing, Qi W.
2018-01-01
Background/Context: This study examines the implementation of an academic and social-emotional learning innovation called Personalization for Academic and Social-Emotional Learning, or PASL. The innovation was designed, tested, and implemented using a continuous-improvement model. The model emphasized a top-and-bottom process in which…
Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying
ERIC Educational Resources Information Center
Duong, Jeffrey; Bradshaw, Catherine P.
2013-01-01
Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…
NASA Technical Reports Server (NTRS)
Gallagher, Dennis
2018-01-01
Outline - Inner Magnetosphere Effects: Historical Background; Main regions and transport processes: Ionosphere, Plasmasphere, Plasma sheet, Ring current, Radiation belt; Geomagnetic Activity: Storms, Substorm; Models.
Re-Evaluation of Development of the TMDL Using Long-Term Monitoring Data and Modeling
NASA Astrophysics Data System (ADS)
Squires, A.; Rittenburg, R.; Boll, J.; Brooks, E. S.
2012-12-01
Since 1996, 47,979 Total Maximum Daily Loads (TMDLs) have been approved throughout the United States for impaired water bodies. TMDLs are set through the determination of natural background loads for a given water body which then estimate contributions from point and nonpoint sources to create load allocations and determine acceptable pollutant levels to meet water quality standards. Monitoring data and hydrologic models may be used in this process. However, data sets used are often limited in duration and frequency, and model simulations are not always accurate. The objective of this study is to retrospectively look at the development and accuracy of the TMDL for a stream in an agricultural area using long-term monitoring data and a robust modeling process. The study area is the Paradise Creek Watershed in northern Idaho. A sediment TMDL was determined for the Idaho section of Paradise Creek in 1997. Sediment TMDL levels were determined using a short-term data set and the Water Erosion Prediction Project (WEPP) model. Background loads used for the TMDL in 1997 were from pre-agricultural levels, based on WEPP model results. We modified the WEPP model for simulation of saturation excess overland flow, the dominant runoff generation mechanism, and analyzed more than 10 years of high resolution monitoring data from 2001 - 2012, including discharge and total suspended solids. Results will compare background loading and current loading based on present-day land use documented during the monitoring period and compare previous WEPP model results with the modified WEPP model results. This research presents a reevaluation of the TMDL process with recommendations for a more scientifically sound methodology to attain realistic water quality goals.
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of the grey prediction model and the Markov chain, and it shows clear optimization effects for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
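The GM(1,1) core of the Grey-Markov model can be sketched with an adjustable background-value weight. The classical model uses weight 0.5; treating the weight as tunable is one simple form of background value optimization, not necessarily the paper's scheme, and the series values are illustrative, not Henan grain data:

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1, lam=0.5):
    """GM(1,1) grey forecasting. lam is the background-value weight:
    0.5 gives the classical mean-generating background; other values
    implement a simple background value optimization."""
    x1 = np.cumsum(x0)                            # accumulated series
    z = lam * x1[1:] + (1.0 - lam) * x1[:-1]      # background values z(k)
    B = np.column_stack([-z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# illustrative near-exponential production series
x0 = np.array([100.0, 108.0, 117.0, 126.5, 136.8])
pred = gm11_forecast(x0, n_ahead=1)
print(round(pred[-1], 1))    # one-step-ahead forecast
```

The Markov layer then corrects the residuals of this exponential trend by state transitions, which is where the whitenization-based state division enters.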
Aaltonen, T.
2015-03-17
Production of the Υ(1S) meson in association with a vector boson is a rare process in the standard model with a cross section predicted to be below the sensitivity of the Tevatron. Observation of this process could signify contributions not described by the standard model or reveal limitations of the current nonrelativistic quantum-chromodynamic models used to calculate the cross section. We perform a search for this process using the full Run II data set collected by the CDF II detector, corresponding to an integrated luminosity of 9.4 fb⁻¹. Our search considers the Υ→μμ decay and the decay of the W and Z bosons into muons and electrons. In these purely leptonic decay channels, we observe one ΥW candidate with an expected background of 1.2±0.5 events, and one ΥZ candidate with an expected background of 0.1±0.1 events. Both observations are consistent with the predicted background contributions. The resulting upper limits on the cross section for Υ+W/Z production are the most sensitive reported from a single experiment and place restrictions on potential contributions from non-standard-model physics.
Background: Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, the GI models are generally relatively simplistic. However,...
Chen, Yung-Yue
2018-05-08
Mobile devices are often used in our daily lives for speech and communication. The speech quality of mobile devices is always degraded by the environmental noise surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noise on the speech communication of mobile devices. This methodology integrates a dual-microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual-microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
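The normal-exponential background correction that this work generalizes has a closed form: E[signal | observed] is the mean of a normal distribution truncated to positive values. A simulation sketch, with all distribution parameters illustrative rather than taken from real BeadArray data:

```python
import math
import numpy as np

def normexp_correct(x, mu, sigma, alpha):
    """Background correction under the normal-exponential convolution
    model: E[signal | observed], assuming
    observed = Exponential(mean alpha) signal + Normal(mu, sigma^2) noise."""
    mu_sx = x - mu - sigma ** 2 / alpha
    z = mu_sx / sigma
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    return mu_sx + sigma * phi / Phi       # truncated-normal mean

rng = np.random.default_rng(0)
n = 100_000
signal = rng.exponential(50.0, n)          # true (unobserved) signal
noise = rng.normal(100.0, 10.0, n)         # normally distributed background
obs = signal + noise

corrected = normexp_correct(obs, mu=100.0, sigma=10.0, alpha=50.0)
naive = obs - 100.0                        # plain background subtraction

# the model-based correction never produces the negative values that
# plain subtraction does, and it recovers the signal mean
print((corrected > 0).all(), (naive < 0).any(), round(corrected.mean(), 1))
```

Replacing the exponential signal prior with a gamma prior, as this paper does, changes the conditional expectation but keeps the same correct-by-construction positivity.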
ERIC Educational Resources Information Center
Hale, William W., III; Raaijmakers, Quinten A. W.; Muris, Peter; van Hoof, Anne; Meeus, Wim H. J.
2009-01-01
Background: This study investigates whether anxiety and depressive disorder symptoms of adolescents from the general community are best described by a model that assumes they are indicative of one general factor or by a model that assumes they are two distinct disorders with parallel growth processes. Additional analyses were conducted to explore…
Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C
2016-02-01
A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is modelled here by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, namely the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and the corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of the Li line without a separate background subtraction through modulation of the Li beam.
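The analytic Bayesian linear inversion for a Gaussian prior and Gaussian likelihood can be sketched on a toy spectrum with one line plus a flat background; the forward matrix, prior widths, and noise level are assumptions for illustration, not the JET system's:

```python
import numpy as np

def gaussian_linear_inversion(F, y, prior_mean, prior_cov, noise_cov):
    """Analytic posterior for the linear-Gaussian model
    y = F x + e, with e ~ N(0, noise_cov) and x ~ N(prior_mean, prior_cov)."""
    S = F @ prior_cov @ F.T + noise_cov
    K = prior_cov @ F.T @ np.linalg.inv(S)
    post_mean = prior_mean + K @ (y - F @ prior_mean)
    post_cov = prior_cov - K @ F @ prior_cov
    return post_mean, post_cov

# toy detected spectrum: one Gaussian "Li line" on a flat Bremsstrahlung level
wl = np.linspace(-1.0, 1.0, 50)
line_shape = np.exp(-0.5 * (wl / 0.1) ** 2)          # stand-in instrumental function
F = np.column_stack([line_shape, np.ones_like(wl)])  # unknowns: [line, background]

rng = np.random.default_rng(5)
true_x = np.array([4.0, 1.5])
y = F @ true_x + rng.normal(0.0, 0.1, wl.size)

post_mean, post_cov = gaussian_linear_inversion(
    F, y, prior_mean=np.zeros(2), prior_cov=100.0 * np.eye(2),
    noise_cov=0.1 ** 2 * np.eye(wl.size))
print(np.round(post_mean, 1))    # line intensity and background inferred jointly
```

Because the line and background amplitudes are inferred jointly, no separate background subtraction step is needed, and `post_cov` gives the uncertainties directly.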
Background noise model development for seismic stations of KMA
NASA Astrophysics Data System (ADS)
Jeon, Youngsoo
2010-05-01
Background noise recorded by a seismometer is present in any seismic signal, owing to natural phenomena in the medium through which the signal passes. Reducing seismic noise is very important for improving data quality in seismic studies, but the most important step in reducing seismic noise is to find an appropriate site before installing the seismometer. For this reason, NIMR (National Institute of Meteorological Research) started to develop a standard background noise model for the broadband seismic stations of the KMA (Korea Meteorological Administration) using a continuous data set obtained from 13 broadband stations during 2007 and 2008. We also developed a model using short-period seismic data from 10 stations for the year 2009. The method of McNamara and Buland (2004) is applied to analyse the background noise of the Korean Peninsula. The fact that borehole seismometer records show lower noise levels at frequencies above 1 Hz than records at the surface indicates that the cultural noise of the inland Korean Peninsula should be considered when processing the seismic data set. The double-frequency peak should also be taken into account because the Korean Peninsula is surrounded by sea to the east, west, and south. The development of the KMA background model shows that the Peterson (1993) model does not fit the background noise generated in the Korean Peninsula.
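The core PSD computation behind such station noise models can be sketched with a Welch-style averaged periodogram; the sampling rate, record length, and microseism surrogate below are synthetic, and the overlap and window choices are simplified relative to McNamara and Buland's processing:

```python
import numpy as np

def welch_psd(x, fs, nperseg=1024):
    """Averaged-periodogram (Welch) PSD estimate in dB: Hann-windowed,
    50%-overlapping segments, one-sided spectrum."""
    win = np.hanning(nperseg)
    norm = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg]
            for i in range(0, len(x) - nperseg + 1, nperseg // 2)]
    psd = np.zeros(nperseg // 2 + 1)
    for s in segs:
        X = np.fft.rfft((s - s.mean()) * win)   # demean each segment
        psd += (np.abs(X) ** 2) / norm
    psd /= len(segs)
    psd[1:-1] *= 2.0                            # one-sided scaling
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, 10.0 * np.log10(psd)

# synthetic "station" record: white sensor noise plus a 0.2 Hz microseism tone
fs = 20.0
t = np.arange(0.0, 600.0, 1.0 / fs)
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1e-7, t.size) + 5e-7 * np.sin(2 * np.pi * 0.2 * t)

freqs, psd_db = welch_psd(x, fs)
peak_f = freqs[np.argmax(psd_db)]
print(round(peak_f, 2))    # the microseism band dominates the PSD
```

A station noise model is then built from the statistics (e.g. probability density) of many such PSD estimates across time.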
Therapeutic Enactment: Integrating Individual and Group Counseling Models for Change
ERIC Educational Resources Information Center
Westwood, Marvin J.; Keats, Patrice A.; Wilensky, Patricia
2003-01-01
The purpose of this article is to introduce the reader to a group-based therapy model known as therapeutic enactment. A description of this multimodal change model is provided by outlining the relevant background information, key concepts related to specific change processes, and the differences in this model compared to earlier psychodrama…
Information Flow in an Atmospheric Model and Data Assimilation
ERIC Educational Resources Information Center
Yoon, Young-noh
2011-01-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background…
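The background/analysis cycle described in this abstract can be illustrated with a scalar Kalman filter: the model integration turns the previous analysis into the next background, and the analysis blends the background with a new observation. The model, error variances, and all numbers below are illustrative, not those of any operational system.

```python
import numpy as np

def assimilation_cycle(x0, p0, obs, q=0.1, r=0.5):
    """Scalar background/analysis cycle.

    x0, p0 : initial analysis state and its error variance
    obs    : sequence of observations, one per cycle
    q, r   : model-error and observation-error variances
    """
    xa, pa = x0, p0
    analyses = []
    for y in obs:
        # Model integration: analysis -> background at the next cycle time.
        xb = 0.9 * xa + 1.0               # toy linear forecast model
        pb = (0.9 ** 2) * pa + q          # error propagation for that model
        # Analysis: combine background with the observation.
        k = pb / (pb + r)                 # Kalman gain
        xa = xb + k * (y - xb)
        pa = (1.0 - k) * pb
        analyses.append(xa)
    return np.array(analyses)
```

With repeated observations of the same value, the analyses converge toward that value, illustrating how the cycle pulls the background toward the data.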
The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.
Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M
2005-08-15
The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KPs is estimated such that it approximates the spatiotemporal sample covariance best in matrix norm. Contrary to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms and hence play only a minor role in the sample covariance matrix in terms of matrix power. Considering dipole localization, the single KP model appears to describe around 80% of the noise and therefore seems adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
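The best sum-of-Kronecker-products approximation in Frobenius norm can be computed with Van Loan's rearrangement trick: reorder the covariance so that each KP term becomes a rank-one term, then truncate the SVD. A sketch, assuming square spatial (m×m) and temporal (p×p) factors; the function names are illustrative.

```python
import numpy as np

def sum_of_kronecker(C, m, p, k):
    """Approximate C (mp x mp) by sum_{t<k} A_t kron B_t (A_t: m x m, B_t: p x p).

    Van Loan rearrangement: the best rank-k sum of Kronecker products in
    Frobenius norm corresponds to the rank-k truncated SVD of the
    rearranged matrix R(C), whose rows are the flattened p x p blocks of C.
    """
    R = np.empty((m * m, p * p))
    for i in range(m):
        for j in range(m):
            R[i * m + j] = C[i * p:(i + 1) * p, j * p:(j + 1) * p].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for t in range(k):
        A = np.sqrt(s[t]) * U[:, t].reshape(m, m)
        B = np.sqrt(s[t]) * Vt[t].reshape(p, p)
        terms.append((A, B))
    return terms

def reconstruct(terms):
    """Rebuild the covariance approximation from the list of KP factors."""
    return sum(np.kron(A, B) for A, B in terms)
```

The squared singular values of R(C) give the fraction of the sample covariance (in matrix power) captured by each KP term, which is how figures like "the first two terms describe 67% to 93%" can be computed.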
Contexts of Congressional Decision Behavior
1979-01-01
This research design also employed controls for member backgrounds and constituency characteristics, with comparisons across member type, issue dimension measured, and data source. [Table-of-contents fragments: contextual theory; the integration of contending models of the legislative process; the transcendence of "static" research designs; issue characteristics; communications controlled by background and constituency factors; summary.]
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster, and their properties, for all three cases in which the process is subcritical, critical, or supercritical. One direct use of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshock and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events with one or more larger descendants is found to be as high as about 15%. When the differences between background and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background and triggered events differ, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.
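As a rough illustration of the cluster statistics analyzed here, the sketch below simulates a simplified magnitude-dependent branching process (Gutenberg-Richter magnitudes, Poisson offspring counts with exponentially magnitude-dependent productivity) and estimates the fraction of background events with a larger descendant. The parameter values are illustrative, not the fitted ETAS parameters of the study.

```python
import numpy as np

def simulate_cluster(rng, b=1.0, alpha=0.8, k=0.02, m_min=3.0, m_max=9.0):
    """Simulate one cluster of a magnitude-dependent branching process.

    Magnitudes follow a truncated Gutenberg-Richter law; an event of
    magnitude m triggers Poisson(k * 10**(alpha*(m - m_min))) children.
    Returns (background magnitude, largest magnitude among descendants).
    """
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))

    def draw_mag():
        # Inverse-CDF sampling of the truncated exponential GR law.
        return m_min - np.log(1.0 - rng.random() * c) / beta

    m0 = draw_mag()
    frontier, largest_child, n = [m0], -np.inf, 0
    while frontier and n < 10000:          # guard against runaway clusters
        m = frontier.pop()
        for _ in range(rng.poisson(k * 10.0 ** (alpha * (m - m_min)))):
            n += 1
            mc = draw_mag()
            largest_child = max(largest_child, mc)
            frontier.append(mc)
    return m0, largest_child

def foreshock_fraction(n_clusters=5000, seed=0):
    """Monte Carlo estimate of P(background event has a larger descendant)."""
    rng = np.random.default_rng(seed)
    hits = sum(1 for _ in range(n_clusters)
               if (lambda pair: pair[1] > pair[0])(simulate_cluster(rng)))
    return hits / n_clusters
```

With these toy parameters the process is subcritical (mean offspring well below one), so clusters terminate and the foreshock fraction is a small percentage, qualitatively matching the behavior described in the abstract.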
Data for Environmental Modeling (D4EM): Background and Applications of Data Automation
The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...
A Catalog of Galaxy Clusters Observed by XMM-Newton
NASA Technical Reports Server (NTRS)
Snowden, S. L.; Mushotzky, R. M.; Kuntz, K. D.; Davis, David S.
2007-01-01
Images and the radial profiles of the temperature, abundance, and brightness for 70 clusters of galaxies observed by XMM-Newton are presented along with a detailed discussion of the data reduction and analysis methods, including background modeling, which were used in the processing. Proper consideration of the various background components is vital to extend the reliable determination of cluster parameters to the largest possible cluster radii. The various components of the background including the quiescent particle background, cosmic diffuse emission, soft proton contamination, and solar wind charge exchange emission are discussed along with suggested means of their identification, filtering, and/or their modeling and subtraction. Every component is spectrally variable, sometimes significantly so, and all components except the cosmic background are temporally variable as well. The distributions of the events over the FOV vary between the components, and some distributions vary with energy. The scientific results from observations of low surface brightness objects and the diffuse background itself can be strongly affected by these background components and therefore great care should be taken in their consideration.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. 
The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
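The reflection-subtraction and fiduciary-mark alignment steps described above can be sketched as follows, with mark detection reduced to a thresholded intensity centroid; the function names and threshold are illustrative, not the commercial PIV software used in the experiment.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def remove_reflections(images, reflection_image):
    """Subtract a reflections-only image so only seeded particles remain."""
    return [np.clip(img - reflection_image, 0, None) for img in images]

def mark_centroid(image, threshold):
    """Intensity-weighted centroid of pixels above threshold
    (a stand-in for locating a painted fiduciary mark)."""
    ys, xs = np.nonzero(image > threshold)
    w = image[ys, xs]
    return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

def align_to_background(images, background, threshold=0.5):
    """Shift each image so its fiduciary-mark centroid coincides with the
    centroid found in the no-flow background image."""
    ref = mark_centroid(background, threshold)
    out = []
    for img in images:
        d = ref - mark_centroid(img, threshold)
        out.append(subpixel_shift(img, d, order=1, mode='constant'))
    return out
```

In practice several marks would be located per image and an average displacement used, but the single-centroid version shows the structure of the vibration-removal step.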
Non-stationary background intensity and Caribbean seismic events
NASA Astrophysics Data System (ADS)
Valmy, Larissa; Vaillant, Jean
2014-05-01
We consider seismic risk calculation based on models with non-stationary background intensity. The aim is to improve predictive strategies in the framework of seismic risk assessment, using models that best describe the seismic activity of the Caribbean arc. Appropriate statistical methods are required for analyzing the volumes of data collected. The focus is on calculating earthquake occurrence probabilities and analyzing the spatiotemporal evolution of these probabilities. The main modeling tool is point process theory, which takes into account the past history prior to a given date. Thus, the seismic event conditional intensity is expressed by means of the background intensity and the self-exciting component. This intensity can be interpreted as the expected event rate per unit time and/or unit surface. The most popular intensity model in seismology is the ETAS (Epidemic Type Aftershock Sequence) model, introduced and then generalized by Ogata [2, 3]. We extended this model and performed a comparison of different probability density functions for the triggered event times [4]. We illustrate our model by considering the CDSA (Centre de Données Sismiques des Antilles) catalog [1], which contains more than 7000 seismic events that occurred in the Lesser Antilles arc. Statistical tools for testing the stationarity of the background intensity and for dynamical segmentation are presented. [1] Bengoubou-Valérius M., Bazin S., Bertil D., Beauducel F. and Bosson A. (2008). CDSA: a new seismological data center for the French Lesser Antilles, Seismol. Res. Lett., 79 (1), 90-102. [2] Ogata Y. (1998). Space-time point-process models for earthquake occurrences, Annals of the Institute of Statistical Mathematics, 50 (2), 379-402. [3] Ogata, Y. (2011). Significant improvements of the space-time ETAS model for forecasting of accurate baseline seismicity, Earth, Planets and Space, 63 (3), 217-229. [4] Valmy L. and Vaillant J. (2013). 
Statistical models in seismology: Lesser Antilles arc case, Bull. Soc. géol. France, 2013, 184 (1), 61-67.
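The conditional intensity described above, lambda(t) = mu(t) + sum over past events of g(t - t_i), can be evaluated directly once a background rate and a triggering kernel are chosen. Below is a sketch with an exponential kernel and a hypothetical seasonal background; the cited work compares several triggering densities, so the kernel here is only illustrative.

```python
import numpy as np

def conditional_intensity(t, history, mu, k=0.5, beta=1.2):
    """lambda(t) = mu(t) + sum_i k * beta * exp(-beta * (t - t_i)).

    mu      : callable giving the (possibly non-stationary) background rate
    history : past event times, all strictly less than t
    k       : integral of the triggering kernel (the branching ratio)
    beta    : decay rate of the exponential triggering kernel
    """
    dt = t - np.asarray(history, dtype=float)
    return mu(t) + np.sum(k * beta * np.exp(-beta * dt))

# Hypothetical background rate with a slow seasonal modulation.
mu_seasonal = lambda t: 0.2 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 365.0))
```

Because the exponential kernel integrates to k, the parameter k plays the role of the branching ratio; testing background stationarity amounts to testing whether mu(t) can be replaced by a constant.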
Sensitivity analysis for simulating pesticide impacts on honey bee colonies
Background/Question/Methods Regulatory agencies assess risks to honey bees from pesticides through a tiered process that includes predictive modeling with empirical toxicity and chemical data of pesticides as a line of evidence. We evaluate the Varroapop colony model, proposed by...
2010-09-01
The image background includes components from external sources that change with the telescope pointing: L1(PA(tk), t), the astronomical scene background (zodiacal light, diffuse nebulae, etc.), and L2(PA(tk), t), the image background component caused by stray light. [Remaining notation in this record, including the term Bk(T, t), is garbled in extraction.]
Argumentation in Science Education: A Model-Based Framework
ERIC Educational Resources Information Center
Bottcher, Florian; Meisert, Anke
2011-01-01
The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…
A Review of Energy Models with Particular Reference to Employment and Manpower Analysis.
ERIC Educational Resources Information Center
Eckstein, Albert J.; Heien, Dale M.
To analyze the application of quantitative models to energy-employment issues, the energy problem was viewed in three distinct, but related, phases: the post-embargo shock effects, the intermediate-term process of adjustment, and the long-run equilibrium. Against this background eighteen existing energy models (government supported as well as…
The Neutral Islands during the Late Epoch of Reionization
NASA Astrophysics Data System (ADS)
Xu, Yidong; Yue, Bin; Chen, Xuelei
2018-05-01
The large-scale structure of the ionization field during the epoch of reionization (EoR) can be modeled by excursion set theory. While the growth of ionized regions during the early stage is described by the ``bubble model'', the shrinking of neutral regions after the percolation of the ionized regions calls for an ``island model''. An excursion-set-based analytical model and a semi-numerical code (islandFAST) have been developed. The ionizing background and the bubbles inside the islands are also included in the treatment. With two kinds of absorbers of ionizing photons, i.e., the large-scale underdense neutral islands and the small-scale overdense clumps, the ionizing background is evolved self-consistently in the model.
NASA Astrophysics Data System (ADS)
Baumgartel, Darin C.
Since the formulation of the Standard Model of particle physics, numerous experiments have sought to observe the signatures of subatomic particles by examining the outcomes of charged particle collisions. Over time, advances in detector technology and scientific computing have allowed for unprecedented precision measurements of Standard Model phenomena and particle properties. Although the Standard Model has displayed remarkable predictive power, extensions to it have been formulated to account for unexplained phenomena, and these extensions often imply the existence of additional subatomic particles. Consequently, experiments at particle colliders often search for signatures of physics beyond the Standard Model. These searches and measurements are complementary pursuits, as searches are often limited by the precision with which the Standard Model backgrounds can be estimated. At the forefront of present-day collider experiments is the Large Hadron Collider at CERN, which delivers proton-proton collisions with unprecedented energy and luminosity. Collisions are recorded with detectors located at interaction points along the ring of the Large Hadron Collider. The CMS detector is one of two general-purpose detectors at the Large Hadron Collider, and its high-precision detection of particles from collision events makes it a powerful tool for both Standard Model measurements and searches for new physics. The Standard Model is characterized by three generations of quarks and leptons. This correspondence between the generations of quarks and leptons is necessary to allow for the renormalizability of the Standard Model, but it is not an inherent property of the theory. Motivated by this compelling symmetry, many theories and models propose the existence of leptoquark bosons, which mediate transitions between quarks and leptons. 
Experimental constraints indicate that leptoquarks would couple to a single generation, and this thesis describes searches for leptoquarks produced in pairs and decaying to final states containing either two muons and two jets, or one muon, one muon-neutrino, and two jets. Searches are conducted with collision data at center-of-mass energies of both 7 TeV and 8 TeV. No compelling evidence for the existence of leptoquarks is found, and upper limits on the leptoquark mass and cross section are placed at the 95% confidence level. These limits are the most stringent to date, and are several times larger than limits placed previously at hadron collider experiments. While the pair production of massive leptoquark bosons yields final states which have strong kinematic differences from the Standard Model processes, the ability to exploit these differences is limited by the ability to accurately model the backgrounds. The most notable of these backgrounds is the production of a W boson in association with one or more jets. Since the W+jets process has a very large cross section and a final state containing missing energy, its contribution to the total Standard Model background is both nominally large and more difficult to discriminate against than backgrounds with only visible final state objects. Furthermore, estimates of this background are not easily improved by comparisons with data in control regions, and simulations of the background are often limited to leading-order predictions. To improve the understanding and modeling of this background for future endeavors, this thesis also presents measurements of the W+jets process differentially as a function of several variables, including the jet multiplicity, the individual jet transverse momenta and pseudorapidities, the angular separation between the jets and the muon, and the scalar sum of the transverse momenta of all jets. 
The agreement of these measurements with predictions from leading-order event generators and next-to-leading-order calculations is assessed.
Information Processing in Adolescents with Bipolar I Disorder
ERIC Educational Resources Information Center
Whitney, Jane; Joormann, Jutta; Gotlib, Ian H.; Kelley, Ryan G.; Acquaye, Tenah; Howe, Meghan; Chang, Kiki D.; Singh, Manpreet K.
2012-01-01
Background: Cognitive models of bipolar I disorder (BD) may aid in identification of children who are especially vulnerable to chronic mood dysregulation. Information-processing biases related to memory and attention likely play a role in the development and persistence of BD among adolescents; however, these biases have not been extensively…
ERIC Educational Resources Information Center
Phan, Huy P.
2011-01-01
Background: Both achievement goals and study processing strategies theories have been shown to contribute to the prediction of students' academic performance. Existing research studies (Fenollar, Roman, & Cuestas, 2007; Liem, Lau, & Nie, 2008; Simons, Dewitte, & Lens, 2004) amalgamating these two theoretical orientations in different causal models…
Older Teenagers' Explanations of Bullying
ERIC Educational Resources Information Center
Thornberg, Robert; Rosenqvist, Robert; Johansson, Per
2012-01-01
Background: In accordance with the social information processing model, how adolescents attribute cause to a particular social situation (e.g., bullying) they witness or participate in, influences their online social information processing, and hence, how they will act in the situation. Objective: The aim of the present study was to explore how…
A Cognitive Ecological Model of Women’s Response to Male Sexual Coercion in Dating
Nurius, Paula S.; Norris, Jeanette
2015-01-01
We offer a theoretical model that consolidates background, environmental, and intrapersonal variables related to women’s experience of sexual coercion in dating into a coherent ecological framework and present for the first time a cognitive analysis of the processes women use to formulate responses to sexual coercion. An underlying premise for this model is that a woman’s coping response to sexual coercion by an acquaintance is mediated through cognitive processing of background and situational influences. Because women encounter this form of sexual coercion in the context of relationships and situations that they presume will follow normative expectations (e.g., about making friends, socializing and dating), it is essential to consider normative processes of learning, cognitive mediation, and coping guiding their efforts to interpret and respond to this form of personal threat. Although acts of coercion unquestionably remain the responsibility of the perpetrator, a more complete understanding of the multilevel factors shaping women’s perception of and response to threats can strengthen future inquiry and prevention efforts. PMID:25729157
Twelve Middle-School Teachers' Planning.
ERIC Educational Resources Information Center
Brown, Deborah Sardo
1988-01-01
Case studies described 12 middle-school teachers' instructional yearly, unit, weekly, and daily planning on the basis of a background questionnaire, interview protocols, an analysis of written plans, think-aloud typescripts, and a questionnaire. A process model best characterized teachers long-term planning, while an agenda-formulation model fit…
A scene model of exosolar systems for use in planetary detection and characterisation simulations
NASA Astrophysics Data System (ADS)
Belu, A.; Thiébaut, E.; Ollivier, M.; Lagache, G.; Selsis, F.; Vakili, F.
2007-12-01
Context: Instrumental projects that will improve the direct optical detection and characterisation of exoplanets have advanced sufficiently to trigger organized investigation and development of the corresponding signal processing algorithms. The first step is the availability of field-of-view (FOV) models. These can then be submitted to various instrumental models, which in turn produce simulated data, enabling the testing of processing algorithms. Aims: We aim to set the specifications of a physical model for typical FOVs of these instruments. Methods: The dynamic range in resolution and flux between the various sources present in such an FOV imposes a multiscale, independent-layer approach. From a review of the current literature and through extrapolation from currently available data and models, we derive the features of each source type in the field of view likely to pass the instrumental filter at exo-Earth level. Results: Stellar limb darkening is shown to bias leakage calibration if unaccounted for. The occurrence of perturbing background stars or galaxies in the typical FOV is unlikely. We extract galactic interstellar medium background emissions for current target lists. The galactic background can be considered uniform over the FOV, and it should show no significant drift with parallax. Our model specifications have been embedded into a Java simulator, soon to be made open source. We have also designed an associated FITS input/output format standard, which we present here. Work supported in part by the ESA/ESTEC contract 18701/04/NL/HB, led by Thales Alenia Space.
An efficient background modeling approach based on vehicle detection
NASA Astrophysics Data System (ADS)
Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua
2015-10-01
The Gaussian Mixture Model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C, and D), decided by the frequency and scale of vehicle access. For each area, different learning rates for the weight, mean, and variance are applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area to decrease the total number of distributions and save memory effectively. With this method, different threshold values and different numbers of Gaussian distributions are adopted for different areas. The results show that the proposed algorithm surpasses the traditional GMM in both learning speed and model accuracy. By about the 50th frame, interference from vehicles has essentially been eliminated; the number of model distributions is only 35% to 43% of that of the standard GMM, and the processing speed per frame is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, which can promote the development of intelligent transportation and is meaningful for other background modeling methods.
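The idea of area-dependent learning rates can be illustrated with a single Gaussian per pixel and a per-pixel learning-rate map; a full GMM keeps several such modes per pixel and adapts their number, as the paper proposes. This is a hedged sketch, not the authors' implementation, and all thresholds are illustrative.

```python
import numpy as np

class RegionAdaptiveBackground:
    """Single-Gaussian-per-pixel background model with a per-region
    learning-rate map (higher rate -> faster shadow absorption)."""

    def __init__(self, first_frame, rate_map, var0=25.0, thresh=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, var0)
        self.rate = rate_map
        self.thresh = thresh

    def apply(self, frame):
        """Classify pixels as foreground and update the background model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.thresh ** 2) * self.var
        # Update only pixels classified as background, at the region's rate.
        a = np.where(foreground, 0.0, self.rate)
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        self.var = np.maximum(self.var, 4.0)   # floor to avoid degeneracy
        return foreground
```

Passing a `rate_map` that is larger in high-traffic areas (the paper's areas A-D) makes shadows blend into the background faster exactly where vehicles pass most often.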
The cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, Joseph
1992-01-01
A review of the implications of the spectrum and anisotropy of the cosmic microwave background for cosmology. Thermalization and processes generating spectral distortions are discussed. Anisotropy predictions are described and compared with observational constraints. If the evidence for large-scale power in the galaxy distribution in excess of that predicted by the cold dark matter model is vindicated, and the observed structure originated via gravitational instabilities of primordial density fluctuations, the predicted amplitude of microwave background anisotropies on angular scales of a degree and larger must be at least several parts in 10 exp 6.
ERIC Educational Resources Information Center
Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan
2016-01-01
Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…
NASA Technical Reports Server (NTRS)
Hurley, K.; Anderson, K. A.
1972-01-01
Models of Jupiter's magnetosphere were examined to predict the X-ray flux that would be emitted in auroral or radiation zone processes. Various types of X-ray detection were investigated for energy resolution, efficiency, reliability, and background. From the model fluxes it was determined under which models Jovian X-rays could be detected.
The treatment of uncertainties in reactive pollution dispersion models at urban scales.
Tomlin, A S; Ziehn, T; Goodman, P; Tate, J E; Dixon, N S
2016-07-18
The ability to predict NO2 concentrations ([NO2]) within urban street networks is important for the evaluation of strategies to reduce exposure to NO2. However, models aiming to make such predictions involve the coupling of several complex processes: traffic emissions under different levels of congestion; dispersion via turbulent mixing; chemical processes of relevance at the street-scale. Parameterisations of these processes are challenging to quantify with precision. Predictions are therefore subject to uncertainties which should be taken into account when using models within decision making. This paper presents an analysis of mean [NO2] predictions from such a complex modelling system applied to a street canyon within the city of York, UK including the treatment of model uncertainties and their causes. The model system consists of a micro-scale traffic simulation and emissions model, and a Reynolds averaged turbulent flow model coupled to a reactive Lagrangian particle dispersion model. The analysis focuses on the sensitivity of predicted in-street increments of [NO2] at different locations in the street to uncertainties in the model inputs. These include physical characteristics such as background wind direction, temperature and background ozone concentrations; traffic parameters such as overall demand and primary NO2 fraction; as well as model parameterisations such as roughness lengths, turbulent time- and length-scales and chemical reaction rate coefficients. Predicted [NO2] is shown to be relatively robust with respect to model parameterisations, although there are significant sensitivities to the activation energy for the reaction NO + O3 as well as the canyon wall roughness length. Under off-peak traffic conditions, demand is the key traffic parameter. Under peak conditions where the network saturates, road-side [NO2] is relatively insensitive to changes in demand and more sensitive to the primary NO2 fraction. 
The most important physical parameter was found to be the background wind direction. The study highlights the key parameters required for reliable [NO2] estimations suggesting that accurate reference measurements for wind direction should be a critical part of air quality assessments for in-street locations. It also highlights the importance of street scale chemical processes in forming road-side [NO2], particularly for regions of high NOx emissions such as close to traffic queues.
Heavy particle transport in sputtering systems
NASA Astrophysics Data System (ADS)
Trieschmann, Jan
2015-09-01
This contribution discusses the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for describing the transport of film-forming particles sputtered off a target material has been devised within the frame of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e., substrates and walls). An angular-dependent Thompson energy distribution fitted to results from Monte Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of the sputtered and background gas species can be resolved self-consistently following the direct simulation Monte Carlo (DSMC) approach or, whenever possible, simplified with the test particle method (TPM) under the assumption of a constant background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration, and under typical process conditions with argon as the process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
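The test particle method mentioned above can be sketched in one dimension: sputtered particles travel through a fixed background gas with exponentially distributed free-flight lengths and isotropic redirection at each collision. The geometry and numbers below are illustrative, not those of the MFCCP reactor or the dsmcFoam model.

```python
import numpy as np

def transport_fraction(n_particles=2000, gap=0.1, mfp=0.03, seed=0):
    """Test-particle sketch: particles start at z=0 moving toward a substrate
    at z=gap through a static background gas. Free-flight lengths are drawn
    from an exponential law with mean free path `mfp`; each collision
    redirects the particle isotropically. Returns the fraction reaching
    the substrate (the rest return to the target plane or thermalize)."""
    rng = np.random.default_rng(seed)
    deposited = 0
    for _ in range(n_particles):
        z, cos_t = 0.0, 1.0                   # start normal to the target
        for _ in range(1000):                  # collision cap per particle
            z += cos_t * rng.exponential(mfp)
            if z >= gap:
                deposited += 1
                break
            if z <= 0.0 and cos_t < 0.0:
                break                          # back-scattered to the target
            cos_t = rng.uniform(-1.0, 1.0)     # isotropic scattering
        # particles exceeding the cap are counted as thermalized (lost)
    return deposited / n_particles
```

As the mean free path grows relative to the gap (lower pressure), the deposited fraction approaches one, which is the ballistic limit where DSMC and TPM descriptions coincide.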
NASA Astrophysics Data System (ADS)
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. This paper proposes two processor cores that perform real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, too computationally intensive to run in real time on a general-purpose CPU. In this paper, the equations of the OpenCV GMM algorithm are optimized so that a lightweight and low-power implementation of the algorithm is obtained. The reported performance also results from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
English Teachers' Language Awareness: Away with the Monolingual Bias?
ERIC Educational Resources Information Center
Otwinowska, Agnieszka
2017-01-01
The training of language teachers still follows traditional models of teachers' competences and awareness, focusing solely on the target language. Such models are incompatible with multilingual pedagogy, whereby languages are not taught in isolation, and learners' background languages are activated to enhance the process. When teaching…
Conceptual Processes for Linking Eutrophication and Network Models
2006-08-01
recommends a general procedure for future endeavors in this area. BACKGROUND: In recent years new ideas for nutrient management to control...network model. Coupling these two models will provide managers a new perspective on how to improve management strategies and help answer questions such...Dorothy H. Tillman, Dr. Carl F. Cerco, and Mr. Mark R. Noel of the Water Quality and Contaminant Modeling Branch, Environmental Laboratory (EL
Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh
2016-01-01
Background Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. Objective This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Methods Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. Results The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed.
Conclusions Automated modeling tools can streamline the modeling process for dietary intervention trials ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used. PMID:27471104
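As a toy illustration of the kind of constrained optimization such a tool performs, the sketch below chooses food-group servings that best meet energy and protein targets by exhaustive search within serving bounds. The per-serving composition values, targets, and bounds are invented for illustration; the real DMT uses nonlinear constraint optimization over many more food groups and nutrients.

```python
from itertools import product

# Hypothetical per-serving composition: (energy kJ, protein g, fat g)
foods = {
    "grains":     (600, 4.0, 1.0),
    "dairy":      (550, 8.0, 5.0),
    "vegetables": (150, 2.0, 0.2),
}
target = {"energy": 8700, "protein": 60.0}   # illustrative daily targets
bounds = {"grains": (2, 8), "dairy": (1, 4), "vegetables": (3, 9)}

def deviation(servings):
    """Squared relative deviation of totals from targets: the objective a
    constraint-optimization tool would minimize."""
    energy = sum(n * foods[f][0] for f, n in servings.items())
    protein = sum(n * foods[f][1] for f, n in servings.items())
    return ((energy - target["energy"]) / target["energy"]) ** 2 + \
           ((protein - target["protein"]) / target["protein"]) ** 2

names = list(foods)
best = min(
    (dict(zip(names, combo))
     for combo in product(*(range(lo, hi + 1)
                            for lo, hi in (bounds[n] for n in names)))),
    key=deviation,
)
print(best)
```

Brute force is only viable for a handful of food groups; a production tool would hand the same objective and bounds to a nonlinear solver, but the modeling structure (targets, bounds, objective) is the same.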
Studying the Brain in a Dish: 3D Cell Culture Models of Human Brain Development and Disease.
Brown, Juliana; Quadrato, Giorgia; Arlotta, Paola
2018-01-01
The study of the cellular and molecular processes of the developing human brain has been hindered by access to suitable models of living human brain tissue. Recently developed 3D cell culture models offer the promise of studying fundamental brain processes in the context of human genetic background and species-specific developmental mechanisms. Here, we review the current state of 3D human brain organoid models and consider their potential to enable investigation of complex aspects of human brain development and the underpinning of human neurological disease. © 2018 Elsevier Inc. All rights reserved.
Monitoring and Depth of Strategy Use in Computer-Based Learning Environments for Science and History
ERIC Educational Resources Information Center
Deekens, Victor M.; Greene, Jeffrey A.; Lobczowski, Nikki G.
2018-01-01
Background: Self-regulated learning (SRL) models position metacognitive monitoring as central to SRL processing and predictive of student learning outcomes (Winne & Hadwin, 2008; Zimmerman, 2000). A body of research evidence also indicates that depth of strategy use, ranging from surface to deep processing, is predictive of learning…
Self-Exciting Point Process Models of Civilian Deaths in Iraq
2010-01-01
Tita, 2009), we propose that violence in Iraq arises from a combination of exogenous and endogenous effects. Spatial heterogeneity in background...Schoenberg, and Tita (2010) where they analyze burglary and robbery data in Los Angeles. Related work has also been done in Short et al. (2009) where...Control, 4, 215-240. Mohler, G. O., Short, M. B., Brantingham, P. J., Schoenberg, F. P., & Tita, G. E. (2010). Self-exciting point process modeling of
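The self-exciting structure named in the title, a background rate plus excitation from past events, can be sketched with an exponential-kernel Hawkes conditional intensity. All parameter values below are illustrative, not estimates from the Iraq data.

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity of a self-exciting (Hawkes) process with an
    exponential kernel: lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i)).
    mu is the (here constant) background rate; each past event t_i < t
    temporarily raises the risk of further events."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

events = [1.0, 1.3, 1.4, 5.0]        # a cluster of events, then an isolated one
print(round(hawkes_intensity(1.5, events), 3))  # just after the cluster: elevated
print(round(hawkes_intensity(4.0, events), 3))  # excitation has decayed toward mu
```

The ratio α/β (here 0.8/1.2 ≈ 0.67) is the branching ratio, the expected number of offspring events per event; it quantifies how endogenous the dynamics are and must stay below 1 for the process to be stationary.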
ERIC Educational Resources Information Center
Krubu, Dorcas Ejemeh; Zinn, Sandy; Hart, Genevieve
2017-01-01
Aim/Purpose: The research work investigated the information seeking process of undergraduates in a specialised university in Nigeria, in the course of a group assignment. Background: Kuhlthau's Information Search Process (ISP) model is used as lens to reveal how students interact with information in the affective, cognitive and physical realms.…
ERIC Educational Resources Information Center
Barkoukis, Vassilis; Hagger, Martin S.; Lambropoulos, George; Tsorbatzoudis, Haralambos
2010-01-01
Background: The trans-contextual model (TCM) is an integrated model of motivation that aims to explain the processes by which agentic support for autonomous motivation in physical education promotes autonomous motivation and physical activity in a leisure-time context. It is proposed that perceived support for autonomous motivation in physical…
ERIC Educational Resources Information Center
Perla, Rocco J.; Carifio, James
2011-01-01
Background: Extending Merton's (1936) work on the consequences of purposive social action, the model, theory and taxonomy outlined here incorporates and formalizes both anticipated and unanticipated research findings in a unified theoretical framework. The model of anticipated research findings was developed initially by Carifio (1975, 1977) and…
ERIC Educational Resources Information Center
Zhang, Xinghui; Xuan, Xin; Chen, Fumei; Zhang, Cai; Luo, Yuhan; Wang, Yun
2016-01-01
Background: Perceptions of school safety have an important effect on students' development. Based on the model of "context-process-outcomes," we examined school safety as a context variable to explore how school safety at the school level affected students' self-esteem. Methods: We used hierarchical linear modeling to examine the link…
Discrimination of dynamical system models for biological and chemical processes.
Lorenz, Sönke; Diederichs, Elmar; Telgmann, Regina; Schütte, Christof
2007-06-01
In technical chemistry, systems biology and biotechnology, the construction of predictive models has become an essential step in process design and product optimization. Accurate modelling of the reactions requires detailed knowledge about the processes involved. However, when concerned with the development of new products and production techniques, for example, this knowledge often is not available due to the lack of experimental data. Thus, when one has to work with a selection of proposed models, one of the main tasks of early development is to discriminate between these models. In this article, a new statistical approach to model discrimination is described that ranks models with respect to the probability with which they reproduce the given data. The article introduces the new approach, discusses its statistical background, presents numerical techniques for its implementation and illustrates the application with examples from biokinetics.
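A minimal stand-in for ranking competing models against data is an information criterion. The sketch below scores polynomial models of synthetic data with the Bayesian information criterion (BIC), which approximates the kind of Bayesian model probabilities the article's approach computes; the data-generating process and candidate models are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0, 3, 30)
ys = 0.5 * xs**2 + 1.0 + rng.normal(0, 0.2, xs.size)  # "true" quadratic process

def bic(degree):
    """BIC for a polynomial model of the data: n*log(RSS/n) + k*log(n).
    Lower is better; BIC approximates minus twice the log model evidence
    used in Bayesian model ranking."""
    coeffs = np.polyfit(xs, ys, degree)
    rss = float(np.sum((np.polyval(coeffs, xs) - ys) ** 2))
    k = degree + 2                     # polynomial coefficients + noise variance
    n = xs.size
    return n * np.log(rss / n) + k * np.log(n)

scores = {d: bic(d) for d in (1, 2, 5)}   # underfit, correct, overfit candidates
best = min(scores, key=scores.get)
print(best)
```

The penalty term k·log(n) is what lets the criterion discriminate: the degree-5 model fits slightly better than the quadratic but pays for its extra parameters, so the quadratic is preferred.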
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Stanley T.
2007-01-01
This thesis describes the first search for Standard Model Higgs boson production in association with a top-antitop quark pair in proton-antiproton collisions at a centre-of-mass energy of 1.96 TeV. The integrated luminosity for this search corresponds to 319 pb⁻¹ of data recorded by the Collider Detector at Fermilab. We outline the event selection criteria, evaluate the event acceptance and estimate backgrounds from Standard Model sources. Events are observed that satisfy our event selection, while 2.16 ± 0.66 events are expected from background processes. No significant excess of events above background is thus observed, and we set 95% confidence level upper limits on the production cross section for this process as a function of the Higgs mass. For a Higgs boson mass of 115 GeV/c² we find that σ($$t\bar{t}H$$) × BR(H → bb) < 690 fb at 95% C.L. These are the first limits set for $$t\bar{t}H$$ production. This search also allows us to anticipate the challenges and strategies needed for future searches for $$t\bar{t}H$$ production.
Family Background, Self-Confidence and Economic Outcomes
ERIC Educational Resources Information Center
Filippin, Antonio; Paccagnella, Marco
2012-01-01
In this paper we analyze the role played by self-confidence, modeled as beliefs about one's ability, in shaping task choices. We propose a model in which fully rational agents exploit all the available information to update their beliefs using Bayes' rule, eventually learning their true type. We show that when the learning process does not…
ERIC Educational Resources Information Center
Meeus, Wil; Van Petegem, Peter; Meijer, Joost
2008-01-01
Background: The predominant dissertation model used in teacher education courses in Flanders is the "literature study with practical processing". Despite the practical supplement, this traditional model does not fit sufficiently well with autonomous learning as the objective of modern teacher education dissertations. This study reports on the…
ERIC Educational Resources Information Center
Hunt, Pete; Barrios, Lisa; Telljohann, Susan K.; Mazyck, Donna
2015-01-01
Background: The Whole School, Whole Community, Whole Child (WSCC) model shows the interrelationship between health and learning and the potential for improving educational outcomes by improving health outcomes. However, current descriptions do not explain how to implement the model. Methods: The existing literature, including scientific articles,…
The use of mathematical models in teaching wastewater treatment engineering.
Morgenroth, E; Arvin, E; Vanrolleghem, P
2002-01-01
Mathematical modeling of wastewater treatment processes has become increasingly popular in recent years. To prepare students for their future careers, environmental engineering education should provide students with sufficient background and experiences to understand and apply mathematical models efficiently and responsibly. Approaches for introducing mathematical modeling into courses on wastewater treatment engineering are discussed depending on the learning objectives, level of the course and the time available.
NASA Technical Reports Server (NTRS)
O'Connor, Brian; Hernandez, Deborah; Hornsby, Linda; Brown, Maria; Horton-Mullins, Kathryn
2017-01-01
Outline: Background of ISS (International Space Station) Material Science Research Rack; NASA SCA (Sample Cartridge Assembly) Design; GEDS (Gravitational Effects in Distortion in Sintering) Experiment Ampoule Design; Development Testing Summary; Thermal Modeling and Analysis. Summary: GEDS design development challenging (GEDS Ampoule design developed through MUGS (Microgravity) testing; Short duration transient sample processing; Unable to measure sample temperatures); MUGS development testing used to gather data (Actual LGF (Low Gradient Furnace)-like furnace response; Provided sample for sintering evaluation); Transient thermal model integral to successful GEDS experiment (Development testing provided furnace response; PI (Principal Investigator) evaluation of sintering anchored model evaluation of processing durations; Thermal transient model used to determine flight SCA sample processing profiles).
Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice
2017-01-01
The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. These radioisotopic images were obtained by a CCD (charge-coupled device) camera coupled to an ultrathin phosphorous screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consisted of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The set of biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies. This fact confirmed the contribution of the semi-automatic image processing technique developed in this study.
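A minimal sketch of the background-correction step in this kind of activity quantification is shown below, assuming a flat background level estimated from the image border and subtracted before summing counts in a region of interest. The actual AIPA also performs scattering and attenuation corrections; the image, ROI, and count values here are invented.

```python
import numpy as np

def roi_activity(img, roi, border=2):
    """Sum the counts in a rectangular ROI after subtracting a flat
    background level estimated as the median of the image border."""
    edge = np.concatenate([img[:border].ravel(), img[-border:].ravel(),
                           img[:, :border].ravel(), img[:, -border:].ravel()])
    bg = np.median(edge)
    r0, r1, c0, c1 = roi
    corrected = img[r0:r1, c0:c1] - bg
    return float(np.clip(corrected, 0, None).sum())   # negative counts clipped

img = np.full((16, 16), 5.0)        # uniform background level of 5 counts/pixel
img[6:10, 6:10] += 20.0             # a 4x4 "organ" with 20 extra counts/pixel
print(roi_activity(img, (6, 10, 6, 10)))   # → 320.0 (16 pixels x 20 counts)
```

Repeating such a quantification over time-stamped images yields the activity-versus-time curves to which the parametric biokinetic models are then fitted.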
Radon induced background processes in the KATRIN pre-spectrometer
NASA Astrophysics Data System (ADS)
Fränkle, F. M.; Bornschein, L.; Drexlin, G.; Glück, F.; Görhardt, S.; Käfer, W.; Mertens, S.; Wandkowsky, N.; Wolf, J.
2011-10-01
The KArlsruhe TRItium Neutrino (KATRIN) experiment is a next generation, model independent, large scale tritium β-decay experiment to determine the effective electron anti-neutrino mass by investigating the kinematics of tritium β-decay with a sensitivity of 200 meV/c² using the MAC-E filter technique. In order to reach this sensitivity, a low background level of 10⁻² counts per second (cps) is required. This paper describes how the decay of radon in a MAC-E filter generates background events, based on measurements performed at the KATRIN pre-spectrometer test setup. Radon (Rn) atoms, which emanate from materials inside the vacuum region of the KATRIN spectrometers, are able to penetrate deep into the magnetic flux tube so that the α-decay of Rn contributes to the background. Of particular importance are electrons emitted in processes accompanying the Rn α-decay, such as shake-off, internal conversion of excited levels in the Rn daughter atoms and Auger electrons. While low-energy electrons (<100 eV) directly contribute to the background in the signal region, higher energy electrons can be stored magnetically inside the volume of the spectrometer. Depending on their initial energy, they are able to create thousands of secondary electrons via subsequent ionization processes with residual gas molecules and, since the detector is not able to distinguish these secondary electrons from the signal electrons, an increased background rate over an extended period of time is generated.
Locating the Coaching Process in Practice: Models "for" and "of" Coaching
ERIC Educational Resources Information Center
Cushion, Christopher J.; Armour, Kathleen M.; Jones, Robyn L.
2006-01-01
Background: Despite an increasing recognition of the existence of a process of coaching, and a resulting increase in research activity, there remains a lack of a clear conceptual base for sports coaching. This situation has left coaching without a clear set of concepts and principles that reflect coaching practice. Purpose: The aim of this paper…
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Studies of visual detection of a signal superimposed on one of two identical backgrounds show performance degradation when the background has high contrast and is similar in spatial frequency and/or orientation to the signal. To account for this finding, models include a contrast gain control mechanism that pools activity across spatial frequency, orientation and space to inhibit (divisively) the response of the receptor sensitive to the signal. In tasks in which the observer has to detect a known signal added to one of M different backgrounds degraded by added visual noise, the main sources of degradation are the stochastic noise in the image and the suboptimal visual processing. We investigate how these two sources of degradation (contrast gain control and variations in the background) interact in a task in which the signal is embedded in one of M locations in a complex spatially varying background (structured background). We use backgrounds extracted from patient digital medical images. To isolate effects of the fixed deterministic background (the contrast gain control) from the effects of the background variations, we conduct detection experiments with three different background conditions: (1) uniform background, (2) a repeated sample of structured background, and (3) different samples of structured background. Results show that human visual detection degrades from the uniform background condition to the repeated background condition and degrades even further in the different backgrounds condition. These results suggest that both the contrast gain control mechanism and the background random variations degrade human performance in detection of a signal in a complex, spatially varying background. A filter model and added white noise are used to generate estimates of sampling efficiencies, an equivalent internal noise, an equivalent contrast-gain-control-induced noise, and an equivalent noise due to the variations in the structured background.
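The divisive contrast gain control described above can be sketched as a channel response divided by pooled activity. The exponent, semisaturation constant, and drive values below are illustrative placeholders, not fitted model parameters.

```python
def normalized_response(signal_drive, pooled_drive, p=2.0, sigma=0.5):
    """Divisive gain control: the response of the channel tuned to the
    signal is its excitatory drive divided by a semisaturation constant
    plus inhibitory activity pooled across frequency/orientation/space."""
    return signal_drive**p / (sigma**p + pooled_drive)

# the same signal drive on a weak vs. a high-contrast structured background
pool_weak = 0.5**2      # little pooled background activity
pool_strong = 3.0**2    # a high-contrast background drives the pool hard
r_weak = normalized_response(1.0, pool_weak)
r_strong = normalized_response(1.0, pool_strong)
print(r_weak > r_strong)   # → True: the background divisively suppresses the response
```

This captures the first degradation source in the abstract (the deterministic background suppressing the signal channel); the second source, random background variation, would appear as trial-to-trial variability in `pooled_drive`.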
Using system dynamics for collaborative design: a case study
Elf, Marie; Putilova, Mariya; von Koch, Lena; Öhrn, Kerstin
2007-01-01
Background In order to facilitate the collaborative design, system dynamics (SD) with a group modelling approach was used in the early stages of planning a new stroke unit. During six workshops a SD model was created in a multiprofessional group. Aim To explore to which extent and how the use of system dynamics contributed to the collaborative design process. Method A case study was conducted using several data sources. Results SD supported a collaborative design, by facilitating an explicit description of stroke care process, a dialogue and a joint understanding. The construction of the model obliged the group to conceptualise the stroke care and experimentation with the model gave the opportunity to reflect on care. Conclusion SD facilitated the collaborative design process and should be integrated in the early stages of the design process as a quality improvement tool. PMID:17683519
Instability in interacting dark sector: an appropriate holographic Ricci dark energy model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrera, Ramón; Hipólito-Ricaldi, W.S.; Videla, Nelson, E-mail: ramon.herrera@pucv.cl, E-mail: wiliam.ricaldi@ufes.br, E-mail: nelson.videla@ing.uchile.cl
In this paper we investigate the consequences of phantom crossing considering the perturbative dynamics in models with interaction in their dark sector. By means of a general study of gauge-invariant variables in the comoving gauge, we relate the sources of instabilities in the structure formation process to the phantom crossing. In order to illustrate these relations and their consequences in more detail, we consider a specific case of a holographic dark energy interacting with dark matter. We find that, although the model is in excellent agreement with observational data at the background level, it is plagued by instabilities in its perturbative dynamics. We reconstruct the model in order to avoid these undesirable instabilities, and we show that this implies a modification of the concordance model at the background level. We also find drastic changes in the parameter space of our model when instabilities are avoided.
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the context of UWB Radar-based moving target detection. Robust PCA models are criticized for being batched-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated without changing its size as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing the batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
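The low-rank-plus-sparse split described above can be sketched with a compact inexact augmented Lagrange multiplier (IALM) iteration. This is a generic textbook variant with common default parameters (λ = 1/√max(m, n)), not the authors' implementation, and the synthetic "video" matrix below is invented for illustration.

```python
import numpy as np

def rpca(M, lam=None, mu=None, iters=200):
    """Decompose M ~ L + S (low-rank background + sparse foreground) by
    alternating singular-value thresholding and soft-thresholding inside
    an inexact augmented Lagrange multiplier loop."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    mu = mu or 1.25 / norm2
    rho = 1.5
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable init
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # singular-value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # entrywise soft-thresholding for the sparse part
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Z = M - L - S
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z) <= 1e-7 * np.linalg.norm(M):
            break
    return L, S

# synthetic "video" matrix: rank-1 static background + a few target spikes
rng = np.random.default_rng(0)
bg = np.outer(rng.random(40), rng.random(50))            # static background
S0 = np.zeros((40, 50))
S0[rng.integers(0, 40, 20), rng.integers(0, 50, 20)] = 5.0
L, S = rpca(bg + S0)
err = np.linalg.norm(L - bg) / np.linalg.norm(bg)
print(err < 0.1)     # small when the static background is recovered
```

The paper's overlapping-windows scheme would run such a decomposition on a sliding batch of recent frames, reusing the previous window's result as the next window's warm start.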
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction
Genetic background effects in quantitative genetics: gene-by-system interactions.
Sardi, Maria; Gasch, Audrey P
2018-04-11
Proper cell function depends on networks of proteins that interact physically and functionally to carry out physiological processes. Thus, it seems logical that the impact of sequence variation in one protein could be significantly influenced by genetic variants at other loci in a genome. Nonetheless, the importance of such genetic interactions, known as epistasis, in explaining phenotypic variation remains a matter of debate in genetics. Recent work from our lab revealed that genes implicated from an association study of toxin tolerance in Saccharomyces cerevisiae show extensive interactions with the genetic background: most implicated genes, regardless of allele, are important for toxin tolerance in only one of two tested strains. The prevalence of background effects in our study adds to other reports of widespread genetic-background interactions in model organisms. We suggest that these effects represent many-way interactions with myriad features of the cellular system that vary across classes of individuals. Such gene-by-system interactions may influence diverse traits and require new modeling approaches to accurately represent genotype-phenotype relationships across individuals.
Strong field QED in lepton colliders and electron/laser interactions
NASA Astrophysics Data System (ADS)
Hartin, Anthony
2018-05-01
The studies of strong field particle physics processes in electron/laser interactions and lepton collider interaction points (IPs) are reviewed. These processes are defined by the high intensity of the electromagnetic fields involved and the need to take them into account as fully as possible. Thus, the main theoretical framework considered is the Furry interaction picture within intense field quantum field theory. In this framework, the influence of a background electromagnetic field in the Lagrangian is calculated nonperturbatively, involving exact solutions for quantized charged particles in the background field. These “dressed” particles go on to interact perturbatively with other particles, enabling the background field to play both macroscopic and microscopic roles. Macroscopically, the background field starts to polarize the vacuum, in effect rendering it a dispersive medium. Particles encountering this dispersive vacuum obtain a lifetime, either radiating or decaying into pair particles at a rate dependent on the intensity of the background field. In fact, the intensity of the background field enters into the coupling constant of the strong field quantum electrodynamic Lagrangian, influencing all particle processes. A number of new phenomena occur. Particles gain an intensity-dependent rest mass shift that accounts for their presence in the dispersive vacuum. Multi-photon events involving more than one external field photon occur at each vertex. Higher order processes which exchange a virtual strong field particle resonate via the lifetimes of the unstable strong field states. Two main arenas of strong field physics are reviewed; those occurring in relativistic electron interactions with intense laser beams, and those occurring in the beam-beam physics at the interaction point of colliders. 
This review outlines the theory, describes its significant novel phenomenology and details the experimental schema required to detect strong field effects and the simulation programs required to model them.
Inferring the background traffic arrival process in the Internet.
Hága, Péter; Csabai, István; Vattay, Gábor
2009-12-01
Phase transitions have been found in many complex interacting systems. Complex networks are no exception, but there are few real systems in which the emergence of this nontrivial behavior can be understood directly from the microscopic view. In this paper, we present the emergence of a phase transition between the congested and uncongested phases of a network link. We demonstrate a method to infer the background traffic arrival process, which is one of the key state parameters of Internet traffic. The traffic arrival process in the Internet has been investigated in several studies since the recognition of its self-similar nature. The statistical properties of the traffic arrival process are very important since they are fundamental to modeling the dynamical behavior. Here, we demonstrate how the widely used packet train technique can be used to determine the main properties of the traffic arrival process. We show that packet train dispersion is sensitive to congestion on the network path. We introduce the packet train stretch as an order parameter to describe the phase transition between the congested and uncongested phases of the bottleneck link in the path. We find that the distribution of the background traffic arrival process can be determined from the average packet train dispersion at the critical point of the system.
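The packet-train idea can be sketched with a toy fluid model of a saturated bottleneck, where each inter-probe gap at the output is stretched by the background bytes that slipped in during it. The link capacity, probe size, and background loads below are illustrative and the background volume is modeled as simply as possible (i.i.d. exponential per gap), which is far cruder than the arrival processes the paper infers.

```python
import random

def train_dispersion(n, probe_bytes, capacity, bg_load, rng):
    """Output dispersion (seconds) of an n-packet probe train crossing a
    saturated bottleneck: dispersion = ((n-1)*probe_bytes + B) / capacity,
    where B is the random background volume interleaved into the gaps
    (mean bg_load * probe_bytes per gap)."""
    bg_bytes = sum(rng.expovariate(1.0 / (bg_load * probe_bytes))
                   for _ in range(n - 1))
    return ((n - 1) * probe_bytes + bg_bytes) / capacity

rng = random.Random(7)
cap = 1.25e7                                   # 100 Mbit/s link, in bytes/s
light = [train_dispersion(10, 1500, cap, 0.2, rng) for _ in range(500)]
heavy = [train_dispersion(10, 1500, cap, 2.0, rng) for _ in range(500)]
print(sum(heavy) / 500 > sum(light) / 500)     # → True: congestion stretches the train
```

Dividing the measured dispersion by the no-background baseline (n-1)·probe_bytes/capacity gives a "stretch" of the kind the paper uses as an order parameter, and its distribution carries the signature of the background arrival process.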
ERIC Educational Resources Information Center
Resinger, Paul
2008-01-01
After providing some insight into the historical background of school evaluation in Austria, this report introduces a possible reform model, and then describes the development processes, drawing on the example of a two-year pilot project, before evaluating its advantages and limitations.
Eglin virtual range database for hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth
1998-07-01
Realistic backgrounds are necessary to support high fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper describes the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground-cover materials using their spectral content. These databases were then optimized for real-time operation at 90 Hz.
Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Óscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín
2008-01-01
Background Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. No previous experience with the use of this notation for process modelling within Pathology, in Spain or elsewhere, is known to us. We present our experience in the elaboration of conceptual models of Pathology processes, as part of a global programmed surgical patient process, using BPMN. Methods With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Department of Technologies and Information Systems of the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The collaborative work was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process description by hospital experts, and process modelling. Results The modelling of the processes of Anatomic Pathology is presented using BPMN. The subprocesses presented are those corresponding to the surgical pathology examination of samples coming from the operating theatre, including the planning and realization of frozen studies. Conclusion The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, in which management and improvements are more easily implemented by health professionals. PMID:18673511
ERIC Educational Resources Information Center
Alias, Norlidah; Siraj, Saedah; Daud, Mohd Khairul Azman Md; Hussin, Zaharah
2013-01-01
The study examines the effectiveness of Facebook based learning to enhance creativity among Islamic Studies students in the secondary educational setting in Malaysia. It describes the design process by employing the Isman Instructional Design Model. A quantitative study was carried out using experimental method and background survey. The…
ERIC Educational Resources Information Center
Hadwin, Allyson; Oshige, Mika
2011-01-01
Background/Context: Models of self-regulated learning (SRL) have increasingly acknowledged aspects of social context influence in its process; however, great diversity exists in the theoretical positioning of "social" in these models. Purpose/Objective/Research Question/Focus of Study: The purpose of this review article is to introduce and…
ERIC Educational Resources Information Center
Stevens, William E.
This report presents a model for conducting a statewide conference for the approximately 900 members of the South Carolina Council of Teachers of Mathematics (SCCTM) using the AppleWorks integrated software as the basis of the implementation plan. The first and second chapters provide background information on the conference and the…
ERIC Educational Resources Information Center
Waight, Noemi; Liu, Xiufeng; Gregorius, Roberto Ma.
2015-01-01
This paper examined the nuances of the background process of design and development and follow up classroom implementation of computer-based models for high school chemistry. More specifically, the study examined the knowledge contributions of an interdisciplinary team of experts; points of tensions, negotiations and non-negotiable aspects of…
Downward transport of ozone (O3) from the stratosphere can be a significant contributor to tropospheric O3 background levels. However, this process often is not well represented in current regional models. In this study, we develop a seasonally and spatially varying potential vor...
Paris, Alan; Atia, George K; Vosoughi, Azadeh; Berman, Stephen A
2017-08-01
A characteristic of neurological signal processing is high levels of noise, from subcellular ion channels up to whole-brain processes. In this paper, we propose a new model of electroencephalogram (EEG) background periodograms, based on a family of functions which we call generalized van der Ziel-McWhorter (GVZM) power spectral densities (PSDs). To the best of our knowledge, the GVZM PSD function is the only EEG noise model that has relatively few parameters, matches recorded EEG PSDs with high accuracy from 0 to over 30 Hz, and has approximately 1/f^θ behavior in the mid-frequencies without infinities. We validate this model using three approaches. First, we show how GVZM PSDs can arise in a population of ion channels at maximum entropy equilibrium. Second, we present a class of mixed autoregressive models, which simulate brain background noise and whose periodograms are asymptotic to the GVZM PSD. Third, we present two real-time estimation algorithms for steady-state visual evoked potential (SSVEP) frequencies, and analyze their performance statistically. In pairwise comparisons, the GVZM-based algorithms showed statistically significant accuracy improvement over two well-known and widely used SSVEP estimators. The GVZM noise model can be a useful and reliable technique for EEG signal processing. Understanding EEG noise is essential for EEG-based neurology and applications such as real-time brain-computer interfaces, which must make accurate control decisions from very short data epochs. The GVZM approach represents a successful new paradigm for understanding and managing this neurological noise.
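The GVZM functional form itself is not reproduced in this abstract, but the second validation approach, mixed autoregressive models whose periodograms show approximate 1/f^θ behavior, can be illustrated generically. The sketch below superposes unit-variance AR(1) processes with a hypothetical spread of relaxation rates and fits the mid-frequency log-log slope of the resulting periodogram; it illustrates the general mechanism, not the paper's actual GVZM model.

```python
import numpy as np

def mixed_ar1_noise(n, coeffs, rng):
    """Superpose AR(1) processes with a spread of relaxation rates;
    the summed Lorentzian-like spectra give an approximately 1/f^theta
    periodogram in the mid-frequencies."""
    x = np.zeros(n)
    for a in coeffs:
        e = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = a * y[t - 1] + e[t]
        x += y * np.sqrt(1.0 - a * a)  # normalize each component to ~unit variance
    return x

rng = np.random.default_rng(0)
coeffs = 1.0 - np.logspace(-3, -0.5, 8)   # hypothetical spread of time scales
n = 2**14
x = mixed_ar1_noise(n, coeffs, rng)
f = np.fft.rfftfreq(n)
psd = np.abs(np.fft.rfft(x)) ** 2
band = (f > 0.01) & (f < 0.2)             # mid-frequency fitting band
theta = -np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(round(theta, 2))
```

The fitted exponent lands between the flat (θ = 0) and single-Lorentzian (θ = 2) limits, which is the qualitative behavior the abstract attributes to EEG background noise.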
NASA Astrophysics Data System (ADS)
Stallone, A.; Marzocchi, W.
2017-12-01
Earthquake occurrence may be approximated by a multidimensional Poisson clustering process, where each point of the Poisson process is replaced by a cluster of points, the latter corresponding to the well-known aftershock sequence (triggered events). Earthquake clusters and their parents are assumed to occur according to a Poisson process at a constant temporal rate proportional to the tectonic strain rate, while events within a cluster are modeled as generations of dependent events reproduced by a branching process. Although the occurrence of such space-time clusters is a general feature across tectonic settings, seismic sequences seem to have marked differences from region to region: one example, among many others, is that seismic sequences of moderate magnitude in the Italian Apennines seem to last longer than similar sequences in California. In this work we investigate the existence of possible differences in the earthquake clustering process in these two areas. First, we separate the triggered and background components of seismicity in the Italian and Southern California seismic catalogs. Then we study the space-time domain of the triggered earthquakes with the aim of identifying possible variations in the triggering properties across the two regions. In the second part of the work we focus on the characteristics of the background seismicity in both catalogs. The assumption of time stationarity of the background seismicity (which includes both cluster parents and isolated events) is still under debate. Some authors suggest that the independent component of seismicity could undergo transient perturbations at various time scales due to different physical mechanisms, such as viscoelastic relaxation, the presence of fluids, or non-stationary plate motion, whose impact may depend on the tectonic setting.
Here we test whether the background seismicity in the two regions can be satisfactorily described by a time-homogeneous Poisson process and, where it cannot, we quantitatively characterize the discrepancies from this reference process and the differences between the two regions.
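The stationarity test described above can be sketched with a standard tool: under a time-homogeneous Poisson process, interevent times are i.i.d. exponential, so a one-sample Kolmogorov-Smirnov statistic against the fitted exponential flags transient rate perturbations. This is a generic illustration on synthetic catalogs, not the authors' actual procedure; 1.36 is the asymptotic 5% critical value for sqrt(n)·D.

```python
import numpy as np

def ks_exponential(times):
    """KS statistic (scaled by sqrt(n)) of interevent times against the
    exponential distribution with the empirical mean rate -- a simple
    check of the time-homogeneous Poisson hypothesis."""
    x = np.sort(np.diff(np.sort(times)))
    x = x[x > 0]
    n = len(x)
    cdf = 1.0 - np.exp(-x / x.mean())
    d = max(np.max(np.arange(1, n + 1) / n - cdf),
            np.max(cdf - np.arange(n) / n))
    return d * np.sqrt(n)  # compare with ~1.36 at the 5% level

rng = np.random.default_rng(1)
# synthetic homogeneous background: constant-rate occurrence times
homog = np.cumsum(rng.exponential(1.0, 2000))
# synthetic perturbed background: rate increases 5x in the middle third
pert = np.cumsum(np.concatenate([rng.exponential(1.0, 700),
                                 rng.exponential(0.2, 700),
                                 rng.exponential(1.0, 700)]))
s_homog, s_pert = ks_exponential(homog), ks_exponential(pert)
print(round(s_homog, 2), round(s_pert, 2))
```

The perturbed catalog's interevent times form a hyperexponential mixture, so the statistic exceeds the critical value, while the homogeneous catalog stays well below it.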
Degradation of indoor limonene by outdoor ozone: A cascade of secondary organic aerosols.
Rösch, Carolin; Wissenbach, Dirk K; Franck, Ulrich; Wendisch, Manfred; Schlink, Uwe
2017-07-01
In indoor air, terpene-ozone reactions can form secondary organic aerosols (SOA) in a transient process. 'Real world' measurements conducted in a furnished room without air conditioning were modelled, involving the indoor background of airborne particulate matter, outdoor ozone infiltrated by natural ventilation, repeated transient limonene evaporations, and different subsequent ventilation regimes. For the given setup, we disentangled the development of nucleated, coagulated, and condensed SOA fractions in the indoor air and calculated the time dependence of the aerosol mass fraction (AMF) by means of a process model. The AMF varied significantly between 0.3 and 5.0 and was influenced by the ozone/limonene ratio and the background particles which existed prior to SOA formation. Both influencing factors determine whether nucleation or adsorption processes are preferred; condensation is strongly intensified by the particulate background. The results provide evidence that SOA levels in natural indoor environments can surpass those known from chamber measurements. Limona ketone was found to be an indicator for the SOA-forming potential of limonene. Multiplying its concentration (in μg/m³) by 450(±100) provides an estimate of the concentration of the reacted limonene. This can be used to detect a high particle-formation potential due to limonene pollution, e.g. in epidemiological studies considering adverse health effects of indoor air pollutants. Copyright © 2017 Elsevier Ltd. All rights reserved.
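The rule of thumb above is a simple multiplication. With a hypothetical measured limona ketone concentration (the 0.2 μg/m³ value is illustrative, not from the study), the implied reacted-limonene estimate and its uncertainty band are:

```python
limona_ketone = 0.2        # ug/m^3, hypothetical indoor measurement
factor, err = 450, 100     # multiplier 450(+/-100) reported by the study
estimate = factor * limona_ketone
low, high = (factor - err) * limona_ketone, (factor + err) * limona_ketone
print(estimate, low, high)  # 90.0 70.0 110.0
```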
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and a task of stimulus detection and aiming. The modeled occluding scotomas, of different size, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of: size (in visual angle), contrast and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction-time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple-background one. Detection speed was dependent on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex-background one. Both stimulus-aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Surface alpha backgrounds from plate-out of radon progeny
NASA Astrophysics Data System (ADS)
Perumpilly, Gopakumar; Guiseppe, Vincente
2012-03-01
Low-background detectors operating underground aim for unprecedented low levels of radioactive backgrounds. Although the radioactive decays of airborne radon (particularly Rn-222) and its subsequent daughters present in an experiment are potential backgrounds, more troublesome is the deposition of radon daughters on detector materials. Exposure to radon at any stage of assembly of an experiment can result in surface contamination by daughters, supported by the long half-life (22 y) of Pb-210, on sensitive locations of a detector. We have developed a model of the radon progeny implantation using Geant4 simulations based on the low-energy nuclear recoil process. We explore the alpha decays from implanted progeny on a Ge crystal as potential backgrounds for a neutrinoless double-beta decay experiment. Results of the simulations validated with alpha spectrum measurements of plate-out samples will be presented.
Lipoprotein metabolism indicators improve cardiovascular risk prediction
USDA-ARS?s Scientific Manuscript database
Background: Cardiovascular disease risk increases when lipoprotein metabolism is dysfunctional. We have developed a computational model able to derive indicators of lipoprotein production, lipolysis, and uptake processes from a single lipoprotein profile measurement. This is the first study to inves...
ForCent model development and testing using the Enriched Background Isotope Study experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parton, W.J.; Hanson, P. J.; Swanston, C.
The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.
ERIC Educational Resources Information Center
Pflieger, Jacqueline C.; Vazsonyi, Alexander T.
2006-01-01
The current investigation tested a model in which low self-esteem mediated the effects of parenting processes (monitoring, closeness, and support) on measures of dating violence (victimization, perpetration, attitudes, and perceptions) in a sample of adolescents (n = 809; mean age = 16.4 years) from both low- and high-socioeconomic-status (SES) backgrounds.…
ERIC Educational Resources Information Center
Bevans, Katherine B.; Fitzpatrick, Leslie-Anne; Sanchez, Betty M.; Riley, Anne W.; Forrest, Christopher
2010-01-01
Background: This study was conducted to empirically evaluate specific human, curricular, and material resources that maximize student opportunities for physical activity during physical education (PE) class time. A structure-process-outcome model was proposed to identify the resources that influence the frequency of PE and intensity of physical…
ERIC Educational Resources Information Center
Bogard, Treavor; Liu, Min; Chiang, Yueh-hui Vanessa
2013-01-01
This multiple-case study examined how advanced learners solved a complex problem, focusing on how their frequency and application of cognitive processes contributed to differences in performance outcomes, and developing a mental model of a problem. Fifteen graduate students with backgrounds related to the problem context participated in the study.…
The Mouse Tumor Biology Database: A Comprehensive Resource for Mouse Models of Human Cancer.
Krupke, Debra M; Begley, Dale A; Sundberg, John P; Richardson, Joel E; Neuhauser, Steven B; Bult, Carol J
2017-11-01
Research using laboratory mice has led to fundamental insights into the molecular genetic processes that govern cancer initiation, progression, and treatment response. Although thousands of scientific articles have been published about mouse models of human cancer, collating information and data for a specific model is hampered by the fact that many authors do not adhere to existing annotation standards when describing models. The interpretation of experimental results in mouse models can also be confounded when researchers do not factor in the effect of genetic background on tumor biology. The Mouse Tumor Biology (MTB) database is an expertly curated, comprehensive compendium of mouse models of human cancer. Through the enforcement of nomenclature and related annotation standards, MTB supports aggregation of data about a cancer model from diverse sources and assessment of how the genetic background of a mouse strain influences the biological properties of a specific tumor type and model utility. Cancer Res; 77(21); e67-70. ©2017 American Association for Cancer Research (AACR).
Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov
2014-01-01
Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...
Martin, Andrew J; Collie, Rebecca J; Mok, Magdalena M C; McInerney, Dennis M
2016-03-01
Prior cross-cultural research with students in different national contexts (Australia and China) has shown consistency in the extent to which individual personal best (PB) goals are associated with engagement at school. This study extends this work to a multicultural context, assessing perceived PB goal structure in school and individual PB goals among Chinese- and English-speaking background Australian high school students attending the same schools. The sample comprised 450 students (N = 225 Chinese-speaking background Australian students; N = 225 matched English-speaking background Australian students) from 20 schools. We conducted multigroup path modelling to examine the following process model: Perceived PB goal structure in school → individual PB goals → school engagement → academic achievement. Findings showed that for both groups, perceived PB goal structure in school is associated with an individual's PB goals (and engagement), individual PB goals are associated with engagement, and engagement is associated with achievement. The indirect effects of perceived PB goal structure in school on achievement (via individual PB goals and engagement) and of individual PB goals on achievement (via engagement) were also significant. Notably, there was no significant difference in parameters between Chinese- and English-speaking background students, suggesting generality of the effects of perceived PB goal structure in school and individual PB goals in the engagement and achievement process. Findings hold implications for educators teaching culturally diverse classrooms and seeking to optimize students' academic growth within these contexts. © 2015 The British Psychological Society.
A standard satellite control reference model
NASA Technical Reports Server (NTRS)
Golden, Constance
1994-01-01
This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carena, Marcela; Liu, Zhen
Heavy scalar and pseudoscalar resonance searches through the $$gg\rightarrow S\rightarrow t\bar t$$ process are challenging due to the peculiar behavior of the large interference effects with the standard model $$t\bar t$$ background. Such effects generate non-trivial lineshapes from additional relative phases between the signal and background amplitudes. We provide the analytic expressions for the differential cross sections to understand the interference effects in the heavy scalar signal lineshapes. We extend our study to the case of CP-violation and further consider the effect of bottom quarks in the production and decay processes. We also evaluate the contributions from additional particles to the gluon fusion production process, such as stops and vector-like quarks, that could lead to significant changes in the behavior of the signal lineshapes. Taking into account the large interference effects, we perform lineshape searches at the LHC and discuss the importance of the systematic uncertainties and smearing effects. Lastly, we present projected sensitivities for two LHC performance scenarios to probe the $$gg\rightarrow S \rightarrow t\bar t$$ channel in various models.
NASA Astrophysics Data System (ADS)
Casadei, D.
2014-10-01
The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior a very large number of times.
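The abstract's point that the limiting-form prior differs from the flat prior can be illustrated numerically. The sketch below is hedged: the counts, the known background level, and the assumed limiting prior shape π(s) ∝ (s + b)^(-1/2) (a Jeffreys-style form for a Poisson mean s + b, which indeed yields a truncated Gamma posterior in s + b with shape tied to the observed counts) are illustrative choices, not necessarily the paper's exact expressions.

```python
import numpy as np

n_obs, b = 10, 4.0                  # observed counts, known expected background (illustrative)
s = np.linspace(0.0, 40.0, 4001)    # signal-intensity grid
ds = s[1] - s[0]

def posterior(prior):
    """Poisson(n_obs | s + b) likelihood times a prior on s, normalized on the grid."""
    logp = n_obs * np.log(s + b) - (s + b) + np.log(prior)
    p = np.exp(logp - logp.max())
    return p / (p.sum() * ds)

flat = posterior(np.ones_like(s))                  # widely (ab)used flat prior
limiting = posterior(1.0 / np.sqrt(s + b))         # assumed limiting-form prior
mean_flat = (s * flat).sum() * ds
mean_lim = (s * limiting).sum() * ds
print(round(mean_flat, 2), round(mean_lim, 2))
```

The limiting-form posterior mean sits below the flat-prior one, reflecting the flat prior's tendency to push probability mass toward larger signal values.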
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
Solving a Higgs optimization problem with quantum annealing for machine learning.
Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria
2017-10-18
The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
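The mapping from an ensemble of weak classifiers to an Ising/QUBO ground-state problem can be sketched classically. Everything below is a toy stand-in: the synthetic weak classifiers, the λ sparsity penalty, and the brute-force minimizer (in place of an annealer) are illustrative choices, not the paper's exact construction.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_weak = 400, 6
labels = rng.choice([-1, 1], n_samples)
# toy weak classifiers: each agrees with the true label ~60% of the time
h = labels * np.where(rng.random((n_weak, n_samples)) < 0.4, -1, 1)

c = h @ labels / n_samples        # classifier-label correlations (the "fields")
C = h @ h.T / n_samples           # classifier-classifier correlations
off = C - np.diag(np.diag(C))     # keep only the redundancy couplings
lam = 0.1                         # sparsity penalty (free knob, assumed)

def energy(w):
    """QUBO energy: reward label correlation, penalize redundant classifiers."""
    w = np.asarray(w, dtype=float)
    return float(w @ off @ w - 2.0 * c @ w + lam * w.sum())

# brute force over the 2^6 binary weight vectors, standing in for annealing
best = min(itertools.product([0, 1], repeat=n_weak), key=energy)
strong = np.sign(h.T @ np.asarray(best, dtype=float))
print(best, round(float(np.mean(strong == labels)), 3))
```

The binary weight vector selected at the energy minimum plays the role of the Ising ground state; an annealer would search the same energy landscape when the number of weak classifiers is too large for enumeration.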
Reconstruction of dynamical systems from resampled point processes produced by neuron models
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Pavlov, Alexey N.
2018-04-01
Characterization of the dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as sequences of interspike intervals (ISIs). This theoretical background confirms the ability to reconstruct attractors from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short datasets and show that this effect is observed for different types of spike-generation mechanisms.
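The reconstruction step the abstract relies on is the standard time-delay embedding applied to an ISI series. The sketch below uses a toy ISI sequence modulated by a chaotic logistic map as a stand-in for the chaotically driven neuron models; the embedding dimension and delay are illustrative.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style time-delay embedding of a scalar series:
    rows are vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# toy ISI series: intervals modulated by a chaotic logistic-map drive
r = np.empty(3000)
r[0] = 0.3
for t in range(1, 3000):
    r[t] = 3.99 * r[t - 1] * (1.0 - r[t - 1])
isi = 0.5 + r                      # each interval depends on the chaotic drive
emb = delay_embed(isi, dim=3, tau=1)
print(emb.shape)                   # (2998, 3)
```

Each row of `emb` is one reconstructed state vector; attractor invariants (dimensions, Lyapunov exponents) would then be estimated from this point cloud.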
Maintaining the Balance Between Manpower, Skill Levels, and PERSTEMPO
2006-01-01
requirement processes. Models and tools that integrate these dimensions would help crystallize issues, identify embedded assumptions, and surface...problems will change if the planning assumptions are incorrect or if the other systems are incapable of making the necessary adjustments. Static...Carrillo, Background and Theory Behind the Compensations, Accessions, and Personnel (CAPM) Model, Santa Monica, Calif.: RAND Corporation, MR-1667
ERIC Educational Resources Information Center
Kim, Go-en; Chung, Soondool
2016-01-01
Background: This study examines the utility of Pearlin's caregiving stress model for understanding the caregiving satisfaction of elderly mothers of adult children with intellectual disability. Methods: Mothers living in Seoul, Kyonggi, and Incheon who were 55 years of age or older and providing care for adult children with intellectual disability…
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture on quantum-like (QL) processing of information in the brain. It is not based on the physical quantum brain (e.g., Penrose), i.e., on quantum physical carriers of information. In our approach the brain creates the QL representation (QLR) of information in Hilbert space. It uses quantum information rules in decision making. The existence of such a QLR was (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach" such data can be represented by complex amplitudes. We presented [1, 2] the QL model of decision making. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine, the background fluctuations of the electromagnetic field, and rough, the cognitive image scale) induces a QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.
Modelling stock order flows with non-homogeneous intensities from high-frequency data
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.
2013-10-01
A micro-scale model is proposed for the evolution of information systems such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows, as well as the instantaneous proportion of the forces of buyers and sellers (that is, the imbalance process), without modelling the external information background. The proposed model also allows one to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and, hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
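A minimal doubly stochastic Poisson sketch (an assumed two-regime Markov-modulated intensity, not the paper's multiplicative model): an order flow whose intensity jumps between a quiet and a busy regime produces overdispersed interorder times, with coefficient of variation above 1, unlike a constant-intensity Poisson flow. All rates below are hypothetical.

```python
import numpy as np

def mmpp(T, rates, switch_rate, rng):
    """Markov-modulated Poisson process: a simple doubly stochastic
    Poisson process whose intensity alternates between two regimes.
    Restarting the exponential clock at regime boundaries is exact
    by memorylessness."""
    t, state, events = 0.0, 0, []
    while t < T:
        end = min(t + rng.exponential(1.0 / switch_rate), T)
        while True:                     # homogeneous Poisson inside the regime
            t += rng.exponential(1.0 / rates[state])
            if t >= end:
                break
            events.append(t)
        t, state = end, 1 - state
    return np.asarray(events)

rng = np.random.default_rng(3)
orders = mmpp(200.0, (2.0, 10.0), 0.5, rng)   # illustrative buy-order flow
iet = np.diff(orders)
cv = iet.std() / iet.mean()                   # > 1 signals overdispersion
print(len(orders), round(float(cv), 2))
```

For a plain Poisson process the coefficient of variation of interevent times is exactly 1; the excess here comes entirely from the stochastic intensity, which is the mechanism the paper exploits to reach heavy-tailed mixture models at the macro scale.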
NASA Astrophysics Data System (ADS)
Ohno, M.; Kawano, T.; Edahiro, I.; Shirakawa, H.; Ohashi, N.; Okada, C.; Habata, S.; Katsuta, J.; Tanaka, Y.; Takahashi, H.; Mizuno, T.; Fukazawa, Y.; Murakami, H.; Kobayashi, S.; Miyake, K.; Ono, K.; Kato, Y.; Furuta, Y.; Murota, Y.; Okuda, K.; Wada, Y.; Nakazawa, K.; Mimura, T.; Kataoka, J.; Ichinohe, Y.; Uchida, Y.; Katsuragawa, M.; Yoneda, H.; Sato, G.; Sato, R.; Kawaharada, M.; Harayama, A.; Odaka, H.; Hayashi, K.; Ohta, M.; Watanabe, S.; Kokubun, M.; Takahashi, T.; Takeda, S.; Kinoshita, M.; Yamaoka, K.; Tajima, H.; Yatsu, Y.; Uchiyama, H.; Saito, S.; Yuasa, T.; Makishima, K.; ASTRO-H HXI/SGD Team
2016-09-01
The Hard X-ray Imager (HXI) and Soft Gamma-ray Detector (SGD) onboard ASTRO-H provide high sensitivity to hard X-rays (5-80 keV) and soft gamma-rays (60-600 keV), respectively. To reduce the background, both instruments are actively shielded by large, thick bismuth germanate (BGO) scintillators. We have developed the signal processing system for the avalanche photodiodes in the BGO active shields and have demonstrated its effectiveness after assembly in the flight model of the HXI/SGD sensor and after integration into the satellite. The energy threshold achieved is about 150 keV, and the anti-coincidence efficiency for cosmic-ray events is almost 100%. Installed in the BGO active shield, the developed signal processing system successfully reduces the room background level of the main detector.
Two-dimensional time-dependent modelling of fume formation in a pulsed gas metal arc welding process
NASA Astrophysics Data System (ADS)
Boselli, M.; Colombo, V.; Ghedini, E.; Gherardi, M.; Sanibondi, P.
2013-06-01
Fume formation in a pulsed gas metal arc welding (GMAW) process is investigated by coupling a time-dependent axisymmetric two-dimensional model, which takes into account both droplet detachment and production of metal vapour, with a model for fume formation and transport based on the method of moments for the solution of the aerosol general dynamic equation. We report simulation results for a pulsed process (peak current = 350 A, background current = 30 A, period = 9 ms) with a 1 mm diameter iron wire and Ar shielding gas. The results show that metal vapour production occurs mainly at the wire tip, whereas fume formation is concentrated in the fringes of the arc in the spatial region close to the workpiece, where metal vapours are transported by convection. The proposed modelling approach allows time-dependent tracking of fumes even in plasma processes where temperature-time variations occur faster than nanoparticle transport from the nucleation region to the surrounding atmosphere, as is the case for most pulsed GMAW processes.
Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard
2008-01-01
Background With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Methods Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. Conclusions As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way. PMID:18460173
The Art and Achievements of the Hohokam.
ERIC Educational Resources Information Center
Patterson, Berniece
2000-01-01
Provides historical background on the ancient Hohokam people. Provides an art activity in which fifth grade students create effigy vessels based on their study of the Hohokam. Describes the process. Explains that students develop clay modeling skills and an appreciation for Hohokam culture. (CMK)
Sizing the science data processing requirements for EOS
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi
1991-01-01
The methodology used in the compilation and synthesis of baseline science requirements associated with the 30+ EOS (Earth Observing System) instruments and the more than 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.
Image Discrimination Models for Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.
2000-01-01
This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection is described, and results are reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates, and of methods for elucidating those templates.
Infrared images target detection based on background modeling in the discrete cosine domain
NASA Astrophysics Data System (ADS)
Ye, Han; Pei, Jihong
2018-02-01
Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background model becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the stability of the background and its separability from targets are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.
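A minimal sketch of per-frequency single-Gaussian background modeling in the DCT domain; the synthetic scene, the k-sigma decision rule, and all parameters are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, so the 2-D DCT is M @ img @ M.T."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def build_detector(frames, k_sigma=2.5):
    """Fit one Gaussian (mean, std over time) per 2-D DCT coefficient,
    then flag coefficients deviating by more than k_sigma from the model."""
    m = dct_matrix(frames[0].shape[0])      # assumes square frames
    coeffs = np.stack([m @ f @ m.T for f in frames])
    mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0) + 1e-6
    def detect(frame):
        z = np.abs(m @ frame @ m.T - mu) / sigma
        return z > k_sigma                  # mask of anomalous frequencies
    return detect

# Synthetic fluctuating background plus an injected bright target blob.
rng = np.random.default_rng(1)
frames = [rng.normal(100.0, 2.0, (32, 32)) for _ in range(50)]
detect = build_detector(frames)
scene = rng.normal(100.0, 2.0, (32, 32))
scene[10:14, 10:14] += 40.0                 # hypothetical target
```

On the injected target, the blob shifts the DC and low-frequency coefficients by many standard deviations, so the mask fires there while a pure background frame stays almost entirely below threshold.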
NASA Astrophysics Data System (ADS)
Hasegawa, K.; Lim, C. S.; Ogure, K.
2003-09-01
We propose a two-zero-texture general Zee model compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. Washout of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of the model with the constraints coming from flavor-changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.
Issues related to aircraft take-off plumes in a mesoscale photochemical model.
Bossioli, Elissavet; Tombrou, Maria; Helmis, Costas; Kurtenbach, Ralf; Wiesen, Peter; Schäfer, Klaus; Dandou, Aggeliki; Varotsos, Kostas V
2013-07-01
The physical and chemical characteristics of aircraft plumes during the take-off phase are simulated with the mesoscale CAMx model using the individual plume segment approach, in a highly resolved domain covering the Athens International Airport. Emission indices measured during take-off at the Athens International Airport are incorporated. Model predictions are compared with in situ point and path-averaged observations (NO, NO₂) downwind of the runway at the ground. The influence of the modeling process, dispersion properties, and background air composition on the chemical evolution of the aircraft plumes is examined. It is shown that the mixing properties mainly determine the plume dispersion. The initial plume properties become significant for the selection of the appropriate vertical resolution. Besides these factors, the background NOx and O₃ concentration levels control the NOx distribution and its conversion to nitrogen reservoir species. Copyright © 2013 Elsevier B.V. All rights reserved.
The interaction of family background and personal education on depressive symptoms in later life.
Schaan, Barbara
2014-02-01
This study assesses the interaction between personal education and family background during childhood on depressive symptoms in later life by applying Ross and Mirowsky's resource substitution and structural amplification theory of health and education. OLS regression models are estimated using data from the Survey of Health, Ageing and Retirement in Europe (SHARE), which covers information on current social and health status as well as retrospective life histories from 20,716 respondents aged 50 or older in thirteen European countries. Higher education helps to overcome the negative consequences of a poor family background. Since people from poor families are less likely to attain higher educational levels, they lack exactly the resource they need in order to overcome the negative consequences their non-prosperous background has on depressive symptoms; thus, low family background and low personal education amplify each other. Examining the processes described by the theory of resource substitution and structural amplification across different age groups from midlife to old age suggests that the moderating effect of education remains constant over age among people coming from a poor family background. However, there is some evidence of a decrease with age in the buffering effect of a well-off family background on depressive symptoms among the low-educated group. Furthermore, the educational gap in depression diverges with age among individuals originating from a well-off family background. Taken together, the results cautiously suggest that three processes - cumulative (dis-)advantage, age-as-leveler, and persistent inequalities - might be at work. Copyright © 2013 Elsevier Ltd. All rights reserved.
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and on service-based simulation that verifies the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. The Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. The algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using the simulation algorithm that is also described in this paper. PMID:19796403
Multi-compartmental modeling of SORLA’s influence on amyloidogenic processing in Alzheimer’s disease
2012-01-01
Background Proteolytic breakdown of the amyloid precursor protein (APP) by secretases is a complex cellular process that results in formation of neurotoxic Aβ peptides, causative of neurodegeneration in Alzheimer’s disease (AD). Processing involves monomeric and dimeric forms of APP that traffic through distinct cellular compartments where the various secretases reside. Amyloidogenic processing is also influenced by modifiers such as sorting receptor-related protein (SORLA), an inhibitor of APP breakdown and major AD risk factor. Results In this study, we developed a multi-compartment model to simulate the complexity of APP processing in neurons and to accurately describe the effects of SORLA on these processes. Based on dose–response data, our study concludes that SORLA specifically impairs processing of APP dimers, the preferred secretase substrate. In addition, SORLA alters the dynamic behavior of β-secretase, the enzyme responsible for the initial step in the amyloidogenic processing cascade. Conclusions Our multi-compartment model represents a major conceptual advance over single-compartment models previously used to simulate APP processing; and it identified APP dimers and β-secretase as the two distinct targets of the inhibitory action of SORLA in Alzheimer’s disease. PMID:22727043
Model Calibration in Watershed Hydrology
NASA Technical Reports Server (NTRS)
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
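The calibration loop described above can be illustrated with a toy single-linear-reservoir model; the model, the grid search over the recession constant, and all numeric values are assumptions for illustration, not any specific hydrologic code or dataset.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(k, precip, s0=10.0):
    """Toy linear reservoir: storage update S <- (1-k)*S + P,
    discharge Q = k*S. k is the recession constant to calibrate."""
    s, q = s0, []
    for p in precip:
        s = s + p - k * s
        q.append(k * s)
    return np.array(q)

# Synthetic "observed" discharge from a known k = 0.3 plus noise.
precip = rng.exponential(2.0, 100)
q_obs = simulate(0.3, precip) + rng.normal(0.0, 0.05, 100)

# Calibration: adjust k to minimize the squared error between
# simulated and observed discharge over the historical period.
ks = np.linspace(0.05, 0.95, 181)
errors = [np.sum((simulate(k, precip) - q_obs) ** 2) for k in ks]
k_best = ks[int(np.argmin(errors))]
```

Real calibration replaces the brute-force grid with the manual, automatic, or multi-objective search strategies the chapter reviews, but the objective (fit simulated to observed response) is the same.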
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)
2001-01-01
A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism in the data assimilation step is achieved by regionalization of the background-error covariances, which are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
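The Hadamard-product localization step can be sketched in one dimension; here the compactly supported fifth-order Gaspari-Cohn correlation stands in for the paper's three-dimensional canonical correlation function, and the ensemble and length scale are assumed toy values.

```python
import numpy as np

def gaspari_cohn(r):
    """Compactly supported 5th-order piecewise-rational correlation
    (Gaspari & Cohn); exactly zero beyond r = 2 scaled distances."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r < 2.0)
    ra, rb = r[a], r[b]
    out[a] = -0.25 * ra**5 + 0.5 * ra**4 + 0.625 * ra**3 - (5 / 3) * ra**2 + 1.0
    out[b] = ((1 / 12) * rb**5 - 0.5 * rb**4 + 0.625 * rb**3
              + (5 / 3) * rb**2 - 5.0 * rb + 4.0 - 2.0 / (3.0 * rb))
    return out

# Toy 1-D state: the noisy, rank-deficient ensemble sample covariance is
# tapered by a Hadamard (element-by-element) product with the compactly
# supported correlation, removing spurious long-range covariances.
rng = np.random.default_rng(2)
n, n_ens = 200, 20
ensemble = rng.normal(size=(n_ens, n))            # ensemble anomalies
p_sample = ensemble.T @ ensemble / (n_ens - 1)
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 25.0
p_local = p_sample * gaspari_cohn(dist)           # localized covariance
```

Because the taper is exactly zero past the support radius, each PE only ever needs covariance entries for nearby grid points, which is what enables the regional decomposition across processing elements.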
Holistic approach for automated background EEG assessment in asphyxiated full-term infants
NASA Astrophysics Data System (ADS)
Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten
2014-12-01
Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature, and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments' feature space. Three features are suggested, and further processing uses a discretized three-dimensional distribution of the segments' features represented as a 3-way data tensor. Classification is then achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved, resulting in high classification accuracy (89%) in grading background EEG abnormalities. Significance. For the first time, an algorithm for background EEG assessment has been validated on an extensive dataset that contained major artifacts and epileptic seizures. The high robustness demonstrated while processing real-case EEGs suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.
Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica.
Kang, Changku; Kim, Ye Eun; Jang, Yikweon
2016-03-02
Colour change in animals can be adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force that drives the evolution of colour-changing ability. However, the mechanism by which animals change their colour and patterns under visually heterogeneous backgrounds (i.e. consisting of more than one colour) has been identified only in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with and without patterns. Our main findings are that i) frogs primarily responded to the achromatic differences in background; ii) their contrasting dorsal patterns were expressed conditionally, depending on the brightness of the background; and iii) against mixed-colour backgrounds, frogs adopted forms intermediate between the two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour-changing ability and in the levels of dorsal pattern expression. We discuss the possibility of correlational selection on colour-changing ability and resting behaviour that maintains the high variation in colour-changing ability within populations.
ERIC Educational Resources Information Center
Lloyd, Rebecca
2015-01-01
Background: Physical Education (PE) programmes are expanding to include alternative activities yet what is missing is a conceptual model that facilitates how the learning process may be understood and assessed beyond the dominant sport-technique paradigm. Purpose: The purpose of this article was to feature the emergence of a Function-to-Flow (F2F)…
Robinson, Tommie L; Anderson, Debra; Long, Sahira
2018-02-01
There is a need to better coordinate services for children in urban settings who are at risk for communication disorders. This article addresses the barriers to obtaining services and discusses the process for creating a model for interprofessional practice to better serve patients from lower socioeconomic backgrounds. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Renshaw, Ian; Chow, Jia Yi; Davids, Keith; Hammond, John
2010-01-01
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organisation of learning environments and effective and efficient use of practice time. An…
FPGA implementation for real-time background subtraction based on Horprasert model.
Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo
2012-01-01
Background subtraction is considered the first processing stage in video surveillance systems; it consists of determining the moving objects in a scene captured by a static camera and is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments with low degradation (caused by the hardware-friendly model modification). In addition, the original model is extended to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance on Spartan-3 Xilinx FPGAs and compared the architecture to other works available in the literature, showing that it offers a good trade-off between accuracy, performance, and resource utilization. Using less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system runs at 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.
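A software sketch of the underlying Horprasert colour model: each pixel's observation is decomposed into a brightness distortion alpha (scaling along the expected background colour) and a chromaticity distortion CD (distance from that colour line). The thresholds below are illustrative, not the paper's calibrated values, and the highlight class is omitted for brevity.

```python
import numpy as np

def horprasert_classify(frame, mu, sigma, tau_cd=3.0, a_lo=0.6, a_hi=1.2):
    """Classify pixels as background / shadow / foreground.
    frame: (..., 3) observed RGB; mu, sigma: per-channel background
    mean and std. Thresholds are illustrative assumptions."""
    s = np.maximum(sigma, 1e-6)
    # Brightness distortion: least-squares scale of the observation
    # onto the expected colour, weighted by per-channel variance.
    num = (frame * mu / s**2).sum(axis=-1)
    den = (mu**2 / s**2).sum(axis=-1)
    alpha = num / np.maximum(den, 1e-12)
    # Chromaticity distortion: normalized distance from the colour line.
    cd = np.sqrt((((frame - alpha[..., None] * mu) / s) ** 2).sum(axis=-1))
    labels = np.full(frame.shape[:-1], "foreground", dtype=object)
    labels[(cd < tau_cd) & (alpha >= a_lo) & (alpha <= a_hi)] = "background"
    labels[(cd < tau_cd) & (alpha < a_lo)] = "shadow"
    return labels

mu = np.array([120.0, 80.0, 60.0])        # expected background colour
sigma = np.array([2.0, 2.0, 2.0])
frame = np.array([[[120.0, 80.0, 60.0],   # unchanged -> background
                   [48.0, 32.0, 24.0],    # darkened copy -> shadow
                   [0.0, 255.0, 0.0]]])   # different chromaticity -> foreground
labels = horprasert_classify(frame, mu, sigma)
```

This per-pixel independence is what makes the model hardware friendly: each classification needs only a few multiply-accumulates and comparisons per pixel.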
Gate modulation of proton transport in a nanopore.
Mei, Lanju; Yeh, Li-Hsien; Qian, Shizhi
2016-03-14
Proton transport in confined spaces plays a crucial role in many biological processes as well as in modern technological applications, such as fuel cells. To achieve active control of proton conductance, we investigate for the first time the gate modulation of proton transport in a pH-regulated nanopore by a multi-ion model. The model takes into account surface protonation/deprotonation reactions, surface curvature, electroosmotic flow, Stern layer, and electric double layer overlap. The proposed model is validated by good agreement with the existing experimental data on nanopore conductance with and without a gate voltage. The results show that the modulation of proton transport in a nanopore depends on the concentration of the background salt and solution pH. Without background salt, the gated nanopore exhibits an interesting ambipolar conductance behavior when pH is close to the isoelectric point of the dielectric pore material, and the net ionic and proton conductance can be actively regulated with a gate voltage as low as 1 V. The higher the background salt concentration, the lower is the performance of the gate control on the proton transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
Results are reported from a search for new physics processes in events containing a single isolated high-transverse-momentum lepton (electron or muon), energetic jets, and large missing transverse momentum. The analysis is based on a 4.98 fb⁻¹ sample of proton–proton collisions at a center-of-mass energy of 7 TeV, obtained with the CMS detector at the LHC. Three separate background estimation methods, each relying primarily on control samples in the data, are applied to a range of signal regions, providing complementary approaches for estimating the background yields. The observed yields are consistent with the predicted standard model backgrounds. The results are interpreted in terms of limits on the parameter space for the constrained minimal supersymmetric extension of the standard model, as well as on cross sections for simplified models, which provide a generic description of the production and decay of new particles in specific, topology-based final states.
Development of permissible exposure limits: the California experience.
Cohen, Richard; Steinmaus, Craig; Quinlan, Patricia; Ku, Robert; Cooper, Michael; Roberts, Tim
2006-01-01
The California OSHA Airborne Contaminant Advisory Committee reviewed several hundred substances and recommended occupational exposure limits with the intent of worker and employer protection. The model used offers important benefits. First, by allowing open meetings, the process was transparent, and input could be offered by concerned stakeholders. Second, the process was data-driven and, therefore, less susceptible to bias and error. Third, by incorporating members with backgrounds in toxicology, epidemiology, risk assessment, occupational medicine, and industrial hygiene, the process fostered a thorough and diverse assessment of substances.
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images in which several radar looks are averaged. Because sea clutter in SAR images shows spiky, heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In the proposed algorithm, an initial step detects possible ship targets. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable-distribution-based algorithm, and known ship location data from the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. The validation shows that the new CFAR algorithm based on the Alpha-stable distribution improves on the CFAR algorithm based on the Gaussian distribution. PMID:27873794
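A compact sketch of the core idea: set the CFAR threshold from a heavy-tailed alpha-stable clutter model instead of a Gaussian one. The sampler below generates symmetric alpha-stable noise via the Chambers-Mallows-Stuck transform, and the threshold is taken as an empirical quantile of a clutter training sample rather than a fitted closed form; all parameters are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def symmetric_stable(alpha, size):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

alpha, pfa = 1.7, 1e-3                       # assumed tail index and PFA
# Clutter amplitude modeled as |alpha-stable|; threshold at the
# (1 - pfa) quantile, replacing the Gaussian mean + k*sigma rule.
clutter = np.abs(symmetric_stable(alpha, 20000))
threshold = np.quantile(clutter, 1.0 - pfa)

scene = np.abs(symmetric_stable(alpha, 1000))
scene[500] = threshold * 10.0                # injected bright ship pixel
detections = np.flatnonzero(scene > threshold)
```

Because the stable quantile sits far out in the tail, the threshold is much higher than a Gaussian fit of the same data would give, which is what suppresses false alarms on spiky sea clutter.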
NASA Astrophysics Data System (ADS)
Li, Zhigang; Liu, Zhifeng; Wu, Zhibin; Zeng, Guangming; Shao, Binbin; Liu, Yujie; Jiang, Yilin; Zhong, Hua; Liu, Yang
2018-05-01
A novel graphene-based material, tea saponin functionalized reduced graphene oxide (TS-RGO), was synthesized via a facile thermal method and characterized as an adsorbent for Cd(II) removal from aqueous solutions. The factors affecting the adsorption process, including solution pH, contact time, initial Cd(II) concentration, and background electrolyte cations, were studied to optimize the conditions for maximum adsorption at room temperature. The results indicated that Cd(II) adsorption was strongly dependent on pH and could be strongly affected by background electrolytes and ionic strength. The optimal pH and required equilibrium time were 6.0 and 10 min, respectively. Cd(II) removal decreased in the presence of background electrolyte cations (Na+ < Ca2+ < Al3+). The adsorption kinetics of Cd(II) were well described by the pseudo-second-order model, and the adsorption isotherm fitted the Langmuir model well, indicating that the adsorption was a monolayer process occurring on the homogeneous surface of TS-RGO. The maximum monolayer adsorption capacity was 127 mg/g at 313 K and pH 6.0. Therefore, TS-RGO is considered a cost-effective and promising material for the removal of Cd(II) from wastewater.
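The two fits reported above (the Langmuir isotherm and pseudo-second-order kinetics) can be reproduced on synthetic data via their standard linearized forms; q_max is the paper's 127 mg/g, while K_L, q_e, and k2 are assumed values for illustration only.

```python
import numpy as np

# Langmuir isotherm: qe = q_max * K_L * Ce / (1 + K_L * Ce)
q_max, K_L = 127.0, 0.05                  # mg/g (paper), L/mg (assumed)
ce = np.linspace(5.0, 400.0, 25)          # equilibrium conc., mg/L
qe = q_max * K_L * ce / (1.0 + K_L * ce)

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(q_max * K_L)
slope, intercept = np.polyfit(ce, ce / qe, 1)
q_max_fit, K_L_fit = 1.0 / slope, slope / intercept

# Pseudo-second-order kinetics: t/qt = 1/(k2 * qe^2) + t/qe
q_eq, k2 = 120.0, 0.01                    # mg/g, g/(mg*min), both assumed
t = np.linspace(1.0, 60.0, 20)            # minutes
qt = q_eq**2 * k2 * t / (1.0 + q_eq * k2 * t)
s2, i2 = np.polyfit(t, t / qt, 1)
q_eq_fit, k2_fit = 1.0 / s2, s2**2 / i2
```

In practice the linearized regressions are run on measured (Ce, qe) and (t, qt) points; the recovered slope and intercept then give the capacity and rate constants as above.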
Resonant Raman scattering background in XRF spectra of binary samples
NASA Astrophysics Data System (ADS)
Sánchez, Héctor Jorge; Leani, Juan José
2015-02-01
In X-ray fluorescence analysis, spectra present singular characteristics produced by the different scattering processes. When atoms are irradiated with incident energy lower than and close to an absorption edge, scattering peaks appear due to an inelastic process known as resonant Raman scattering. In this work we present theoretical calculations of the resonant Raman scattering contributions to the background of X-ray fluorescence spectra of binary samples of current technological or biological interest. On the one hand, a binary alloy of Fe with Mn traces (Mn: 0.01%, Fe: 99.99%) was studied because of its importance in the stainless steel industry; on the other, a pure sample of Ti with V traces (Ti: 99%, V: 1%) was analyzed because of its current relevance in medical applications. The calculations used Shiraiwa and Fujino's model for the characteristic intensities and scattering interactions. This model makes certain assumptions and approximations, especially regarding the geometrical conditions and the incident and take-off beams. For the binary samples studied in this work and the experimental conditions considered, the calculations show that the resonant Raman scattering background is significant under the fluorescent peak, affects the symmetry of the peaks and, depending on the concentrations, can exceed the enhancement contributions (secondary fluorescence).
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 (RL05) GRACE gravity field data products are the result of extensive effort applied to improvements in the GRACE Level-1 (tracking) data products and in the background gravity models and processing methodology. As a result, the squared-error upper bound in the RL05 fields is half or less of that in the RL04 fields. The CSR-RL05 release consists of unconstrained gravity fields as well as a regularized gravity field time series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse hydrologic, oceanographic, and cryospheric processes.
NASA Astrophysics Data System (ADS)
Yoshida, K.; Naoe, H.
2016-12-01
Whether climate models reproduce the Quasi-Biennial Oscillation (QBO) appropriately is important for assessing the QBO's impact on climate change, such as global warming and solar-related variation. However, few models generated a QBO in the Coupled Model Intercomparison Project Phase 5 (CMIP5). This study focuses on the dynamical structure of the QBO and its sensitivity to the background wind pattern and model configuration. We present preliminary results from the Meteorological Research Institute earth system model, MRI-ESM2, for experiments designed by "Towards Improving the QBO in Global Climate Models (QBOi)", an activity of the Stratosphere-troposphere Processes And their Role in Climate (SPARC) project. The simulations were performed under present-day climate conditions, under repeated-annual-cycle conditions with various CO2 levels and sea surface temperatures, and as QBO hindcasts. In the present-day climate simulation, zonal wind in the equatorial stratosphere generally exhibits realistic QBO behavior. Equatorial zonal wind variability associated with the QBO is overestimated in the upper stratosphere and underestimated in the lower stratosphere. In MRI-ESM2, the QBO is mainly driven by the gravity wave drag parametrization (GWDP) introduced by Hines (1997). Compared with reanalyses, a shortage of resolved wave forcing is found, especially in the equatorial lower stratosphere. These discrepancies can be attributed to differences in wave forcing, background wind pattern, and model configuration. We intend to show results of additional sensitivity experiments examining how model configuration and background wind pattern affect the resolved wave source, wave propagation characteristics, and QBO behavior.
Electrophysiological differences in the processing of affective information in words and pictures.
Hinojosa, José A; Carretié, Luis; Valcárcel, María A; Méndez-Bértolo, Constantino; Pozo, Miguel A
2009-06-01
It is generally assumed that affective picture viewing is related to higher levels of physiological arousal than is the reading of emotional words. However, this assertion is based mainly on studies in which the processing of either words or pictures has been investigated under heterogeneous conditions. Positive, negative, relaxing, neutral, and background (stimulus fragments) words and pictures were presented to subjects in two experiments under equivalent experimental conditions. In Experiment 1, neutral words elicited an enhanced late positive component (LPC) that was associated with an increased difficulty in discriminating neutral from background stimuli. In Experiment 2, high-arousing pictures elicited an enhanced early negativity and LPC that were related to facilitated processing of these stimuli. Thus, it seems that under some circumstances, the processing of affective information captures attention only with more biologically relevant stimuli. Also, these data might be better interpreted on the basis of those models that postulate different access to affective information for words and pictures.
Anomalous single production of the fourth generation quarks at the CERN LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciftci, R.
Possible anomalous single productions of the fourth standard model generation up- and down-type quarks at the CERN Large Hadron Collider are studied. Namely, pp → u₄(d₄)X with the subsequent u₄ → bW⁺ process followed by the leptonic decay of the W boson, and the d₄ → bγ (and its h.c.) decay channel, are considered. Signatures of these processes and the corresponding standard model backgrounds are discussed in detail. Discovery limits for the quark mass and achievable values of the anomalous coupling strength are determined.
Supporting the Whole Child through Coordinated Policies, Processes, and Practices
ERIC Educational Resources Information Center
Murray, Sharon D.; Hurley, James; Ahmed, Shannon R.
2015-01-01
Background: The Whole School, Whole Community, Whole Child (WSCC) model provides a framework for promoting greater alignment, integration, and collaboration between health and education across the school setting and improving students' cognitive, physical, social, and emotional development. By providing a learning environment that ensures each…
Theoretical model to explain the problem-solving process in physics
NASA Astrophysics Data System (ADS)
Lopez, Carlos
2011-03-01
This work reports a theoretical model developed to explain the mental mechanisms of knowledge building during the problem-solving process in physics, using a hybrid approach of assimilation and formation of concepts. The model has been termed conceptual chains and is represented by graphic diagrams of conceptual dependency, which have yielded information about the background knowledge required during the learning process, as well as about the formation of diverse structures that correspond to distinct ways of networking concepts. Additionally, the conceptual constructs of the model have been classified according to five types of knowledge. Evidence was found of the influence of these structures, as well as of the distinct types of knowledge, on the degree of difficulty of the problems. I am grateful to Laureate International Universities, Baltimore, MD, USA, for the financing granted for the accomplishment of this work.
Challenges and opportunities for heavy scalar searches in the tt¯ channel at the LHC
Carena, Marcela; Liu, Zhen
2016-11-25
Heavy scalar and pseudoscalar resonance searches through the $gg \rightarrow S \rightarrow t\bar{t}$ process are challenging due to the peculiar behavior of the large interference effects with the standard model $t\bar{t}$ background. Such effects generate non-trivial lineshapes from additional relative phases between the signal and background amplitudes. We provide the analytic expressions for the differential cross sections to understand the interference effects in the heavy scalar signal lineshapes. We extend our study to the case of CP violation and further consider the effect of bottom quarks in the production and decay processes. We also evaluate the contributions from additional particles to the gluon fusion production process, such as stops and vector-like quarks, that could lead to significant changes in the behavior of the signal lineshapes. Taking into account the large interference effects, we perform lineshape searches at the LHC and discuss the importance of systematic uncertainties and smearing effects. Lastly, we present projected sensitivities for two LHC performance scenarios to probe the $gg \rightarrow S \rightarrow t\bar{t}$ channel in various models.
First results of GERDA Phase II and consistency with background models
NASA Astrophysics Data System (ADS)
Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.
2017-01-01
GERDA (GERmanium Detector Array) is an experiment for the search for neutrinoless double beta decay (0νββ) in ⁷⁶Ge, located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high-purity germanium detectors submerged in liquid argon (LAr). Phase II of data taking started in December 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in ⁷⁶Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operated to reach an exposure of 100 kg·yr within about 3 years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude to reach a sensitivity of T_{1/2}^{0ν} = O(10²⁶) yr. To achieve the necessary background reduction, the setup was complemented with an LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models. Furthermore, the ²²⁶Ra and ²³²Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on this process of T_{1/2}^{0ν} > 5.3 × 10²⁵ yr (90% C.L., sensitivity 4.0 × 10²⁵ yr). First results of GERDA Phase II will be presented.
A flowgraph model for bladder carcinoma
2014-01-01
Background: Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease remains poorly understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process, but this approach is insufficient for modeling the evolution of the disease more completely. The semi-Markov framework allows a more realistic approach, but its calculations frequently become intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results: We have built a successful model for a simple but representative case. Conclusion: The flowgraph approach is suitable for modeling superficial bladder cancer. PMID:25080066
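A minimal sketch of the flowgraph idea (waiting-time distributions composed along the paths of a multistate disease model). The states, rates, and exponential sojourn times below are illustrative assumptions, not the paper's fitted model: for branches in series, the moment generating function (MGF) of the total passage time is the product of the branch MGFs, and the mean time to the final state is M'(0).

```python
import random

# Illustrative flowgraph for disease evolution after surgery (assumed
# states/rates, not the paper's fitted model): two exponential passages
# in series, surgery -> recurrence -> progression.
RATE_RECUR = 0.8       # recurrences per year (assumption)
RATE_PROGRESS = 0.25   # progressions per year (assumption)

def mgf_series(s, rates):
    """MGF of the total passage time for exponential branches in series:
    the product of the branch MGFs lambda / (lambda - s)."""
    m = 1.0
    for lam in rates:
        m *= lam / (lam - s)
    return m

def mean_passage_time(rates, h=1e-6):
    """Mean first-passage time M'(0), by central difference of the MGF."""
    return (mgf_series(h, rates) - mgf_series(-h, rates)) / (2 * h)

def simulate(rates, n=200_000, seed=1):
    """Monte Carlo check: average of summed exponential sojourn times."""
    rng = random.Random(seed)
    return sum(sum(rng.expovariate(lam) for lam in rates)
               for _ in range(n)) / n

rates = [RATE_RECUR, RATE_PROGRESS]
print(round(mean_passage_time(rates), 3))  # 1/0.8 + 1/0.25 = 5.25 years
```

Branching (recurrence at the same grade versus progression) enters the same framework by mixing branch MGFs with their transition probabilities; the series case keeps the sketch short.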
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
NASA Astrophysics Data System (ADS)
Lamb, B. K.; Gonzalez Abraham, R.; Avise, J. C.; Chung, S. H.; Salathe, E. P.; Zhang, Y.; Guenther, A. B.; Wiedinmyer, C.; Duhl, T.; Streets, D. G.
2013-05-01
Global change will clearly have a significant impact on the environment. Among the concerns for future air quality in North America, intercontinental transport of pollution has become increasingly important. In this study, we examined the effect of projected changes in Asian emissions and of emissions from lightning and wildfires on background ozone concentrations within Mexico and the continental US. This provides a basis for developing an understanding of North American background levels and how they may change in the future. Meteorological fields were downscaled from the results of the ECHAM5 global climate model using the Weather Research and Forecasting (WRF) model. Two nested domains were employed: one covering most of the Northern Hemisphere from eastern Asia to North America using 220 km grid cells (the semi-hemispheric domain) and one covering the continental US and northern Mexico using 36 km grid cells. Meteorological results from WRF were used to drive the MEGAN biogenic emissions model, the SMOKE emissions processing tool, and the CMAQ chemical transport model to predict ozone concentrations for current (1995-2004) and future (2045-2054) summertime conditions. The MEGAN model was used to calculate biogenic emissions for all simulations. For the semi-hemispheric domain, year-2000 global emissions of gases (ozone precursors) from anthropogenic (outside of North America), natural, and biomass burning sources from the POET and EDGAR emission inventories were used. The global tabulation for black and organic carbon (BC and OC, respectively) was obtained from Bond et al. (2004). For the future decade, the current emissions were projected to the year 2050 following the Intergovernmental Panel on Climate Change (IPCC) A1B emission scenario. Anthropogenic emissions from the US, Canada, and Mexico were omitted so that only global background concentrations and local biogenic, wildfire, and lightning emissions were treated.
In this paper, we focus on background ozone levels in Mexico due to changes in future climate, local biogenic emissions and global emissions.
Using texts in science education: cognitive processes and knowledge representation.
van den Broek, Paul
2010-04-23
Texts form a powerful tool in teaching concepts and principles in science. How do readers extract information from a text, and what are the limitations in this process? Central to comprehension of and learning from a text is the construction of a coherent mental representation that integrates the textual information and relevant background knowledge. This representation engenders learning if it expands the reader's existing knowledge base or if it corrects misconceptions in this knowledge base. The Landscape Model captures the reading process and the influences of reader characteristics (such as working-memory capacity, reading goal, prior knowledge, and inferential skills) and text characteristics (such as content/structure of presented information, processing demands, and textual cues). The model suggests factors that can optimize--or jeopardize--learning science from text.
Solar energetic particle transport and the possibility of wave generation by streaming electrons
NASA Astrophysics Data System (ADS)
Strauss, R. D. T.; le Roux, J. A.
2017-12-01
After being accelerated close to the Sun, solar energetic particles (SEPs) are transported (mainly) along the turbulent interplanetary magnetic field. In this study, we simulate the propagation of 100 keV electrons as they are scattered in the interplanetary medium. A consequence of these wave-particle interactions is the possible modification (either growth or damping) of the background turbulence by anisotropic SEP electron beams. This process was thought to be negligible and was therefore neglected in past modeling approaches. However, recent observations and modeling by Agueda and Lario (2016) suggest that wave generation may be significant, so it is included and evaluated in our present model. Our results suggest that wave amplification by streaming SEP electrons is indeed possible and may even significantly alter the background turbulent field. However, the simulations show that this process is much too weak to produce observable effects at Earth's orbit, although such effects may well be observed in the future by spacecraft closer to the Sun, presenting an intriguing observational opportunity for either the Solar Orbiter or the Parker Solar Probe spacecraft. Lastly, we note that the level of perpendicular diffusion may also play an important role in determining the effectiveness of the wave growth process. Reference: Agueda, N. and Lario, D., Release History and Transport Parameters of Relativistic Solar Electrons Inferred From Near-the-Sun In Situ Observations, ApJ, 829, 131, 2016.
Radiometric spectral and band rendering of targets using anisotropic BRDFs and measured backgrounds
NASA Astrophysics Data System (ADS)
Hilgers, John W.; Hoffman, Jeffrey A.; Reynolds, William R.; Jafolla, James C.
2000-07-01
Achieving ultra-high-fidelity signature modeling of targets requires a significant level of complexity in all of the components of the rendering process. Specifically, the reflectance of the surface must be described using the bidirectional reflectance distribution function (BRDF). In addition, the spatial representation of the background must be high fidelity. A methodology and corresponding model for spectral and band rendering of targets using both isotropic and anisotropic BRDFs is presented. In addition, a set of tools will be described for generating theoretical anisotropic BRDFs and for reducing the data required to describe an anisotropic BRDF by 5 orders of magnitude. The methodology is a hybrid, using a spectrally measured panorama of the background mapped to a large hemisphere. Both radiosity and ray-tracing approaches are incorporated simultaneously for a robust solution. In the thermal domain, spectral emission is also included in the solution. Rendering examples using several BRDFs will be presented.
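As a rough illustration of what evaluating an anisotropic BRDF involves, the sketch below uses the Ward model, one common analytic choice; the reflectance and roughness parameters are invented for the example and are not taken from the measured data described in the paper.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def ward_brdf(wi, wo, rho_d=0.3, rho_s=0.2, ax=0.15, ay=0.45):
    """Anisotropic Ward BRDF evaluated in the local shading frame
    (z = surface normal, x/y = tangent axes). All parameter values are
    illustrative assumptions, not measured data."""
    cos_i, cos_o = wi[2], wo[2]
    if cos_i <= 0 or cos_o <= 0:
        return 0.0
    h = normalize((wi[0] + wo[0], wi[1] + wo[1], wi[2] + wo[2]))
    # tan^2(theta_h) split along the two tangent axes,
    # ((h_x/ax)^2 + (h_y/ay)^2) / h_z^2, gives the anisotropic lobe.
    exponent = -((h[0] / ax) ** 2 + (h[1] / ay) ** 2) / (h[2] ** 2)
    spec = rho_s * math.exp(exponent) / (
        4 * math.pi * ax * ay * math.sqrt(cos_i * cos_o))
    return rho_d / math.pi + spec

# The lobe is narrower along x (ax < ay): tilting the incident direction
# toward x darkens the specular term faster than tilting toward y.
tilt_x = ward_brdf((math.sin(0.5), 0.0, math.cos(0.5)), (0.0, 0.0, 1.0))
tilt_y = ward_brdf((0.0, math.sin(0.5), math.cos(0.5)), (0.0, 0.0, 1.0))
print(tilt_x < tilt_y)  # True
```

In a renderer this function would be evaluated per wavelength band, with the incident radiance taken from the measured background panorama described in the abstract.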
Interpretation of the COBE FIRAS CMBR spectrum
NASA Technical Reports Server (NTRS)
Wright, E. L.; Mather, J. C.; Fixsen, D. J.; Kogut, A.; Shafer, R. A.; Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Silverberg, R. F.; Smoot, G. F.
1994-01-01
The cosmic microwave background radiation (CMBR) spectrum measured by the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on NASA's Cosmic Background Explorer (COBE) is indistinguishable from a blackbody, implying stringent limits on energy release in the early universe later than the time t = 1 yr after the big bang. We compare the FIRAS data to previous precise measurements of the cosmic microwave background spectrum and find reasonable agreement. We discuss the implications of the |y| < 2.5 × 10⁻⁵ and |μ| < 3.3 × 10⁻⁴ 95% confidence limits found by Mather et al. (1994) for many processes occurring after t = 1 yr, such as explosive structure formation, reionization, and dissipation of small-scale density perturbations. We place limits on models with dust plus Population III stars, or evolving populations of IR galaxies, by directly comparing the Mather et al. spectrum to the model predictions.
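For context, the quoted y and μ limits bound two standard spectral distortion forms; the textbook parameterizations below (with x = hν/kT₀ the dimensionless frequency) are general background, not expressions reproduced from this paper.

```latex
% Chemical-potential (Bose-Einstein) distortion, bounded by |\mu|:
n(x) = \frac{1}{e^{x+\mu} - 1}, \qquad x = \frac{h\nu}{k T_0}

% Comptonization (y) distortion, expressed as a temperature deviation:
\frac{\Delta T}{T} = y \left[ x \coth\!\left(\frac{x}{2}\right) - 4 \right]
```

Both reduce to an undistorted blackbody as μ → 0 and y → 0, which is why the FIRAS blackbody fit translates directly into upper limits on these parameters.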
ERIC Educational Resources Information Center
Tian-Ping, Yang
2012-01-01
Since the implementation of the reform and opening-up policy, China's teacher education has achieved significant success in policy design, the legislation process, theory research, system reform, model innovation, and the building of a teaching qualification system. Teachers' educational background level has increased. Teachers' professional ethics and teaching…
USDA-ARS?s Scientific Manuscript database
Background/Question/Methods Global climate change models predict increasing drought during the growing season, which will alter many ecosystem processes including soil CO2 efflux (JCO2), with potential consequences for carbon retention in soils. Soil moisture, soil temperature and plant traits such...
Background/Question/Methods The effectiveness of riparian forest buffers and other green infrastructure for reducing nitrogen export to agricultural streams has been well described experimentally, but a clear understanding of process-level hydrological and biogeochemical control...
Situated Instructional Coaching: A Case Study of Faculty Professional Development
ERIC Educational Resources Information Center
Czajka, Charles Doug; McConnell, David
2016-01-01
Background: Barriers to reforming traditional lecture-based undergraduate STEM classes are numerous and include time constraints, lack of training, and instructor's beliefs about teaching and learning. This case study documents the use of a situated instructional coaching process as a method of faculty professional development. In this model, a…
Prenatal Exposure to Maternal Depression and Cortisol Influences Infant Temperament
ERIC Educational Resources Information Center
Davis, Elysia Poggi; Glynn, Laura M.; Schetter, Christine Dunkel; Hobel, Calvin; Chicz-Demet, Aleksandra; Sandman, Curt A.
2007-01-01
Background: Accumulating evidence indicates that prenatal maternal and fetal processes can have a lasting influence on infant and child development. Results from animal models indicate that prenatal exposure to maternal stress and stress hormones has lasting consequences for development of the offspring. Few prospective studies of human pregnancy…
Differentiation in Outcome-Focused Physical Education: Pedagogical Rhetoric and Reality
ERIC Educational Resources Information Center
Whipp, Peter; Taggart, Andrew; Jackson, Ben
2014-01-01
Background: This study was grounded in the differentiated instructional model where teachers tailor content, process/support, and product in response to their students' levels of readiness and interest. The value of differentiated teaching is well established; however, the implementation of such a technique is difficult due to differences in…
ERIC Educational Resources Information Center
Fryer, Luke K.; Vermunt, Jan D.
2018-01-01
Background: Contemporary models of student learning within higher education are often inclusive of processing and regulation strategies. Considerable research has examined their use over time and their (person-centred) convergence. The longitudinal stability/variability of learning strategy use, however, is poorly understood, but essential to…
Sleep Disruptions and Emotional Insecurity Are Pathways of Risk for Children
ERIC Educational Resources Information Center
El-Sheikh, Mona; Buckhalt, Joseph A.; Cummings, E. Mark; Keller, Peggy
2007-01-01
Background: Sleep problems are prevalent in American children. A critical need is to identify sources and processes related to sleep disruptions and their sequelae. We examined a model linking parental marital conflict and children's emotional insecurity, sleep disruptions, and their adjustment and academic problems. Method: One hundred and…
Regulation of Motivation: Contextual and Social Aspects
ERIC Educational Resources Information Center
Wolters, Christopher A.
2011-01-01
Background: Models of self-regulated learning have been used extensively as a way of understanding how students understand, monitor, and manage their own academic functioning. The regulation of motivation is a facet of self-regulated learning that describes students' efforts to control their own motivation or motivational processing. The…
Plasma Processes for Semiconductor Fabrication
NASA Astrophysics Data System (ADS)
Hitchon, W. N. G.
1999-01-01
Plasma processing is a central technique in the fabrication of semiconductor devices. This self-contained book provides an up-to-date description of plasma etching and deposition in semiconductor fabrication. It presents the basic physics and chemistry of these processes, and shows how they can be accurately modeled. The author begins with an overview of plasma reactors and discusses the various models for understanding plasma processes. He then covers plasma chemistry, addressing the effects of different chemicals on the features being etched. Having presented the relevant background material, he then describes in detail the modeling of complex plasma systems, with reference to experimental results. The book closes with a useful glossary of technical terms. No prior knowledge of plasma physics is assumed in the book. It contains many homework exercises and serves as an ideal introduction to plasma processing and technology for graduate students of electrical engineering and materials science. It will also be a useful reference for practicing engineers in the semiconductor industry.
Emulating a flexible space structure: Modeling
NASA Technical Reports Server (NTRS)
Waites, H. B.; Rice, S. C.; Jones, V. L.
1988-01-01
Control Dynamics, in conjunction with Marshall Space Flight Center, has participated in the modeling and testing of Flexible Space Structures. Through the series of configurations tested and the many techniques used for collecting, analyzing, and modeling the data, many valuable insights have been gained and important lessons learned. This paper discusses the background of the Large Space Structure program, Control Dynamics' involvement in testing and modeling of the configurations (especially the Active Control Technique Evaluation for Spacecraft (ACES) configuration), the results from these two processes, and insights gained from this work.
Yamamoto, Satoshi; Ooshima, Yuki; Nakata, Mitsugu; Yano, Takashi; Matsuoka, Kunio; Watanabe, Sayuri; Maeda, Ryouta; Takahashi, Hideki; Takeyama, Michiyasu; Matsumoto, Yoshio; Hashimoto, Tadatoshi
2013-06-01
Gene-targeting technology using mouse embryonic stem (ES) cells has become the "gold standard" for analyzing gene functions and producing disease models. Recently, genetically modified mice with multiple mutations have increasingly been produced to study the interaction between proteins and polygenic diseases. However, introduction of an additional mutation into mice already harboring several mutations by conventional natural crossbreeding is an extremely time- and labor-intensive process. Moreover, to do so in mice with a complex genetic background, several years may be required if the genetic background is to be retained. Establishing ES cells from multiple-mutant mice, or disease-model mice with a complex genetic background, would offer a possible solution. Here, we report the establishment and characterization of novel ES cell lines from a mouse model of Alzheimer's disease (3xTg-AD mouse, Oddo et al. in Neuron 39:409-421, 2003) harboring 3 mutated genes (APPswe, TauP301L, and PS1M146V) and a complex genetic background. Thirty blastocysts were cultured and 15 stable ES cell lines (male: 11; female: 4) obtained. By injecting these ES cells into diploid or tetraploid blastocysts, we generated germline-competent chimeras. Subsequently, we confirmed that F1 mice derived from these animals showed similar biochemical and behavioral characteristics to the original 3xTg-AD mice. Furthermore, we introduced a gene-targeting vector into the ES cells and successfully obtained gene-targeted ES cells, which were then used to generate knockout mice for the targeted gene. These results suggest that the present methodology is effective for introducing an additional mutation into mice already harboring multiple mutated genes and/or a complex genetic background.
Background studies for the MINER Coherent Neutrino Scattering reactor experiment
NASA Astrophysics Data System (ADS)
Agnolet, G.; Baker, W.; Barker, D.; Beck, R.; Carroll, T. J.; Cesar, J.; Cushman, P.; Dent, J. B.; De Rijck, S.; Dutta, B.; Flanagan, W.; Fritts, M.; Gao, Y.; Harris, H. R.; Hays, C. C.; Iyer, V.; Jastram, A.; Kadribasic, F.; Kennedy, A.; Kubik, A.; Lang, K.; Mahapatra, R.; Mandic, V.; Marianno, C.; Martin, R. D.; Mast, N.; McDeavitt, S.; Mirabolfathi, N.; Mohanty, B.; Nakajima, K.; Newhouse, J.; Newstead, J. L.; Ogawa, I.; Phan, D.; Proga, M.; Rajput, A.; Roberts, A.; Rogachev, G.; Salazar, R.; Sander, J.; Senapati, K.; Shimada, M.; Soubasis, B.; Strigari, L.; Tamagawa, Y.; Teizer, W.; Vermaak, J. I. C.; Villano, A. N.; Walker, J.; Webb, B.; Wetzel, Z.; Yadavalli, S. A.
2017-05-01
The proposed Mitchell Institute Neutrino Experiment at Reactor (MINER) experiment at the Nuclear Science Center at Texas A&M University will search for coherent elastic neutrino-nucleus scattering within close proximity (about 2 m) of a 1 MW TRIGA nuclear reactor core using low threshold, cryogenic germanium and silicon detectors. Given the Standard Model cross section of the scattering process and the proposed experimental proximity to the reactor, as many as 5-20 events/kg/day are expected. We discuss the status of preliminary measurements to characterize the main backgrounds for the proposed experiment. Both in situ measurements at the experimental site and simulations using the MCNP and GEANT4 codes are described. A strategy for monitoring backgrounds during data taking is briefly discussed.
A finite element simulation of biological conversion processes in landfills.
Robeck, M; Ricken, T; Widmann, R
2011-04-01
Landfills are the most common means of waste disposal worldwide. Biological processes convert the organic material into an environmentally harmful landfill gas, which contributes to the greenhouse effect. After the deposition of waste has stopped, the conversion processes continue and emissions last for several decades, up to 100 years and longer. A good prediction of these processes is of high importance for landfill operators as well as for authorities, but suitable models for a realistic description of landfill processes are rather scarce. In order to take the strongly coupled conversion processes into account, a constitutive three-dimensional model based on the multiphase Theory of Porous Media (TPM) has been developed at the University of Duisburg-Essen. The theoretical formulations are implemented in the finite element code FEAP. With the presented calculation concept we are able to simulate the coupled processes that occur in an actual landfill. The model's theoretical background and the results of the simulations, as well as a successfully performed simulation of a real landfill body, are shown in the following.
Engaging Students In Modeling Instruction for Introductory Physics
NASA Astrophysics Data System (ADS)
Brewe, Eric
2016-05-01
Teaching introductory physics is arguably one of the most important things that a physics department does. It is the primary way that students from other science disciplines engage with physics and it is the introduction to physics for majors. Modeling instruction is an active learning strategy for introductory physics built on the premise that science proceeds through the iterative process of model construction, development, deployment, and revision. We describe the role that participating in authentic modeling has in learning and then explore how students engage in this process in the classroom. In this presentation, we provide a theoretical background on models and modeling and describe how these theoretical elements are enacted in the introductory university physics classroom. We provide both quantitative and video data to link the development of a conceptual model to the design of the learning environment and to student outcomes. This work is supported in part by DUE #1140706.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, the partial rank correlation coefficient, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and an intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
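One of the global methods listed, the partial rank correlation coefficient (PRCC), can be sketched in a few lines. This is a generic illustration on a toy two-parameter model of our own invention, not SBML-SAT's implementation: rank-transform the sampled inputs and the model output, remove the linear effect of the other parameter on the ranks, and correlate the residuals.

```python
import random

def ranks(xs):
    """Rank-transform a sample (0 = smallest); ties are not expected here."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residuals(y, x):
    """Residuals of a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return [yi - (my + beta * (xi - mx)) for xi, yi in zip(x, y)]

def prcc(param, other, output):
    """PRCC of `param` vs `output`, controlling for `other` on ranks."""
    rp, ro, ry = ranks(param), ranks(other), ranks(output)
    return pearson(residuals(rp, ro), residuals(ry, ro))

# Toy model (assumed): output rises steeply with k1, falls mildly with k2.
rng = random.Random(0)
k1 = [rng.uniform(0.1, 1.0) for _ in range(500)]
k2 = [rng.uniform(0.1, 1.0) for _ in range(500)]
out = [a ** 2 / (0.05 + 0.1 * b) for a, b in zip(k1, k2)]

print(round(prcc(k1, k2, out), 2), round(prcc(k2, k1, out), 2))
```

With more than two parameters, the controlling step becomes a multiple regression on all remaining rank-transformed inputs; the one-covariate case keeps the sketch short.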
Colour and pattern change against visually heterogeneous backgrounds in the tree frog Hyla japonica
Kang, Changku; Kim, Ye Eun; Jang, Yikweon
2016-01-01
Colour change in animals can be an adaptive phenotypic plasticity in heterogeneous environments. Camouflage through background colour matching has been considered a primary force driving the evolution of colour-changing ability. However, the mechanism by which animals change their colour and patterns against visually heterogeneous backgrounds (i.e. those consisting of more than one colour) has been identified only in limited taxa. Here, we investigated the colour change process of the Japanese tree frog (Hyla japonica) against patterned backgrounds and elucidated how the expression of dorsal patterns changes against various achromatic/chromatic backgrounds with and without patterns. Our main findings are that i) frogs primarily responded to achromatic differences in the background, ii) their contrasting dorsal patterns were conditionally expressed depending on the brightness of the background, and iii) against mixed-colour backgrounds, frogs adopted forms intermediate between the two colours. Using predator (avian and snake) vision models, we determined that colour differences against different backgrounds yielded perceptible changes in dorsal colours. We also found substantial individual variation in colour-changing ability and in the levels of dorsal pattern expression between individuals. We discuss the possibility of correlational selection on colour-changing ability and resting behaviour that maintains the high variation in colour-changing ability within the population. PMID:26932675
PREDICTION METRICS FOR CHEMICAL DETECTION IN LONG-WAVE INFRARED HYPERSPECTRAL IMAGERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chilton, M.; Walsh, S.J.; Daly, D.S.
2009-01-01
Natural and man-made chemical processes generate gaseous plumes that may be detected by hyperspectral imaging, which produces a matrix of spectra affected by the chemical constituents of the plume, the atmosphere, the bounding background surface, and instrument noise. A physics-based model of observed radiance shows that high chemical absorbance and low background emissivity result in a larger chemical signature. Using simulated hyperspectral imagery, this study investigated two metrics that exploit this relationship. The objective was to explore how well the chosen metrics predicted when a chemical would be more easily detected against one background type than another. The two predictor metrics correctly rank-ordered the backgrounds for about 94% of the chemicals tested, as compared with the background rank orders from Whitened Matched Filtering (a detection algorithm) of the simulated spectra. These results suggest that the metrics provide a reasonable summary of how background emissivity and chemical absorbance interact to produce the at-sensor chemical signal. This study suggests that similarly effective predictors that account for more general physical conditions may be derived.
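A whitened matched filter, the reference detector above, scores each pixel spectrum against a target signature after normalizing by the background statistics. A minimal sketch, assuming a diagonal background covariance for simplicity; the three-band numbers are hypothetical, not values from this study:

```python
import math

def whitened_matched_filter(x, mean, var, sig):
    """Matched-filter detection score for pixel spectrum x against target
    signature sig. var holds per-band background variances (i.e. a diagonal
    covariance assumption); the full method uses the inverse covariance matrix."""
    num = sum((xi - mi) * si / vi for xi, mi, vi, si in zip(x, mean, var, sig))
    den = math.sqrt(sum(si * si / vi for si, vi in zip(sig, var)))
    return num / den

# Hypothetical 3-band example: a pixel containing some of the chemical
# signature scores higher than a pure-background pixel.
mean, var = [1.0, 1.0, 1.0], [0.1, 0.1, 0.1]
sig = [0.5, -0.2, 0.3]
plume_pixel = [m + 0.4 * s for m, s in zip(mean, sig)]
score_plume = whitened_matched_filter(plume_pixel, mean, var, sig)
score_bg = whitened_matched_filter(mean, mean, var, sig)
```

The denominator makes scores comparable across signatures, which is what allows background types to be rank-ordered by detectability.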
Regional and global modeling estimates of policy relevant background ozone over the United States
NASA Astrophysics Data System (ADS)
Emery, Christopher; Jung, Jaegun; Downey, Nicole; Johnson, Jeremiah; Jimenez, Michele; Yarwood, Greg; Morris, Ralph
2012-02-01
Policy Relevant Background (PRB) ozone, as defined by the US Environmental Protection Agency (EPA), refers to ozone concentrations that would occur in the absence of all North American anthropogenic emissions. PRB enters into the calculation of health risk benefits, and as the US ozone standard approaches background levels, PRB is increasingly important in determining the feasibility and cost of compliance. As PRB is a hypothetical construct, modeling is a necessary tool. Since 2006 EPA has relied on global modeling to establish PRB for its regulatory analyses. Recent assessments with higher-resolution global models exhibit improved agreement with remote observations and modest upward shifts in PRB estimates. This paper shifts the paradigm to a regional model (CAMx) run at 12 km resolution, for which North American boundary conditions were provided by a low-resolution version of the GEOS-Chem global model. We conducted a comprehensive model inter-comparison, from which we elucidate differences in predictive performance against ozone observations and differences in temporal and spatial background variability over the US. In general, CAMx performed better in replicating observations at remote monitoring sites, and performance remained better at higher concentrations. While spring and summer mean PRB predicted by GEOS-Chem ranged from 20 to 45 ppb, CAMx-predicted PRB ranged from 25 to 50 ppb and reached well over 60 ppb in the west due to event-oriented phenomena such as stratospheric intrusions and wildfires. CAMx showed a higher correlation between modeled PRB and total observed ozone, which is significant for health risk assessments. A case study during April 2006 suggests that stratospheric exchange of ozone is underestimated in both models on an event basis. We conclude that wildfires, lightning NOx and stratospheric intrusions contribute a significant level of uncertainty in estimating PRB, and that PRB will require careful consideration in the ozone standard setting process.
NASA Technical Reports Server (NTRS)
Weidenspointner, G.; Harris, M. J.; Sturner, S.; Teegarden, B. J.; Ferguson, C.
2004-01-01
Intense and complex instrumental backgrounds, against which the much smaller signals from celestial sources have to be discerned, are a notorious problem for low and intermediate energy gamma-ray astronomy (approximately 50 keV - 10 MeV). Therefore a detailed qualitative and quantitative understanding of instrumental line and continuum backgrounds is crucial for most stages of gamma-ray astronomy missions, ranging from the design and development of new instrumentation through performance prediction to data reduction. We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package, to simulate ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds. The MGGPOD package and documentation are publicly available for download. We demonstrate the capabilities of the MGGPOD suite by modeling high resolution gamma-ray spectra recorded by the Transient Gamma-Ray Spectrometer (TGRS) on board Wind during 1995. The TGRS is a Ge spectrometer operating in the 40 keV to 8 MeV range. Due to its fine energy resolution, these spectra reveal the complex instrumental background in formidable detail, particularly the many prompt and delayed gamma-ray lines. We evaluate the successes and failures of the MGGPOD package in reproducing TGRS data, and provide identifications for the numerous instrumental lines.
Macro-magnetic Modeling of the ARL Microelectromechanical System (MEMS) Flux Concentrator
2011-09-01
are drawn as solid pieces and assigned the material properties of permalloy (nickel-iron [ NiFe ]) with a permeability of 5,000 as that is a value...energy densities, and saturation. The modeling process consists of drawing the objects of interest, assigning properties (coercivity, permeability...that is readily achieved in thin films of the material. The material properties assigned to this background are those of a vacuum, with a relative
Background | Office of Cancer Clinical Proteomics Research
The term "proteomics" refers to the large-scale, comprehensive study of a specific proteome resulting from its genome, including the abundances of proteins, their variations and modifications, and their interacting partners and networks, in order to understand the cellular processes involved. Similarly, "cancer proteomics" refers to comprehensive analyses of proteins and their derivatives translated from a specific cancer genome using a human biospecimen or a preclinical model (e.g., cultured cells or an animal model).
A unified account of tilt illusions, association fields, and contour detection based on elastica.
Keemink, Sander W; van Rossum, Mark C W
2016-09-01
As expressed in the Gestalt law of good continuation, human perception tends to associate stimuli that form smooth continuations. Contextual modulation in primary visual cortex, in the form of association fields, is believed to play an important role in this process. Yet a unified and principled account of the good continuation law on the neural level is lacking. In this study we introduce a population model of primary visual cortex. Its contextual interactions depend on the elastica curvature energy of the smoothest contour connecting oriented bars. As expected, this model leads to association fields consistent with data. In addition, however, the model displays tilt illusions for stimulus configurations with gratings and single bars that closely match psychophysics. Furthermore, the model explains not only pop-out of contours amid a variety of backgrounds, but also pop-out of single targets amid a uniform background. We thus propose that elastica is a unifying principle of the visual cortical network. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Measurement of a model of implementation for health care: toward a testable theory
2012-01-01
Background Greenhalgh et al. used a considerable evidence base to develop a comprehensive model of the implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.'s model by providing background, rationale, working definitions, and measurement of key constructs. Methods A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model's broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each are reported. Conclusion Despite limitations, the mixed-methods approach to measurement using survey items, interview questions, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451
Analysis for signal-to-noise ratio of hyper-spectral imaging FTIR interferometer
NASA Astrophysics Data System (ADS)
Li, Xun-niu; Zheng, Wei-jian; Lei, Zheng-gang; Wang, Hai-yang; Fu, Yan-peng
2013-08-01
The signal-to-noise ratio of a hyper-spectral imaging FTIR interferometer system plays a decisive role in the performance of the instrument, so it must be analyzed during the development process. Based on a simplified target/background model, the energy transfer model of the LWIR hyper-spectral imaging interferometer is discussed. The noise equivalent spectral radiance (NESR) of the interferometer system and its influencing factors are analyzed, and the signal-to-noise ratio (SNR) is calculated from the NESR and the incident radiance. For a typical application environment, using the 1976 US Standard Atmosphere (COESA) as the background, setting a reasonable target/background temperature difference, and taking a Michelson spatially modulated Fourier transform interferometer as an example, the paper calculates the NESR and SNR of the interferometer system with commercially available cooled LWIR FPA and uncooled FPA (UFPA) detectors. The system noise sources of the instrument are also analyzed. The results of these analyses can be used to optimize and pre-estimate the performance of the interferometer system and to assess the conditions under which different detectors are applicable. This has important guiding significance for LWIR interferometer spectrometer design.
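The SNR calculation described above divides the target/background spectral radiance contrast by the NESR. A back-of-the-envelope sketch using the Planck radiance law; the band, temperatures, and NESR value below are placeholders for illustration, not the instrument's actual figures:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance in W m^-2 sr^-1 um^-1."""
    x = H * C / (wavelength_m * K * temp_k)
    per_metre = 2.0 * H * C ** 2 / wavelength_m ** 5 / (math.exp(x) - 1.0)
    return per_metre * 1e-6  # convert per-metre bandwidth to per-micrometre

def snr(wavelength_m, t_target, t_background, nesr):
    """SNR = target/background radiance contrast / noise-equivalent radiance."""
    contrast = abs(planck_radiance(wavelength_m, t_target)
                   - planck_radiance(wavelength_m, t_background))
    return contrast / nesr

# Placeholder numbers: 10 um band, 2 K temperature contrast over a 300 K
# background, and an assumed NESR of 0.02 W m^-2 sr^-1 um^-1.
example_snr = snr(10e-6, 302.0, 300.0, 0.02)
```

A real instrument budget would integrate the contrast over the spectral channel and fold in optics transmission and detector response, but the ratio structure is the same.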
Relaxation and coarsening of weakly-interacting breathers in a simplified DNLS chain
NASA Astrophysics Data System (ADS)
Iubini, Stefano; Politi, Antonio; Politi, Paolo
2017-07-01
The discrete nonlinear Schrödinger (DNLS) equation displays a parameter region characterized by the presence of localized excitations (breathers). While their formation is well understood and it is expected that the asymptotic configuration comprises a single breather on top of a background, it is not clear why the dynamics of a multi-breather configuration is essentially frozen. In order to investigate this question, we introduce simple stochastic models, characterized by suitable conservation laws. We focus on the role of the coupling strength between localized excitations and background. In the DNLS model, higher breathers interact more weakly, as a result of their faster rotation. In our stochastic models, the strength of the coupling is controlled directly by an amplitude-dependent parameter. In the case of a power-law decrease, the associated coarsening process undergoes a slowing down if the decay rate is larger than a critical value. In the case of an exponential decrease, a freezing effect is observed that is reminiscent of the scenario observed in the DNLS. This last regime arises spontaneously when direct energy diffusion between breathers and background is blocked below a certain threshold.
Chatrchyan, S; Khachatryan, V; Sirunyan, A M; Tumasyan, A; Adam, W; Aguilo, E; Bergauer, T; Dragicevic, M; Erö, J; Fabjan, C; Friedl, M; Frühwirth, R; Ghete, V M; Hammer, J; Hörmann, N; Hrubec, J; Jeitler, M; Kiesenhofer, W; Knünz, V; Krammer, M; Krätschmer, I; Liko, D; Mikulec, I; Pernicka, M; Rahbaran, B; Rohringer, C; Rohringer, H; Schöfbeck, R; Strauss, J; Taurok, A; Waltenberger, W; Walzel, G; Widl, E; Wulz, C-E; Mossolov, V; Shumeiko, N; Suarez Gonzalez, J; Bansal, M; Bansal, S; Cornelis, T; De Wolf, E A; Janssen, X; Luyckx, S; Mucibello, L; Ochesanu, S; Roland, B; Rougny, R; Selvaggi, M; Staykova, Z; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Van Spilbeeck, A; Blekman, F; Blyweert, S; D'Hondt, J; Gonzalez Suarez, R; Kalogeropoulos, A; Maes, M; Olbrechts, A; Van Doninck, W; Van Mulders, P; Van Onsem, G P; Villella, I; Clerbaux, B; De Lentdecker, G; Dero, V; Gay, A P R; Hreus, T; Léonard, A; Marage, P E; Mohammadi, A; Reis, T; Thomas, L; Vander Marcken, G; Vander Velde, C; Vanlaer, P; Wang, J; Adler, V; Beernaert, K; Cimmino, A; Costantini, S; Garcia, G; Grunewald, M; Klein, B; Lellouch, J; Marinov, A; Mccartin, J; Ocampo Rios, A A; Ryckbosch, D; Strobbe, N; Thyssen, F; Tytgat, M; Verwilligen, P; Walsh, S; Yazgan, E; Zaganidis, N; Basegmez, S; Bruno, G; Castello, R; Ceard, L; Delaere, C; du Pree, T; Favart, D; Forthomme, L; Giammanco, A; Hollar, J; Lemaitre, V; Liao, J; Militaru, O; Nuttens, C; Pagano, D; Pin, A; Piotrzkowski, K; Schul, N; Vizan Garcia, J M; Beliy, N; Caebergs, T; Daubie, E; Hammad, G H; Alves, G A; Correa Martins Junior, M; Martins, T; Pol, M E; Souza, M H G; Aldá Júnior, W L; Carvalho, W; Custódio, A; Da Costa, E M; De Jesus Damiao, D; De Oliveira Martins, C; Fonseca De Souza, S; Matos Figueiredo, D; Mundim, L; Nogima, H; Prado Da Silva, W L; Santoro, A; Soares Jorge, L; Sznajder, A; Anjos, T S; Bernardes, C A; Dias, F A; Fernandez Perez Tomei, T R; Gregores, E M; Lagana, C; Marinho, F; Mercadante, P G; Novaes, S F; Padula, Sandra 
S; Genchev, V; Iaydjiev, P; Piperov, S; Rodozov, M; Stoykova, S; Sultanov, G; Tcholakov, V; Trayanov, R; Vutova, M; Dimitrov, A; Hadjiiska, R; Kozhuharov, V; Litov, L; Pavlov, B; Petkov, P; Bian, J G; Chen, G M; Chen, H S; Jiang, C H; Liang, D; Liang, S; Meng, X; Tao, J; Wang, J; Wang, X; Wang, Z; Xiao, H; Xu, M; Zang, J; Zhang, Z; Asawatangtrakuldee, C; Ban, Y; Guo, Y; Li, W; Liu, S; Mao, Y; Qian, S J; Teng, H; Wang, D; Zhang, L; Zou, W; Avila, C; Gomez, J P; Gomez Moreno, B; Osorio Oliveros, A F; Sanabria, J C; Godinovic, N; Lelas, D; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Kovac, M; Brigljevic, V; Duric, S; Kadija, K; Luetic, J; Morovic, S; Attikis, A; Galanti, M; Mavromanolakis, G; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Finger, M; Finger, M; Assran, Y; Elgammal, S; Ellithi Kamel, A; Mahmoud, M A; Radi, A; Kadastik, M; Müntel, M; Raidal, M; Rebane, L; Tiko, A; Eerola, P; Fedi, G; Voutilainen, M; Härkönen, J; Heikkinen, A; Karimäki, V; Kinnunen, R; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Peltola, T; Tuominen, E; Tuominiemi, J; Tuovinen, E; Ungaro, D; Wendland, L; Banzuzi, K; Karjalainen, A; Korpela, A; Tuuva, T; Besancon, M; Choudhury, S; Dejardin, M; Denegri, D; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Locci, E; Malcles, J; Millischer, L; Nayak, A; Rander, J; Rosowsky, A; Shreyber, I; Titov, M; Baffioni, S; Beaudette, F; Benhabib, L; Bianchini, L; Bluj, M; Broutin, C; Busson, P; Charlot, C; Daci, N; Dahms, T; Dalchenko, M; Dobrzynski, L; Granier de Cassagnac, R; Haguenauer, M; Miné, P; Mironov, C; Naranjo, I N; Nguyen, M; Ochando, C; Paganini, P; Sabes, D; Salerno, R; Sirois, Y; Veelken, C; Zabi, A; Agram, J-L; Andrea, J; Bloch, D; Bodin, D; Brom, J-M; Cardaci, M; Chabert, E C; Collard, C; Conte, E; Drouhin, F; Ferro, C; Fontaine, J-C; Gelé, D; Goerlach, U; Juillot, P; Le Bihan, A-C; Van Hove, P; Fassi, F; Mercier, D; Beauceron, S; 
Beaupere, N; Bondu, O; Boudoul, G; Chasserat, J; Chierici, R; Contardo, D; Depasse, P; El Mamouni, H; Fay, J; Gascon, S; Gouzevitch, M; Ille, B; Kurca, T; Lethuillier, M; Mirabito, L; Perries, S; Sgandurra, L; Sordini, V; Tschudi, Y; Verdier, P; Viret, S; Tsamalaidze, Z; Anagnostou, G; Autermann, C; Beranek, S; Edelhoff, M; Feld, L; Heracleous, N; Hindrichs, O; Jussen, R; Klein, K; Merz, J; Ostapchuk, A; Perieanu, A; Raupach, F; Sammet, J; Schael, S; Sprenger, D; Weber, H; Wittmer, B; Zhukov, V; Ata, M; Caudron, J; Dietz-Laursonn, E; Duchardt, D; Erdmann, M; Fischer, R; Güth, A; Hebbeker, T; Heidemann, C; Hoepfner, K; Klingebiel, D; Kreuzer, P; Merschmeyer, M; Meyer, A; Olschewski, M; Papacz, P; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Steggemann, J; Teyssier, D; Weber, M; Bontenackels, M; Cherepanov, V; Erdogan, Y; Flügge, G; Geenen, H; Geisler, M; Haj Ahmad, W; Hoehle, F; Kargoll, B; Kress, T; Kuessel, Y; Lingemann, J; Nowack, A; Perchalla, L; Pooth, O; Sauerland, P; Stahl, A; Aldaya Martin, M; Behr, J; Behrenhoff, W; Behrens, U; Bergholz, M; Bethani, A; Borras, K; Burgmeier, A; Cakir, A; Calligaris, L; Campbell, A; Castro, E; Costanza, F; Dammann, D; Diez Pardos, C; Eckerlin, G; Eckstein, D; Flucke, G; Geiser, A; Glushkov, I; Gunnellini, P; Habib, S; Hauk, J; Hellwig, G; Jung, H; Kasemann, M; Katsas, P; Kleinwort, C; Kluge, H; Knutsson, A; Krämer, M; Krücker, D; Kuznetsova, E; Lange, W; Lohmann, W; Lutz, B; Mankel, R; Marfin, I; Marienfeld, M; Melzer-Pellmann, I-A; Meyer, A B; Mnich, J; Mussgiller, A; Naumann-Emme, S; Novgorodova, O; Olzem, J; Perrey, H; Petrukhin, A; Pitzl, D; Raspereza, A; Ribeiro Cipriano, P M; Riedl, C; Ron, E; Rosin, M; Salfeld-Nebgen, J; Schmidt, R; Schoerner-Sadenius, T; Sen, N; Spiridonov, A; Stein, M; Walsh, R; Wissing, C; Blobel, V; Draeger, J; Enderle, H; Erfle, J; Gebbert, U; Görner, M; Hermanns, T; Höing, R S; Kaschube, K; Kaussen, G; Kirschenmann, H; Klanner, R; Lange, J; Mura, B; Nowak, F; Peiffer, T; Pietsch, N; 
Rathjens, D; Sander, C; Schettler, H; Schleper, P; Schlieckau, E; Schmidt, A; Schröder, M; Schum, T; Seidel, M; Sibille, J; Sola, V; Stadie, H; Steinbrück, G; Thomsen, J; Vanelderen, L; Barth, C; Berger, J; Böser, C; Chwalek, T; De Boer, W; Descroix, A; Dierlamm, A; Feindt, M; Guthoff, M; Hackstein, C; Hartmann, F; Hauth, T; Heinrich, M; Held, H; Hoffmann, K H; Husemann, U; Katkov, I; Komaragiri, J R; Lobelle Pardo, P; Martschei, D; Mueller, S; Müller, Th; Niegel, M; Nürnberg, A; Oberst, O; Oehler, A; Ott, J; Quast, G; Rabbertz, K; Ratnikov, F; Ratnikova, N; Röcker, S; Schilling, F-P; Schott, G; Simonis, H J; Stober, F M; Troendle, D; Ulrich, R; Wagner-Kuhr, J; Wayand, S; Weiler, T; Zeise, M; Daskalakis, G; Geralis, T; Kesisoglou, S; Kyriakis, A; Loukas, D; Manolakos, I; Markou, A; Markou, C; Mavrommatis, C; Ntomari, E; Gouskos, L; Mertzimekis, T J; Panagiotou, A; Saoulidou, N; Evangelou, I; Foudas, C; Kokkas, P; Manthos, N; Papadopoulos, I; Patras, V; Bencze, G; Hajdu, C; Hidas, P; Horvath, D; Sikler, F; Veszpremi, V; Vesztergombi, G; Beni, N; Czellar, S; Molnar, J; Palinkas, J; Szillasi, Z; Karancsi, J; Raics, P; Trocsanyi, Z L; Ujvari, B; Beri, S B; Bhatnagar, V; Dhingra, N; Gupta, R; Kaur, M; Mehta, M Z; Nishu, N; Saini, L K; Sharma, A; Singh, J B; Kumar, Ashok; Kumar, Arun; Ahuja, S; Bhardwaj, A; Choudhary, B C; Malhotra, S; Naimuddin, M; Ranjan, K; Sharma, V; Shivpuri, R K; Banerjee, S; Bhattacharya, S; Dutta, S; Gomber, B; Jain, Sa; Jain, Sh; Khurana, R; Sarkar, S; Sharan, M; Abdulsalam, A; Choudhury, R K; Dutta, D; Kailas, S; Kumar, V; Mehta, P; Mohanty, A K; Pant, L M; Shukla, P; Aziz, T; Ganguly, S; Guchait, M; Maity, M; Majumder, G; Mazumdar, K; Mohanty, G B; Parida, B; Sudhakar, K; Wickramage, N; Banerjee, S; Dugad, S; Arfaei, H; Bakhshiansohi, H; Etesami, S M; Fahim, A; Hashemi, M; Hesari, H; Jafari, A; Khakzad, M; Mohammadi Najafabadi, M; Paktinat Mehdiabadi, S; Safarzadeh, B; Zeinali, M; Abbrescia, M; Barbone, L; Calabria, C; Chhibra, S S; Colaleo, 
A; Creanza, D; De Filippis, N; De Palma, M; Fiore, L; Iaselli, G; Maggi, G; Maggi, M; Marangelli, B; My, S; Nuzzo, S; Pacifico, N; Pompili, A; Pugliese, G; Selvaggi, G; Silvestris, L; Singh, G; Venditti, R; Zito, G; Abbiendi, G; Benvenuti, A C; Bonacorsi, D; Braibant-Giacomelli, S; Brigliadori, L; Capiluppi, P; Castro, A; Cavallo, F R; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Grandi, C; Guiducci, L; Marcellini, S; Masetti, G; Meneghelli, M; Montanari, A; Navarria, F L; Odorici, F; Perrotta, A; Primavera, F; Rossi, A M; Rovelli, T; Siroli, G P; Travaglini, R; Albergo, S; Cappello, G; Chiorboli, M; Costa, S; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Gonzi, S; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bianco, S; Colafranceschi, S; Fabbri, F; Piccolo, D; Fabbricatore, P; Musenich, R; Tosi, S; Benaglia, A; De Guio, F; Di Matteo, L; Fiorendi, S; Gennai, S; Ghezzi, A; Malvezzi, S; Manzoni, R A; Martelli, A; Massironi, A; Menasce, D; Moroni, L; Paganoni, M; Pedrini, D; Ragazzi, S; Redaelli, N; Sala, S; Tabarelli de Fatis, T; Buontempo, S; Carrillo Montoya, C A; Cavallo, N; De Cosa, A; Dogangun, O; Fabozzi, F; Iorio, A O M; Lista, L; Meola, S; Merola, M; Paolucci, P; Azzi, P; Bacchetta, N; Bisello, D; Branca, A; Carlin, R; Checchia, P; Dorigo, T; Dosselli, U; Gasparini, F; Gasparini, U; Gozzelino, A; Kanishchev, K; Lacaprara, S; Lazzizzera, I; Margoni, M; Meneguzzo, A T; Pazzini, J; Pozzobon, N; Ronchese, P; Simonetto, F; Torassa, E; Tosi, M; Vanini, S; Zotto, P; Zumerle, G; Gabusi, M; Ratti, S P; Riccardi, C; Torre, P; Vitulo, P; Biasini, M; Bilei, G M; Fanò, L; Lariccia, P; Mantovani, G; Menichelli, M; Nappi, A; Romeo, F; Saha, A; Santocchia, A; Spiezia, A; Taroni, S; Azzurri, P; Bagliesi, G; Bernardini, J; Boccali, T; Broccolo, G; Castaldi, R; D'Agnolo, R T; Dell'Orso, R; Fiori, F; Foà, L; Giassi, A; Kraan, A; Ligabue, F; 
Lomtadze, T; Martini, L; Messineo, A; Palla, F; Rizzi, A; Serban, A T; Spagnolo, P; Squillacioti, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Barone, L; Cavallari, F; Del Re, D; Diemoz, M; Fanelli, C; Grassi, M; Longo, E; Meridiani, P; Micheli, F; Nourbakhsh, S; Organtini, G; Paramatti, R; Rahatlou, S; Sigamani, M; Soffi, L; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Biino, C; Cartiglia, N; Costa, M; Demaria, N; Mariotti, C; Maselli, S; Migliore, E; Monaco, V; Musich, M; Obertino, M M; Pastrone, N; Pelliccioni, M; Potenza, A; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Vilela Pereira, A; Belforte, S; Candelise, V; Casarsa, M; Cossutti, F; Della Ricca, G; Gobbo, B; Marone, M; Montanino, D; Penzo, A; Schizzi, A; Heo, S G; Kim, T Y; Nam, S K; Chang, S; Kim, D H; Kim, G N; Kong, D J; Park, H; Ro, S R; Son, D C; Son, T; Kim, J Y; Kim, Zero J; Song, S; Choi, S; Gyun, D; Hong, B; Jo, M; Kim, H; Kim, T J; Lee, K S; Moon, D H; Park, S K; Choi, M; Kim, J H; Park, C; Park, I C; Park, S; Ryu, G; Cho, Y; Choi, Y; Choi, Y K; Goh, J; Kim, M S; Kwon, E; Lee, B; Lee, J; Lee, S; Seo, H; Yu, I; Bilinskas, M J; Grigelionis, I; Janulis, M; Juodagalvis, A; Castilla-Valdez, H; De La Cruz-Burelo, E; Heredia-de La Cruz, I; Lopez-Fernandez, R; Magaña Villalba, R; Martínez-Ortega, J; Sanchez-Hernandez, A; Villasenor-Cendejas, L M; Carrillo Moreno, S; Vazquez Valencia, F; Salazar Ibarguen, H A; Casimiro Linares, E; Morelos Pineda, A; Reyes-Santos, M A; Krofcheck, D; Bell, A J; Butler, P H; Doesburg, R; Reucroft, S; Silverwood, H; Ahmad, M; Ansari, M H; Asghar, M I; Butt, J; Hoorani, H R; Khalid, S; Khan, W A; Khurshid, T; Qazi, S; Shah, M A; Shoaib, M; Bialkowska, H; Boimska, B; Frueboes, T; Gokieli, R; Górski, M; Kazana, M; Nawrocki, K; Romanowska-Rybinska, K; Szleper, M; Wrochna, G; Zalewski, P; Brona, G; Bunkowski, K; Cwiok, M; Dominik, W; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Almeida, N; Bargassa, P; David, A; Faccioli, P; Ferreira Parracho, P 
G; Gallinaro, M; Seixas, J; Varela, J; Vischia, P; Belotelov, I; Bunin, P; Golutvin, I; Karjavin, V; Konoplyanikov, V; Kozlov, G; Lanev, A; Malakhov, A; Moisenz, P; Palichik, V; Perelygin, V; Savina, M; Shmatov, S; Shulha, S; Smirnov, V; Volodko, A; Zarubin, A; Evstyukhin, S; Golovtsov, V; Ivanov, Y; Kim, V; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; Vavilov, S; Vorobyev, A; Vorobyev, An; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Matveev, V; Pashenkov, A; Tlisov, D; Toropin, A; Epshteyn, V; Erofeeva, M; Gavrilov, V; Kossov, M; Lychkovskaya, N; Popov, V; Safronov, G; Semenov, S; Stolin, V; Vlasov, E; Zhokin, A; Belyaev, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Markina, A; Obraztsov, S; Perfilov, M; Petrushanko, S; Popov, A; Sarycheva, L; Savrin, V; Snigirev, A; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Leonidov, A; Mesyats, G; Rusakov, S V; Vinogradov, A; Azhgirey, I; Bayshev, I; Bitioukov, S; Grishin, V; Kachanov, V; Konstantinov, D; Krychkine, V; Petrov, V; Ryutin, R; Sobol, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Ekmedzic, M; Krpic, D; Milosevic, J; Aguilar-Benitez, M; Alcaraz Maestre, J; Arce, P; Battilana, C; Calvo, E; Cerrada, M; Chamizo Llatas, M; Colino, N; De La Cruz, B; Delgado Peris, A; Domínguez Vázquez, D; Fernandez Bedoya, C; Fernández Ramos, J P; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Merino, G; Puerta Pelayo, J; Quintario Olmeda, A; Redondo, I; Romero, L; Santaolalla, J; Soares, M S; Willmott, C; Albajar, C; Codispoti, G; de Trocóniz, J F; Brun, H; Cuevas, J; Fernandez Menendez, J; Folgueras, S; Gonzalez Caballero, I; Lloret Iglesias, L; Piedra Gomez, J; Brochero Cifuentes, J A; Cabrillo, I J; Calderon, A; Chuang, S H; Duarte Campderros, J; Felcini, M; Fernandez, M; Gomez, G; Gonzalez Sanchez, J; 
Graziano, A; Jorda, C; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Matorras, F; Munoz Sanchez, F J; Rodrigo, T; Rodríguez-Marrero, A Y; Ruiz-Jimeno, A; Scodellaro, L; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Auffray, E; Auzinger, G; Bachtis, M; Baillon, P; Ball, A H; Barney, D; Benitez, J F; Bernet, C; Bianchi, G; Bloch, P; Bocci, A; Bonato, A; Botta, C; Breuker, H; Camporesi, T; Cerminara, G; Christiansen, T; Coarasa Perez, J A; D'Enterria, D; Dabrowski, A; De Roeck, A; Di Guida, S; Dobson, M; Dupont-Sagorin, N; Elliott-Peisert, A; Frisch, B; Funk, W; Georgiou, G; Giffels, M; Gigi, D; Gill, K; Giordano, D; Girone, M; Giunta, M; Glege, F; Gomez-Reino Garrido, R; Govoni, P; Gowdy, S; Guida, R; Hansen, M; Harris, P; Hartl, C; Harvey, J; Hegner, B; Hinzmann, A; Innocente, V; Janot, P; Kaadze, K; Karavakis, E; Kousouris, K; Lecoq, P; Lee, Y-J; Lenzi, P; Lourenço, C; Magini, N; Mäki, T; Malberti, M; Malgeri, L; Mannelli, M; Masetti, L; Meijers, F; Mersi, S; Meschi, E; Moser, R; Mozer, M U; Mulders, M; Musella, P; Nesvold, E; Orimoto, T; Orsini, L; Palencia Cortezon, E; Perez, E; Perrozzi, L; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Piparo, D; Polese, G; Quertenmont, L; Racz, A; Reece, W; Rodrigues Antunes, J; Rolandi, G; Rovelli, C; Rovere, M; Sakulin, H; Santanastasio, F; Schäfer, C; Schwick, C; Segoni, I; Sekmen, S; Sharma, A; Siegrist, P; Silva, P; Simon, M; Sphicas, P; Spiga, D; Tsirou, A; Veres, G I; Vlimant, J R; Wöhri, H K; Worm, S D; Zeuner, W D; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; König, S; Kotlinski, D; Langenegger, U; Meier, F; Renker, D; Rohe, T; Bäni, L; Bortignon, P; Buchmann, M A; Casal, B; Chanon, N; Deisher, A; Dissertori, G; Dittmar, M; Donegà, M; Dünser, M; Eugster, J; Freudenreich, K; Grab, C; Hits, D; Lecomte, P; Lustermann, W; Marini, A C; Martinez Ruiz Del Arbol, P; Mohr, N; Moortgat, F; Nägeli, C; Nef, P; Nessi-Tedaldi, F; Pandolfi, F; Pape, L; Pauss, F; Peruzzi, M; 
Ronga, F J; Rossini, M; Sala, L; Sanchez, A K; Starodumov, A; Stieger, B; Takahashi, M; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Urscheler, C; Wallny, R; Weber, H A; Wehrli, L; Amsler, C; Chiochia, V; De Visscher, S; Favaro, C; Ivova Rikova, M; Millan Mejias, B; Otiougova, P; Robmann, P; Snoek, H; Tupputi, S; Verzetti, M; Chang, Y H; Chen, K H; Kuo, C M; Li, S W; Lin, W; Liu, Z K; Lu, Y J; Mekterovic, D; Singh, A P; Volpe, R; Yu, S S; Bartalini, P; Chang, P; Chang, Y H; Chang, Y W; Chao, Y; Chen, K F; Dietz, C; Grundler, U; Hou, W-S; Hsiung, Y; Kao, K Y; Lei, Y J; Lu, R-S; Majumder, D; Petrakou, E; Shi, X; Shiu, J G; Tzeng, Y M; Wan, X; Wang, M; Asavapibhop, B; Srimanobhas, N; Adiguzel, A; Bakirci, M N; Cerci, S; Dozen, C; Dumanoglu, I; Eskut, E; Girgis, S; Gokbulut, G; Gurpinar, E; Hos, I; Kangal, E E; Karaman, T; Karapinar, G; Kayis Topaksu, A; Onengut, G; Ozdemir, K; Ozturk, S; Polatoz, A; Sogut, K; Sunar Cerci, D; Tali, B; Topakli, H; Vergili, L N; Vergili, M; Akin, I V; Aliev, T; Bilin, B; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Ocalan, K; Ozpineci, A; Serin, M; Sever, R; Surat, U E; Yalvac, M; Yildirim, E; Zeyrek, M; Gülmez, E; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Cankocak, K; Levchuk, L; Brooke, J J; Clement, E; Cussans, D; Flacher, H; Frazier, R; Goldstein, J; Grimes, M; Heath, G P; Heath, H F; Kreczko, L; Metson, S; Newbold, D M; Nirunpong, K; Poll, A; Senkin, S; Smith, V J; Williams, T; Basso, L; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Jackson, J; Kennedy, B W; Olaiya, E; Petyt, D; Radburn-Smith, B C; Shepherd-Themistocleous, C H; Tomalin, I R; Womersley, W J; Bainbridge, R; Ball, G; Beuselinck, R; Buchmuller, O; Colling, D; Cripps, N; Cutajar, M; Dauncey, P; Davies, G; Della Negra, M; Ferguson, W; Fulcher, J; Futyan, D; Gilbert, A; Guneratne Bryer, A; Hall, G; Hatherell, Z; Hays, J; Iles, G; Jarvis, M; Karapostoli, G; Lyons, L; Magnan, A-M; Marrouche, J; 
Mathias, B; Nandi, R; Nash, J; Nikitenko, A; Papageorgiou, A; Pela, J; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rogerson, S; Rose, A; Ryan, M J; Seez, C; Sharp, P; Sparrow, A; Stoye, M; Tapper, A; Vazquez Acosta, M; Virdee, T; Wakefield, S; Wardle, N; Whyntie, T; Chadwick, M; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Leggat, D; Leslie, D; Martin, W; Reid, I D; Symonds, P; Teodorescu, L; Turner, M; Hatakeyama, K; Liu, H; Scarborough, T; Charaf, O; Henderson, C; Rumerio, P; Avetisyan, A; Bose, T; Fantasia, C; Heister, A; John, J St; Lawson, P; Lazic, D; Rohlf, J; Sperka, D; Sulak, L; Alimena, J; Bhattacharya, S; Cutts, D; Demiragli, Z; Ferapontov, A; Garabedian, A; Heintz, U; Jabeen, S; Kukartsev, G; Laird, E; Landsberg, G; Luk, M; Narain, M; Nguyen, D; Segala, M; Sinthuprasith, T; Speer, T; Tsang, K V; Breedon, R; Breto, G; Calderon De La Barca Sanchez, M; Chauhan, S; Chertok, M; Conway, J; Conway, R; Cox, P T; Dolen, J; Erbacher, R; Gardner, M; Houtz, R; Ko, W; Kopecky, A; Lander, R; Mall, O; Miceli, T; Pellett, D; Ricci-Tam, F; Rutherford, B; Searle, M; Smith, J; Squires, M; Tripathi, M; Vasquez Sierra, R; Yohay, R; Andreev, V; Cline, D; Cousins, R; Duris, J; Erhan, S; Everaerts, P; Farrell, C; Hauser, J; Ignatenko, M; Jarvis, C; Plager, C; Rakness, G; Schlein, P; Traczyk, P; Valuev, V; Weber, M; Babb, J; Clare, R; Dinardo, M E; Ellison, J; Gary, J W; Giordano, F; Hanson, G; Jeng, G Y; Liu, H; Long, O R; Luthra, A; Nguyen, H; Paramesvaran, S; Sturdy, J; Sumowidagdo, S; Wilken, R; Wimpenny, S; Andrews, W; Branson, J G; Cerati, G B; Cittolin, S; Evans, D; Golf, F; Holzner, A; Kelley, R; Lebourgeois, M; Letts, J; Macneill, I; Mangano, B; Padhi, S; Palmer, C; Petrucciani, G; Pieri, M; Sani, M; Sharma, V; Simon, S; Sudano, E; Tadel, M; Tu, Y; Vartak, A; Wasserbaech, S; Würthwein, F; Yagil, A; Yoo, J; Barge, D; Bellan, R; Campagnari, C; D'Alfonso, M; Danielson, T; Flowers, K; Geffert, P; Incandela, J; Justus, C; Kalavase, P; Koay, S A; Kovalskyi, D; 
Krutelyov, V; Lowette, S; Mccoll, N; Pavlunin, V; Rebassoo, F; Ribnik, J; Richman, J; Rossin, R; Stuart, D; To, W; West, C; Apresyan, A; Bornheim, A; Chen, Y; Di Marco, E; Duarte, J; Gataullin, M; Ma, Y; Mott, A; Newman, H B; Rogan, C; Spiropulu, M; Timciuc, V; Veverka, J; Wilkinson, R; Xie, S; Yang, Y; Zhu, R Y; Akgun, B; Azzolini, V; Calamba, A; Carroll, R; Ferguson, T; Iiyama, Y; Jang, D W; Liu, Y F; Paulini, M; Vogel, H; Vorobiev, I; Cumalat, J P; Drell, B R; Ford, W T; Gaz, A; Luiggi Lopez, E; Smith, J G; Stenson, K; Ulmer, K A; Wagner, S R; Alexander, J; Chatterjee, A; Eggert, N; Gibbons, L K; Heltsley, B; Khukhunaishvili, A; Kreis, B; Mirman, N; Nicolas Kaufman, G; Patterson, J R; Ryd, A; Salvati, E; Sun, W; Teo, W D; Thom, J; Thompson, J; Tucker, J; Vaughan, J; Weng, Y; Winstrom, L; Wittich, P; Winn, D; Abdullin, S; Albrow, M; Anderson, J; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Bloch, I; Burkett, K; Butler, J N; Chetluru, V; Cheung, H W K; Chlebana, F; Elvira, V D; Fisk, I; Freeman, J; Gao, Y; Green, D; Gutsche, O; Hanlon, J; Harris, R M; Hirschauer, J; Hooberman, B; Jindariani, S; Johnson, M; Joshi, U; Kilminster, B; Klima, B; Kunori, S; Kwan, S; Leonidopoulos, C; Linacre, J; Lincoln, D; Lipton, R; Lykken, J; Maeshima, K; Marraffino, J M; Maruyama, S; Mason, D; McBride, P; Mishra, K; Mrenna, S; Musienko, Y; Newman-Holmes, C; O'Dell, V; Prokofyev, O; Sexton-Kennedy, E; Sharma, S; Spalding, W J; Spiegel, L; Taylor, L; Tkaczyk, S; Tran, N V; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wu, W; Yang, F; Yumiceva, F; Yun, J C; Acosta, D; Avery, P; Bourilkov, D; Chen, M; Cheng, T; Das, S; De Gruttola, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fisher, M; Fu, Y; Furic, I K; Gartner, J; Hugon, J; Kim, B; Konigsberg, J; Korytov, A; Kropivnitskaya, A; Kypreos, T; Low, J F; Matchev, K; Milenovic, P; Mitselmakher, G; Muniz, L; Park, M; Remington, R; Rinkevicius, A; Sellers, P; Skhirtladze, N; Snowball, M; Yelton, J; Zakaria, M; 
Gaultney, V; Hewamanage, S; Lebolo, L M; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, T; Askew, A; Bochenek, J; Chen, J; Diamond, B; Gleyzer, S V; Haas, J; Hagopian, S; Hagopian, V; Jenkins, M; Johnson, K F; Prosper, H; Veeraraghavan, V; Weinberg, M; Baarmand, M M; Dorney, B; Hohlmann, M; Kalakhety, H; Vodopiyanov, I; Adams, M R; Anghel, I M; Apanasevich, L; Bai, Y; Bazterra, V E; Betts, R R; Bucinskaite, I; Callner, J; Cavanaugh, R; Evdokimov, O; Gauthier, L; Gerber, C E; Hofman, D J; Khalatyan, S; Lacroix, F; Malek, M; O'Brien, C; Silkworth, C; Strom, D; Turner, P; Varelas, N; Akgun, U; Albayrak, E A; Bilki, B; Clarida, W; Duru, F; Merlo, J-P; Mermerkaya, H; Mestvirishvili, A; Moeller, A; Nachtman, J; Newsom, C R; Norbeck, E; Onel, Y; Ozok, F; Sen, S; Tan, P; Tiras, E; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bolognesi, S; Fehling, D; Giurgiu, G; Gritsan, A V; Guo, Z J; Hu, G; Maksimovic, P; Rappoccio, S; Swartz, M; Whitbeck, A; Baringer, P; Bean, A; Benelli, G; Kenny Iii, R P; Murray, M; Noonan, D; Sanders, S; Stringer, R; Tinti, G; Wood, J S; Zhukova, V; Barfuss, A F; Bolton, T; Chakaberia, I; Ivanov, A; Khalil, S; Makouski, M; Maravin, Y; Shrestha, S; Svintradze, I; Gronberg, J; Lange, D; Wright, D; Baden, A; Boutemeur, M; Calvert, B; Eno, S C; Gomez, J A; Hadley, N J; Kellogg, R G; Kirn, M; Kolberg, T; Lu, Y; Marionneau, M; Mignerey, A C; Pedro, K; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Twedt, E; Apyan, A; Bauer, G; Bendavid, J; Busza, W; Butz, E; Cali, I A; Chan, M; Dutta, V; Gomez Ceballos, G; Goncharov, M; Hahn, K A; Kim, Y; Klute, M; Krajczar, K; Luckey, P D; Ma, T; Nahn, S; Paus, C; Ralph, D; Roland, C; Roland, G; Rudolph, M; Stephans, G S F; Stöckli, F; Sumorok, K; Sung, K; Velicanu, D; Wenger, E A; Wolf, R; Wyslouch, B; Yang, M; Yilmaz, Y; Yoon, A S; Zanetti, M; Cooper, S I; Dahmes, B; De Benedetti, A; Franzoni, G; Gude, A; Kao, S C; Klapoetke, K; Kubota, Y; Mans, J; Pastika, N; Rusack, R; Sasseville, M; 
Singovsky, A; Tambe, N; Turkewitz, J; Cremaldi, L M; Kroeger, R; Perera, L; Rahmat, R; Sanders, D A; Avdeeva, E; Bloom, K; Bose, S; Claes, D R; Dominguez, A; Eads, M; Keller, J; Kravchenko, I; Lazo-Flores, J; Malbouisson, H; Malik, S; Snow, G R; Godshalk, A; Iashvili, I; Jain, S; Kharchilava, A; Kumar, A; Alverson, G; Barberis, E; Baumgartel, D; Chasco, M; Haley, J; Nash, D; Trocino, D; Wood, D; Zhang, J; Anastassov, A; Kubik, A; Lusito, L; Mucia, N; Odell, N; Ofierzynski, R A; Pollack, B; Pozdnyakov, A; Schmitt, M; Stoynev, S; Velasco, M; Won, S; Antonelli, L; Berry, D; Brinkerhoff, A; Chan, K M; Hildreth, M; Jessop, C; Karmgard, D J; Kolb, J; Lannon, K; Luo, W; Lynch, S; Marinelli, N; Morse, D M; Pearson, T; Planer, M; Ruchti, R; Slaunwhite, J; Valls, N; Wayne, M; Wolf, M; Bylsma, B; Durkin, L S; Hill, C; Hughes, R; Kotov, K; Ling, T Y; Puigh, D; Rodenburg, M; Vuosalo, C; Williams, G; Winer, B L; Adam, N; Berry, E; Elmer, P; Gerbaudo, D; Halyo, V; Hebda, P; Hegeman, J; Hunt, A; Jindal, P; Lopes Pegna, D; Lujan, P; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Quan, X; Raval, A; Safdi, B; Saka, H; Stickland, D; Tully, C; Werner, J S; Zuranski, A; Brownson, E; Lopez, A; Mendez, H; Ramirez Vargas, J E; Alagoz, E; Barnes, V E; Benedetti, D; Bolla, G; Bortoletto, D; De Mattia, M; Everett, A; Hu, Z; Jones, M; Koybasi, O; Kress, M; Laasanen, A T; Leonardo, N; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Shipsey, I; Silvers, D; Svyatkovskiy, A; Vidal Marono, M; Yoo, H D; Zablocki, J; Zheng, Y; Guragain, S; Parashar, N; Adair, A; Boulahouache, C; Ecklund, K M; Geurts, F J M; Li, W; Padley, B P; Redjimi, R; Roberts, J; Zabel, J; Betchart, B; Bodek, A; Chung, Y S; Covarelli, R; de Barbaro, P; Demina, R; Eshaq, Y; Ferbel, T; Garcia-Bellido, A; Goldenzweig, P; Han, J; Harel, A; Miner, D C; Vishnevskiy, D; Zielinski, M; Bhatti, A; Ciesielski, R; Demortier, L; Goulianos, K; Lungu, G; Malik, S; Mesropian, C; Arora, S; Barker, A; Chou, J P; Contreras-Campana, 
C; Contreras-Campana, E; Duggan, D; Ferencek, D; Gershtein, Y; Gray, R; Halkiadakis, E; Hidas, D; Lath, A; Panwalkar, S; Park, M; Patel, R; Rekovic, V; Robles, J; Rose, K; Salur, S; Schnetzer, S; Seitz, C; Somalwar, S; Stone, R; Thomas, S; Walker, M; Cerizza, G; Hollingsworth, M; Spanier, S; Yang, Z C; York, A; Eusebi, R; Flanagan, W; Gilmore, J; Kamon, T; Khotilovich, V; Montalvo, R; Osipenkov, I; Pakhotin, Y; Perloff, A; Roe, J; Safonov, A; Sakuma, T; Sengupta, S; Suarez, I; Tatarinov, A; Toback, D; Akchurin, N; Damgov, J; Dragoiu, C; Dudero, P R; Jeong, C; Kovitanggoon, K; Lee, S W; Libeiro, T; Roh, Y; Volobouev, I; Appelt, E; Delannoy, A G; Florez, C; Greene, S; Gurrola, A; Johns, W; Kurt, P; Maguire, C; Melo, A; Sharma, M; Sheldon, P; Snook, B; Tuo, S; Velkovska, J; Arenton, M W; Balazs, M; Boutle, S; Cox, B; Francis, B; Goodell, J; Hirosky, R; Ledovskoy, A; Lin, C; Neu, C; Wood, J; Gollapinni, S; Harr, R; Karchin, P E; Kottachchi Kankanamge Don, C; Lamichhane, P; Sakharov, A; Anderson, M; Belknap, D A; Borrello, L; Carlsmith, D; Cepeda, M; Dasu, S; Friis, E; Gray, L; Grogg, K S; Grothe, M; Hall-Wilton, R; Herndon, M; Hervé, A; Klabbers, P; Klukas, J; Lanaro, A; Lazaridis, C; Leonard, J; Loveless, R; Mohapatra, A; Ojalvo, I; Palmonari, F; Pierro, G A; Ross, I; Savin, A; Smith, W H; Swanson, J
Results are reported from a search for new physics processes in events containing a single isolated high-transverse-momentum lepton (electron or muon), energetic jets, and large missing transverse momentum. The analysis is based on a 4.98 fb⁻¹ sample of proton-proton collisions at a center-of-mass energy of 7 TeV, obtained with the CMS detector at the LHC. Three separate background estimation methods, each relying primarily on control samples in the data, are applied to a range of signal regions, providing complementary approaches for estimating the background yields. The observed yields are consistent with the predicted standard model backgrounds. The results are interpreted in terms of limits on the parameter space for the constrained minimal supersymmetric extension of the standard model, as well as on cross sections for simplified models, which provide a generic description of the production and decay of new particles in specific, topology-based final states. The online version of this article (doi:10.1140/epjc/s10052-013-2404-z) contains supplementary material, which is available to authorized users.
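The control-sample logic used in such searches can be sketched generically: a background-dominated control region in data is extrapolated to the signal region with a simulation-derived transfer factor. This is a toy illustration of the shared idea, not the analysis's actual procedure; all yields are invented.

```python
# Toy data-driven background estimate: scale the observed control-region
# yield by a transfer factor (signal-region / control-region ratio) taken
# from simulation. Numbers below are hypothetical.

def predict_background(n_data_control, n_mc_control, n_mc_signal):
    """Extrapolate a data control-region yield to the signal region."""
    transfer_factor = n_mc_signal / n_mc_control
    return n_data_control * transfer_factor

pred = predict_background(n_data_control=480.0,
                          n_mc_control=500.0,
                          n_mc_signal=25.0)
print(pred)  # 24.0 expected background events
```

In practice each background method uses different control samples and correction factors; this shows only the common extrapolation step.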
Picturing and modelling catchments by representative hillslopes
NASA Astrophysics Data System (ADS)
Loritz, Ralf; Hassler, Sibylle; Jackisch, Conrad; Zehe, Erwin
2016-04-01
Hydrological modelling studies often start with a qualitative sketch of the hydrological processes of a catchment. These so-called perceptual models are often pictured as hillslopes and are generalizations displaying only the dominant and relevant processes of a catchment or hillslope. The problem with these models is that they are prone to being overly predetermined by the designer's background and experience. Moreover, it is difficult to know whether such a picture is correct and contains enough complexity to represent the system under study. Nevertheless, because of their qualitative form, perceptual models are easy to understand and can be an excellent tool for multidisciplinary exchange between researchers with different backgrounds, helping to identify the dominant structures and processes in a catchment. In our study we explore whether a perceptual model built upon an intensive field campaign may serve as a blueprint for setting up representative hillslopes in a hydrological model to reproduce the functioning of two distinctly different catchments. We use a physically-based 2D hillslope model which has proven capable of being driven by measured soil-hydrological parameters. A key asset of our approach is that the model structure itself remains a picture of the perceptual model, which is benchmarked against a) geophysical images of the subsurface and b) observed dynamics of discharge, distributed state variables, and fluxes (soil moisture, matric potential, and sap flow). Within this approach we are able to set up two behavioral model structures which allow the simulation of the most important hydrological fluxes and state variables, in good accordance with available observations, within the 19.4 km² Colpach catchment and the 4.5 km² Wollefsbach catchment in Luxembourg, without the need for calibration.
Contrary to widespread opinion, this corroborates that a) lower mesoscale catchments may be modelled by representative hillslopes and b) physically-based models can be parametrized based on comprehensive field data and a good perceptual model. Our results particularly indicate that the main challenge in understanding and modelling the seasonal water balance of a catchment is a proper representation of the phenological cycle of vegetation, not exclusively the structure of the subsurface and the spatial variability of soil hydraulic parameters.
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.
2015-01-01
The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with base line approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745
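The navigation model can be illustrated with a minimal greedy decentralized search guided by a concept hierarchy: at each hop the simulated user moves to the neighbor whose concept is semantically closest to the target's concept. The toy article network, concept assignments, and tree-distance measure below are illustrative assumptions in the spirit of OBDS, not the paper's actual ontologies or data.

```python
# Sketch of ontology-guided decentralized search: greedy navigation in an
# information network, steered by distances in a concept hierarchy.

def tree_distance(a, b, parent):
    """Number of edges between two concepts in a concept hierarchy."""
    depth, ancestors = 0, {}
    while a is not None:                  # record all ancestors of `a`
        ancestors[a] = depth
        a, depth = parent.get(a), depth + 1
    depth = 0
    while b not in ancestors:             # climb from `b` to a common ancestor
        b, depth = parent.get(b), depth + 1
    return depth + ancestors[b]

def decentralized_search(graph, concept_of, parent, start, target, max_hops=20):
    """Greedy navigation toward the node whose concept matches the target's."""
    path, node = [start], start
    for _ in range(max_hops):
        if node == target:
            break
        node = min(graph[node], key=lambda n: tree_distance(
            concept_of[n], concept_of[target], parent))
        path.append(node)
    return path

# Toy concept hierarchy and article network (hypothetical):
parent = {"disease": None, "infection": "disease", "flu": "infection",
          "cancer": "disease"}
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
concept_of = {"A": "disease", "B": "cancer", "C": "infection", "D": "flu"}
path = decentralized_search(graph, concept_of, parent, "A", "D")  # ['A', 'C', 'D']
```

Swapping in a different hierarchy changes the distances and hence the simulated click paths, which is how different types of background knowledge are represented.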
Supply Chain Engineering and the Use of a Supporting Knowledge Management Application
NASA Astrophysics Data System (ADS)
Laakmann, Frank
Future competition in markets will take place between logistics networks rather than between individual enterprises. A new approach for supporting the engineering of logistics networks is developed in this research as part of the Collaborative Research Centre (SFB) 559, "Modeling of Large Networks in Logistics," at the University of Dortmund, together with the Fraunhofer Institute for Material Flow and Logistics, funded by the Deutsche Forschungsgemeinschaft (DFG). Based on a reference model for logistics processes, the process chain model, a guideline for logistics engineers is developed to manage the different types of design tasks for logistics networks. The technical basis of this solution is a collaborative knowledge management application. This paper introduces how new Internet-based technologies support supply chain design projects.
Simulation of target interpretation based on infrared image features and psychology principle
NASA Astrophysics Data System (ADS)
Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping
2009-07-01
Target feature extraction and identification is an important and complicated step in target interpretation: it directly affects the psychosensory response of the interpreter to the target infrared image and ultimately determines target viability. Using statistical decision theory and psychological principles, and designing four psychophysical experiments, an interpretation model for infrared targets is established. The model obtains the target detection probability by calculating the similarity of four features between the target region and the background region delineated on the infrared image. Verified against a large number of practical target interpretations, the model effectively simulates the target interpretation and detection process and yields objective interpretation results, providing technical support for target extraction, identification, and decision-making.
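The core idea, mapping feature dissimilarity between delineated target and background regions to a detection probability, can be sketched as follows. The four features, weights, and logistic slope here are illustrative assumptions, not the psychophysically calibrated values from the paper's experiments.

```python
import math

# Toy similarity-to-probability model: compare target-region and
# background-region feature vectors (e.g. mean intensity, contrast,
# edge density, texture) and map the weighted dissimilarity to a
# detection probability with a logistic curve.

def detection_probability(target_feats, background_feats, weights, slope=5.0):
    """Higher target/background dissimilarity -> higher detection probability."""
    dissim = sum(w * abs(t - b)
                 for w, t, b in zip(weights, target_feats, background_feats))
    return 1.0 / (1.0 + math.exp(-slope * dissim))

# Hypothetical normalized feature values for the two regions:
p = detection_probability([0.9, 0.7, 0.8, 0.6],
                          [0.4, 0.5, 0.3, 0.5],
                          weights=[0.25] * 4)
```

A target whose features closely match its background yields a probability near 0.5 under this toy mapping; a real calibration would anchor the curve to the measured psychophysical data.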
Improving Earth/Prediction Models to Improve Network Processing
NASA Astrophysics Data System (ADS)
Wagner, G. S.
2017-12-01
The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The small size of the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operating Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false alarm) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum-likelihood probability-of-detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground-truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
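The noise-based threshold idea can be sketched as choosing a detection threshold from continuous background-noise measurements so that the empirical false-alarm (Type 1) rate matches a chosen ROC operating point. The noise samples and target rate below are invented for illustration; the operational system uses far richer statistics.

```python
# Set a detection threshold at the empirical (1 - far) quantile of
# background-noise amplitudes, so roughly a fraction `far` of noise
# samples would exceed it (a false-alarm-rate operating point).

def threshold_for_false_alarm_rate(noise_samples, far):
    """Return the amplitude exceeded by about a fraction `far` of noise."""
    ordered = sorted(noise_samples)
    k = int(round((1.0 - far) * (len(ordered) - 1)))
    return ordered[k]

# Hypothetical continuous background-noise measurements:
noise = [0.1, 0.2, 0.2, 0.3, 0.4, 0.4, 0.5, 0.6, 0.8, 1.0]
thr = threshold_for_false_alarm_rate(noise, far=0.1)  # 0.8
```

Recomputing the quantile over a sliding window of recent noise measurements yields the time-varying thresholds described above.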
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
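The central operation, convolving a PSF with an astrophysical scene, can be illustrated with a stand-in Gaussian PSF and a two-source scene. This is not crispy's actual API; the real package uses time-varying coronagraphic PSFs and full scenes with dust and background objects.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal PSF-convolution sketch: a normalized Gaussian PSF applied to a
# scene containing a residual stellar source and a faint planet.

n = 64
y, x = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()                       # normalize so flux is conserved

scene = np.zeros((n, n))
scene[32, 32] = 1.0                    # star (post-coronagraph residual)
scene[32, 44] = 1e-2                   # faint planet, 12 pixels away

image = fftconvolve(scene, psf, mode="same")
```

For a time-varying PSF one would repeat this per time step with the appropriate PSF realization before applying the detector model.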
Single neutral pion production by charged-current $\bar{\nu}_\mu$ interactions
Le, T.; Palomino, J. L.; Aliaga, L.; ...
2015-10-07
We studied single neutral pion production via muon antineutrino charged-current interactions in plastic scintillator (CH) using the MINERvA detector exposed to the NuMI low-energy, wideband antineutrino beam at Fermilab. Measurement of this process constrains models of neutral pion production in nuclei, which is important because the neutral-current analog is a background for appearance oscillation experiments. Furthermore, the differential cross sections for π 0 momentum and production angle, for events with a single observed π 0 and no charged pions, are presented and compared to model predictions. These results comprise the first measurement of the π 0 kinematics for this process.
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Regulski, Krzysztof
2016-08-01
We present a process of semantic meta-model development for data management in an adaptable multiscale modeling framework. The main problems in ontology design are discussed, and a solution achieved as a result of the research is presented. The main concepts concerning the application and data management background for multiscale modeling were derived from the AM3 approach—object-oriented Agile multiscale modeling methodology. The ontological description of multiscale models enables validation of semantic correctness of data interchange between submodels. We also present a possibility of using the ontological model as a supervisor in conjunction with a multiscale model controller and a knowledge base system. Multiscale modeling formal ontology (MMFO), designed for describing multiscale models' data and structures, is presented. A need for applying meta-ontology in the MMFO development process is discussed. Examples of MMFO application in describing thermo-mechanical treatment of metal alloys are discussed. Present and future applications of MMFO are described.
Meesters, Johannes A J; Koelmans, Albert A; Quik, Joris T K; Hendriks, A Jan; van de Meent, Dik
2014-05-20
Screening level models for environmental assessment of engineered nanoparticles (ENP) are not generally available. Here, we present SimpleBox4Nano (SB4N) as the first model of this type, assess its validity, and evaluate it by comparisons with a known material flow model. SB4N expresses ENP transport and concentrations in and across air, rain, surface waters, soil, and sediment, accounting for nanospecific processes such as aggregation, attachment, and dissolution. The model solves simultaneous mass balance equations (MBE) using simple matrix algebra. The MBEs link all concentrations and transfer processes using first-order rate constants for all processes known to be relevant for ENPs. The first-order rate constants are obtained from the literature. The output of SB4N is mass concentrations of ENPs as free dispersive species, heteroaggregates with natural colloids, and larger natural particles in each compartment in time and at steady state. Known scenario studies for Switzerland were used to demonstrate the impact of the transport processes included in SB4N on the prediction of environmental concentrations. We argue that SB4N-predicted environmental concentrations are useful as background concentrations in environmental risk assessment.
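The mass-balance machinery can be sketched in a few lines: first-order rate constants populate a coefficient matrix and the steady state follows from simple linear algebra, the kind of matrix solution SB4N applies. The three compartments and all rate values below are made-up illustrations, not SB4N's parameterization.

```python
import numpy as np

# Toy steady-state mass balance for ENP transport across three
# compartments (air -> water -> sediment), with first-order losses.

k_air_to_water, k_air_loss = 0.2, 0.05       # 1/day, illustrative values
k_water_to_sed, k_water_loss = 0.1, 0.02
k_sed_loss = 0.01
emission = np.array([10.0, 0.0, 0.0])        # kg/day emitted into air only

# Each row i encodes 0 = emission_i - outflows_i + inflows_i at steady state:
A = np.array([
    [k_air_to_water + k_air_loss, 0.0,                           0.0],
    [-k_air_to_water,             k_water_to_sed + k_water_loss, 0.0],
    [0.0,                         -k_water_to_sed,               k_sed_loss],
])
m_steady = np.linalg.solve(A, emission)      # steady-state masses [kg]
```

With time-dependent emissions, the same matrix drives the transient solution dm/dt = e - A m, giving concentrations over time as well as at steady state.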
Planning for Excellence: A Case Model in a Large Urban Community College District.
ERIC Educational Resources Information Center
Brown, Grace Carolyn
The strategy-oriented planning process used at Cuyahoga Community College (CCC) is described in this paper. After providing background on CCC, its enrollments, service area, annual fiscal responsibility, and long commitment to educational planning, the paper identifies five key areas for planning in the coming decades: (1) technological…
Personal and Situational Predictors of Test Anxiety of Students in Post-Compulsory Education
ERIC Educational Resources Information Center
Putwain, David W.; Woods, Kevin A.; Symes, Wendy
2010-01-01
Background: Recent models of evaluation anxiety emphasize the importance of personal knowledge and self-regulatory processes in the development of test anxiety, but do not theorize a route for situational influences. Aim: To investigate the relationship between test anxiety and personal knowledge beliefs (achievement goals and perceived academic…
Cultural Adaptation of the Strengthening Families Program 10-14 to Italian Families
ERIC Educational Resources Information Center
Ortega, Enrique; Giannotta, Fabrizia; Latina, Delia; Ciairano, Silvia
2012-01-01
Background: The family context has proven to be a useful target in which to apply prevention efforts aimed at child and adolescent health risk behaviors. There are currently a variety of cultural adaptation models that serve to guide the international adaptation of intervention programs. Objective: The cultural adaptation process and program…
Instructional Technology Professional Development Evaluation: Developing a High Quality Model
ERIC Educational Resources Information Center
Gaytan, Jorge A.; McEwen, Beryl C.
2010-01-01
Background: The literature contains very few studies that focused on evaluating the impact of professional development activities on student learning. And, many of these studies failed to determine whether the professional development activities met their primary goal--to improve the learning process. Purpose: The purpose of this study was to use…
Research and Development: A Complex Relationship Part I [and] Part II.
ERIC Educational Resources Information Center
Pollard, John Douglas Edward
Part 1 of this document describes the background, format, and early groundwork that went into the development of a test sponsored entirely by private enterprise. The discipline imposed by a financial bottom line imposes special pressures but also offers new opportunities. This private enterprise model is a multi-constructional process where…
Goal-Prioritization for Teachers, Coaches, and Students: A Developmental Model
ERIC Educational Resources Information Center
Symonds, Matthew L.; Tapps, Tyler
2016-01-01
The objective of this article is to provide background on types of goals, a system for writing goals, and a framework for goal-prioritization that can be implemented in classroom and/or sport settings. Goal-setting is the process of developing a desired outcome to serve as the purpose of one's actions.
Annual Research Review: What is Resilience within the Social Ecology of Human Development?
ERIC Educational Resources Information Center
Ungar, Michael; Ghazinour, Mehdi; Richter, Jorg
2013-01-01
Background: The development of Bronfenbrenner's bio-social-ecological systems model of human development parallels advances made to the theory of resilience that progressively moved from a more individual (micro) focus on traits to a multisystemic understanding of person-environment reciprocal processes. Methods: This review uses…
ERIC Educational Resources Information Center
Davis, Theresa M.
2013-01-01
Background: There is little evidence that technology acceptance is well understood in healthcare. The hospital environment is complex and dynamic creating a challenge when new technology is introduced because it impacts current processes and workflows which can significantly affect patient care delivery and outcomes. This study tested the effect…
Distance Learning in Clinical Transplantation: A Successful Model in Post-Graduate Education
ERIC Educational Resources Information Center
Halawa, Ahmed; Sharma, Ajay; Bridson, Julie M.; Lyon, Sarah; Prescott, Denise; Guha, Arpan; Taylor, David
2017-01-01
Background and Purpose: There are misconceptions among clinicians and educational bodies that online courses would not suit clinically orientated medical education, where bedside management and direct contact with real patients is the key to the learning process. Whereas, the proponents of online education believe that a well-designed and properly…
Modelling Adult Skills in OECD Countries
ERIC Educational Resources Information Center
Scandurra, Rosario; Calero, Jorge
2017-01-01
Research in the social sciences has focused extensively on the relationship between family background, educational attainment and social destination, on the one hand, and on the processes of skills creation and skills use, on the other. This paper brings these two branches of the literature together by examining the correlation between a range of…
Contracts, Choice, and Customer Service: Marketization and Public Engagement in Education
ERIC Educational Resources Information Center
Cucchiara, Maia Bloomfield; Gold, Eva; Simon, Elaine
2011-01-01
Background/Context: Market models of school reform are having a major impact on school districts across the country. While scholars have examined many aspects of this process, we know far less about the general effects of marketization on public participation in education and local education politics. Purpose/Objective/Research Question/Focus of…
The Development of Reading Ability in Kindergarten. Technical Report No. 515.
ERIC Educational Resources Information Center
Meyer, Linda A.; And Others
A study was conducted to explore how children learn to read in kindergarten. The study employed a heuristic model that included entering ability, home background, instructional processes, home support for literacy development, and measures of student ability at the end of kindergarten. Children were tested, whole-day classroom observations were…
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boun...
Nagata, Naoki; Yamanaka, Shinya
2014-01-31
Induced pluripotent stem cell technology makes in vitro reprogramming of somatic cells from individuals with various genetic backgrounds possible. By applying this technology, it is possible to produce pluripotent stem cells from biopsy samples of arbitrarily selected individuals with various genetic backgrounds and to subsequently maintain, expand, and stock these cells. From these induced pluripotent stem cells, target cells and tissues can be generated after certain differentiation processes. These target cells/tissues are expected to be useful in regenerative medicine, disease modeling, drug screening, toxicology testing, and proof-of-concept studies in drug development. Therefore, the number of publications concerning induced pluripotent stem cells has recently been increasing rapidly, demonstrating that this technology has begun to infiltrate many aspects of stem cell biology and medical applications. In this review, we discuss the perspectives of induced pluripotent stem cell technology for modeling human diseases. In particular, we focus on the cloning event occurring through the reprogramming process and its ability to let us analyze the development of complex disease-harboring somatic mosaicism.
NASA Astrophysics Data System (ADS)
Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.
2017-10-01
The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. 
This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.
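The temperature-to-radiance step this abstract describes rests, at bottom, on Planck's law modulated by surface and atmospheric properties. As a minimal illustration (a graybody sketch only; the spectral emissivity, Sandford-Robertson BRDF, and MODTRAN atmospheric effects of the actual MuSES workflow are omitted, and the function name is illustrative):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temperature_k, emissivity=1.0):
    """Graybody spectral radiance L(lambda, T) in W / (m^2 sr m).

    Planck's law scaled by a wavelength-independent emissivity; a real
    signature prediction needs spectral emissivity and atmospheric
    transmission (e.g. from MODTRAN), which this sketch leaves out.
    """
    num = 2.0 * H * C**2 / wavelength_m**5
    den = math.exp(H * C / (wavelength_m * KB * temperature_k)) - 1.0
    return emissivity * num / den

# A ~300 K background peaks near 9.7 um (Wien's displacement law),
# which is one reason LWIR bands suit ambient-temperature scenes.
peak_um = 2.897771955e-3 / 300.0 * 1e6
```

Contrast analysis then reduces to differencing such radiances for target and background over a sensor band.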
Conceptual-level workflow modeling of scientific experiments using NMR as a case study
Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R
2007-01-01
Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. 
In addition, the models provide an accurate visual description of the control flow of the biomolecular analysis experiment using NMR spectroscopy. PMID:17263870
Barker, P
2001-06-01
Nursing theories and nursing models have a low profile within psychiatric and mental health nursing in the United Kingdom. This paper describes the philosophical and theoretical background of the Tidal Model, which emerged from a 5-year study of the 'need for psychiatric nursing'. The Tidal Model extends and develops some of the traditional assumptions concerning the centrality of interpersonal relations within nursing practice. The model also integrates discrete processes for re-empowering the person who is disempowered by mental distress or psychiatric services or both. The paper reports briefly on the ongoing evaluation of the model in practice.
Measurement and reduction of low-level radon background in the KATRIN experiment
NASA Astrophysics Data System (ADS)
Fränkle, F. M.
2013-08-01
The KArlsruhe TRItium Neutrino (KATRIN) experiment is a next-generation, model-independent, large-scale experiment to determine the mass of the electron anti-neutrino by investigating the kinematics of tritium beta decay with a sensitivity of 200 meV/c². The measurement setup consists of a high-luminosity windowless gaseous molecular tritium source (WGTS), a differential and cryogenic pumped electron transport and tritium retention section, a tandem spectrometer section (pre-spectrometer and main spectrometer) for energy analysis, followed by a detector system for counting transmitted beta decay electrons. Measurements performed at the KATRIN pre-spectrometer test setup showed that the decay of radon (Rn) atoms in the volume of the KATRIN spectrometers is a major background source. Rn atoms from low-level radon emanation of materials inside the vacuum region of the KATRIN spectrometers are able to penetrate deep into the magnetic flux tube, so that the alpha decay of Rn contributes to the background. Of particular importance are electrons emitted in processes accompanying the Rn alpha decay, such as shake-off, internal conversion of excited levels in the Rn daughter atoms, and Auger electrons. Low-energy electrons (< 100 eV) directly contribute to the background in the signal region. High-energy electrons can be stored magnetically inside the volume of the spectrometer and are able to create thousands of secondary electrons via subsequent ionization processes with residual gas molecules. In order to reduce the Rn-induced background, different active and passive countermeasures were developed and tested. This proceeding gives an overview of Rn sources within the KATRIN spectrometers, describes how Rn decays inside the spectrometer produce background events at the detector, and presents different countermeasures to reduce the Rn-induced background.
A mechanistic model for the superglue fuming of latent fingerprints.
Czekanski, Patrick; Fasola, Michael; Allison, John
2006-11-01
The use of superglue vapors to detect latent fingerprints, known as superglue fuming, is a chemical process that has not been fully described. The role of the fingerprint material in the process, leading to formation of methyl cyanoacrylate polymer at the site of the fingerprint, remains to be established. Films of liquid alkanes respond similarly to actual fingerprints in the fuming experiment. Their responses depend on the hydrocarbon used, its viscosity, and the film thickness. Aspects such as film thickness appear to be relevant for actual fingerprints as well. A model was proposed in light of these observations. The model compares the process with gas chromatography, in which molecules partition between the gas phase and a stationary phase. Aspects such as accumulation of superglue monomers by partitioning into a thin film (or wax) are consistent with the preferential response of fingerprints on surfaces relative to the background.
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. 
We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
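The SNR manipulation central to this study can be sketched generically: scale a noise waveform so that the signal-to-noise ratio, in dB, hits a target value before mixing. This is an illustrative sketch of the standard technique, not code or stimulus parameters from the paper:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that 10*log10(P_signal / P_noise) == snr_db,
    then add it to `signal`. Powers are mean-square amplitudes.
    Returns the mixture and the scaled noise."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return signal + scaled, scaled
```

Sweeping `snr_db` downward then reproduces the progressively noisier listening conditions the decoding analysis compares.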
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pangilinan, G.I.; Constantinou, C.P.; Gruzdkov, Y.A.
1996-07-01
Molecular processes associated with shock-induced chemical decomposition of a mixture of nitromethane with ethylenediamine (0.1 wt%) are examined using time-resolved Raman scattering. When shocked by stepwise loading to 14.2 GPa pressure, changes in the nitromethane vibrational modes and the spectral background characterize the onset of reaction. The CN stretch mode softens and disappears even as the NO₂ and CH₃ stretch modes, though modified, retain their identities. The shape and intensity of the spectral background also show changes characteristic of reaction. Changes in the background, which are observed even at lower peak pressures of 11.4 GPa, are assigned to luminescence from reaction intermediates. The implications of these results for various molecular models of sensitization are discussed.
Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu
2016-01-01
Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. Then GrabCut generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation conditions, only BEE boosted the performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are hoped to help the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
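The direct-pixelization (DP) baseline the strategies above are compared against can be sketched as block-averaging an image into a low-resolution phosphene grid. A minimal NumPy sketch (the 8-4 SP and BEE variants, Itti saliency, and GrabCut steps are not reproduced; grid size and function name are illustrative):

```python
import numpy as np

def direct_pixelization(image, grid=(32, 32)):
    """Simulate low-resolution prosthetic vision by block-averaging.

    Each output cell ('phosphene') is the mean gray level of the
    corresponding block of the input image. This is the DP baseline;
    the saliency-based strategies additionally extract and enhance a
    proto-object before this step.
    """
    h, w = image.shape
    gh, gw = grid
    # trim so the image divides evenly into grid blocks
    image = image[: h - h % gh, : w - w % gw]
    blocks = image.reshape(gh, image.shape[0] // gh, gw, image.shape[1] // gw)
    return blocks.mean(axis=(1, 3))
```

For example, a 256x256 grayscale frame reduces to a 32x32 array of phosphene intensities.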
Observation of coherent elastic neutrino-nucleus scattering
Akimov, D.; Albert, J. B.; An, P.; ...
2017-08-03
The coherent elastic scattering of neutrinos off nuclei has eluded detection for four decades, even though its predicted cross section is by far the largest of all low-energy neutrino couplings. This mode of interaction offers new opportunities to study neutrino properties and leads to a miniaturization of detector size, with potential technological applications. In this paper, we observed this process at a 6.7σ confidence level, using a low-background, 14.6-kilogram CsI[Na] scintillator exposed to the neutrino emissions from the Spallation Neutron Source at Oak Ridge National Laboratory. Characteristic signatures in energy and time, predicted by the standard model for this process, were observed in high signal-to-background conditions. Finally, improved constraints on nonstandard neutrino interactions with quarks are derived from this initial data set.
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.
Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R
2012-07-12
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pozzobon, Nicola (Pisa U.)
Studying WZ associated production at the Fermilab Tevatron Collider is of great importance for two main reasons. On the one hand, this process would be sensitive to anomalies in the triple gauge couplings, such that any deviation from the value predicted by the Standard Model would be indicative of new physics. In addition, by choosing to focus on the final state where the Z boson decays to bb̄ pairs, the event topology would be the same as expected for associated production of a W and a Standard Model light Higgs boson (m_H ≲ 135 GeV), which decays into bb̄ pairs most of the time. The process WH → Wbb̄ has an expected σ·B about five times lower than WZ → Wbb̄ for m_H ≈ 120 GeV. Therefore, observing this process would be a benchmark for an even more difficult search aiming at discovering the light Higgs in the WH → Wbb̄ process. After so many years of Tevatron operation, only a weak WZ signal was recently observed in the fully leptonic decay channel, which suffers from much less competition from background. Searching for the Z in the bb̄ decay channel in this process is clearly a very challenging endeavour. In the work described in this thesis, WZ production is searched for in a final state where the W decays leptonically to an electron-neutrino pair or a muon-neutrino pair, with associated production of a jet pair consistent with Z decays. A set of candidate events is obtained by applying appropriate cuts to the parameters of events collected by wide-acceptance leptonic triggers. To improve the signal fraction of the selected events, an algorithm was used to tag b-flavored jets by means of their content of long-lived b-hadrons, and corrections were developed to the jet algorithm to improve the b-jet energy resolution for a better reconstruction of the Z mass. In order to sense the presence of a signal, one needs to estimate the amount of background.
The relative content of heavy-flavor jets in the dominant W+multijet background is assumed as predicted by theory. This technique was originally developed in CDF to measure the tt̄ production cross section in the final state with a W + 3 or more jets. This thesis was conceived as the first attempt within CDF to apply a customized version of it to look for evidence of diboson production in the final state with a W and two jets. Extracting the signal in this channel is very hard, since with such a small number of jets the background is two orders of magnitude greater than the signal. Moreover, since the signal-to-background ratio is very small, the expected sensitivity depends critically on the theoretical uncertainties on the amount of background. While work is in progress to understand this background more reliably, this analysis provides an estimate of the achievable upper limit on the WZ production cross section.
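Reconstructing the Z candidate from the jet pair means computing the dijet invariant mass from jet kinematics. A minimal sketch of the standard collider four-vector algebra, in the massless-jet approximation (this is textbook kinematics, not code from the thesis):

```python
import math

def dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two (approximately massless) jets given
    transverse momentum pT, pseudorapidity eta, and azimuth phi:
    m^2 = (E1 + E2)^2 - |p1 + p2|^2, with E = |p| = pT * cosh(eta)."""
    def four_vector(pt, eta, phi):
        px = pt * math.cos(phi)
        py = pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = pt * math.cosh(eta)
        return e, px, py, pz

    e1, px1, py1, pz1 = four_vector(pt1, eta1, phi1)
    e2, px2, py2, pz2 = four_vector(pt2, eta2, phi2)
    m2 = (e1 + e2) ** 2 - (px1 + px2) ** 2 - (py1 + py2) ** 2 - (pz1 + pz2) ** 2
    return math.sqrt(max(m2, 0.0))
```

The b-jet energy corrections mentioned above act on the pT inputs to sharpen the peak of this quantity near the Z mass.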
Aharony, Noa
2006-12-01
The learning context is learning English in an Internet environment. The examination of this learning process was based on Biggs and Moore's teaching-learning model (Biggs & Moore, 1993). The research aims to explore the use of deep and surface strategies in an Internet environment among EFL students who come from different socio-economic backgrounds. The results of the research may add an additional level to the understanding of students' functioning in the Internet environment. One hundred forty-eight Israeli junior and high school students participated in this research. The methodology was based on special computer software, Screen Cam, which recorded the students' learning process. In addition, expert judges completed a questionnaire which examined and categorized the students' learning strategies. The research findings show a clear preference among participants from all socio-economic backgrounds for the surface learning strategy. The findings also showed that students from medium to high socio-economic backgrounds used both learning strategies more frequently than low socio-economic students. The results reflect the habits that students acquire during their adjustment process throughout their educational careers. A brief encounter with the Internet learning environment apparently cannot change norms or habits acquired in the non-Internet learning environment.
Summary Reporting for a Linked Interaction Design-Scrum Approach: How Much Modeling Is Useful?
NASA Astrophysics Data System (ADS)
Keenan, Frank; Damdul, Namgyal; Kelly, Sandra; Connolly, David
Identifying the minimum beneficial modeling to support an agile development team is crucial. Often, story cards arranged on wall charts or spontaneously drawn diagrams provide sufficient detail to allow a team to understand an emerging problem. However, what is beneficial when a new stakeholder joins a team after development has commenced and needs to have project background and progress reported? This poster reports on the models produced by a process combining aspects of Interaction Design (ID) and Scrum for internet development in such a scenario.
A weather-driven model of malaria transmission
Hoshen, Moshe B; Morse, Andrew P
2004-01-01
Background Climate is a major driving force behind malaria transmission and climate data are often used to account for the spatial, seasonal and interannual variation in malaria transmission. Methods This paper describes a mathematical-biological model of the parasite dynamics, comprising both the weather-dependent within-vector stages and the weather-independent within-host stages. Results Numerical evaluations of the model in both time and space show that it qualitatively reconstructs the prevalence of infection. Conclusion A process-based modelling structure has been developed that may be suitable for the simulation of malaria forecasts based on seasonal weather forecasts. PMID:15350206
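The weather-dependent within-vector stage of such models is commonly driven by a degree-day rule: the parasite completes sporogony once accumulated temperature above a threshold reaches a fixed total. A minimal sketch under that assumption; the constants are the widely used Detinova values for P. falciparum (about 111 degree-days above 16 °C), not parameters taken from this abstract:

```python
def sporogony_days(daily_temps_c, threshold_c=16.0, required_dd=111.0):
    """Days until the parasite completes development in the mosquito,
    under a degree-day model: each day contributes
    max(T - threshold, 0) degree-days, and sporogony completes when
    the running total reaches required_dd. Returns None if the
    temperature series ends first (too cool for transmission).
    """
    total = 0.0
    for day, temp in enumerate(daily_temps_c, start=1):
        total += max(temp - threshold_c, 0.0)
        if total >= required_dd:
            return day
    return None
```

At a constant 26 °C each day contributes 10 degree-days, so sporogony completes on day 12; below the threshold it never completes, which is how weather forces the seasonality of transmission.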
Exploring the Earth System through online interactive models
NASA Astrophysics Data System (ADS)
Coogan, L. A.
2013-12-01
Upper-level Earth Science students commonly have a strong background in mathematics from their Math courses; however, their ability to use mathematical models to solve Earth Science problems is often limited. Their difficulty arises, in part, from the nature of the subject matter. There is a large body of background 'conceptual' and 'observational' understanding and knowledge required in the Earth Sciences before in-depth quantification becomes useful. For example, it is difficult to answer questions about geological processes until you can identify minerals and rocks and understand the general geodynamic implications of their associations. However, science is fundamentally quantitative. To become scientists, students have to translate their conceptual understanding into quantifiable models. Thus, it is desirable for students to become comfortable with using mathematical models to test hypotheses. With the aim of helping to bridge the gap between conceptual understanding and quantification, I have started to build an interactive teaching website based around quantitative models of Earth System processes. The site is aimed at upper-level undergraduate students and spans a range of topics that will continue to grow as time allows. The mathematical models are all built for the students, allowing them to spend their time thinking about how the 'model world' changes in response to their manipulation of the input variables. The website is divided into broad topics or chapters (Background, Solid Earth, Ocean and Atmosphere, Earth history); within each chapter there are different subtopics (e.g. Solid Earth: Core, Mantle, Crust), each comprising individual webpages. Each webpage, or topic, starts with an introduction to the topic, followed by an interactive model whose inputs the students control with sliders while watching how the results change.
This interaction between student and model is guided by a series of multiple-choice questions that the student answers, with immediate feedback on whether each answer is correct. This way the students can ensure they understand the concepts before moving on. A discussion forum for the students to discuss the topics is in development, and each page has a feedback option allowing both numerical (1-10) and written feedback on how useful the webpage was. By the end of exploring any given process, students are expected to understand how the different parameters explored by the model interact to control the results. They should appreciate why the controlling equations look the way they do (all equations needed to develop the models are presented in the introduction) and how these equations interact to control the results. While this is no substitute for students undertaking the calculations for themselves, this approach allows a much wider range of topics to be explored quantitatively than if the students had to code all the models themselves.
Validating archetypes for the Multiple Sclerosis Functional Composite
2014-01-01
Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet received sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. 
This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081
An integrated model of communication influence on beliefs
Eveland, William P.; Cooper, Kathryn E.
2013-01-01
How do people develop and maintain their beliefs about science? Decades of social science research exist to help us answer this question. The Integrated Model of Communication Influence on Beliefs presented here combines multiple theories that have considered aspects of this process into a comprehensive model to explain how individuals arrive at their scientific beliefs. In this article, we (i) summarize what is known about how science is presented in various news and entertainment media forms; (ii) describe how individuals differ in their choices to be exposed to various forms and sources of communication; (iii) discuss the implications of how individuals mentally process information on the effects of communication; (iv) consider how communication effects can be altered depending on background characteristics and motivations of individuals; and (v) emphasize that the process of belief formation is not unidirectional but rather, feeds back on itself over time. We conclude by applying the Integrated Model of Communication Influence on Beliefs to the complex issue of beliefs about climate change. PMID:23940328
A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field
Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.
2010-01-01
Background The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646
Verbruggen, Heroen; Tyberghein, Lennert; Belton, Gareth S.; Mineur, Frederic; Jueterbock, Alexander; Hoarau, Galice; Gurgel, C. Frederico D.; De Clerck, Olivier
2013-01-01
The utility of species distribution models for applications in invasion and global change biology is critically dependent on their transferability between regions or points in time, respectively. We introduce two methods that aim to improve the transferability of presence-only models: density-based occurrence thinning and performance-based predictor selection. We evaluate the effect of these methods along with the impact of the choice of model complexity and geographic background on the transferability of a species distribution model between geographic regions. Our multifactorial experiment focuses on the notorious invasive seaweed Caulerpa cylindracea (previously Caulerpa racemosa var. cylindracea) and uses Maxent, a commonly used presence-only modeling technique. We show that model transferability is markedly improved by appropriate predictor selection, with occurrence thinning, model complexity and background choice having relatively minor effects. The data show that, if available, occurrence records from the native and invaded regions should be combined as this leads to models with high predictive power while reducing the sensitivity to choices made in the modeling process. The inferred distribution model of Caulerpa cylindracea shows the potential for this species to further spread along the coasts of Western Europe, western Africa and the south coast of Australia. PMID:23950789
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
Geant4 simulations of a wide-angle x-ray focusing telescope
NASA Astrophysics Data System (ADS)
Zhao, Donghua; Zhang, Chen; Yuan, Weimin; Zhang, Shuangnan; Willingale, Richard; Ling, Zhixing
2017-06-01
The rapid development of X-ray astronomy has been made possible by widely deploying X-ray focusing telescopes on board many X-ray satellites. Geant4 is a very powerful toolkit for Monte Carlo simulations and has remarkable abilities to model complex geometrical configurations. However, the library of physical processes available in Geant4 lacks a description of the reflection of X-ray photons at a grazing incident angle which is the core physical process in the simulation of X-ray focusing telescopes. The scattering of low-energy charged particles from the mirror surfaces is another noteworthy process which is not yet incorporated into Geant4. Here we describe a Monte Carlo model of a simplified wide-angle X-ray focusing telescope adopting lobster-eye optics and a silicon detector using the Geant4 toolkit. With this model, we simulate the X-ray tracing, proton scattering and background detection. We find that: (1) the effective area obtained using Geant4 is in agreement with that obtained using Q software with an average difference of less than 3%; (2) X-rays are the dominant background source below 10 keV; (3) the sensitivity of the telescope is better by at least one order of magnitude than that of a coded mask telescope with the same physical dimensions; (4) the number of protons passing through the optics and reaching the detector by Firsov scattering is about 2.5 times that of multiple scattering for the lobster-eye telescope.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
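The Poisson process with dead-time named in this abstract is simple to simulate: each inter-spike interval is a fixed dead-time plus an exponential waiting time, and a superposition is just the sorted union of several such trains. A minimal Python sketch (the rate, dead-time and duration values are illustrative assumptions, not taken from the paper):

```python
import random

def ppd_spike_train(rate, dead_time, duration, rng=random.Random(0)):
    """Generate spike times of a Poisson process with dead-time (PPD).

    Each inter-spike interval is the refractory period `dead_time` plus
    an exponential waiting time, scaled so the overall rate equals `rate`.
    """
    exp_rate = rate / (1.0 - rate * dead_time)  # effective exponential rate
    spikes, t = [], 0.0
    while True:
        t += dead_time + rng.expovariate(exp_rate)
        if t > duration:
            return spikes
        spikes.append(t)

def superimpose(trains):
    """Pool several spike trains into one chronologically sorted train."""
    return sorted(t for train in trains for t in train)

# Superpose 10 PPD trains; the pooled train is in general NOT Poisson
trains = [ppd_spike_train(rate=10.0, dead_time=0.002, duration=5.0)
          for _ in range(10)]
pooled = superimpose(trains)
```

Comparing the ISI histogram of `pooled` against an exponential distribution is one quick way to see the deviation from the Poisson model that the abstract describes.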
Probst, Yasmine; Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh
2016-07-28
Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. Automated modeling tools can streamline the modeling process for dietary intervention trials, ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used.
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for applications in sports, health, animated film, criminology, and robotics. This study combines background subtraction with a back-propagation neural network to identify hand movements and find similar motions. The acquisition process used an 8 MP camera recording in MP4 format for 48 seconds at 30 frames per second; extraction of the video produced 1444 frames for the hand-motion identification process. The image-processing phases were segmentation, feature extraction, and identification. Segmentation used background subtraction; the extracted features are used to distinguish one object from another. Feature extraction was performed with motion-based morphological analysis using the 7 invariant moments, producing four motion classes: no object, hand down, hand to the side, and hands up. The identification process recognized hand movements from seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture propagates the input values through the system implementation into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process was performed to obtain the best results.
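The segmentation stage described here relies on background subtraction, which at its simplest thresholds the pixel-wise difference between a frame and a background model. A toy grayscale sketch in Python (the threshold, update weight, and 2x2 "images" are hypothetical, chosen only for illustration):

```python
def subtract_background(frame, background, threshold=25):
    """Return a binary foreground mask: 1 where |frame - background| > threshold."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def update_background(background, frame, alpha=0.05):
    """Running-average background update: bg <- (1 - alpha)*bg + alpha*frame."""
    return [[(1 - alpha) * b + alpha * f
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100], [100, 100]]
frame = [[100, 200], [100, 100]]   # one bright "hand" pixel
mask = subtract_background(frame, background)
```

In a full pipeline, the mask would then feed the feature-extraction and classification stages; here it simply marks the single changed pixel as foreground.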
2014-01-01
Background Interventions having a strong theoretical basis are more efficacious, providing a strong argument for incorporating theory into intervention planning. The objective of this study was to develop a conceptual model to facilitate the planning of dietary intervention strategies at the household level in rural Kerala. Methods Three focus group discussions and 17 individual interviews were conducted among men and women, aged between 23 and 75 years. An interview guide facilitated the process to understand: 1) feasibility and acceptability of a proposed dietary behaviour change intervention; 2) beliefs about foods, particularly fruits and vegetables; 3) decision-making in households with reference to food choices and access; and 4) to gain insights into the kind of intervention strategies that may be practical at community and household level. The data were analysed using a modified form of qualitative framework analysis, which combined both deductive and inductive reasoning. A priori themes were identified from relevant behaviour change theories using construct definitions, and used to index the meaning units identified from the primary qualitative data. In addition, new themes emerging from the data were included. The associations between the themes were mapped into four main factors and its components, which contributed to construction of the conceptual model. Results Thirteen of the a priori themes from three behaviour change theories (Trans-theoretical model, Health Belief model and Theory of Planned Behaviour) were confirmed or slightly modified, while four new themes emerged from the data. 
The conceptual model had four main factors and their components: impact factors (decisional balance, risk perception, attitude); change processes (action-oriented, cognitive); background factors (personal modifiers, societal norms); and overarching factors (accessibility, perceived needs and preferences), built around a three-stage change spiral (pre-contemplation, intention, action). Decisional balance was the strongest in terms of impacting the process of behaviour change, while household efficacy and perceived household cooperation were identified as ‘markers’ for stages-of-change at the household level. Conclusions This type of framework analysis made it possible to develop a conceptual model that could facilitate the design of intervention strategies to aid a household-level dietary behaviour change process. PMID:24912496
NASA Astrophysics Data System (ADS)
Dishaw, Adam; CMS Collaboration
2016-03-01
Results are presented from a search for supersymmetric particles in pp collisions in the final state with a single, high-pT lepton; multiple jets, including at least one b-tagged jet; and large missing transverse momentum. The data sample corresponds to 2.1 fb⁻¹ recorded by the CMS experiment at √s = 13 TeV. The search focuses on processes leading to high jet multiplicities, such as the simplified model T1tttt, corresponding to gluino pair production pp → g̃g̃ with g̃ → tt̄χ̃₁⁰. The quantity MJ, defined as the sum of the masses of the large-radius jets in the event, is used along with other kinematic variables to provide discrimination between signal and backgrounds and as a key part of the background estimation method. The observed event yields in data are consistent with those expected for standard model backgrounds. Gluinos with mass below 1575 GeV are excluded at 95% CL for T1tttt scenarios with low χ̃₁⁰ mass.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps
Between Students' Instrumental Goals and How They Learn: Goal Content Is the Gap to Mind
ERIC Educational Resources Information Center
Fryer, Luke K.; Ginns, Paul; Walker, Richard
2014-01-01
Background: Experimental/correlational studies have consistently demonstrated that the contents of an individual's goals play an important role within future motivations, learning processes, and outcomes. Aims: The aim of the study was to extend past findings by employing a three-point, cross-lagged latent simultaneous structural model in the…
A Tactical-Game Approach and Enhancement of Metacognitive Behaviour in Elementary School Students
ERIC Educational Resources Information Center
Chatzipanteli, Athanasia; Digelidis, N.; Karatzoglidis, C.; Dean, R.
2016-01-01
Background: "Teaching games for understanding" (TGfU) is a tactical-game approach to teaching, in which participants are learning via the processes intrinsic to the games themselves. Purpose: The aim of the study was to examine the effectiveness of a tactical-game model in promoting metacognitive behaviour in elementary-school students.…
ERIC Educational Resources Information Center
Krain, Amy L.; Hefton, Sara; Pine, Daniel S.; Ernst, Monique; Castellanos, F. Xavier; Klein, Rachel G.; Milham, Michael P.
2006-01-01
Background: Maturation of prefrontal circuits during adolescence contributes to the development of cognitive processes such as decision-making. Recent theories suggest that these neural changes also play a role in the shift from generalized anxiety disorder (GAD) to depression that often occurs during this developmental period. Cognitive models of…
The Role of Perceptions of Friendships and Peers in Learning Skills in Physical Education
ERIC Educational Resources Information Center
Koekoek, Jeroen; Knoppers, Annelies
2015-01-01
Background: Most research on how children learn when using the Teaching Games for Understanding (TGfU) approach has focused on cognitive dimensions in teaching games models. A social constructivist perspective suggests, however, that learning also takes place during social interactions. Since the process of learning game skills tends to have a…
A strategy for the observation of volcanism on Earth from space.
Wadge, G
2003-01-15
Heat, strain, topography and atmospheric emissions associated with volcanism are well observed by satellites orbiting the Earth. Gravity and electromagnetic transients from volcanoes may also prove to be measurable from space. The nature of eruptions means that the best strategy for measuring their dynamic properties remotely from space is to employ two modes with different spatial and temporal samplings: eruption mode and background mode. Such observational programmes are best carried out at local or regional volcano observatories by coupling them with numerical models of volcanic processes. Eventually, such models could become multi-process, operational forecast models that assimilate the remote and other observables to constrain their uncertainties. The threat posed by very large magnitude explosive eruptions is global and best addressed by a spaceborne observational programme with a global remit.
Effect of genetic background on the dystrophic phenotype in mdx mice
Coley, William D.; Bogdanik, Laurent; Vila, Maria Candida; Yu, Qing; Van Der Meulen, Jack H.; Rayavarapu, Sree; Novak, James S.; Nearing, Marie; Quinn, James L.; Saunders, Allison; Dolan, Connor; Andrews, Whitney; Lammert, Catherine; Austin, Andrew; Partridge, Terence A.; Cox, Gregory A.; Lutz, Cathleen; Nagaraju, Kanneboyina
2016-01-01
Genetic background significantly affects phenotype in multiple mouse models of human diseases, including muscular dystrophy. This phenotypic variability is partly attributed to genetic modifiers that regulate the disease process. Studies have demonstrated that introduction of the γ-sarcoglycan-null allele onto the DBA/2J background confers a more severe muscular dystrophy phenotype than the original strain, demonstrating the presence of genetic modifier loci in the DBA/2J background. To characterize the phenotype of dystrophin deficiency on the DBA/2J background, we created and phenotyped DBA/2J-congenic Dmdmdx mice (D2-mdx) and compared them with the original, C57BL/10ScSn-Dmdmdx (B10-mdx) model. These strains were compared with their respective control strains at multiple time points between 6 and 52 weeks of age. Skeletal and cardiac muscle function, inflammation, regeneration, histology and biochemistry were characterized. We found that D2-mdx mice showed significantly reduced skeletal muscle function as early as 7 weeks and reduced cardiac function by 28 weeks, suggesting that the disease phenotype is more severe than in B10-mdx mice. In addition, D2-mdx mice showed fewer central myonuclei and increased calcifications in the skeletal muscle, heart and diaphragm at 7 weeks, suggesting that their pathology is different from the B10-mdx mice. The new D2-mdx model with an earlier onset and more pronounced dystrophy phenotype may be useful for evaluating therapies that target cardiac and skeletal muscle function in dystrophin-deficient mice. Our data align the D2-mdx with Duchenne muscular dystrophy patients with the LTBP4 genetic modifier, making it one of the few instances of cross-species genetic modifiers of monogenic traits. PMID:26566673
Fingerprint Liveness Detection in the Presence of Capable Intruders.
Sequeira, Ana F; Cardoso, Jaime S
2015-06-19
Fingerprint liveness detection methods have been developed as an attempt to overcome the vulnerability of fingerprint biometric systems to spoofing attacks. Traditional approaches have been quite optimistic about the behavior of the intruder assuming the use of a previously known material. This assumption has led to the use of supervised techniques to estimate the performance of the methods, using both live and spoof samples to train the predictive models and evaluate each type of fake samples individually. Additionally, the background was often included in the sample representation, completely distorting the decision process. Therefore, we propose that an automatic segmentation step should be performed to isolate the fingerprint from the background and truly decide on the liveness of the fingerprint and not on the characteristics of the background. Also, we argue that one cannot aim to model the fake samples completely since the material used by the intruder is unknown beforehand. We approach the design by modeling the distribution of the live samples and predicting as fake the samples very unlikely according to that model. Our experiments compare the performance of the supervised approaches with the semi-supervised ones that rely solely on the live samples. The results obtained differ from the ones obtained by the more standard approaches, which reinforces our conviction that the results in the literature are misleadingly estimating the true vulnerability of the biometric system. PMID:26102491
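The semi-supervised design argued for in this abstract, fitting a model to live samples only and flagging unlikely samples as fake, can be illustrated with a one-class Gaussian density test. A toy Python sketch (the scalar feature, training values, and rejection margin are hypothetical; a real detector would use high-dimensional fingerprint features):

```python
import math

def fit_gaussian(samples):
    """Estimate mean and variance of the live-sample feature distribution."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def log_likelihood(x, mean, var):
    """Log-density of a 1D Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def is_fake(x, mean, var, threshold):
    """Flag a sample as fake if it is very unlikely under the live model."""
    return log_likelihood(x, mean, var) < threshold

# Train on live samples only: no spoof material is assumed known beforehand
live = [0.9, 1.0, 1.1, 0.95, 1.05, 1.0, 0.98, 1.02]
mean, var = fit_gaussian(live)
# Hypothetical rejection margin below the least likely live sample
threshold = min(log_likelihood(x, mean, var) for x in live) - 1.0
```

The point of the design is that no assumption about the spoof material enters the model; anything far from the live distribution is rejected, whatever it is made of.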
A Design Methodology for Medical Processes
Bonacina, Stefano; Pozzi, Giuseppe; Pinciroli, Francesco; Marceglia, Sara
2016-01-01
Summary Background Healthcare processes, especially those belonging to the clinical domain, are acknowledged as complex and characterized by the dynamic nature of the diagnosis, the variability of the decisions made by experts driven by their experiences, the local constraints, the patient’s needs, the uncertainty of the patient’s response, and the indeterminacy of patient’s compliance to treatment. Also, the multiple actors involved in patient’s care need clear and transparent communication to ensure care coordination. Objectives In this paper, we propose a methodology to model healthcare processes in order to break out complexity and provide transparency. Methods The model is grounded on a set of requirements that make the healthcare domain unique with respect to other knowledge domains. The modeling methodology is based on three main phases: the study of the environmental context, the conceptual modeling, and the logical modeling. Results The proposed methodology was validated by applying it to the case study of the rehabilitation process of stroke patients in the specific setting of a specialized rehabilitation center. The resulting model was used to define the specifications of a software artifact for the digital administration and collection of assessment tests that was also implemented. Conclusions Despite being only an example, our case study showed the ability of process modeling to answer the actual needs in healthcare practices. Independently from the medical domain in which the modeling effort is done, the proposed methodology is useful to create high-quality models, and to detect and take into account relevant and tricky situations that can occur during process execution. PMID:27081415
NASA Astrophysics Data System (ADS)
Yin, Xiufeng; Kang, Shichang; de Foy, Benjamin; Cong, Zhiyuan; Luo, Jiali; Zhang, Lang; Ma, Yaoming; Zhang, Guoshuai; Rupakheti, Dipesh; Zhang, Qianggong
2017-09-01
Ozone is an important pollutant and greenhouse gas, and tropospheric ozone variations are generally associated with both natural and anthropogenic processes. As one of the most pristine and inaccessible regions in the world, the Tibetan Plateau has been considered as an ideal region for studying processes of the background atmosphere. Due to the vast area of the Tibetan Plateau, sites in the southern, northern and central regions exhibit different patterns of variation in surface ozone. Here, we present continuous measurements of surface ozone mixing ratios at Nam Co Station over a period of ~5 years (January 2011 to October 2015), which is a background site in the inland Tibetan Plateau. An average surface ozone mixing ratio of 47.6 ± 11.6 ppb (mean ± standard deviation) was recorded, and a large annual cycle was observed with maximum ozone mixing ratios in the spring and minimum ratios during the winter. The diurnal cycle is characterized by a minimum in the early morning and a maximum in the late afternoon. Nam Co Station represents a background region where surface ozone receives negligible local anthropogenic emissions inputs, and the anthropogenic contribution from South Asia in spring and China in summer may affect Nam Co Station occasionally. Surface ozone at Nam Co Station is mainly dominated by natural processes involving photochemical reactions, vertical mixing and downward transport of stratospheric air mass. Model results indicate that the study site is affected differently by the surrounding areas in different seasons: air masses from the southern Tibetan Plateau contribute to the high ozone levels in the spring, and enhanced ozone levels in the summer are associated with air masses from the northern Tibetan Plateau. By comparing measurements at Nam Co Station with those from other sites on the Tibetan Plateau, we aim to expand the understanding of ozone cycles and transport processes over the Tibetan Plateau. This work may provide a reference for future model simulations.
Hahn, Intaek; Wiener, Russell W; Richmond-Bryant, Jennifer; Brixey, Laurie A; Henkle, Stacy W
2009-12-01
The Brooklyn traffic real-time ambient pollutant penetration and environmental dispersion (B-TRAPPED) study was a multidisciplinary field research project that investigated the transport, dispersion, and infiltration processes of traffic emission particulate matter (PM) pollutants in a near-highway urban residential area. The urban PM transport, dispersion, and infiltration processes were described mathematically in a theoretical model that was constructed to develop the experimental objectives of the B-TRAPPED study. In the study, simultaneous and continuous time-series PM concentration and meteorological data collected at multiple outdoor and indoor monitoring locations were used to characterize both temporal and spatial patterns of the PM concentration movements within microscale distances (<500 m) from the highway. Objectives of the study included (1) characterizing the temporal and spatial PM concentration fluctuation and distribution patterns in the urban street canyon; (2) investigating the effects of urban structures such as a tall building or an intersection on the transport and dispersion of PM; (3) studying the influence of meteorological variables on the transport, dispersion, and infiltration processes; (4) characterizing the relationships between the building parameters and the infiltration mechanisms; (5) establishing a cause-and-effect relationship between outdoor-released PM and indoor PM concentrations and identifying the dominant mechanisms involved in the infiltration process; (6) evaluating the effectiveness of a shelter-in-place area for protection against outdoor-released PM pollutants; and (7) understanding the predominant airflow and pollutant dispersion patterns within the neighborhood using wind tunnel and CFD simulations. The 10 papers in this first set of papers presenting the results from the B-TRAPPED study address these objectives. 
This paper describes the theoretical background and models representing the interrelated processes of transport, dispersion, and infiltration. The theoretical solution for the relationship between the time-dependent indoor PM concentration and the initial PM concentration at the outdoor source was obtained. The theoretical models and solutions helped us to identify important parameters in the processes of transport, dispersion, and infiltration. The B-TRAPPED study field experiments were then designed to investigate these parameters in the hope of better understanding urban PM pollutant behaviors.
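A common simplification of the indoor-outdoor relationship described in this paper is a single-compartment mass balance, dC_in/dt = lam*(C_out - C_in) - k*C_in, with air-exchange rate lam and deposition loss rate k. A hedged numerical sketch (the parameter values are illustrative, not taken from the B-TRAPPED study):

```python
def indoor_concentration(c_out, c0_in, lam=1.0, k=0.2, dt=0.01, steps=1000):
    """Integrate dC_in/dt = lam*(C_out - C_in) - k*C_in with forward Euler.

    lam : air-exchange rate (1/h), k : deposition loss rate (1/h),
    dt  : time step (h). Returns the indoor concentration history.
    """
    c = c0_in
    history = [c]
    for _ in range(steps):
        c += dt * (lam * (c_out - c) - k * c)
        history.append(c)
    return history

# Starting from clean indoor air, the indoor level rises toward the
# steady state lam / (lam + k) * C_out as outdoor PM infiltrates
trace = indoor_concentration(c_out=100.0, c0_in=0.0)
```

With these illustrative parameters the indoor level approaches 100 * 1.0/1.2 ≈ 83.3 after about ten air exchanges, which is the kind of time-dependent indoor response the field experiments were designed to measure.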
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
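The alignment step analyzed in this abstract, matching mass centers and principal axes, can be illustrated on a small 2D point cloud: translate so the centroid is at the origin, then rotate so the principal axis (from the 2x2 covariance) lies along x. A simplified sketch operating on point coordinates rather than pixel images (the example cloud is hypothetical):

```python
import math

def centroid(points):
    """Mass center of a set of 2D points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def principal_angle(points):
    """Angle of the principal axis via the 2x2 covariance matrix."""
    cx, cy = centroid(points)
    sxx = sum((x - cx) ** 2 for x, y in points)
    syy = sum((y - cy) ** 2 for x, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

def align(points):
    """Translate the centroid to the origin, rotate the principal axis onto x."""
    cx, cy = centroid(points)
    a = -principal_angle(points)
    ca, sa = math.cos(a), math.sin(a)
    return [((x - cx) * ca - (y - cy) * sa,
             (x - cx) * sa + (y - cy) * ca)
            for x, y in points]

# An elongated cloud tilted 45 degrees and offset from the origin
tilted = [(t + 5, t + 3) for t in (-2, -1, 0, 1, 2)]
aligned = align(tilted)
```

With noise added to the points, the estimated centroid and angle become random variables, which is exactly the translational and rotational misalignment the paper models as a Gaussian on the planar motion group.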
Youssim, Iaroslav; Hank, Karsten; Litwin, Howard
2014-01-01
Building on a tripartite model of capitals necessary to perform productive activities and on work suggesting that cumulative (dis-) advantage processes are important mechanisms for life-course inequalities, our study set out to investigate the potential role of family social background and inheritance in later-life volunteering. We hypothesized that older individuals who inherited work-relevant economic and cultural capitals from their family of origin are more likely to be engaged in voluntary activities than their counterparts with a less advantageous family social background. Our main findings from the analysis of a representative sample of community-dwelling Israelis aged 50 and over provide strong support for this hypothesis: the likelihood to volunteer is significantly higher among those who received substantial financial transfers from their family of origin (‘inherited economic capital’) and among those having a ‘white collar’ parental background (‘inherited cultural capital’). We conclude with perspectives for future research. PMID:25651548
Secret loss of unitarity due to the classical background
NASA Astrophysics Data System (ADS)
Yang, I.-Sheng
2017-07-01
We show that a quantum subsystem can become significantly entangled with a classical background through a process with few or no semiclassical backreactions. We study two quantum harmonic oscillators coupled to each other in a time-independent Hamiltonian. We compare it to its semiclassical approximation in which one of the oscillators is treated as the classical background. In this approximation, the remaining quantum oscillator has an effective Hamiltonian which is time-dependent, and its evolution appears to be unitary. However, in the fully quantum model, the two oscillators can become entangled with each other. Thus, the unitarity of either individual oscillator is never guaranteed. We derive the critical time scale after which the unitarity of either individual oscillator is irrevocably lost. In particular, we give an example that in the adiabatic limit, unitarity is lost before other relevant questions can be addressed.
Evidence for Secondary Emission as the Origin of Hard Spectra in TeV Blazars
NASA Astrophysics Data System (ADS)
Zheng, Y. G.; Kang, T.
2013-02-01
We develop a model for the possible origin of hard, very high energy (VHE) spectra from a distant blazar. In the model, both the primary photons produced in the source and secondary photons produced outside it contribute to the observed high-energy γ-ray emission. That is, the primary photons are produced through the synchrotron self-Compton process, and the secondary photons are produced through high-energy proton interactions with background photons along the line of sight. We apply the model to a characteristic case of VHE γ-ray emission in the distant blazar 1ES 1101-232. Assuming suitable electron and proton spectra, we obtain excellent fits to the observed spectra of this blazar. This indicates that the surprisingly low attenuation of the high-energy γ-rays, especially the shape of the VHE γ-ray tail of the observed spectra, can be explained by secondary γ-rays produced in interactions of cosmic-ray protons with background photons in intergalactic space.
Enzymatic corn wet milling: engineering process and cost model
Ramírez, Edna C; Johnston, David B; McAloon, Andrew J; Singh, Vijay
2009-01-01
Background Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer®) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility. Results Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process. 
Conclusion The E-milling process was found to be cost competitive with the conventional process during periods of high corn feedstock costs since the enzymatic process enhances the yields of the products in a corn wet milling process. This model is available upon request from the authors for educational, research and non-commercial uses. PMID:19154623
A comparison of form processing involved in the perception of biological and nonbiological movements
Thurman, Steven M.; Lu, Hongjing
2016-01-01
Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
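The reliability-weighted integration rule used by such a Bayesian observer can be shown with a minimal sketch. This is the generic cue-combination arithmetic (each cue weighted by its inverse variance), not the authors' full model:

```python
import numpy as np

def combine_cues(means, sigmas):
    """Reliability-weighted cue combination: each cue's weight is its
    inverse variance, normalized across cues (standard Bayesian rule)."""
    prec = 1.0 / np.asarray(sigmas, float) ** 2   # precisions
    w = prec / prec.sum()                         # normalized weights
    mu = np.dot(w, np.asarray(means, float))      # fused estimate
    sigma = (1.0 / prec.sum()) ** 0.5             # fused uncertainty
    return mu, sigma
```

For example, a position cue reporting +1.0 with sigma = 1 and an orientation cue reporting -1.0 with sigma = 2 fuse to 0.8 * 1.0 + 0.2 * (-1.0) = 0.6, with the more reliable cue dominating.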
Arancibia-Miranda, Nicolás; Silva-Yumi, Jorge; Escudey, Mauricio
2015-12-15
Modification of surface charge and changes in the isoelectric point (IEP) of synthetic imogolite were studied for various cations in the background electrolyte (K(+), NH4(+), Mg(2+), and Ca(2+)). From the electrophoretic mobility data, it was established that the K(+) (KCl) concentration does not affect the IEP of imogolite; therefore, KCl is a suitable background electrolyte. In terms of the magnitude of changes in the IEP and surface charge, the cations may be ranked in the following order: Mg(2+)≈Ca(2+)>NH4(+)>K(+). Four different kinetic models were used to evaluate the influence of Mg(2+), Ca(2+), NH4(+), and K(+) on the adsorption of Cd and Cu on synthetic imogolite. When adsorption occurs in the presence of cations with the exception of K(+), the kinetics of the process is well described by the pseudo-first order model. On the other hand, when adsorption is conducted in the presence of K(+), the adsorption kinetics is well described by the pseudo-second order, Elovich, and Weber-Morris models. From the surface charge measurements, the affinity between imogolite and the cations and their effect on the adsorption of trace elements, namely Cu and Cd, were established. Copyright © 2015 Elsevier B.V. All rights reserved.
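As a reference for how such kinetic models are typically fitted, here is a minimal pseudo-first-order fit via its standard linearized form ln(qe - q) = ln(qe) - k1*t. This is an illustrative sketch with synthetic data, not the authors' analysis; qe and k1 values are made up:

```python
import numpy as np

def fit_pseudo_first_order(t, q, qe):
    """Fit q(t) = qe * (1 - exp(-k1 * t)) via its linearized form
    ln(qe - q) = ln(qe) - k1 * t; returns the rate constant k1."""
    y = np.log(qe - np.asarray(q, float))
    slope, intercept = np.polyfit(np.asarray(t, float), y, 1)
    return -slope

# synthetic uptake data generated with k1 = 0.3 and qe = 5.0 (illustrative)
t = np.linspace(0.5, 10.0, 20)
q = 5.0 * (1.0 - np.exp(-0.3 * t))
k1 = fit_pseudo_first_order(t, q, qe=5.0)
```

The same data can then be refitted with the pseudo-second-order, Elovich, or Weber-Morris forms and the goodness-of-fit compared, which is the comparison the abstract describes.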
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently, the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models.
Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
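The idea of a master template with interchangeable process parameterizations can be caricatured in a few lines. All process names, options, and numbers here are hypothetical illustrations, not any actual framework's API:

```python
# Two competing parameterizations of one process decision (hypothetical):
def albedo_constant(age_days):
    return 0.8

def albedo_decay(age_days):
    return 0.5 + 0.3 * (0.9 ** age_days)   # albedo ages toward 0.5

# The master template maps each process decision to its available options.
TEMPLATE = {"albedo": {"constant": albedo_constant, "decay": albedo_decay}}

def build_model(choices):
    """Instantiate one model configuration from the decision template."""
    return {process: TEMPLATE[process][option]
            for process, option in choices.items()}

m1 = build_model({"albedo": "constant"})
m2 = build_model({"albedo": "decay"})
# The two configurations differ in exactly one decision, so any difference
# in their predictions is attributable to that single parameterization.
diff = m1["albedo"](10) - m2["albedo"](10)
```

Because each configuration differs from its neighbors by one decision, system-scale prediction differences can be traced to individual modeling choices rather than to whole models.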
Cascaded analysis of signal and noise propagation through a heterogeneous breast model.
Mainprize, James G; Yaffe, Martin J
2010-10-01
The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term and used in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was a good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
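The two scaling laws noted above (quantum noise power linear in the x-ray signal, anatomic noise power quadratic) can be checked numerically with a simulated power-law background. The texture generator and all parameters below are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
# power-law ("1/f^beta") structured background texture (illustrative)
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f = np.sqrt(fx ** 2 + fy ** 2)
f[0, 0] = f[0, 1]                         # avoid divide-by-zero at DC
spectrum = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
texture = np.real(np.fft.ifft2(spectrum / f ** 1.5))
T = np.exp(-0.1 * (texture - texture.min()))   # transmission map, 0 < T <= 1

def noise_components(N0):
    """Mean quantum vs anatomic noise power for incident fluence N0."""
    signal = N0 * T
    quantum = signal.mean()               # Poisson: variance = mean, ~ N0
    anatomic = (N0 ** 2) * T.var()        # texture variance, ~ N0^2
    return quantum, anatomic

q1, a1 = noise_components(1000.0)
q2, a2 = noise_components(2000.0)        # doubling N0: quantum x2, anatomic x4
```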
Depth-color fusion strategy for 3-D scene modeling with Kinect.
Camplani, Massimo; Mantecon, Tomas; Salgado, Luis
2013-12-01
Low-cost depth cameras, such as Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built, and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; object depth boundaries are refined, and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving-object silhouettes are generated.
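A minimal joint (cross) bilateral filter, in which the range weights come from the registered guide (color) image rather than from the depth itself, can be sketched as follows. This is a simplified stand-in for the paper's adaptive edge-uncertainty formulation; parameters are illustrative:

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `depth` with spatial weights plus range weights computed on
    the registered `guide` image; zero-depth pixels (non-measured) are
    excluded from the average and thereby interpolated from neighbors."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            d = depth[i0:i1, j0:j1]
            g = guide[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            ws = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((g - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr * (d > 0)          # ignore non-measured depth
            out[i, j] = (wgt * d).sum() / max(wgt.sum(), 1e-12)
    return out
```

Guide-image edges keep depth discontinuities sharp because pixels across a color edge receive negligible range weight, which is the mechanism behind the boundary refinement described in the abstract.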
Calibration of the COBE FIRAS instrument
NASA Technical Reports Server (NTRS)
Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.
1994-01-01
The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky conditions. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky 2.6 deg square pixel and 0.7/cm spectral element the random detector noise limit is of the order of a few times 10^-7 ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.
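The covariance-propagation step has a compact linear-algebra core: for a linearized calibration sky = A @ detector, detector noise covariance propagates as C_sky = A @ C_det @ A.T. A schematic sketch with made-up numbers, not the FIRAS model itself:

```python
import numpy as np

# Hypothetical linearized calibration: sky = A @ detector_readings.
A = np.array([[2.0, 0.5],
              [0.0, 1.5]])
C_det = np.diag([0.1, 0.2])      # detector noise covariance (illustrative)
C_sky = A @ C_det @ A.T          # propagated covariance of calibrated data
```

The propagated matrix is symmetric by construction, and its off-diagonal terms show how calibration mixes detector noise across channels, which is why the calibrated sky data need a full covariance matrix rather than per-channel variances.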
Continuation-like semantics for modeling structural process anomalies
2012-01-01
Background Biomedical ontologies usually encode knowledge that applies always or at least most of the time, that is in normal circumstances. But for some applications like phenotype ontologies it is becoming increasingly important to represent information about aberrations from a norm. These aberrations may be modifications of physiological structures, but also modifications of biological processes. Methods To facilitate precise definitions of process-related phenotypes, such as delayed eruption of the primary teeth or disrupted ocular pursuit movements, I introduce a modeling approach that draws inspiration from the use of continuations in the analysis of programming languages and apply a similar idea to ontological modeling. This approach characterises processes by describing their outcome up to a certain point and the way they will continue in the canonical case. Definitions of process types are then given in terms of their continuations and anomalous phenotypes are defined by their differences to the canonical definitions. Results The resulting model is capable of accurately representing structural process anomalies. It allows distinguishing between different anomaly kinds (delays, interruptions), gives identity criteria for interrupted processes, and explains why normal and anomalous process instances can be subsumed under a common type, thus establishing the connection between canonical and anomalous process-related phenotypes. Conclusion This paper shows how to give semantically rich definitions of process-related phenotypes. These make it possible to expand the application areas of phenotype ontologies beyond literature annotation and establishment of genotype-phenotype associations to the detection of anomalies in suitably encoded datasets. PMID:23046705
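The continuation idea can be caricatured in code: a process type is defined by its outcome so far plus its canonical continuation, and an anomaly is a deviation from that continuation. All names and thresholds below are hypothetical illustrations, not the paper's formalism:

```python
def canonical_eruption(age_months):
    """Canonical continuation (illustrative): primary-teeth eruption is
    expected to be complete by a hypothetical 33 months of age."""
    return "complete" if age_months >= 33 else "ongoing"

def classify(age_months, observed_state):
    """Compare an observed process state against its canonical continuation."""
    expected = canonical_eruption(age_months)
    if observed_state == expected:
        return "canonical"
    if expected == "complete" and observed_state == "ongoing":
        return "delayed"      # process continues past its canonical endpoint
    return "anomalous"

result = classify(40, "ongoing")   # eruption still ongoing at 40 months
```

Because the anomalous case is defined by its difference from the canonical continuation, normal and delayed instances fall under the same process type, which is exactly the subsumption property the abstract emphasizes.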
Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley
2013-01-01
Background Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. Methods In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. Results The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Conclusions Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. PMID:24006097
Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections.
Sporns, O; Tononi, G; Edelman, G M
1991-01-01
The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws.
Background Error Correlation Modeling with Diffusion Operators
2013-01-01
[Report documentation page residue; recoverable details: book chapter, 07-10-2013, responsible person Max Yaremchuk, (228) 688-5259.] Chapter 8, "Background error correlation modeling with diffusion operators." Surviving text fragment: "...field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of ..."
A Flexible Cosmic Ultraviolet Background Model
NASA Astrophysics Data System (ADS)
McQuinn, Matthew
2016-10-01
HST studies of the IGM, of the CGM, and of reionization-era galaxies are all aided by ionizing background models, which are a critical input in modeling the ionization state of diffuse, 10^4 K gas. The ionization state in turn enables the determination of densities and sizes of absorbing clouds and, when applied to the Ly-a forest, the global ionizing emissivity of sources. Unfortunately, studies that use these background models have no way of gauging the amount of uncertainty in the adopted model other than to recompute their results using previous background models with outdated observational inputs. As of yet there has been no systematic study of uncertainties in the background model and there unfortunately is no publicly available ultraviolet background code. A public code would enable users to update the calculation with the latest observational constraints, and it would allow users to experiment with varying the background model's assumptions regarding emissions and absorptions. We propose to develop a publicly available ionizing background code and, as an initial application, quantify the level of uncertainty in the ionizing background spectrum across cosmic time. As the background model improves, so does our understanding of (1) the sources that dominate ionizing emissions across cosmic time and (2) the properties of diffuse gas in the circumgalactic medium, the WHIM, and the Ly-a forest. HST is the primary telescope for studying both the highest redshift galaxies and low-redshift diffuse gas. The proposed program would benefit HST studies of the Universe at z ~ 0 all the way up to z = 10, including high-z galaxies observed in the HST Frontier Fields.
Software algorithms for false alarm reduction in LWIR hyperspectral chemical agent detection
NASA Astrophysics Data System (ADS)
Manolakis, D.; Model, J.; Rossacci, M.; Zhang, D.; Ontiveros, E.; Pieper, M.; Seeley, J.; Weitz, D.
2008-04-01
The long-wave infrared (LWIR) hyperspectral sensing modality is often used for the problem of detection and identification of chemical warfare agents (CWAs), a problem that arises in both military and civilian situations. The inherent nature and complexity of background clutter dictate a need for sophisticated and robust statistical models, which in turn underpin the design of optimum signal processing algorithms that best exploit hyperspectral data for deciding on the absence or presence of potentially harmful CWAs. This paper describes the basic elements of an automated signal processing pipeline developed at MIT Lincoln Laboratory. In addition to describing this signal processing architecture in detail, we briefly describe the key signal models that form the foundation of these algorithms as well as some spatial processing techniques used for false alarm mitigation. Finally, we apply this processing pipeline to real data measured by the Telops FIRST hyperspectral sensor to demonstrate its practical utility for the user community.
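A standard building block in such detection pipelines is a matched filter whitened by the background statistics. The sketch below is a generic detector with synthetic data, not Lincoln Laboratory's actual algorithm suite:

```python
import numpy as np

rng = np.random.default_rng(1)
bands, n_pix = 20, 500
background = rng.normal(size=(n_pix, bands))   # background-only training pixels
s = rng.normal(size=bands)                     # target (agent) spectral signature

mu = background.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(background.T))

def matched_filter(x):
    """Background-whitened matched-filter detection statistic (illustrative)."""
    return (s @ Sigma_inv @ (x - mu)) / np.sqrt(s @ Sigma_inv @ s)

x_bg = background[0]                  # a background pixel
x_tgt = background[1] + 3.0 * s       # a pixel with the agent signature added
score_bg, score_tgt = matched_filter(x_bg), matched_filter(x_tgt)
```

Thresholding the score gives a detection decision; the spatial false-alarm mitigation described in the abstract would then operate on the map of per-pixel scores.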
Integrating Human Factors into Crew Exploration Vehicle (CEV) Design
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Holden, Kritina; Baggerman, Susan; Campbell, Paul
2007-01-01
The purpose of this design process is to apply Human Engineering (HE) requirements and guidelines to hardware/software and to provide HE design, analysis and evaluation of crew interfaces. The topics include: 1) Background/Purpose; 2) HE Activities; 3) CASE STUDY: Net Habitable Volume (NHV) Study; 4) CASE STUDY: Human Modeling Approach; 5) CASE STUDY: Human Modeling Results; 6) CASE STUDY: Human Modeling Conclusions; 7) CASE STUDY: Human-in-the-Loop Evaluation Approach; 8) CASE STUDY: Unsuited Evaluation Results; 9) CASE STUDY: Suited Evaluation Results; 10) CASE STUDY: Human-in-the-Loop Evaluation Conclusions; 11) Near-Term Plan; and 12) In Conclusion
Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard
2011-01-01
Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype–phenotype model, we present here a three-dimensional functional–structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. Methods The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Key Results Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. Conclusions We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed. PMID:21247905
Evaluating the Impacts of ICT Use: A Multi-Level Analysis with Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Song, Hae-Deok; Kang, Taehoon
2012-01-01
The purpose of this study is to evaluate the impacts of ICT use on achievements by considering not only ICT use, but also the process and background variables that influence ICT use at both the student- and school-level. This study was conducted using data from the 2010 Survey of Seoul Education Longitudinal Research. A Hierarchical Linear…
ERIC Educational Resources Information Center
Chen, Shin-Feng
2017-01-01
Background: Reading is an interactive and constructive process of making meaning by engaging a variety of materials and sources and by participating in reading communities at school or in daily life. Aim: The purpose of this study was to explore the factors affecting digital reading literacy among upper-elementary school students. Method: A…
ERIC Educational Resources Information Center
Vandercleyen, François; Boudreau, Pierre; Carlier, Ghislain; Delens, Cécile
2014-01-01
Background: Emotions play a major role in the learning of pre-service teachers. However, there is a lack of in-depth research on emotion in the context of physical education (PE), especially during the practicum. Lazarus's model and its concepts of appraisal and coping is a salient theoretical framework for understanding the emotional process.…
ERIC Educational Resources Information Center
Groff, Warren H.
North Central Technical College's (NCTC's) strategic planning and human resource development model is described in this paper in terms of its role in assisting the college's service area in adapting to new technologies. First, background information is presented on NCTC's planning process with respect to the strategic goal areas of: (1)…
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
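The role of the statistical mixture model can be shown with a stripped-down analogue: EM for a Gaussian-plus-uniform mixture separates valid range returns from anomalous measurements. This is an illustrative sketch under assumed parameters, not the paper's coupled curve-evolution formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
# range measurements: mostly Gaussian around the true range, plus anomalies
true_range = 50.0
data = np.concatenate([rng.normal(true_range, 1.0, 900),
                       rng.uniform(0.0, 100.0, 100)])   # anomalous pixels

# EM for a Gaussian-plus-uniform mixture on [0, 100]
mu, sigma, w = data.mean(), data.std(), 0.5
for _ in range(50):
    # E-step: responsibility of the Gaussian (valid-return) component
    g = w * np.exp(-0.5 * ((data - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    u = (1.0 - w) / 100.0                 # uniform (anomaly) density
    r = g / (g + u)
    # M-step: reweighted mean, spread, and mixing proportion
    mu = (r * data).sum() / r.sum()
    sigma = np.sqrt((r * (data - mu) ** 2).sum() / r.sum())
    w = r.mean()
```

The responsibilities r act as soft anomaly labels, so the range estimate mu is driven by valid returns even though the anomalous measurements are never removed from the data.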
Quality Improvement on the Acute Inpatient Psychiatry Unit Using the Model for Improvement
Singh, Kuldeep; Sanderson, Joshua; Galarneau, David; Keister, Thomas; Hickman, Dean
2013-01-01
Background A need exists for constant evaluation and modification of processes within healthcare systems to achieve quality improvement. One common approach is the Model for Improvement that can be used to clearly define aims, measures, and changes that are then implemented through a plan-do-study-act (PDSA) cycle. This approach is a commonly used method for improving quality in a wide range of fields. The Model for Improvement allows for a systematic process that can be revised at set time intervals to achieve a desired result. Methods We used the Model for Improvement in an acute psychiatry unit (APU) to improve the screening incidence of abnormal involuntary movements in eligible patients—those starting or continuing on standing neuroleptics—with the Abnormal Involuntary Movement Scale (AIMS). Results After 8 weeks of using the Model for Improvement, both of the participating inpatient services in the APU showed substantial overall improvement in screening for abnormal involuntary movements using the AIMS. Conclusion Crucial aspects of a successful quality improvement initiative based on the Model for Improvement are well-defined goals, process measures, and structured PDSA cycles. Success also requires communication, organization, and participation of the entire team. PMID:24052768
A novel star extraction method based on modified water flow model
NASA Astrophysics Data System (ADS)
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Ouyang, Zibiao; Yang, Yanqiang
2017-11-01
Star extraction is an essential procedure in attitude measurement by a star sensor. The great challenge for star extraction is to segment star areas exactly from various kinds of noise and background. In this paper, a novel star extraction method based on a Modified Water Flow Model (MWFM) is proposed. The star image is regarded as a 3D terrain. Morphology is adopted for noise elimination and Tentative Star Area (TSA) selection. Star areas can then be extracted through adaptive water flowing within the TSAs. This method achieves accurate star extraction with improved efficiency under complex conditions such as strong noise and uneven backgrounds. Several groups of different types of star images were processed using the proposed method, and comparisons with existing methods were conducted. Experimental results show that MWFM performs excellently under different imaging conditions: the star extraction rate is better than 95%, the star centroid accuracy is better than 0.075 pixels, and the time consumption is also significantly reduced.
FDA 2011 process validation guidance: lifecycle compliance model.
Campbell, Cliff
2014-01-01
This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th-century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.
Infrared target simulation environment for pattern recognition applications
NASA Astrophysics Data System (ADS)
Savakis, Andreas E.; George, Nicholas
1994-07-01
The generation of complete databases of IR data is extremely useful for training human observers and testing automatic pattern recognition algorithms. Field data may be used for realism, but require expensive and time-consuming procedures. IR scene simulation methods have emerged as a more economical and efficient alternative for the generation of IR databases. A novel approach to IR target simulation is presented in this paper. Model vehicles at 1:24 scale are used for the simulation of real targets. The temperature profile of the model vehicles is controlled using resistive circuits which are embedded inside the models. The IR target is recorded using an Inframetrics dual channel IR camera system. Using computer processing we place the recorded IR target in a prerecorded background. The advantages of this approach are: (1) the range and 3D target aspect can be controlled by the relative position between the camera and model vehicle; (2) the temperature profile can be controlled by adjusting the power delivered to the resistive circuit; (3) the IR sensor effects are directly incorporated in the recording process, because the real sensor is used; (4) the recorded target can be embedded in various types of backgrounds recorded under different weather conditions, times of day, etc. The effectiveness of this approach is demonstrated by generating an IR database of three vehicles which is used to train a back-propagation neural network. The neural network is capable of classifying vehicle type, vehicle aspect, and relative temperature with a high degree of accuracy.
Verschuur, Carl
2009-03-01
Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
2012-01-01
Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054
Synergic effects of 10°/s constant rotation and rotating background on visual cognitive processing
NASA Astrophysics Data System (ADS)
He, Siyang; Cao, Yi; Zhao, Qi; Tan, Cheng; Niu, Dongbin
In previous studies we found that constant low-speed rotation facilitated the auditory cognitive process and that a constant-velocity rotating background sped up the perception, recognition, and assessment of visual stimuli. Under constant low-speed rotation the body is exposed to a new physical state. In this study we explored how the human brain's cognitive processing varies under the combined condition of constant low-speed rotation and visual rotating backgrounds of different speeds. Fourteen university students participated in the experiment. EEG signals were recorded while they performed three cognitive tasks of increasing mental load: a no-response task, a selective switch-response task, and a selective mental-arithmetic task. A rotary chair was used to create constant low-speed 10°/s rotation. Four kinds of background were used in this experiment: a normal black background and simulated star backgrounds rotating at a constant 30°/s, 45°/s, or 60°/s. The P1 and N1 components of brain event-related potentials (ERPs) were analyzed to detect changes in early visual cognitive processing. It was found that, compared with tasks performed under the other backgrounds, the posterior P1 and N1 latencies were shortened under the 45°/s rotating background in all cognitive tasks. In the no-response task, the posterior N1 latencies were delayed under the 30°/s rotating background relative to the black background. In the selective switch-response and selective mental-arithmetic tasks, the P1 latencies were lengthened under the 60°/s rotating background compared with the other backgrounds, but the average amplitudes of the posterior P1 and N1 were increased. This suggests that under constant 10°/s rotation, the facilitating effect of the rotating visual background changed to an inhibitory one under the 30°/s rotating background.
Under a new vestibular environment, not all rotating backgrounds accelerated the early processes of visual cognition. There is a synergic effect between constant low-speed rotation and the rotating speed of the background. Under certain conditions both serve to facilitate visual cognitive processing, and this begins at the stage when the extrastriate cortex perceives the visual signal. Under constant low-speed rotation in higher-cognitive-load tasks, rapid rotation of the background enhanced the magnitude of signal transmission in the visual pathway, increasing the signal-to-noise ratio; a higher signal-to-noise ratio clearly favors target perception and recognition. This gives rise to the hypothesis that higher-cognitive-load tasks, with stronger top-down control, have more power to counteract the inhibitory effect of a higher-velocity rotating background. Acknowledgements: This project was supported by the National Natural Science Foundation of China (No. 30670715) and the National High Technology Research and Development Program of China (No. 2007AA04Z254).
An interdisciplinary approach for earthquake modelling and forecasting
NASA Astrophysics Data System (ADS)
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has therefore become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the events themselves (self-exciting) and by all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
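The self-exciting part of the conditional intensity described above can be sketched directly. The snippet below uses illustrative parameter values only (a constant background rate, an exponential kernel, and no magnitude-dependent productivity or external excitation term, all of which a full ETAS-style model would add). The kernel integrates to alpha, which is the branching ratio; 0.41 is chosen here simply as a subcritical example. Simulation uses Ogata's standard thinning algorithm.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.41, beta=1.0):
    """Conditional intensity: background rate mu plus an exponentially
    decaying kick alpha*beta*exp(-beta*(t - t_i)) from each past event.
    The kernel integrates to alpha, the branching ratio."""
    past = events[events < t]
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

def simulate(T, mu=0.5, alpha=0.41, beta=1.0, seed=0):
    """Simulate events on [0, T] by Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # With an exponential kernel the intensity decays between events,
        # so the intensity just after t dominates lambda until the next event.
        lam_bar = hawkes_intensity(t + 1e-9, np.array(events), mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        if rng.uniform() <= hawkes_intensity(t, np.array(events), mu, alpha, beta) / lam_bar:
            events.append(t)

ev = simulate(500.0)  # stationary mean rate is mu / (1 - alpha)
```

Because the branching ratio is subcritical (alpha < 1), the process is stationary with mean rate mu / (1 - alpha); making mu a function of time yields the time-varying background rate discussed in the abstract.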
Multilevel latent class casemix modelling: a novel approach to accommodate patient casemix
2011-01-01
Background Using routinely collected patient data we explore the utility of multilevel latent class (MLLC) models to adjust for patient casemix and rank Trust performance. We contrast this with ranks derived from Trust standardised mortality ratios (SMRs). Methods Patients with colorectal cancer diagnosed between 1998 and 2004 and resident in Northern and Yorkshire regions were identified from the cancer registry database (n = 24,640). Patient age, sex, stage-at-diagnosis (Dukes), and Trust of diagnosis/treatment were extracted. Socioeconomic background was derived using the Townsend Index. Outcome was survival at 3 years after diagnosis. MLLC-modelled and SMR-generated Trust ranks were compared. Results Patients were assigned to two classes of similar size: one with reasonable prognosis (63.0% died within 3 years), and one with better prognosis (39.3% died within 3 years). In patient class one, all patients diagnosed at stage B or C died within 3 years; in patient class two, all patients diagnosed at stage A, B or C survived. Trusts were assigned two classes with 51.3% and 53.2% of patients respectively dying within 3 years. Differences in the ranked Trust performance between the MLLC model and SMRs were all within estimated 95% CIs. Conclusions A novel approach to casemix adjustment is illustrated, ranking Trust performance whilst facilitating the evaluation of factors associated with the patient journey (e.g. treatments) and factors associated with the processes of healthcare delivery (e.g. delays). Further research can demonstrate the value of modelling patient pathways and evaluating healthcare processes across provider institutions. PMID:21362172
Strain-dependent activation energy of shear transformation in metallic glasses
NASA Astrophysics Data System (ADS)
Xu, Bin; Falk, Michael; Li, Jinfu; Kong, Lingti
2017-04-01
Shear transformation (ST) plays a decisive role in determining the mechanical behavior of metallic glasses and is believed to be a stress-assisted, thermally activated process. Understanding the dependence of its activation energy on the stress imposed on the material is of central importance for modeling the deformation process of metallic glasses and other amorphous solids. Here a theoretical model is proposed to predict the variation of the minimum energy path (MEP) associated with a particular ST event upon further deformation. Verification based on atomistic simulations and calculations is also conducted. The proposed model reproduces the MEP and activation energy of an ST event under different imposed macroscopic strains based on a known MEP at a reference strain. Moreover, an analytical approach is proposed based on the atomistic calculations, which works well when the stress varies linearly along the MEP. These findings provide necessary background for understanding the activation processes and, in turn, the mechanical behavior of metallic glasses.
Is the continuous matter creation cosmology an alternative to ΛCDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabris, J.C.; Pacheco, J.A. de Freitas; Piattella, O.F., E-mail: fabris@pq.cnpq.br, E-mail: pacheco@oca.eu, E-mail: oliver.piattella@pq.cnpq.br
2014-06-01
The matter creation cosmology is revisited, including the evolution of baryons and dark matter particles. The creation process affects only dark matter and not baryons. The dynamics of the ΛCDM model can be reproduced only if two conditions are satisfied: 1) the entropy density production rate and the particle density variation rate are equal and 2) the (negative) pressure associated to the creation process is constant. However, the matter creation model predicts a present dark matter-to-baryon ratio much larger than that observed in massive X-ray clusters of galaxies, representing a potential difficulty for the model. In the linear regime, a fully relativistic treatment indicates that baryons are not affected by the creation process but this is not the case for dark matter. Both components evolve together at early phases but lately the dark matter density contrast decreases since the background tends to a constant value. This behaviour produces a negative growth factor, in disagreement with observations, being a further problem for this cosmology.
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Brown, B. C.; Bugel, L.; Cheng, G.; Church, E. D.; Conrad, J. M.; Dharmapalan, R.; Djurcic, Z.; Finley, D. A.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Grange, J.; Huelsnitz, W.; Ignarra, C.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Katori, T.; Kobilarcik, T.; Louis, W. C.; Mariani, C.; Marsh, W.; Mills, G. B.; Mirabal, J.; Moore, C. D.; Mousseau, J.; Nienaber, P.; Osmanov, B.; Pavlovic, Z.; Perevalov, D.; Polly, C. C.; Ray, H.; Roe, B. P.; Russell, A. D.; Shaevitz, M. H.; Spitz, J.; Stancu, I.; Tayloe, R.; Van de Water, R. G.; Wascko, M. O.; White, D. H.; Wickremasinghe, D. A.; Zeller, G. P.; Zimmerman, E. D.
2013-08-01
The largest sample ever recorded of ν̄_μ charged-current quasielastic (CCQE, ν̄_μ + p → μ⁺ + n) candidate events is used to produce the minimally model-dependent, flux-integrated double-differential cross section d²σ/(dT_μ d cos θ_μ) for ν̄_μ CCQE on a mineral oil target. This measurement exploits the large statistics of the MiniBooNE antineutrino-mode sample and provides the most complete information on this process to date. In order to facilitate historical comparisons, the flux-unfolded total cross section σ(E_ν) and single-differential cross section dσ/dQ² on both mineral oil and carbon are also reported. The observed cross section is somewhat higher than the cross section predicted by a model assuming independently acting nucleons in carbon with canonical form factor values. The shape of the data is also discrepant with this model. These results have implications for intranuclear processes and can help constrain signal and background processes for future neutrino oscillation measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan, E-mail: emb@kth.se, E-mail: scl@kth.se
Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is treated as a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured numbers of particles are competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
Bridging Technometric Method and Innovation Process: An Initial Study
NASA Astrophysics Data System (ADS)
Rumanti, A. A.; Reynaldo, R.; Samadhi, T. M. A. A.; Wiratmadja, I. I.; Dwita, A. C.
2018-03-01
The process of innovation is one way to increase the capability of a technology component so that it reflects the needs of an SME. The technometric method can be used to identify the level of technology advancement in an SME and which technology component needs to be maximized in order to deliver significant innovation. This paper serves as an early study, laying out a conceptual framework that connects the principles of the innovation process from Martin's well-established innovation model with the technometric method, based on initial background research conducted at the SME Ira Silver in Jogjakarta, Indonesia.
SysML: A Language for Space System Engineering
NASA Astrophysics Data System (ADS)
Mazzini, S.; Strangapede, A.
2008-08-01
This paper presents the results of an ESA/ESTEC internal study, performed with the support of INTECS, on modeling languages to support Space System Engineering activities and processes, with special emphasis on system requirements identification and analysis. The study focused on the assessment of dedicated UML profiles, their positioning alongside the system and software life cycles, and the associated methodologies. Requirements for a Space System Requirements Language were identified considering the ECSS-E-10 and ECSS-E-40 processes. The study identified SysML as a very promising language, having as its theoretical background the reference system processes defined by ISO 15288, as well as industrial practices.
Lyu, Zhe; Whitman, William B
2017-01-01
Current evolutionary models suggest that Eukaryotes originated from within the Archaea instead of being a sister lineage. To test this model of ancient evolution, we review recent studies and compare the three major information processing subsystems of replication, transcription and translation in the Archaea and Eukaryotes. Our hypothesis is that if the Eukaryotes arose within the archaeal radiation, their information processing systems will appear to be of a kind with the archaeal systems and not wholly original. Within the Eukaryotes, the mammalian or human systems are emphasized because of their importance in understanding health. Biochemical as well as genetic studies provide strong evidence for the functional similarity of archaeal homologs to the mammalian information processing system and their dissimilarity to the bacterial systems. In many independent instances, a simple archaeal system is functionally equivalent to more elaborate eukaryotic homologs, suggesting that evolution of complexity is likely a central feature of the eukaryotic information processing system. Because fewer components are often involved, biochemical characterizations of the archaeal systems are often easier to interpret. Similarly, the archaeal cell provides a genetically and metabolically simpler background, enabling convenient studies of the complex information processing system. Therefore, Archaea could serve as a parsimonious and tractable host for studying human diseases that arise in the information processing systems.
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
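The Dirichlet-process prior over divergence models can be made concrete through its Chinese restaurant process representation. The sketch below is generic (an illustrative concentration parameter, not the method's actual implementation): each taxon pair either joins an existing divergence-time class or opens a new one, so the induced prior spreads mass across intermediate numbers of divergence events rather than concentrating on the extremes.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Draw one partition of n taxon pairs from a Chinese restaurant
    process with concentration alpha; returns one class label per pair."""
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        # join class k w.p. counts[k]/(i+alpha); open a new class w.p. alpha/(i+alpha)
        probs = np.append(counts, alpha) / (i + alpha)
        labels.append(int(rng.choice(len(probs), p=probs)))
    return labels

rng = np.random.default_rng(1)
n_pairs, alpha = 9, 1.5      # 9 co-distributed taxon pairs (illustrative)
draws = [max(crp_partition(n_pairs, alpha, rng)) + 1 for _ in range(5000)]
# Prior mean number of divergence-time classes: sum_i alpha/(alpha+i)
expected = sum(alpha / (alpha + i) for i in range(n_pairs))
```

Varying alpha shifts the prior mean number of divergence events smoothly between 1 (full co-divergence) and n (all independent), which is the flexibility the abstract credits for the method's improved robustness.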
[On the present situation in psychotherapy and its implications - A critical analysis of the facts].
Tschuschke, Volker; Freyberger, Harald J
2015-01-01
The currently dominating research paradigm in evidence-based medicine is expounded and discussed regarding the problems deduced from so-called empirically supported treatments (EST) in psychology and psychotherapy. Prevalent political and economic as well as ideological backgrounds influence the present dominance of the medical model in psychotherapy by implementing the randomized-controlled research design as the standard in the field. It has been demonstrated that randomized controlled trials (RCTs) are inadequate in psychotherapy research, not the least because of the high complexity of the psychotherapy and the relatively weak role of the treatment concept in the change process itself. All major meta-analyses show that the Dodo bird verdict is still alive, thereby demonstrating that the medical model in psychotherapy with its RCT paradigm cannot explain the equivalence paradox. The medical model is inappropriate, so that the contextual model is proposed as an alternative. Extensive process-outcome research is suggested as the only viable and reasonable way to identify highly complex interactions between the many factors regularly involved in change processes in psychotherapy.
NASA Astrophysics Data System (ADS)
Johnson, Christopher W.; Fu, Yuning; Bürgmann, Roland
2017-12-01
Stresses in the lithosphere arise from multiple natural loading sources that include both surface and body forces. The largest surface loads include near-surface water storage, snow and ice, atmosphere pressure, ocean loading, and temperature changes. The solid Earth also deforms from celestial body interactions and variations in Earth's rotation. We model the seasonal stress changes in California from 2006 through 2014 for seven different loading sources with annual periods to produce an aggregate stressing history for faults in the study area. Our modeling shows that the annual water loading, atmosphere, temperature, and Earth pole tides are the largest loading sources and should each be evaluated to fully describe seasonal stress changes. In California we find that the hydrological loads are the largest source of seasonal stresses. We explore the seasonal stresses with respect to the background principal stress orientation constrained with regional focal mechanisms and analyze the modulation of seismicity. Our results do not suggest a resolvable seasonal variation for the ambient stress orientation in the shallow crust. When projecting the seasonal stresses into the background stress orientation we find that the timing of microseismicity modestly increases from an 8 kPa seasonal mean-normal-stress perturbation. The results suggest that faults in California are optimally oriented with the background stress field and respond to subsurface pressure changes, possibly due to processes we have not considered in this study. At any time a population of faults are near failure as evident from earthquakes triggered by these slight seasonal stress perturbations.
Wang, Hongguang
2018-01-01
Annual power load forecasting is not only the premise of formulating reasonable macro power planning but also an important guarantee for the safe and economic operation of a power system. In view of the characteristics of annual power load forecasting, the grey model GM(1,1) is widely applied. Introducing a buffer operator into GM(1,1) to pre-process the historical annual power load data is one approach to improving the forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM(1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated into a GRA-IGSA integration algorithm aiming to maximize the grey relativity between the simulated and actual value sequences. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
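For reference, the classical GM(1,1) that the paper builds on is easy to state. The sketch below uses the standard fixed background-value weight of 0.5, exposed as a parameter lam to indicate where the BVO step would tune it; the VWWBO pre-processing and GRA-IGSA optimization are not reproduced, and the load series is hypothetical.

```python
import numpy as np

def gm11(x0, horizon=3, lam=0.5):
    """Classical GM(1,1): accumulate the series (AGO), fit (a, b) by
    least squares on the whitening equation x0(k) = -a*z(k) + b, then
    forecast and difference back to the original scale.  lam is the
    background-value weight (fixed at 0.5 classically; BVO tunes it)."""
    x1 = np.cumsum(x0)
    z = lam * x1[1:] + (1 - lam) * x1[:-1]        # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO

load = 120.0 * 1.07 ** np.arange(5)   # hypothetical annual loads, ~7% growth
pred = gm11(load, horizon=2)          # 5 fitted values + 2 forecasts
```

Because GM(1,1) assumes quasi-exponential growth, it fits a geometric series almost exactly; the buffer-operator pre-processing in the paper exists precisely to reshape noisy or shocked historical data toward this assumption before fitting.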
Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model
Marsh, John E.; Campbell, Tom A.
2016-01-01
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. 
Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory. PMID:27242396
Chan, Chu-Fang; Kuo, Tzu-Wei; Weng, Ju-Yun; Lin, Yen-Chu; Chen, Ting-Yu; Cheng, Jen-Kun; Lien, Cheng-Chang
2013-01-01
Glutamatergic transmission onto oligodendrocyte precursor cells (OPCs) may regulate OPC proliferation, migration and differentiation. Dendritic integration of excitatory postsynaptic potentials (EPSPs) is critical for neuronal functions, and mechanisms regulating dendritic propagation and summation of EPSPs are well understood. However, little is known about EPSP attenuation and integration in OPCs. We developed realistic OPC models for synaptic integration, based on passive membrane responses of OPCs obtained by simultaneous dual whole-cell patch-pipette recordings. Compared with neurons, OPCs have a very low value of membrane resistivity, which is largely mediated by Ba2+- and bupivacaine-sensitive background K+ conductances. The very low membrane resistivity not only leads to rapid EPSP attenuation along OPC processes but also sharpens EPSPs and narrows the temporal window for EPSP summation. Thus, background K+ conductances regulate synaptic responses and integration in OPCs, thereby affecting activity-dependent neuronal control of OPC development and function. PMID:23940377
Clues on the origin of post-2000 earthquakes at Campi Flegrei caldera (Italy).
Chiodini, G; Selva, J; Del Pezzo, E; Marsan, D; De Siena, L; D'Auria, L; Bianco, F; Caliro, S; De Martino, P; Ricciolino, P; Petrillo, Z
2017-06-30
The inter-arrival times of the post-2000 seismicity at Campi Flegrei caldera are statistically distributed into different populations. The low inter-arrival-time population represents swarm events, while the high inter-arrival-time population marks background seismicity. Here, we show that the background seismicity is increasing at the same rate as (1) the ground uplift and (2) the concentration of the fumarolic gas species most sensitive to temperature. The seismic temporal increase is strongly correlated with the results of recent simulations modelling the injection of magmatic fluids into the Campi Flegrei hydrothermal system. These concurrent variations point to a unique process of temperature-pressure increase of the hydrothermal system controlling geophysical and geochemical signals at the caldera. Our results thus show that the occurrence of background seismicity is an excellent parameter for monitoring the current unrest of the caldera.
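The split of a catalogue into swarm and background populations by inter-arrival time can be illustrated with a simple threshold rule. The threshold and the synthetic catalogue below are assumptions for illustration only; the paper derives the populations from the full inter-arrival distribution:

```python
import numpy as np

def split_by_interarrival(times, threshold):
    """Classify waiting times into 'swarm' (short inter-arrival) and
    'background' (long inter-arrival) populations by thresholding."""
    times = np.sort(np.asarray(times, dtype=float))
    dt = np.diff(times)
    is_swarm = dt < threshold
    return dt[is_swarm], dt[~is_swarm]

# Synthetic catalogue: a dense swarm embedded in sparse background events.
rng = np.random.default_rng(0)
background = np.cumsum(rng.exponential(10.0, 50))            # mean wait ~10 days
swarm = background[20] + np.cumsum(rng.exponential(0.1, 30)) # mean wait ~0.1 days
catalogue = np.concatenate([background, swarm])
short_dt, long_dt = split_by_interarrival(catalogue, threshold=1.0)
```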
Input comparison of radiogenic neutron estimates for ultra-low background experiments
NASA Astrophysics Data System (ADS)
Cooley, J.; Palladino, K. J.; Qiu, H.; Selvi, M.; Scorza, S.; Zhang, C.
2018-04-01
Ultra-low-background experiments address some of the most important open questions in particle physics, cosmology and astrophysics: the nature of dark matter, whether the neutrino is its own antiparticle, and whether the proton decays. These rare event searches require well-understood and minimized backgrounds. Simulations are used to understand backgrounds caused by naturally occurring radioactivity in the rock and in every piece of shielding and detector material used in these experiments. Most important are processes like spontaneous fission and (α,n) reactions in material close to the detectors that can produce neutrons. A comparison study of the (α,n) reactions between two dedicated software packages is detailed. The cross section libraries, neutron yields, and spectra from the Mei-Zhang-Hime and the SOURCES-4A codes are presented. The resultant yields and spectra are used as inputs to direct dark matter detector toy models in GEANT4, to study the impact of their differences on background estimates and fits. Although differences in neutron yield calculations of up to 50% were seen, there was no systematic difference between the Mei-Zhang-Hime and SOURCES-4A results. Neutron propagation simulations smooth out differences in spectral shape and yield, and both tools were found to meet the broad requirements of the low-background community.
Removing Background Noise with Phased Array Signal Processing
NASA Technical Reports Server (NTRS)
Podboy, Gary; Stephens, David
2015-01-01
Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
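Cross-spectral matrix (CSM) subtraction itself is straightforward to sketch: form the CSM of a background-only recording and subtract it from the CSM of signal-plus-background. The snapshot count, microphone count, and steering vector below are illustrative assumptions, and the beamforming step is omitted:

```python
import numpy as np

def csm(snapshots):
    """Cross-spectral matrix for one frequency bin, from complex
    microphone spectra stacked as (snapshots, mics)."""
    X = np.asarray(snapshots)
    return X.conj().T @ X / X.shape[0]

rng = np.random.default_rng(1)
n_snap, n_mic = 200, 24
# Background noise alone, and an added coherent source with an assumed steering vector.
noise = rng.normal(size=(n_snap, n_mic)) + 1j * rng.normal(size=(n_snap, n_mic))
steer = np.exp(1j * np.linspace(0.0, np.pi, n_mic))
source = (rng.normal(size=(n_snap, 1)) + 1j * rng.normal(size=(n_snap, 1))) * steer

C_bg = csm(noise)            # measured with the source off
C_tot = csm(noise + source)  # measured with the source on
C_sig = C_tot - C_bg         # CSM subtraction removes the estimated background power
```

The diagonal (autopower) of C_sig approximates the source power per microphone; in practice the background CSM comes from a separate background-only run, so the subtraction is only statistical, not exact.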
Source detection in astronomical images by Bayesian model comparison
NASA Astrophysics Data System (ADS)
Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher
2014-12-01
The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
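A Gaussian toy version of such a Bayes-factor test, comparing a "source" model with a weakly constrained mean (broad prior, integrated out analytically) against a fixed background model, can be sketched as follows; the paper's hierarchical prior over discretised images is considerably richer:

```python
import math
import numpy as np

def log_bayes_factor(region, bg_mean=0.0, bg_var=1.0, prior_var=100.0):
    """Log Bayes factor for pixel intensities in `region`: a 'source'
    model (mean ~ N(bg_mean, prior_var), marginalized analytically)
    versus a fixed N(bg_mean, bg_var) background model."""
    x = np.asarray(region, dtype=float)
    n = len(x)
    xbar = x.mean()
    s = float(np.sum((x - xbar) ** 2))
    # Background: every pixel ~ N(bg_mean, bg_var), no free parameters.
    ll_bg = -0.5 * n * math.log(2 * math.pi * bg_var) \
            - 0.5 * float(np.sum((x - bg_mean) ** 2)) / bg_var
    # Source: marginal likelihood with the mean integrated out.
    var_xbar = bg_var / n + prior_var
    ll_src = (-0.5 * (n - 1) * math.log(2 * math.pi * bg_var)
              - 0.5 * math.log(n)
              - 0.5 * s / bg_var
              - 0.5 * math.log(2 * math.pi * var_xbar)
              - 0.5 * (xbar - bg_mean) ** 2 / var_xbar)
    return ll_src - ll_bg

flat = log_bayes_factor(np.zeros(25))        # background-like region
bright = log_bayes_factor(np.full(25, 5.0))  # "suspiciously different" region
```

The extra flexibility of the source model is penalized automatically (flat regions yield a negative log Bayes factor), which is exactly the Occam behavior that keeps false positives moderate.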
Adaptation, saturation, and physiological masking in single auditory-nerve fibers.
Smith, R L
1979-01-01
Results are reviewed concerning some effects, at a unit's characteristic frequency, of a short-term conditioning stimulus on the responses to perstimulatory and poststimulatory test tones. A phenomenological equation is developed from the poststimulatory results and shown to be consistent with the perstimulatory results. According to the results and equation, the response to a test tone equals the unconditioned or unadapted response minus the decrement produced by adaptation to the conditioning tone. Furthermore, the decrement is proportional to the driven response to the conditioning tone and does not depend on sound intensity per se. The equation has a simple interpretation in terms of two processes in cascade: a static saturating nonlinearity followed by additive adaptation. Results are presented to show that this functional model is sufficient to account for the "physiological masking" produced by wide-band backgrounds. According to this interpretation, a sufficiently intense background produces saturation. Consequently, a superimposed test tone causes no change in response. In addition, when the onset of the background precedes the onset of the test tone, the total firing rate is reduced by adaptation. Evidence is reviewed concerning the possible correspondence between the variables in the model and intracellular events in the auditory periphery.
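The cascade described, a static saturating nonlinearity followed by additive adaptation, can be written down directly. The hyperbolic saturation and the constants below are illustrative assumptions, not fitted values from the paper:

```python
def driven_rate(intensity, r_max=200.0, theta=1.0):
    """Static saturating nonlinearity (assumed hyperbolic form):
    driven firing rate as a function of stimulus intensity."""
    return r_max * intensity / (intensity + theta)

def adapted_response(test_intensity, conditioner_intensity, c=0.5):
    """Cascade model from the abstract: the response to a test tone is
    the unadapted response minus a decrement proportional to the
    *driven* response to the conditioner (not to its raw intensity)."""
    return driven_rate(test_intensity) - c * driven_rate(conditioner_intensity)

# Saturation: once the background drives the unit near r_max, a
# superimposed test tone barely changes the driven response
# ("physiological masking").
masked = adapted_response(1.0, 10.0)
```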
Mutant power: using mutant allele collections for yeast functional genomics.
Norman, Kaitlyn L; Kumar, Anuj
2016-03-01
The budding yeast has long served as a model eukaryote for the functional genomic analysis of highly conserved signaling pathways, cellular processes and mechanisms underlying human disease. The collection of reagents available for genomics in yeast is extensive, encompassing a growing diversity of mutant collections beyond gene deletion sets in the standard wild-type S288C genetic background. We review here three main types of mutant allele collections: transposon mutagen collections, essential gene collections and overexpression libraries. Each collection provides unique and identifiable alleles that can be utilized in genome-wide, high-throughput studies. These genomic reagents are particularly informative in identifying synthetic phenotypes and functions associated with essential genes, including those modeled most effectively in complex genetic backgrounds. Several examples of genomic studies in filamentous/pseudohyphal backgrounds are provided here to illustrate this point. Additionally, the limitations of each approach are examined. Collectively, these mutant allele collections in Saccharomyces cerevisiae and the related pathogenic yeast Candida albicans promise insights toward an advanced understanding of eukaryotic molecular and cellular biology. © The Author 2015. Published by Oxford University Press. All rights reserved.
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
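As background for the variate-generation techniques mentioned, the inverse-transform method can be sketched for an exponential service time, the standard building block of process-based discrete event simulation (the thesis's acceleration techniques themselves are not modeled here):

```python
import math
import random

def exponential_variate(rate, u=None):
    """Inverse-transform generation of an exponentially distributed
    service time with the given rate. If U ~ Uniform(0,1), then
    -ln(1 - U) / rate has the target distribution."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

# Monte Carlo check: the sample mean should approach 1/rate.
random.seed(42)
samples = [exponential_variate(2.0) for _ in range(10000)]
mean_estimate = sum(samples) / len(samples)
```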
NASA Technical Reports Server (NTRS)
Dehoff, R. L.; Reed, W. B.; Trankle, T. L.
1977-01-01
The development and validation of a Spey engine model is described. An analysis of the dynamical interactions involved in the propulsion unit is presented. The model was reduced to contain only significant effects, and was used, in conjunction with flight data obtained from an augmentor wing jet STOL research aircraft, to develop initial estimates of parameters in the system. The theoretical background employed in estimating the parameters is outlined. The software package developed for processing the flight data is described. Results are summarized.
ERIC Educational Resources Information Center
Rocchio, Richard; Lee, Eve
This guide demonstrates a new way of utilizing the planning process within a social movement context in view of developing a state master plan for environmental education. In addition the book serves as a guide to realistic planning, including models, definitions, and examples. The guide contains five parts: Part One - The background, Part Two -…
David P Turner; William D Ritts; Robert E Kennedy; Andrew N Gray; Zhiqiang Yang
2015-01-01
Background: Disturbance is a key influence on forest carbon dynamics, but the complexity of spatial and temporal patterns in forest disturbance makes it difficult to quantify their impacts on carbon flux over broad spatial domains. Here we used a time series of Landsat remote sensing images and a climate-driven carbon cycle process model to evaluate carbon fluxes at...
ERIC Educational Resources Information Center
Aharony, Noa
2006-01-01
Background: The learning context is learning English in an Internet environment. The examination of this learning process was based on Biggs and Moore's teaching-learning model (Biggs & Moore, 1993). Aim: The research aims to explore the use of deep and surface strategies in an Internet environment among EFL students who come from…
ERIC Educational Resources Information Center
Larkin, Peter; Jahoda, Andrew; MacMahon, Ken
2013-01-01
Background: There is an established evidence base concerning the use of anger management interventions with violent offenders who have intellectual disabilities. However, there has been limited research investigating the role of social cognitive factors underpinning problems of aggression. Psychosocial sources of aggression in the non-disabled…
On the potential energy in a gravitationally bound two-body system
NASA Astrophysics Data System (ADS)
Wilhelm, Klaus; Dwivedi, Bhola N.
2015-01-01
The potential energy problem in a gravitationally bound two-body system is studied in the framework of a recently proposed impact model of gravity (Wilhelm et al., 2013). The concept of a closed system has been modified, before the physical processes resulting in the liberation of the potential energy can be described. The energy is extracted from the background flux of hypothetical interaction entities.
The Mechanics of Embodiment: A Dialog on Embodiment and Computational Modeling
Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.
2011-01-01
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamoring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensorimotor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialog between two fictional characters: Ernest, the “experimenter,” and Mary, the “computational modeler.” The dialog consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modeling. PMID:21713184
Multi-scale coupled modelling of waves and currents on the Catalan shelf.
NASA Astrophysics Data System (ADS)
Grifoll, M.; Warner, J. C.; Espino, M.; Sánchez-Arcilla, A.
2012-04-01
Catalan shelf circulation is characterized by a background along-shelf flow to the southwest (including some meso-scale features) plus episodic storm driven patterns. To investigate these dynamics, a coupled multi-scale modeling system is applied to the Catalan shelf (North-western Mediterranean Sea). The implementation consists of a set of increasing-resolution nested models, based on the circulation model ROMS and the wave model SWAN as part of the COAWST modeling system, covering from the slope and shelf region (~1 km horizontal resolution) down to a local area around Barcelona city (~40 m). The system is initialized with MyOcean products in the coarsest outer domain, and uses atmospheric forcing from other sources for the increasing resolution inner domains. Results of the finer resolution domains exhibit improved agreement with observations relative to the coarser model results. Several hydrodynamic configurations were simulated to determine dominant forcing mechanisms and hydrodynamic processes that control coastal scale processes. The numerical results reveal that the short term (hours to days) inner-shelf variability is strongly influenced by local wind variability, while sea-level slope, baroclinic effects, radiation stresses and regional circulation constitute second-order processes. Additional analysis identifies the significance of shelf/slope exchange fluxes, river discharge and the effect of the spatial resolution of the atmospheric fluxes.
A flexible framework for process-based hydraulic and water ...
Background: Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, GI models are generally relatively simplistic. However, GI model predictions are being relied upon by many municipalities and state/local agencies to make decisions about grey vs. green infrastructure improvement planning. Adding complexity to GI modeling frameworks may preclude their use in simpler urban planning situations. Therefore, the goal here was to develop a sophisticated yet flexible tool that could be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media used on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool capable of simulating GI system components and the specific biophysical processes affecting contaminants, such as reactions and particle-associated transport, accurately, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. Framework Features: The process-based model framework developed here can be used to model a diverse range of GI practices such as green roof, retention pond, bioretention, infiltration trench, permeable pavement and
Integration of the Gene Ontology into an object-oriented architecture
Shegogue, Daniel; Zheng, W Jim
2005-01-01
Background To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Results Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). Conclusion We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes. PMID:15885145
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Background correction simulation experiments indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) after correction, exceeding polynomial fitting, Lorentz fitting and the model-free method. All of these methods yield larger SBR values than before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776.
Moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
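A minimal sketch of spline-based background estimation, assuming knots are placed at per-window minima (the paper's knot selection and smoothing procedure may differ), is:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(wavelengths, spectrum, n_knots=8):
    """Estimate the smooth continuous background of a spectrum by fitting
    a cubic spline through window minima, then subtract it. Window
    minima sit near the baseline as long as emission lines are narrow
    relative to the window width."""
    windows = np.array_split(np.arange(len(spectrum)), n_knots)
    knots_x = np.array([wavelengths[w[np.argmin(spectrum[w])]] for w in windows])
    knots_y = np.array([spectrum[w].min() for w in windows])
    bg = CubicSpline(knots_x, knots_y)(wavelengths)
    return spectrum - bg, bg

# Synthetic spectrum: two narrow emission lines on a slowly varying background.
x = np.linspace(400.0, 500.0, 1000)
true_bg = 50.0 + 0.1 * (x - 400.0)
lines = 80.0 * np.exp(-0.5 * ((x - 430.0) / 0.5) ** 2) \
      + 60.0 * np.exp(-0.5 * ((x - 470.0) / 0.5) ** 2)
corrected, bg_est = spline_background(x, true_bg + lines)
```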
Formaldehyde Production From Isoprene Oxidation Across NOx Regimes
NASA Technical Reports Server (NTRS)
Wolfe, G. M.; Kaiser, J.; Hanisco, T. F.; Keutsch, F. N.; de Gouw, J. A.; Gilman, J. B.; Graus, M.; Hatch, C. D.; Holloway, J.; Horowitz, L. W.;
2016-01-01
The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the southeast US, we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a prompt yield of HCHO (molecules of HCHO produced per molecule of freshly emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1 - 2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv^-1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady-state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models underestimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or underestimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO.
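The decomposition into a prompt yield (slope) and a background mixing ratio (intercept) amounts to a linear fit of observed HCHO against freshly emitted isoprene. The synthetic data below are illustrative, not the airborne observations:

```python
import numpy as np

def prompt_yield_and_background(isoprene_ppbv, hcho_ppbv):
    """Estimate the 'prompt' HCHO yield (slope, ppbv per ppbv of fresh
    isoprene) and the background HCHO mixing ratio (intercept, ppbv)
    from a straight-line fit, the decomposition described in the abstract."""
    slope, intercept = np.polyfit(isoprene_ppbv, hcho_ppbv, 1)
    return slope, intercept

# Synthetic low-NOx-like regime: yield 0.3 ppbv/ppbv, background 1.6 ppbv.
rng = np.random.default_rng(7)
isoprene = rng.uniform(0.0, 4.0, 300)
hcho = 1.6 + 0.3 * isoprene + rng.normal(0.0, 0.05, 300)
y, b = prompt_yield_and_background(isoprene, hcho)
```

In the study this fit is repeated within NOx bins, which is what turns the single slope/intercept pair into the NOx-dependent yield and background curves quoted above.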
Formaldehyde production from isoprene oxidation across NOx regimes
Wolfe, G. M.; Kaiser, J.; Hanisco, T. F.; Keutsch, F. N.; de Gouw, J. A.; Gilman, J. B.; Graus, M.; Hatch, C. D.; Holloway, J.; Horowitz, L. W.; Lee, B. H.; Lerner, B. M.; Lopez-Hilifiker, F.; Mao, J.; Marvin, M. R.; Peischl, J.; Pollack, I. B.; Roberts, J. M.; Ryerson, T. B.; Thornton, J. A.; Veres, P. R.; Warneke, C.
2018-01-01
The chemical link between isoprene and formaldehyde (HCHO) is a strong, non-linear function of NOx (= NO + NO2). This relationship is a linchpin for top-down isoprene emission inventory verification from orbital HCHO column observations. It is also a benchmark for overall photochemical mechanism performance with regard to VOC oxidation. Using a comprehensive suite of airborne in situ observations over the Southeast U.S., we quantify HCHO production across the urban-rural spectrum. Analysis of isoprene and its major first-generation oxidation products allows us to define both a “prompt” yield of HCHO (molecules of HCHO produced per molecule of freshly-emitted isoprene) and the background HCHO mixing ratio (from oxidation of longer-lived hydrocarbons). Over the range of observed NOx values (roughly 0.1 – 2 ppbv), the prompt yield increases by a factor of 3 (from 0.3 to 0.9 ppbv ppbv−1), while background HCHO increases by a factor of 2 (from 1.6 to 3.3 ppbv). We apply the same method to evaluate the performance of both a global chemical transport model (AM3) and a measurement-constrained 0-D steady state box model. Both models reproduce the NOx dependence of the prompt HCHO yield, illustrating that models with updated isoprene oxidation mechanisms can adequately capture the link between HCHO and recent isoprene emissions. On the other hand, both models under-estimate background HCHO mixing ratios, suggesting missing HCHO precursors, inadequate representation of later-generation isoprene degradation and/or under-estimated hydroxyl radical concentrations. Detailed process rates from the box model simulation demonstrate a 3-fold increase in HCHO production across the range of observed NOx values, driven by a 100% increase in OH and a 40% increase in branching of organic peroxy radical reactions to produce HCHO. PMID:29619046
Formaldehyde production from isoprene oxidation across NOx regimes.
Wolfe, G M; Kaiser, J; Hanisco, T F; Keutsch, F N; de Gouw, J A; Gilman, J B; Graus, M; Hatch, C D; Holloway, J; Horowitz, L W; Lee, B H; Lerner, B M; Lopez-Hilifiker, F; Mao, J; Marvin, M R; Peischl, J; Pollack, I B; Roberts, J M; Ryerson, T B; Thornton, J A; Veres, P R; Warneke, C
2016-01-01
[Epistemic injustice during the medical education process in the hospital context].
Consejo-Y Chapela, Carolina; Viesca-Treviño, Carlos Alfonso
2017-01-01
The educational model adopted by the Universidad Nacional Autónoma de México (UNAM) Faculty of Medicine is constructivist; it is a model based on competence development. It aims to provide learning environments that incorporate real activities (it helps the students to develop social negotiation skills as part of their integral learning; it encourages them to take a critical and reflexive approach; and it is also a student-centered model). However, many challenges arise when this model is implemented in hospital environments. Therefore, our aim was to analyse the hospital as a hermeneutical community and as a scenario of power relations, in contrast to the constructivist model. In the analysis of a conflict between the chief of a medical department and an undergraduate medical intern, we use Miranda Fricker's categories of discriminatory epistemic injustice and testimonial injustice, as well as Foucault's concepts of power relations and knowledge. The program's implementation is situated in a context of power relations and disciplinary methods that can affect the training of students whose educational background is constructivist. This is due in part to informal normative structures hidden in the process of medical knowledge construction in the hospital setting. Practices of discriminatory epistemic injustice in the hospital environment increase the vulnerability of medical students during their education.
Brooks, Matthew; Graham-Kevan, Nicola; Lowe, Michelle; Robinson, Sarita
2017-09-01
The Cognitive Growth and Stress (CGAS) model draws together cognitive processing factors previously untested within a single model. Intrusive rumination, deliberate rumination, present and future perceptions of control, and event centrality were assessed as predictors of post-traumatic growth (PTG) and post-traumatic stress (PTS). The CGAS model was tested on a sample of survivors (N = 250) of a diverse range of adverse events using structural equation modelling techniques. Overall, the best-fitting model supported the theorized relations between cognitive constructs and accounted for 30% of the variance in PTG and 68% of the variance in PTS across the sample. Rumination, centrality, and perceived control factors are significant determinants of positive and negative psychological change across a wide spectrum of adversarial events. In its first phase of development, the CGAS model also provides further evidence of the distinct processes of growth and distress following adversity. Clinical implications: People can experience positive change after adversity, regardless of life background or types of events experienced. While growth and distress are both possible outcomes after adversity, they occur through distinct processes. Support or intervention should consider rumination, event centrality, and perceived control factors to enhance psychological well-being. Cautions/limitations: Longitudinal research would further clarify the findings of this study. Further extension of the model is recommended to include other viable cognitive processes implicated in the development of positive and negative changes after adversity. © 2017 The British Psychological Society.
Bayesian model selection validates a biokinetic model for zirconium processing in humans
2012-01-01
Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction for radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
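The thermodynamic integration identity used here, log Z = ∫₀¹ E_β[log L] dβ over the power posteriors p_β ∝ prior × likelihood^β, can be illustrated on a toy model. The sketch below replaces the paper's MCMC sampler with grid quadrature on a 1-D conjugate Gaussian model (all numbers invented), so the TI estimate can be checked against direct integration of the evidence:

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

rng = np.random.default_rng(0)
sigma, tau = 1.0, 2.0                        # noise sd, prior sd
y = rng.normal(0.5, sigma, size=20)          # synthetic measurements

theta = np.linspace(-10, 10, 4001)           # 1-D parameter grid
log_prior = -0.5 * (theta / tau) ** 2 - 0.5 * np.log(2 * np.pi * tau ** 2)
log_lik = np.sum(-0.5 * ((y[:, None] - theta) / sigma) ** 2
                 - 0.5 * np.log(2 * np.pi * sigma ** 2), axis=0)

betas = np.linspace(0, 1, 201) ** 5          # ladder dense near beta = 0
e_log_lik = []
for b in betas:
    lw = log_prior + b * log_lik
    w = np.exp(lw - lw.max())
    w /= trapz(w, theta)                     # power-posterior density
    e_log_lik.append(trapz(w * log_lik, theta))
log_z_ti = trapz(np.array(e_log_lik), betas)  # log evidence via TI

# direct quadrature of the evidence (feasible only because this is 1-D)
m = (log_prior + log_lik).max()
log_z_direct = np.log(trapz(np.exp(log_prior + log_lik - m), theta)) + m
```

A Bayes factor between two competing compartment models is then exp(log Z₁ − log Z₂), with each log-evidence estimated this way.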
Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.
2002-01-01
A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating wide varieties of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps subsequently are processed into formats acceptable by an Internet map server (IMS). The IMS application is a user-friendly interface to access the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. For the user, the option exists to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. 
This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.
2011-01-01
Background Eukaryotic cells possess a complex network of RNA machineries which function in RNA-processing and cellular regulation which includes transcription, translation, silencing, editing and epigenetic control. Studies of model organisms have shown that many ncRNAs of the RNA-infrastructure are highly conserved, but little is known from non-model protists. In this study we have conducted a genome-scale survey of medium-length ncRNAs from the protozoan parasites Giardia intestinalis and Trichomonas vaginalis. Results We have identified the previously 'missing' Giardia RNase MRP RNA, which is a key ribozyme involved in pre-rRNA processing. We have also uncovered 18 new H/ACA box snoRNAs, expanding our knowledge of the H/ACA family of snoRNAs. Conclusions Results indicate that Giardia intestinalis and Trichomonas vaginalis, like their distant multicellular relatives, contain a rich infrastructure of RNA-based processing. From here we can investigate the evolution of RNA processing networks in eukaryotes. PMID:22053856
An Effective Method for Modeling Two-dimensional Sky Background of LAMOST
NASA Astrophysics Data System (ADS)
Haerken, Hasitieer; Duan, Fuqing; Zhang, Jiannan; Guo, Ping
2017-06-01
Each CCD of LAMOST accommodates 250 spectra, of which about 40 are used to observe the sky background during real observations. The problem we solve is how to estimate the unknown sky background hidden in the 210 observed celestial spectra by using the 40 known sky spectra. To model the sky background, a pre-observation is usually performed with all fibers observing the sky. We use the 250 skylight spectra so observed as training data, with those observed by the 40 sky fibers taken as a base vector set. The Locality-constrained Linear Coding (LLC) technique is utilized to represent the skylight spectra observed by the 210 fibers in terms of this base vector set. We also segment each spectrum into small parts and establish a local sky background model for each part. Experimental results validate the proposed method and show that the local model is better than the global model.
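The core coding step can be sketched with the common k-nearest-neighbour approximation of LLC (reconstruct each spectrum from its nearest base spectra with sum-to-one coefficients); the data below are synthetic stand-ins for sky-fibre spectra, not LAMOST data:

```python
import numpy as np

def llc_code(x, B, k=5):
    """Approximate locality-constrained linear coding: reconstruct the
    spectrum x from its k nearest base spectra (columns of B) with
    coefficients constrained to sum to one."""
    dist = np.linalg.norm(B - x[:, None], axis=0)
    idx = np.argsort(dist)[:k]              # k nearest sky-fibre spectra
    Bi = B[:, idx] - x[:, None]             # shift bases to the data point
    C = Bi.T @ Bi                           # local covariance
    C = C + 1e-8 * np.trace(C) * np.eye(k)  # regularise for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                            # enforce sum-to-one constraint
    code = np.zeros(B.shape[1])
    code[idx] = w
    return code

rng = np.random.default_rng(0)
B = rng.normal(size=(100, 40))              # 40 sky-fibre spectra as bases
x = 0.5 * (B[:, 0] + B[:, 1])               # a sky component in their span
code = llc_code(x, B)
```

Applying such a coder per spectral segment, rather than to the whole spectrum, gives the local model the paper finds superior.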
System Dynamics Modeling for Public Health: Background and Opportunities
Homer, Jack B.; Hirsch, Gary B.
2006-01-01
The systems modeling methodology of system dynamics is well suited to address the dynamic complexity that characterizes many public health issues. The system dynamics approach involves the development of computer simulation models that portray processes of accumulation and feedback and that may be tested systematically to find effective policies for overcoming policy resistance. System dynamics modeling of chronic disease prevention should seek to incorporate all the basic elements of a modern ecological approach, including disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. System dynamics shows promise as a means of modeling multiple interacting diseases and risks, the interaction of delivery systems and diseased populations, and matters of national and state policy. PMID:16449591
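The accumulation-and-feedback structure described above can be sketched as a minimal stock-and-flow simulation. The model and every rate below are invented for illustration, not taken from the paper: two stocks, two flows, and one balancing feedback where prevention effort grows with prevalence and damps incidence:

```python
# Minimal system-dynamics sketch: Euler integration of two stocks with
# a balancing feedback loop (all rates illustrative).
def simulate(years=50, dt=0.1):
    healthy, diseased = 900.0, 100.0        # stocks (people)
    base_incidence, recovery = 0.05, 0.10   # per-year fractional rates
    history = []
    for _ in range(int(years / dt)):
        prevention = diseased / (healthy + diseased)      # feedback signal
        incidence = base_incidence * (1 - prevention) * healthy
        recoveries = recovery * diseased
        healthy += dt * (recoveries - incidence)          # accumulation
        diseased += dt * (incidence - recoveries)
        history.append(diseased)
    return history

traj = simulate()
```

The trajectory rises and then settles at the equilibrium where inflow balances outflow, the kind of behaviour-over-time output a system dynamics model is tested against.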
Optimizing an immersion ESL curriculum using analytic hierarchy process.
Tang, Hui-Wen Vivian
2011-11-01
The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative importance of course criteria for the purpose of tailoring an optimal one-week immersion English as a second language (ESL) curriculum for elementary school students in a suburban county of Taiwan. The hierarchy model and AHP analysis utilized in the present study will be useful for resolving several important multi-criteria decision-making issues in planning and evaluating ESL programs. This study also offers valuable insights and provides a basis for further research in customizing ESL curriculum models for different student populations with distinct learning needs, goals, and socioeconomic backgrounds. Copyright © 2011 Elsevier Ltd. All rights reserved.
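The AHP step described above reduces to extracting the principal eigenvector of a pairwise comparison matrix and checking Saaty's consistency ratio. A sketch with an invented 3-criterion matrix (the paper's actual criteria and judgments are not reproduced here):

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector priority weights for an AHP pairwise
    comparison matrix, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                # Perron (largest) eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                            # normalised priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri                       # weights, consistency ratio

# criteria compared pairwise on Saaty's 1-9 scale (illustrative numbers)
A = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the usual threshold for accepting the expert judgments; the weights then rank the course criteria.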
Feed-forward segmentation of figure-ground and assignment of border-ownership.
Supèr, Hans; Romeo, August; Keil, Matthias
2010-05-19
Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
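The surround-inhibition principle can be illustrated with a rate-based toy (not the paper's spiking model): each unit is suppressed by the mean activity in its surround, so a uniform background is damped while a figure patch, and especially its borders, remains relatively enhanced:

```python
import numpy as np

def surround_inhibition(image, radius=3, gain=0.9):
    """Subtract a fraction of the local surround mean from each unit,
    with half-wave rectification (rate-based sketch)."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            surround = image[i0:i1, j0:j1].mean()
            out[i, j] = max(0.0, image[i, j] - gain * surround)
    return out

scene = np.zeros((20, 20))
scene[8:13, 8:13] = 1.0                     # figure on empty background
resp = surround_inhibition(scene)
```

In the output, figure corners and borders respond more strongly than the figure interior, and the background is silenced, a feedforward analogue of the figure-border enhancement the model exploits.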
Preface: Current perspectives in modelling, monitoring, and predicting geophysical fluid dynamics
NASA Astrophysics Data System (ADS)
Mancho, Ana M.; Hernández-García, Emilio; López, Cristóbal; Turiel, Antonio; Wiggins, Stephen; Pérez-Muñuzuri, Vicente
2018-02-01
The third edition of the international workshop "Nonlinear Processes in Oceanic and Atmospheric Flows" was held at the Institute of Mathematical Sciences (ICMAT) in Madrid from 6 to 8 July 2016. The event gathered oceanographers, atmospheric scientists, physicists, and applied mathematicians sharing a common interest in the nonlinear dynamics of geophysical fluid flows. The philosophy of this meeting was to bring together researchers from a variety of backgrounds into an environment that favoured a vigorous discussion of concepts across different disciplines. The present Special Issue on "Current perspectives in modelling, monitoring, and predicting geophysical fluid dynamics" contains selected contributions, mainly from attendants of the workshop, providing an updated perspective on modelling aspects of geophysical flows as well as issues on prediction and assimilation of observational data and novel tools for describing transport and mixing processes in these contexts. More details on these aspects are discussed in this preface.
Search for microscopic black holes in pp collisions at $$ \\sqrt{s}=8 $$ TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
2013-07-01
A search for microscopic black holes and string balls is presented, based on a data sample of pp collisions at sqrt(s) = 8 TeV recorded by the CMS experiment at the Large Hadron Collider and corresponding to an integrated luminosity of 12 inverse femtobarns. No excess of events with energetic multiparticle final states, typical of black hole production or of similar new physics processes, is observed. Given the agreement of the observations with the expected standard model background, which is dominated by QCD multijet production, 95% confidence limits are set on the production of semiclassical or quantum black holes, or of string balls, corresponding to the exclusion of masses below 4.3 to 6.2 TeV, depending on model assumptions. In addition, model-independent limits are set on new physics processes resulting in energetic multiparticle final states.
A Model for Oil-Gas Pipelines Cost Prediction Based on a Data Mining Process
NASA Astrophysics Data System (ADS)
Batzias, Fragiskos A.; Spanidis, Phillip-Mark P.
2009-08-01
This paper addresses the problems associated with the cost estimation of oil/gas pipelines during the elaboration of feasibility assessments. Techno-economic parameters, i.e., cost, length and diameter, are critical for such studies at the preliminary design stage. A methodology for the development of a cost prediction model based on a Data Mining (DM) process is proposed. The design and implementation of a Knowledge Base (KB), maintaining data collected from various disciplines of the pipeline industry, are presented. The formulation of a cost prediction equation is demonstrated by applying multiple regression analysis using data sets extracted from the KB. Following the proposed methodology, a learning context is developed inductively as background pipeline data are acquired, grouped, and stored in the KB; a linear regression model built on these data then provides statistically substantial results, useful for project managers and decision makers.
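The regression step can be sketched in a few lines of NumPy. The data below are synthetic (invented coefficients and noise), standing in for records extracted from the paper's knowledge base:

```python
import numpy as np

# Multiple regression of pipeline cost on length and diameter
# (ordinary least squares on synthetic data, units illustrative).
rng = np.random.default_rng(1)
length_km = rng.uniform(10, 500, 80)
diameter_in = rng.uniform(8, 48, 80)
cost_musd = 0.8 * length_km + 1.5 * diameter_in + 20 + rng.normal(0, 5, 80)

X = np.column_stack([length_km, diameter_in, np.ones_like(length_km)])
coef, *_ = np.linalg.lstsq(X, cost_musd, rcond=None)

def predict_cost(length, diameter):
    """Point estimate of cost (million USD) for a candidate pipeline."""
    return coef[0] * length + coef[1] * diameter + coef[2]
```

At the feasibility stage such a fitted equation gives a quick cost screen before detailed engineering estimates are available.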
Christianson, Kiel
2016-01-01
This paper contains an overview of language processing that can be described as "good enough", "underspecified", or "shallow". The central idea is that a nontrivial proportion of misunderstanding, misinterpretation, and miscommunication can be attributed not to random error, but instead to processing preferences of the human language processing system. In other words, the very architecture of the language processor favours certain types of processing errors because in a majority of instances, this "fast and frugal", less effortful processing is good enough to support communication. By way of historical background, connections are made between this relatively recent facet of psycholinguistic study, other recent language processing models, and related concepts in other areas of cognitive science. Finally, the nine papers included in this special issue are introduced as representative of novel explorations of good-enough, or underspecified, language processing.
Readout circuit with novel background suppression for long wavelength infrared focal plane arrays
NASA Astrophysics Data System (ADS)
Xie, L.; Xia, X. J.; Zhou, Y. F.; Wen, Y.; Sun, W. F.; Shi, L. X.
2011-02-01
In this article, a novel pixel readout circuit using a switched-capacitor integrator mode background suppression technique is presented for long wavelength infrared focal plane arrays. This circuit can improve dynamic range and signal-to-noise ratio by suppressing the large background current during integration. Compared with other background suppression techniques, the new background suppression technique is less sensitive to the process mismatch and has no additional shot noise. The proposed circuit is theoretically analysed and simulated while taking into account the non-ideal characteristics. The result shows that the background suppression non-uniformity is ultra-low even for a large process mismatch. The background suppression non-uniformity of the proposed circuit can also remain very small with technology scaling.
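Why removing the background current before integration helps can be shown with back-of-envelope numbers (a behavioural sketch with illustrative values, not the paper's circuit parameters):

```python
# Integrating a long-wave IR pixel: the background photocurrent is much
# larger than the signal, so a plain integrator saturates first.
C = 1e-12                                   # integration capacitor, F
t_int = 1e-3                                # integration time, s
i_bg, i_sig = 10e-9, 0.1e-9                 # background / signal currents, A

v_plain = (i_bg + i_sig) * t_int / C        # conventional integrator output
v_suppressed = i_sig * t_int / C            # background current removed

swing = 2.0                                 # usable voltage swing, V
```

With these numbers the conventional integrator would need over 10 V of swing, while the suppressed pixel uses only 0.1 V, leaving the full swing (and hence dynamic range) for the signal.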
NASA Astrophysics Data System (ADS)
Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru
In recent years, traffic accidents have occurred frequently as traffic density has exploded. We therefore believe a safe and comfortable transportation system that protects pedestrians, the most vulnerable road users, is necessary. First, we detect and recognize pedestrians (people crossing) by image processing. Next, we inform all drivers turning right or left that a pedestrian is present, by sound, images, and so on. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the method of updating the background is important; in the conventional approach, the threshold values for the subtraction processing and the background update were identical. That is, the mixing rate of the input image and the background image in the background update was a fixed value, and fine tuning in response to changes in weather conditions was difficult. We therefore propose a background-image update method in which estimation errors are difficult to amplify. We experiment with and examine five conditions: sunshine, cloud, evening, rain, and changing sunlight (night excluded). This technique can set separate threshold values for the subtraction processing and the background update to suit environmental conditions such as the weather, so the mixing rate of the input image and the background image in the background update can be tuned freely. Because parameter settings suited to the environmental conditions are important for minimizing the error rate, we also examine how to set the parameters.
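The separate-threshold scheme described above can be sketched per frame as follows (parameter names and values are illustrative; the point is that detection and update use different thresholds, and only background-like pixels are mixed into the model):

```python
import numpy as np

def step(frame, background, t_detect=30, t_update=15, alpha=0.05):
    """One background-subtraction step with a running-average update.

    t_detect: threshold for declaring a pixel foreground
    t_update: stricter threshold for letting a pixel update the model
    alpha:    mixing rate of the input image into the background image
    """
    diff = np.abs(frame.astype(float) - background)
    foreground = diff > t_detect
    stable = diff <= t_update               # background-like pixels only
    background = np.where(stable,
                          alpha * frame + (1 - alpha) * background,
                          background)
    return foreground, background

bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[0, 0] = 200                           # a pedestrian-like pixel
fg, bg2 = step(frame, bg)
```

Because the update threshold is stricter than the detection threshold, moving-object pixels never leak into the background model, which is what keeps estimation errors from being amplified.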
Argumentation in Science Education: A Model-based Framework
NASA Astrophysics Data System (ADS)
Böttcher, Florian; Meisert, Anke
2011-02-01
The goal of this article is threefold. First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models is presented to demonstrate the explicatory power and depth of the model-based perspective. Primarily, Toulmin's framework for structurally analysing arguments is contrasted with the approach presented here. It is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence is analysed according to the introduced analytical scheme to give a broader impression of its potential in practical use.
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a 'most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
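The MRF-plus-simulated-annealing idea can be shown on a far simpler problem than PTBS labeling: a two-label Potts sketch where a data term ties labels to pixel intensity and a smoothness term to the 4-neighbourhood (everything below is a generic illustration, not the IRA system):

```python
import numpy as np

def segment(img, beta=0.5, sweeps=30, t0=1.0, seed=7):
    """Binary MRF segmentation by simulated annealing with Metropolis
    flips and geometric cooling (toy sketch)."""
    rng = np.random.default_rng(seed)
    lab = (img > 0.5).astype(float)         # initial labeling by threshold
    h, w = img.shape
    for s in range(sweeps):
        T = t0 * 0.8 ** s                   # cooling schedule
        for i in range(h):
            for j in range(w):
                def energy(l):
                    e = (img[i, j] - l) ** 2          # data term
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (l != lab[ni, nj])  # smoothness
                    return e
                d_e = energy(1 - lab[i, j]) - energy(lab[i, j])
                if d_e < 0 or rng.random() < np.exp(-d_e / T):
                    lab[i, j] = 1 - lab[i, j]
    return lab

rng = np.random.default_rng(0)
clean = np.zeros((12, 12))
clean[4:8, 4:8] = 1.0                       # a "target" in clutter-free form
noisy = clean + rng.normal(0, 0.2, clean.shape)
lab = segment(noisy)
```

As the temperature falls, the sampler settles into a low-energy labeling; the smoothness term is what removes isolated clutter-induced label flips, the same role the NR plays for PTBS sites.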
Model of flare lightcurve profile observed in soft X-rays
NASA Astrophysics Data System (ADS)
Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz
We propose a new model for describing solar flare lightcurve profiles observed in soft X-rays. The method assumes that single-peaked 'regular' flares seen in lightcurves can be fitted with an elementary time profile that is a convolution of Gaussian and exponential functions. More complex, multi-peaked flares can be decomposed as a sum of elementary profiles. During the lightcurve fitting process a linear background is determined as well; we allow the background over the event to change linearly with time. The presented approach was originally developed for the small soft X-ray flares recorded by the Polish spectrophotometer SphinX during the very deep solar activity minimum between solar cycles 23 and 24. However, the method can and will be used to interpret lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. In the paper we introduce the model and present example fits to SphinX and GOES 1-8 Å channel observations.
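The elementary profile, a Gaussian convolved with an exponential decay on a linear background, can be generated directly (all parameter values below are illustrative, not SphinX calibrations):

```python
import numpy as np

# Elementary flare profile: Gaussian heating kernel convolved with an
# exponential decay, sitting on a linear background.
t = np.arange(0.0, 600.0, 1.0)               # time, s
mu, sigma, tau = 150.0, 20.0, 80.0           # peak position, width, decay
gauss = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
decay = np.exp(-t / tau)
flare = np.convolve(gauss, decay)[: t.size]  # elementary time profile
flare /= flare.max()                         # normalise peak to 1
background = 1e-7 + 2e-10 * t                # linear background term
lightcurve = background + 5e-7 * flare       # single-peaked 'regular' flare
```

A multi-peaked flare is then modelled as a sum of such elementary profiles with different amplitudes and timings, fitted together with the two background coefficients.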
Blueprint XAS: a Matlab-based toolbox for the fitting and analysis of XAS spectra.
Delgado-Jaime, Mario Ulises; Mewis, Craig Philip; Kennepohl, Pierre
2010-01-01
Blueprint XAS is a new Matlab-based program developed to fit and analyse X-ray absorption spectroscopy (XAS) data, most specifically in the near-edge region of the spectrum. The program is based on a methodology that introduces a novel background model into the complete fit model and that is capable of generating any number of independent fits with minimal introduction of user bias [Delgado-Jaime & Kennepohl (2010), J. Synchrotron Rad. 17, 119-128]. The functions and settings on the five panels of its graphical user interface are designed to suit the needs of near-edge XAS data analyzers. A batch function allows for the setting of multiple jobs to be run with Matlab in the background. A unique statistics panel allows the user to analyse a family of independent fits, to evaluate fit models and to draw statistically supported conclusions. The version introduced here (v0.2) is currently a toolbox for Matlab. Future stand-alone versions of the program will also incorporate several other new features to create a full package of tools for XAS data processing.
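The idea of fitting a near-edge spectrum as an edge jump plus features on a background can be sketched in NumPy. Everything below, energies, widths, and amplitudes, is invented for illustration and is not Blueprint XAS's background model; with the positions and widths held fixed, the amplitudes become linear parameters recoverable by least squares:

```python
import numpy as np

# Toy near-edge model: constant offset + edge-jump step + pre-edge peak.
E = np.linspace(2460.0, 2490.0, 300)                  # energy grid, eV
step = 0.5 * (1 + np.tanh((E - 2472.0) / 1.5))        # edge-jump shape
peak = np.exp(-0.5 * ((E - 2469.5) / 0.7) ** 2)       # pre-edge feature

rng = np.random.default_rng(3)
data = 0.02 + 1.0 * step + 0.35 * peak + rng.normal(0, 0.01, E.size)

A = np.column_stack([np.ones_like(E), step, peak])
amps, *_ = np.linalg.lstsq(A, data, rcond=None)       # [offset, edge, peak]
```

Repeating such fits from many starting models, as the program's batch and statistics panels do, is what allows statistically supported conclusions about the fitted intensities.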
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; 
Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, 
J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Rott, C; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; 
Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2008-05-30
We search for the standard model Higgs boson produced in association with an electroweak vector boson in events with no identified charged leptons, large imbalance in transverse momentum, and two jets where at least one contains a secondary vertex consistent with the decay of b hadrons. We use approximately 1 fb⁻¹ of integrated luminosity of pp̄ collisions at √s = 1.96 TeV recorded by the Collider Detector at Fermilab (CDF II) at the Tevatron. We find 268 (16) single (double) b-tagged candidate events, where 248 ± 43 (14.4 ± 2.7) are expected from standard model background processes. We observe no significant excess over the expected background and thus set 95% confidence level upper limits on the Higgs boson production cross section for several Higgs boson masses ranging from 110 to 140 GeV/c². For a mass of 115 GeV/c², the observed (expected) limit is 20.4 (14.2) times the standard model prediction.
Simulated cosmic microwave background maps at 0.5 deg resolution: Unresolved features
NASA Technical Reports Server (NTRS)
Kogut, A.; Hinshaw, G.; Bennett, C. L.
1995-01-01
High-contrast peaks in the cosmic microwave background (CMB) anisotropy can appear as unresolved sources to observers. We fit simulated CMB maps generated with a cold dark matter model to a set of unresolved features at instrumental resolution 0.5–1.5 deg to derive the integral number density per steradian n(>|T|) of features brighter than threshold temperature |T|, and compare the results to recent experiments. A typical medium-scale experiment observing 0.001 sr at 0.5 deg resolution would expect to observe one feature brighter than 85 micro-K after convolution with the beam profile, with less than 5% probability of observing a source brighter than 150 micro-K. Increasing the power-law index of primordial density perturbations n from 1 to 1.5 raises these temperature limits |T| by a factor of 2. The MSAM features are in agreement with standard cold dark matter models and are not necessarily evidence for processes beyond the standard model.
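As an illustration of the n(>|T|) statistic described above, here is a minimal Python sketch that counts simulated map features brighter than the 85 and 150 micro-K thresholds quoted in the abstract. The Gaussian stand-in for peak temperatures and the sample size are illustrative assumptions only; the paper derives its counts from cold-dark-matter map simulations, not a plain Gaussian.

```python
import random

def cumulative_feature_count(peak_temps, thresholds):
    """n(>|T|): number of features brighter than each threshold |T|."""
    return [sum(1 for t in peak_temps if abs(t) > thr) for thr in thresholds]

# Stand-in peak temperatures (micro-K) for beam-convolved map features;
# a real analysis would take these from CDM map simulations.
rng = random.Random(0)
peaks = [rng.gauss(0.0, 40.0) for _ in range(1000)]
counts = cumulative_feature_count(peaks, [85.0, 150.0])
```

By construction the counts are cumulative, so the count above 150 micro-K can never exceed the count above 85 micro-K.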
Navas, Francisco Javier; Jordana, Jordi; León, José Manuel; Arando, Ander; Pizarro, Gabriela; McLean, Amy Katherine; Delgado, Juan Vicente
2017-08-01
New productive niches can offer new commercial perspectives linked to donkeys' products and human therapeutic or leisure applications. However, no assessment of selection criteria has been carried out yet. First, we assessed the animal-inherent features and environmental factors that may potentially influence several cognitive processes in donkeys. Then, we aimed at describing a practical methodology to quantify such cognitive processes, seeking their inclusion in breeding and conservation programmes, through a multifactorial linear model. Sixteen cognitive process-related traits were scored on a problem-solving test in a sample of 300 Andalusian donkeys over three consecutive years, from 2013 to 2015. The linear model assessed the influence of four environmental factors, sex as an animal-inherent factor, and age as a covariable, together with the interactions between these factors. Analyses of variance were performed with the GLM procedure of SPSS Statistics for Windows, Version 24.0, to assess the relative importance of each factor. All traits were significantly (P<0.05) affected by all factors in the model, except for sex, which was not significant for some of the cognitive processes, and stimulus, which was not significant (P>0.05) for all of them except the coping-style-related ones. The interaction between all factors within the model was non-significant (P>0.05) for almost all cognitive processes. The development of complex multifactorial models to study cognitive processes may help counteract the inherent variability in behaviour genetics and improve the estimation and prediction of related breeding parameters, which are key for the implementation of successful conservation programmes in apparently functionally misplaced endangered breeds.
NASA Astrophysics Data System (ADS)
Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.
1995-01-01
Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single-vehicle roadway-departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as by weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast performance effects due to reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge-coupled device (CCD) with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.
LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations
NASA Astrophysics Data System (ADS)
Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton
2016-12-01
Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.
Predicting the performance of linear optical detectors in free space laser communication links
NASA Astrophysics Data System (ADS)
Farrell, Thomas C.
2018-05-01
While the fundamental performance limit for optical communications is set by the quantum nature of light, in practical systems background light, dark current, and thermal noise of the electronics also degrade performance. In this paper, we derive a set of equations predicting the performance of PIN diodes and linear mode avalanche photo diodes (APDs) in the presence of such noise sources. Electrons generated by signal, background, and dark current shot noise are well modeled in PIN diodes as Poissonian statistical processes. In APDs, on the other hand, the amplifying effects of the device result in statistics that are distinctly non-Poissonian. Thermal noise is well modeled as Gaussian. In this paper, we appeal to the central limit theorem and treat both the variability of the signal and the sum of noise sources as Gaussian. Comparison against Monte-Carlo simulation of PIN diode performance (where we do model shot noise with draws from a Poissonian distribution) validates the legitimacy of this approximation. On-off keying, M-ary pulse position, and binary differential phase shift keying modulation are modeled. We conclude with examples showing how the equations may be used in a link budget to estimate the performance of optical links using linear receivers.
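The central-limit treatment described above can be illustrated with a small sketch: an on-off-keying bit error rate computed in closed form under the Gaussian approximation, checked against a Monte Carlo that draws true Poisson photoelectron counts. The mean counts, threshold rule, and trial count are illustrative assumptions, not values from the paper.

```python
import math
import random

def poisson(lam, rng):
    """Draw from a Poisson distribution (Knuth's method; fine for small means)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ook_ber_gaussian(n_signal, n_background):
    """Closed-form OOK bit error rate under the Gaussian (central-limit)
    approximation to signal and background shot noise."""
    m0, m1 = n_background, n_background + n_signal
    s0, s1 = math.sqrt(m0), math.sqrt(m1)
    q = (m1 - m0) / (s0 + s1)                   # the usual Q-factor
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def ook_ber_monte_carlo(n_signal, n_background, trials, rng):
    """Estimate the same BER with true Poissonian photoelectron counts."""
    m0, m1 = n_background, n_background + n_signal
    s0, s1 = math.sqrt(m0), math.sqrt(m1)
    thresh = (m0 * s1 + m1 * s0) / (s0 + s1)    # Gaussian-optimal threshold
    errors = 0
    for _ in range(trials):
        if rng.random() < 0.5:                  # bit "0": background only
            errors += poisson(m0, rng) > thresh
        else:                                   # bit "1": signal + background
            errors += poisson(m1, rng) <= thresh
    return errors / trials

rng = random.Random(1)
ber_gauss = ook_ber_gaussian(20.0, 20.0)        # assumed mean photoelectron counts
ber_mc = ook_ber_monte_carlo(20.0, 20.0, 20000, rng)
```

With these modest mean counts, the Monte Carlo estimate lands close to the Gaussian prediction, which is the kind of validation the paper performs for its PIN-diode model.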
Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek
2018-01-01
Background: The six-minute walk test (6MWT) is considered a simple and inexpensive tool for assessing functional tolerance of submaximal effort. The aims of this work were 1) to establish the nonlinear nature of the energy expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test with those of the 6MWT in pulmonary patients, and 3) to develop nonlinear mathematical models relating the two. Methods: The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, least-squares and genetic algorithms were employed to estimate the parameters of polynomial-expansion and piecewise-linear models. Results: Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test from the average walk velocity (or distance) in the 6MWT. Conclusions: Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
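A minimal sketch of the least-squares step described above: fitting a quadratic (one simple polynomial-expansion model) that maps 6MWT walk velocity to a MET estimate. The data points are synthetic, generated from a known quadratic so the fit can be checked; the paper's actual fitted coefficients and genetic-algorithm refinement are not reproduced here.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x**2 via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]           # power sums
    A = [[s[i + j] for j in range(3)] for i in range(3)]      # X^T X
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]  # X^T y
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0, 0.0, 0.0]                                    # back substitution
    for i in (2, 1, 0):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta  # [a, b, c]

# Hypothetical (walk velocity, MET) pairs drawn from a known quadratic.
xs = [2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.0 + 0.2 * x + 0.15 * x * x for x in xs]
a, b_lin, c = fit_quadratic(xs, ys)
```

Because the synthetic data are exactly quadratic, the fit recovers the generating coefficients, which makes the normal-equation solver easy to verify.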
Spatial-frequency spectrum of patterns changes the visibility of spatial-phase differences
NASA Technical Reports Server (NTRS)
Lawton, T. B.
1985-01-01
It is shown that spatial-frequency components over a 4-octave range affected the visibility of spatial-phase differences. Contrast thresholds were measured for discrimination between two (+45- and -45-deg) spatial phases of a sinusoidal test grating added to a background grating. The background could contain one or several sinusoidal components, all in 0-deg phase. Phase differences between the test and the background were visible at lower contrasts when test and background frequencies were harmonically related than when they were not, when test and background frequencies were within 1 octave than when they were farther apart, when the fundamental frequency of the background was low than when it was high, and for some discriminations more than for others, after practice. The visibility of phase differences was not affected by additional components in the background if the fundamental and difference frequencies of the background remained unchanged. Observers' reports of their strategies gave information about the types of attentive processing that were used to discriminate phase differences. Attentive processing facilitated phase discrimination for multifrequency gratings spanning a much wider range of spatial frequencies than would be possible by using only local preattentive processing. These results were consistent with the visibility of phase differences being processed by some combination of even- and odd-symmetric simple cells tuned to a wide range of different spatial frequencies.
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain," which reduce mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on a representation of quantum mechanics as a version of classical signal theory, recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays the crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains of the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations.
Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.
2012-01-01
Background noise due to flow in wind tunnels contaminates desired data by decreasing the signal-to-noise ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The proposed technique modifies the classical processing configuration based on the cross-correlation between the reference and primary microphones. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and the desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
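A minimal sketch of the classical adaptive-noise-cancellation configuration that the paper modifies: an LMS filter driven by a reference channel, with a matched delay between reference and primary. Filter length, step size, and signal parameters are illustrative assumptions, not the paper's settings.

```python
import math
import random

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """Time-domain LMS adaptive noise canceller (generic sketch).
    Returns the error signal: the primary channel with the
    reference-correlated background removed."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # background estimate
        e = primary[n] - y                          # error = cleaned sample
        for k in range(taps):                       # LMS weight update
            w[k] += 2.0 * mu * e * x[k]
        out.append(e)
    return out

rng = random.Random(2)
N = 4000
delay = 3                                           # matched delay, in samples
background = [rng.gauss(0.0, 1.0) for _ in range(N)]
desired = [0.3 * math.sin(0.05 * n) for n in range(N)]
# Primary microphone hears the desired signal plus delayed background;
# the reference microphone hears the background only.
primary = [desired[n] + (background[n - delay] if n >= delay else 0.0)
           for n in range(N)]
cleaned = lms_cancel(primary, background)
```

Because the reference is uncorrelated with the desired tone, the adaptive filter converges toward a pure delay and the residual after convergence is a small fraction of the raw background power.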
NASA Technical Reports Server (NTRS)
Zycki, Piotr T.; Zdziarski, Andrzej A.; Svensson, Roland
1991-01-01
We reconsider the recent model by Rogers and Field for the origin of the cosmic X-ray and gamma-ray background. The background in the model is due to an unresolved population of AGNs. An individual AGN spectrum contains three components: a power law with energy index α = 1.1, an enhanced reflection component, and a component from Compton scattering by relativistic electrons with a low-energy cutoff at some minimum Lorentz factor γ_min ≫ 1. The MeV bump seen in the gamma-ray background is then explained by inverse Compton emission by the electrons. We show that the model does not reproduce the shape of the observed X-ray and gamma-ray background below 10 MeV and that it overproduces the background at larger energies. Furthermore, we find the assumptions made for the Compton component to be physically inconsistent. Relaxing the inconsistent assumptions leads to model spectra even more different from that of the observed cosmic background. Thus, we can reject the hypothesis that the high-energy cosmic background is due to the described model.
Radiogenic and muon-induced backgrounds in the LUX dark matter detector
NASA Astrophysics Data System (ADS)
Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Bernard, E.; Bernstein, A.; Bradley, A.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chapman, J. J.; Chiller, A. A.; Chiller, C.; Coffey, T.; Currie, A.; de Viveiros, L.; Dobi, A.; Dobson, J.; Druszkiewicz, E.; Edwards, B.; Faham, C. H.; Fiorucci, S.; Flores, C.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C.; Hertel, S. A.; Horn, M.; Huang, D. Q.; Ihm, M.; Jacobsen, R. G.; Kazkaz, K.; Knoche, R.; Larsen, N. A.; Lee, C.; Lindote, A.; Lopes, M. I.; Malling, D. C.; Mannino, R.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H.; Neves, F.; Ott, R. A.; Pangilinan, M.; Parker, P. D.; Pease, E. K.; Pech, K.; Phelps, P.; Reichhart, L.; Shutt, T.; Silva, C.; Solovov, V. N.; Sorensen, P.; O'Sullivan, K.; Sumner, T. J.; Szydagis, M.; Taylor, D.; Tennyson, B.; Tiedt, D. R.; Tripathi, M.; Uvarov, S.; Verbus, J. R.; Walsh, N.; Webb, R.; White, J. T.; Witherell, M. S.; Wolfs, F. L. H.; Woods, M.; Zhang, C.
2015-03-01
The Large Underground Xenon (LUX) dark matter experiment aims to detect rare low-energy interactions from Weakly Interacting Massive Particles (WIMPs). The radiogenic backgrounds in the LUX detector have been measured and compared with Monte Carlo simulation. Measurements of LUX high-energy data have provided direct constraints on all background sources contributing to the background model. The expected background rate from the background model for the 85.3 day WIMP search run is (2.6 ± 0.2_stat ± 0.4_sys) × 10⁻³ events keVee⁻¹ kg⁻¹ day⁻¹ in a 118 kg fiducial volume. The observed background rate is (3.6 ± 0.4_stat) × 10⁻³ events keVee⁻¹ kg⁻¹ day⁻¹, consistent with model projections. The expectation for the radiogenic background in a subsequent one-year run is presented.
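Using only the rates quoted above, a quick consistency check (assuming the quoted uncertainties are independent and Gaussian, which the abstract does not state) shows the observed rate lies well within about 2σ of the model projection:

```python
import math

# Rates from the abstract, in units of 1e-3 events / keVee / kg / day.
expected, stat_exp, sys_exp = 2.6, 0.2, 0.4
observed, stat_obs = 3.6, 0.4

# Combine the quoted uncertainties in quadrature.
sigma = math.sqrt(stat_exp ** 2 + sys_exp ** 2 + stat_obs ** 2)
z = (observed - expected) / sigma   # roughly 1.7 sigma: consistent
```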
A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering
NASA Astrophysics Data System (ADS)
Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.
1990-06-01
A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge sensitive preamplifier was developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low frequency background light component. The background signal is amplified with a computer controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Z_eff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.
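The delay-line subtraction described above can be sketched in a few lines: subtracting a delayed copy of the signal cancels the slowly varying background light while preserving a fast scattering pulse. Sample units and amplitudes here are arbitrary illustrations; the hardware uses a 100 ns analog delay.

```python
def delay_line_subtract(signal, delay_samples):
    """Differential background subtraction: subtract a delayed copy so the
    slowly varying background cancels while a fast pulse survives.
    Samples earlier than one delay pass through unchanged, and the delayed
    pulse reappears as a negative echo one delay later."""
    return [signal[n] - (signal[n - delay_samples] if n >= delay_samples else 0.0)
            for n in range(len(signal))]

# Slowly drifting background plus a short scattering pulse at sample 50.
sig = [0.5 + 0.001 * n for n in range(100)]
sig[50] += 10.0
out = delay_line_subtract(sig, 10)
```

After subtraction the drift contributes only a tiny constant offset (the drift accumulated over one delay), while the pulse amplitude is essentially unchanged.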
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model built at JPL to estimate the costs of future deep space robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as Develop New Products (DNP), high-level historical cost data is no longer considered analogous to future missions. The historical data is therefore of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion, and for combining actual data with expert opinion, to provide a cost database for future missions. In addition, the DNP cost model uses as many objective cost drivers as possible, which reduces the likelihood of model input error. Version 2 is now under development; it expands the model's capabilities, links it more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model are discussed, as well as its background, development approach, status, validation, and future plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colaneri, Luca
2017-04-01
With the experimental discovery of the Higgs boson, the Standard Model has been considered verified in all its predictions. The Standard Model, though, is still considered an incomplete theory, because it fails to address many theoretical and phenomenological issues. Among those, it does not provide any viable Dark Matter candidate. Many Beyond-Standard-Model theories, such as the Supersymmetric Standard Model, provide possible solutions. In this work we have reported the experimental observations that led to postulating the existence of a new force, mediated by a new massive vector boson, that could address all the observed phenomenology. This new dark force could open an observational channel between the Standard Model and a new Dark Sector, conveyed by the interaction of the Standard Model photon with the massive dark photon, also called the A'. The purpose of this work was to develop an independent study of the background processes and an implementation of an independent event generator, to better understand the kinematics of the particles produced in the process e⁻ + W → e⁻ + W′ + e⁺ + e⁻ and to validate, or invalidate, the official event generator.
ERIC Educational Resources Information Center
Steinley, Gary
1989-01-01
Examines the processing order between the comprehension of a text and the use of comprehended ideas for such thinking tasks as comparing, evaluating, and problem solving. Finds that readers with limited background knowledge read in a more linear fashion than those with extensive background, who read in a parallel manner. (RS)
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Divakaran, Ajay; Xiong, Ziyou; Otsuka, Isao
2006-12-01
We propose a content-adaptive analysis and representation framework to discover events using audio features from "unscripted" multimedia such as sports and surveillance for summarization. The proposed analysis framework performs an inlier/outlier-based temporal segmentation of the content. It is motivated by the observation that "interesting" events in unscripted multimedia occur sparsely in a background of usual or "uninteresting" events. We treat the sequence of low/mid-level features extracted from the audio as a time series and identify subsequences that are outliers. The outlier detection is based on eigenvector analysis of the affinity matrix constructed from statistical models estimated from the subsequences of the time series. We define the confidence measure on each of the detected outliers as the probability that it is an outlier. Then, we establish a relationship between the parameters of the proposed framework and the confidence measure. Furthermore, we use the confidence measure to rank the detected outliers in terms of their departures from the background process. Our experimental results with sequences of low- and mid-level audio features extracted from sports video show that "highlight" events can be extracted effectively as outliers from a background process using the proposed framework. We proceed to show the effectiveness of the proposed framework in bringing out suspicious events from surveillance videos without any a priori knowledge. We show that such temporal segmentation into background and outliers, along with the ranking based on the departure from the background, can be used to generate content summaries of any desired length. Finally, we also show that the proposed framework can be used to systematically select "key audio classes" that are indicative of events of interest in the chosen domain.
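The eigenvector-based outlier detection described above can be sketched as follows. The affinities here are simple Gaussian kernels on Euclidean distances between per-subsequence feature vectors, an assumption for illustration: the paper builds its affinity matrix from fitted statistical models of the subsequences. The dominant eigenvector of the affinity matrix, found by power iteration, weights background subsequences heavily, so departures from it rank the outliers.

```python
import math
import random

def outlier_scores(features, scale=1.0):
    """Score each subsequence's departure from the background process.

    Builds an affinity matrix from feature vectors and takes the dominant
    eigenvector by power iteration: background (inlier) members receive
    large eigenvector weights, outliers small ones. Generic sketch only.
    """
    n = len(features)
    A = [[math.exp(-sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
                   / (2.0 * scale ** 2))
          for j in range(n)] for i in range(n)]
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(200):                        # power iteration
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    vmax = max(v)
    return [1.0 - x / vmax for x in v]          # 1 = most outlying

rng = random.Random(3)
# Nine "usual" subsequences near the origin plus one far-away outlier.
feats = [(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(9)]
feats.append((5.0, 5.0))
scores = outlier_scores(feats)
```

The lone distant point receives a score near 1 while the background cluster stays near 0, mirroring the inlier/outlier segmentation the framework performs on audio feature subsequences.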
The ITSG-Grace2014 Gravity Field Model
NASA Astrophysics Data System (ADS)
Kvas, Andreas; Mayer-Gürr, Torsten; Zehentner, Norbert; Klinger, Beate
2015-04-01
The ITSG-Grace2014 GRACE-only gravity field model consists of a high-resolution unconstrained static model (up to degree 200) with trend and annual signal, monthly unconstrained solutions with different spatial resolutions, as well as daily snapshots derived using a Kalman smoother. Apart from the estimated spherical harmonic coefficients, full variance-covariance matrices for the monthly solutions and the static gravity field component are provided. Compared to the previous release, multiple improvements in the processing chain are implemented: updated background models, better ionospheric modeling for GPS observations, an improved satellite attitude by combination of star camera and angular accelerations, estimation of K-band antenna center variations within the gravity field recovery process, as well as error covariance function determination. Furthermore, daily gravity field variations have been modeled in the adjustment process to reduce errors caused by temporal leakage. This combined estimation of daily gravity field variations together with the static gravity field component represents a computational challenge due to the significantly increased parameter count. Modeling daily variations up to spherical harmonic degree 40 for the whole GRACE observation period results in a system of linear equations with over 6 million unknown gravity field parameters. A least-squares adjustment of this size is not solvable in a sensible time frame; therefore, measures to reduce the problem size have to be taken. The ITSG-Grace2014 release is presented, and selected parts of the processing chain and their effect on the estimated gravity field solutions are discussed.
NASA Technical Reports Server (NTRS)
Douglass, A. R.; Stolarski, R. S.; Strahan, S. E.; Oman, L. D.
2012-01-01
Projections of future ozone levels are made using models that couple a general circulation model with a representation of atmospheric photochemical processes, allowing interactions among photochemical processes, radiation, and dynamics. Such models are known as chemistry and climate models (CCMs). Although developed from common principles and subject to the same boundary conditions, simulated ozone time series vary for projections of changes in ozone depleting substances (ODSs) and greenhouse gases. In the upper stratosphere, photochemical processes control the ozone level, and ozone increases as ODSs decrease and as temperature decreases due to greenhouse gas increases. Simulations agree broadly, but there are quantitative differences in the sensitivity of ozone to chlorine and to temperature. We obtain insight into these differences in sensitivity by examining the relationship between the upper stratosphere annual cycles of ozone and temperature as produced by a suite of models. All simulations conform to expectation in that ozone is less sensitive to temperature when chlorine levels are highest, because chlorine-catalyzed loss is nearly independent of temperature. Differences in sensitivity are traced to differences in simulated temperature, ozone, and reactive nitrogen when chlorine levels are close to background. This work shows that differences in the importance of specific processes underlie differences in the simulated sensitivity of ozone to composition change. This suggests that a) the multi-model mean is not a best estimate of the sensitivity of upper stratospheric ozone to changes in ODSs and temperature; and b) the spread of values is not an appropriate measure of uncertainty.
NASA Astrophysics Data System (ADS)
Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi
2014-09-01
Interaction is one of the key techniques of augmented reality (AR) maintenance guiding systems. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition is divided into three stages: gesture segmentation, gesture feature modeling, and gesture recognition. In the segmentation stage, to avoid misrecognition of skin-like regions, a segmentation algorithm combining a background model with skin color is adopted to exclude skin-like regions. In the feature modeling stage, numerous characteristic features are analyzed and acquired, such as structure characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. The SVM is a learning method based on statistical learning theory; it has a solid theoretical foundation and excellent learning ability, is widely applied in machine learning, and offers particular advantages for small-sample, non-linear, and high-dimensional pattern recognition. Gesture recognition in the augmented reality maintenance guiding system is realized by the SVM after granulation of all the characteristic features. The experimental results of the simulation of number gesture recognition and its application in an augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition in the AR maintenance guiding system can be greatly enhanced by the improved SVM.
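The Hu invariant moments mentioned in the feature modeling stage are derived from normalized central image moments and are invariant to translation (and, with the full set, to rotation and scale). A minimal pure-Python sketch of the first two Hu moments on a small intensity grid (this is a generic textbook computation, not the paper's implementation):

```python
def raw_moment(img, p, q):
    # img: 2D list of pixel intensities; moment M_pq = sum x^p * y^q * I(y, x)
    return sum(x ** p * y ** q * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def hu_moments_2(img):
    # First two Hu invariant moments from scale-normalized central moments.
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00   # centroid x
    yc = raw_moment(img, 0, 1) / m00   # centroid y

    def mu(p, q):  # central moment about the centroid
        return sum((x - xc) ** p * (y - yc) ** q * v
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))

    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because the central moments are taken about the centroid, a translated copy of the same shape yields identical Hu values, which is what makes them useful as gesture features.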
Emulsion droplet interactions: a front-tracking treatment
NASA Astrophysics Data System (ADS)
Mason, Lachlan; Juric, Damir; Chergui, Jalel; Shin, Seungwon; Craster, Richard V.; Matar, Omar K.
2017-11-01
Emulsion coalescence influences a multitude of industrial applications including solvent extraction, oil recovery and the manufacture of fast-moving consumer goods. Droplet interaction models are vital for the design and scale-up of processing systems; however, predictive modelling at the droplet scale remains a research challenge. This study simulates industrially relevant moderate-inertia collisions for which a high degree of droplet deformation occurs. A hybrid front-tracking/level-set approach is used to automatically account for interface merging without the need for 'bookkeeping' of interface connectivity. The model is implemented in Code BLUE using a parallel multi-grid solver, allowing both film and droplet-scale dynamics to be resolved efficiently. Droplet interaction simulations are validated using experimental sequences from the literature in the presence and absence of background turbulence. The framework is readily extensible for modelling the influence of surfactants and non-Newtonian fluids on droplet interaction processes. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM), PETRONAS.
Detecting abandoned objects using interacting multiple models
NASA Astrophysics Data System (ADS)
Becker, Stefan; Münch, David; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael
2015-10-01
In recent years, the wide use of video surveillance systems has caused an enormous increase in the amount of data that has to be stored, monitored, and processed. As a consequence, it is crucial to support human operators with automated surveillance applications. Towards this end an intelligent video analysis module for real-time alerting in case of abandoned objects in public spaces is proposed. The overall processing pipeline consists of two major parts. First, person motion is modeled using an Interacting Multiple Model (IMM) filter. The IMM filter estimates the state of a person according to a finite-state, discrete-time Markov chain. Second, the location of persons that stay at a fixed position defines a region of interest, in which a nonparametric background model with dynamic per-pixel state variables identifies abandoned objects. In case of a detected abandoned object, an alarm event is triggered. The effectiveness of the proposed system is evaluated on the PETS 2006 dataset and the i-Lids dataset, both reflecting prototypical surveillance scenarios.
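The nonparametric background model with per-pixel state variables can be sketched as a small sample-consensus classifier, in the spirit of sample-based models such as ViBE; the paper's exact model is not reproduced here, and all parameter values below are illustrative assumptions:

```python
import random

class PixelModel:
    """Per-pixel background model: a pixel is background if its value
    matches enough stored samples; matched background values may be
    absorbed into the sample set (sketch, illustrative parameters)."""

    def __init__(self, init_value, n_samples=20, radius=10, min_matches=2):
        self.samples = [init_value] * n_samples
        self.radius = radius
        self.min_matches = min_matches

    def classify_and_update(self, value, update_prob=0.0625):
        matches = sum(1 for s in self.samples if abs(s - value) <= self.radius)
        is_background = matches >= self.min_matches
        if is_background and random.random() < update_prob:
            # Stochastically refresh one stored sample with the new value.
            self.samples[random.randrange(len(self.samples))] = value
        return is_background
```

In a full system one such model is kept per pixel inside the region of interest; pixels that persistently fail the background test form the candidate abandoned-object mask.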
Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections.
Sporns, O; Tononi, G; Edelman, G M
1991-01-01
The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws. PMID:1986358
Towards a unified theory of health-disease: II. Holopathogenesis
Almeida-Filho, Naomar
2014-01-01
This article presents a systematic framework for modeling several classes of illness-sickness-disease, termed Holopathogenesis. Holopathogenesis is defined as the processes of over-determination of diseases and related conditions taken as a whole, comprising selected facets of the complex object Health. First, a conceptual background of Holopathogenesis is presented as a series of significant interfaces (biomolecular-immunological, physiopathological-clinical, epidemiological-ecosocial). Second, propositions derived from Holopathogenesis are introduced in order to allow drawing the disease-illness-sickness complex as a hierarchical network of networks. Third, a formalization of intra- and inter-level correspondences, over-determination processes, effects and links of Holopathogenesis models is proposed. Finally, the Holopathogenesis frame is evaluated as a comprehensive theoretical pathology and as a preliminary step towards a unified theory of health-disease. PMID:24897040
Opinion dynamics in a group-based society
NASA Astrophysics Data System (ADS)
Gargiulo, F.; Huet, S.
2010-09-01
Many models have been proposed to analyze the evolution of opinion structure due to the interaction of individuals in their social environment. Such models analyze the spreading of ideas both in completely interacting backgrounds and on social networks, where each person has a finite set of interlocutors. In this paper we analyze the reciprocal feedback between the opinions of the individuals and the structure of the interpersonal relationships at the level of community structures. For this purpose we define a group-based random network and we study how this structure co-evolves with opinion dynamics processes. We observe that the adaptive network structure affects the opinion dynamics process helping the consensus formation. The results also show interesting behaviors in regards to the size distribution of the groups and their correlation with opinion structure.
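The pairwise opinion updates underlying such models are often of the bounded-confidence type: two agents move toward each other only if their opinions are already close enough. A Deffuant-style sketch of that update rule, a common building block rather than the authors' exact co-evolution model:

```python
def bounded_confidence_step(opinions, i, j, eps=0.3, mu=0.5):
    """Move agents i and j toward each other only if their opinions differ
    by less than the confidence bound eps; mu is the convergence rate.
    (Deffuant-style sketch; eps and mu are illustrative.)"""
    oi, oj = opinions[i], opinions[j]
    if abs(oi - oj) < eps:
        opinions[i] = oi + mu * (oj - oi)
        opinions[j] = oj + mu * (oi - oj)
    return opinions
```

With mu = 0.5 two compatible agents meet at their average; incompatible agents (difference at least eps) are left unchanged, which is what allows several opinion clusters, and hence groups, to coexist.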
The study on knowledge transferring incentive for information system requirement development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yang
2015-03-10
Information system requirement development is a process of users' knowledge sharing and transferring. However, developing tacit requirements is a main problem during the requirement development process, because such requirements are difficult to encode, express, and communicate. Knowledge fusion and cooperative effort are needed to find tacit requirements. Against this background, our paper tries to find the rules governing the dynamic evolution of effort by software developers and users by building an evolutionary game model under an incentive system. An in-depth discussion is provided at the end of the paper.
Detlefsen, Ellen G.
2002-01-01
This article explores the background of, and some of the current models for the education of, the individuals known as “informationists.” A definition, an historical overview, and a literature review are followed by a description of the current practices in a variety of institutions and organizations. A series of five “case reports” illustrates some of the possible tracks that individuals seeking education as informationists may follow. A proposal for a rigorous planning process is made, followed by a list of recommendations for this planning process. PMID:11838461
Anninos, Dionysios; Denef, Frederik
2016-06-30
We show that the late time Hartle-Hawking wave function for a free massless scalar in a fixed de Sitter background encodes a sharp ultrametric structure for the standard Euclidean distance on the space of field configurations. This implies a hierarchical, tree-like organization of the state space, reflecting its genesis as a branched diffusion process. In conclusion, an equivalent mathematical structure organizes the state space of the Sherrington-Kirkpatrick model of a spin glass.
Image Segmentation Using Minimum Spanning Tree
NASA Astrophysics Data System (ADS)
Dewi, M. P.; Armiati, A.; Alvini, S.
2018-04-01
This research aims to segment digital images. The purpose of segmentation is to separate an object from the background so that the main object can be processed for other purposes. Along with the development of technology in digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate, because subsequent processes need to interpret the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. This method is able to separate an object from the background, producing a binary image. In this case, the object of interest is shown in white, while the background is black, or vice versa.
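MST-based segmentation can be sketched with Kruskal's algorithm and union-find: processing edges in weight order and merging only those below a threshold is equivalent to cutting the MST edges above that threshold (single linkage). The image and threshold below are toy illustrations, not values from the paper:

```python
def mst_segment(image, threshold):
    """Split a grayscale image (2D list) into regions by merging
    4-connected pixels whose intensity difference is <= threshold;
    cutting MST edges above the threshold yields the same regions."""
    h, w = len(image), len(image[0])
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Build 4-connected edges (weight = absolute intensity difference).
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]),
                              y * w + x, y * w + x + 1))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]),
                              y * w + x, (y + 1) * w + x))
    # Kruskal order: cheapest edges first; merge only below the threshold.
    for wgt, a, b in sorted(edges):
        if wgt <= threshold:
            union(a, b)
    return [find(i) for i in range(h * w)]  # region label per pixel
```

On a uniform background with a uniform bright object, every within-region edge has weight 0 and every boundary edge a large weight, so the object and background fall into exactly two components, giving the binary image described above.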
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
NASA Technical Reports Server (NTRS)
1975-01-01
The Model is described along with data preparation, determining model parameters, initializing and optimizing parameters (calibration), selecting control options, and interpreting results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a Model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch processing mode on any System/360 or 370 that has the memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previously but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.
A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.
Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing
2018-04-10
In order to improve the accuracy and reliability of micropipetting, a method of micropipette detection and calibration is proposed that combines dynamic pressure monitoring during the pipetting process with quantitative identification of the pipetted volume by image processing. First, a normalized pressure model for the pipetting process is established from the kinematic model of the pipetting operation, and the pressure model is corrected experimentally. By monitoring the pipetting pressure and its first derivative in real time, using a segmented double-threshold method as the fault evaluation criterion, and processing the pressure sensor data with a Kalman filter, the accuracy of fault diagnosis is improved. When a fault occurs, an image of the pipette tip is captured by the camera, the boundary of the liquid region is extracted by a background-contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the pipette tip. The deviation is fed back to the automatic pipetting module and corrected. Titration tests show that combining the segmented pipetting kinematic model with double-threshold pressure monitoring can effectively judge and classify pipetting faults in real time. The closed-loop adjustment of pipetting volume can effectively improve the accuracy and reliability of the pipetting system.
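The pressure-monitoring step can be sketched as a scalar Kalman filter whose innovation and filtered first derivative are each checked against a threshold (the "double threshold"). All constants below are illustrative assumptions, not values from the paper:

```python
class PressureMonitor:
    """Scalar Kalman filter on pipetting pressure with a double-threshold
    fault check on the innovation and the filtered slope (sketch)."""

    def __init__(self, x0, q=0.01, r=0.1, innov_thresh=0.5, slope_thresh=0.3):
        self.x, self.p = x0, 1.0          # state estimate and its variance
        self.q, self.r = q, r             # process and measurement noise
        self.innov_thresh = innov_thresh
        self.slope_thresh = slope_thresh
        self.prev_x = x0

    def step(self, z):
        # Predict (random-walk model), then update with measurement z.
        self.p += self.q
        innovation = z - self.x
        k = self.p / (self.p + self.r)
        self.x += k * innovation
        self.p *= 1.0 - k
        # First derivative of the filtered pressure.
        slope = self.x - self.prev_x
        self.prev_x = self.x
        fault = (abs(innovation) > self.innov_thresh
                 or abs(slope) > self.slope_thresh)
        return self.x, fault
```

A steady pressure trace produces near-zero innovations and slope, so no fault is raised; a sudden jump, such as a clogged or leaking tip would cause, exceeds the innovation threshold immediately and triggers the image-based volume check.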
Impact of neutrino background prediction for next generation dark matter xenon detector
NASA Astrophysics Data System (ADS)
Cadeddu, M.; Picciau, E.
2018-01-01
Next generation direct dark matter detectors will have the sensitivity to detect neutrinos from several sources, including atmospheric and diffuse supernova neutrinos, through the Standard Model reaction of Coherent Elastic Neutrino Scattering on nuclei. This reaction represents an irreducible background that can be expressed as a limit in the Weakly Interacting Massive Particle parameter plane. This limit is known as the “neutrino floor” and it has been obtained by other authors considering standard hypotheses for the neutrino-nucleus form factor and for the coherence of the scattering process. Since coherent scattering has never been observed experimentally, it is legitimate to relax some hypotheses in the differential cross section and to evaluate the effect of such modifications on the neutrino floor prediction. In this contribution, we show a more accurate neutrino-nucleus form factor and we discuss the coherence hypothesis of the process in two extreme cases, namely the total coherence and the total decoherence regimes. We derive the neutrino background event rate under these new assumptions, considering xenon as a target. The differences in the number of neutrino events and the implications for next generation dark matter detectors, such as XENON1T/XENONnT, LZ and DARWIN, are discussed.
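A standard baseline for the neutrino-nucleus form factor in such rate calculations is the Helm parameterization. A sketch for a xenon target is below; the parameter values (skin thickness, radius formula) follow common conventions in the direct-detection literature and are assumptions here, not values taken from this paper:

```python
import math

HBARC = 197.327  # MeV * fm

def helm_form_factor(E_keV, A=131):
    """Helm nuclear form factor at nuclear recoil energy E_keV for mass
    number A (sketch with conventional parameter choices)."""
    M = A * 931.494                                   # nuclear mass, MeV
    q = math.sqrt(2.0 * M * E_keV * 1e-3) / HBARC     # momentum transfer, fm^-1
    s = 0.9                                           # surface thickness, fm
    c = 1.23 * A ** (1.0 / 3.0) - 0.6                 # fm
    rn = math.sqrt(c * c + (7.0 / 3.0) * math.pi ** 2 * 0.52 ** 2 - 5.0 * s * s)
    x = q * rn
    j1 = math.sin(x) / x ** 2 - math.cos(x) / x       # spherical Bessel j1
    return 3.0 * j1 / x * math.exp(-(q * s) ** 2 / 2.0)
```

The form factor is close to 1 at the lowest recoil energies (full coherence over the nucleus) and falls off as the momentum transfer grows, which is exactly the regime where the coherence assumptions discussed in the abstract matter most.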
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qiuguang
Finding the standard model Higgs boson and discovering beyond-standard-model physics phenomena have been the most important goals for high-energy physics in the last decades. In this thesis, we present two such searches. First is the search for the low-mass standard model Higgs boson produced in association with a vector boson; second is the first search for a dark-matter candidate (D) produced in association with a top quark (t) in particle colliders. We search in events with energetic jets and large missing transverse energy, a signature characterized by complicated backgrounds, in data collected by the CDF detector with proton-antiproton collisions at √s = 1.96 TeV. We discuss the techniques that have been developed for background modeling, for discriminating signal from background, and for reducing background resulting from detector effects. In the Higgs search, we report the 95% confidence level upper limits on the production cross section across masses of 90 to 150 GeV/c². The expected limits are improved by an average of 14% relative to the previous analysis. The Large Hadron Collider experiments reported a Higgs-like particle with a mass of 125 GeV/c² by studying the data collected in 2011/12. At a Higgs boson mass of 125 GeV/c², our observed (expected) limit is 3.06 (3.33) times the standard model prediction, corresponding to one of the most sensitive searches to date in this final state. In the dark matter search, we find the data are consistent with the standard model prediction, and thus set 95% confidence level upper limits on the cross section of the process pp̄ → t + D as a function of the mass of the dark-matter candidate. The upper limits are approximately 0.5 pb for a dark-matter particle with masses in the range of 0-150 GeV/c².
Modeling low-temperature geochemical processes: Chapter 2
Nordstrom, D. Kirk; Campbell, Kate M.
2014-01-01
This chapter provides an overview of geochemical modeling that applies to water–rock interactions under ambient conditions of temperature and pressure. Topics include modeling definitions, historical background, issues of activity coefficients, popular codes and databases, examples of modeling common types of water–rock interactions, and issues of model reliability. Examples include speciation, microbial redox kinetics and ferrous iron oxidation, calcite dissolution, pyrite oxidation, combined pyrite and calcite dissolution, dedolomitization, seawater–carbonate groundwater mixing, reactive-transport modeling in streams, modeling catchments, and evaporation of seawater. The chapter emphasizes limitations to geochemical modeling: that a proper understanding and ability to communicate model results well are as important as completing a set of useful modeling computations and that greater sophistication in model and code development is not necessarily an advancement. If the goal is to understand how a particular geochemical system behaves, it is better to collect more field data than rely on computer codes.
Space-based infrared scanning sensor LOS determination and calibration using star observation
NASA Astrophysics Data System (ADS)
Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang
2015-10-01
This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) of a target for target location. LOS determination and calibration is the key precondition for accurate location and tracking of targets in a space-based IR system, and the LOS calibration of a scanning sensor is one of the difficulties. Subsequent changes of sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles using star observations is proposed: a process model of the bias angles and an observation model of the stars are established, an extended Kalman filter (EKF) estimates the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method has high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in the space-based infrared (IR) tracking system can be met with the proposed algorithm.
An overview of the CellML API and its implementation
2010-01-01
Background CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. Results We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Conclusions Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions. PMID:20377909
NASA Astrophysics Data System (ADS)
Colette, A.; Ancellet, G.; Menut, L.; Arnold, S. R.
2006-03-01
The ozone variability observed by tropospheric ozone lidars during the ESCOMPTE campaign is analyzed by means of a hybrid-Lagrangian modeling study. Transport processes responsible for the formation of ozone-rich layers are identified using a semi-Lagrangian analysis of mesoscale simulations to identify the planetary boundary layer (PBL) footprint in the free troposphere. High ozone concentrations are related to polluted air masses exported from the Iberian PBL. The chemical composition of air masses coming from the PBL and transported in the free troposphere is evaluated using a Lagrangian chemistry model. The initial concentrations are provided by a model of chemistry and transport. Different scenarios are tested for the initial conditions and for the impact of mixing with background air in order to perform a quantitative comparison with the lidar observations. For this meteorological situation, the characteristic mixing time is of the order of 2 to 5 days depending on the initial conditions. Ozone is produced in the free troposphere within most air masses exported from the Iberian PBL at an average rate of 0.2 ppbv h-1, with a maximum ozone production of 0.4 ppbv h-1. Transport processes from the PBL are responsible for an increase of 13.3 ppbv of ozone concentrations in the free troposphere compared to background levels; about 45% of this increase is attributed to in situ production during the transport rather than direct export of ozone.
NASA Astrophysics Data System (ADS)
Colette, A.; Ancellet, G.; Menut, L.; Arnold, S. R.
2006-08-01
The ozone variability observed by tropospheric ozone lidars during the ESCOMPTE campaign is analyzed by means of a hybrid-Lagrangian modeling study. Transport processes responsible for the formation of ozone-rich layers are identified using a semi-Lagrangian analysis of mesoscale simulations to identify the planetary boundary layer (PBL) footprint in the free troposphere. High ozone concentrations are related to polluted air masses exported from the Iberian PBL. The chemical composition of air masses coming from the PBL and transported in the free troposphere is evaluated using a Lagrangian chemistry model. The initial concentrations are provided by a model of chemistry and transport. Different scenarios are tested for the initial conditions and for the impact of mixing with background air in order to perform a quantitative comparison with the lidar observations. For this meteorological situation, the characteristic mixing time is of the order of 2 to 6 days depending on the initial conditions. Ozone is produced in the free troposphere within most air masses exported from the Iberian PBL at an average rate of 0.2 ppbv h-1, with a maximum ozone production of 0.4 ppbv h-1. Transport processes from the PBL are responsible for an increase of 13.3 ppbv of ozone concentrations in the free troposphere compared to background levels; about 45% of this increase is attributed to in situ production during the transport rather than direct export of ozone.
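The balance between in-situ production and mixing with background air described above can be captured by a simple relaxation model, dC/dt = P - (C - C_bg)/tau. In the sketch below the production rate of 0.2 ppbv/h comes from the abstract, while tau = 96 h (4 days) is an illustrative value within the quoted 2 to 6 day range:

```python
def ozone_concentration(c0, c_bg, prod, tau_h, hours, dt=0.1):
    """Euler integration of dC/dt = prod - (C - c_bg)/tau
    (concentrations in ppbv, times in hours)."""
    c = c0
    for _ in range(round(hours / dt)):
        c += dt * (prod - (c - c_bg) / tau_h)
    return c

# The plume relaxes toward c_bg + prod * tau, i.e. 19.2 ppbv above
# background for prod = 0.2 ppbv/h and tau = 96 h.
```

This steady-state excess of prod * tau is of the same order as the 13.3 ppbv enhancement over background reported in the abstract, illustrating why both the production rate and the mixing time control the free-tropospheric ozone excess.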
An Evaluation of Understandability of Patient Journey Models in Mental Health
2016-01-01
Background There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. Objectives This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Method Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. Results The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. Conclusions The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. 
Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers. PMID:27471006
Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data
Agnese, R.
2015-03-30
We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from Pb-210 decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. Finally, we confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.
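The core of such a maximum likelihood analysis can be sketched as a binned Poisson likelihood in which a signal strength is floated on top of a fixed background template; the templates, data, and grid below are toy illustrations, not the experiment's actual spectra:

```python
import math

def nll(mu, sig, bkg, data):
    """Binned Poisson negative log-likelihood (constant terms dropped):
    expected counts per bin are mu * sig[i] + bkg[i]."""
    total = 0.0
    for s, b, n in zip(sig, bkg, data):
        lam = mu * s + b
        total += lam - n * math.log(lam)
    return total

def best_fit_mu(sig, bkg, data, grid):
    """Grid scan for the signal strength minimizing the NLL."""
    return min(grid, key=lambda m: nll(m, sig, bkg, data))
```

When the data match the background template, the fit returns a signal strength of zero, the "no statistically significant WIMP component" outcome; when a signal is actually present, the minimum moves to the injected strength.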
Morciano, Patrizia; Iorio, Roberto; Iovino, Daniela; Cipressa, Francesca; Esposito, Giuseppe; Porrazzo, Antonella; Satta, Luigi; Alesse, Edoardo; Tabocchini, Maria Antonella; Cenci, Giovanni
2018-01-01
Natural background radiation of Earth and cosmic rays played a relevant role during the evolution of living organisms. However, how chronic low doses of radiation can affect biological processes is still unclear. Previous data have indicated that cells grown at the Gran Sasso Underground Laboratory (LNGS, L'Aquila) of National Institute of Nuclear Physics (INFN) of Italy, where the dose rate of cosmic rays and neutrons is significantly reduced with respect to the external environment, elicited an impaired response against endogenous damage as compared to cells grown outside LNGS. This suggests that environmental radiation contributes to the development of defense mechanisms at cellular level. To further understand how environmental radiation affects metabolism of living organisms, we have recently launched the FLYINGLOW program that aims at exploiting Drosophila melanogaster as a model for evaluating the effects of low doses/dose rates of radiation at the organismal level. Here, we will present a comparative data set on lifespan, motility and fertility from different Drosophila strains grown in parallel at LNGS and in a reference laboratory at the University of L'Aquila. Our data suggest the reduced radiation environment can influence Drosophila development and, depending on the genetic background, may affect viability for several generations even when flies are moved back to normal background radiation. As flies are considered a valuable model for human biology, our results might shed some light on understanding the effect of low dose radiation also in humans. © 2017 Wiley Periodicals, Inc.
GeneFisher-P: variations of GeneFisher as processes in Bio-jETI
Lamprecht, Anna-Lena; Margaria, Tiziana; Steffen, Bernhard; Sczyrba, Alexander; Hartmeier, Sven; Giegerich, Robert
2008-01-01
Background PCR primer design is an everyday, but not trivial task requiring state-of-the-art software. We describe the popular tool GeneFisher and explain its recent restructuring using workflow techniques. We apply a service-oriented approach to model and implement GeneFisher-P, a process-based version of the GeneFisher web application, as a part of the Bio-jETI platform for service modeling and execution. We show how to introduce a flexible process layer to meet the growing demand for improved user-friendliness and flexibility. Results Within Bio-jETI, we model the process using the jABC framework, a mature model-driven, service-oriented process definition platform. We encapsulate remote legacy tools and integrate web services using jETI, an extension of the jABC for seamless integration of remote resources as basic services, ready to be used in the process. Some of the basic services used by GeneFisher are in fact already provided as individual web services at BiBiServ and can be directly accessed. Others are legacy programs, and are made available to Bio-jETI via the jETI technology. The full power of service-based process orientation is required when more bioinformatics tools, available as web services or via jETI, lead to easy extensions or variations of the basic process. This concerns for instance variations of data retrieval or alignment tools as provided by the European Bioinformatics Institute (EBI). Conclusions The resulting service- and process-oriented GeneFisher-P demonstrates how basic services from heterogeneous sources can be easily orchestrated in the Bio-jETI platform and lead to a flexible family of specialized processes tailored to specific tasks. PMID:18460174
Recognition and characterization of unstructured environmental sounds
NASA Astrophysics Data System (ADS)
Chu, Selina
2011-12-01
Environmental sounds are what we hear every day, or, more generally, the ambient or background audio that surrounds us. Humans utilize both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as in assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of characterizing unstructured environmental audio. My research focuses on investigating challenging issues in characterizing unstructured environmental audio and on developing novel algorithms for modeling the variations of the environment. The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features for working with such audio data. We begin with a study that explores suitable features and the feasibility of designing an automatic environment recognition system using audio information.
In my initial investigation into the feasibility of designing an automatic environment recognition system using audio information, I found that traditional recognition and feature extraction techniques were not suitable for environmental sound: unlike speech and music, which contain formantic and harmonic structures, environmental sounds lack such structure, dispelling the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environments contain a large variety of sounds, many of which are noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g., energy or zero-crossing rate. Given the lack of features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound; we call these MP-features. MP-features have been shown to capture and represent sounds from different sources and at different ranges where frequency-domain features (e.g., MFCCs) fail, and they can be advantageous when combined with MFCCs to improve overall performance. The third component of this work is the investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I looked for a way to build a general model for any particular environment. Once we have a model of the background, we can identify foreground events even if we haven't seen those events before. The next step, therefore, was to learn an audio background model for each environment type despite the occurrence of different foreground events.
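The core of the MP-feature idea is the matching pursuit decomposition itself: greedily pick, at each iteration, the dictionary atom most correlated with the current residual and subtract its projection. A minimal sketch (the function name and dictionary layout here are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each iteration, pick the dictionary atom
    most correlated with the current residual and subtract its projection.
    `dictionary` is an (n_atoms, n_samples) array of unit-norm atoms."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        corr = dictionary @ residual            # inner product with each atom
        k = int(np.argmax(np.abs(corr)))        # best-matching atom index
        decomposition.append((k, corr[k]))      # (atom index, coefficient)
        residual = residual - corr[k] * dictionary[k]
    return decomposition, residual
```

The selected atom indices and coefficients would then serve as the raw material for MP-feature-style descriptors.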
In this work, I presented a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and persistent characteristics of the environment. This approach can model the background and detect foreground events, and it can also verify whether the predicted background is indeed the background or a foreground event that persists for a longer period of time. I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data. The final components of my thesis involve learning sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy and contains relatively large amounts of overlap between events in different environments. Environmental sounds show large variance even within a single environment type, and frequently there are no clear boundaries between some types. Traditional classification methods are generally not robust enough to handle classes with such overlaps, so this audio requires representation by more complex models. A deep learning architecture provides a way to obtain a generative model-based method for classification. Specifically, I considered the use of Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability to noisy data to improve robustness and generalization. A framework was proposed using composite DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% recognition accuracy for a large number of environmental sound types.
O'Grady, Laura A; Witteman, Holly; Wathen, C Nadine
2008-01-01
Background First-generation Internet technologies such as mailing lists or newsgroups afforded unprecedented levels of information exchange within a variety of interest groups, including those who seek health information. With the emergence of the World Wide Web, many communication applications were ported to web browsers. One of the driving factors in this phenomenon has been the exchange of experiential or anecdotal knowledge that patients share online, and there is emerging evidence that participation in these forums may be having an impact on people's health decision making. Theoretical frameworks supporting this form of information seeking and learning have yet to be proposed. Results In this article, we propose an adaptation of Kolb's experiential learning theory to formulate an experiential health information processing model that may contribute to our understanding of online health information seeking behaviour in this context. Conclusion An experiential health information processing model is proposed that can be used as a research framework. Future research directions are provided, including investigating the utility of this model in the online health information seeking context and studying the impact of collaborating in these online environments on patient decision making and on health outcomes. PMID:19087353
Asymmetric capture of Dirac dark matter by the Sun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan
2015-08-18
Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei, as well as the case where the capture process is treated as a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured particle numbers are competitive with those in asymmetric dark matter models over a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
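As a toy illustration of the Poisson-process view of capture (illustrative rates only, not the paper's solar capture calculation): treating dark matter and anti-dark matter capture as independent Poisson processes with different expected counts yields a mean fractional asymmetry set by the difference of the rates:

```python
import numpy as np

def capture_asymmetry(rate_dm, rate_adm, n_realizations=10000, seed=0):
    """Toy model: treat capture of dark matter and anti-dark matter as two
    independent Poisson processes with different expected counts (e.g. from
    different scattering cross sections on solar nuclei) and return the
    mean fractional asymmetry (N+ - N-)/(N+ + N-) over many realizations."""
    rng = np.random.default_rng(seed)
    n_plus = rng.poisson(rate_dm, n_realizations)
    n_minus = rng.poisson(rate_adm, n_realizations)
    total = n_plus + n_minus
    mask = total > 0                       # avoid division by zero
    return np.mean((n_plus[mask] - n_minus[mask]) / total[mask])
```

For expected counts 120 vs. 80, the asymmetry fluctuates realization to realization but averages near (120 - 80) / 200 = 0.2.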
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Howard, A.; Cheng, Y.; Dubovikov, M. S.
1999-01-01
We develop and test a 1-point closure turbulence model with the following features: 1) we include the salinity field and derive the expression for the vertical turbulent diffusivities of momentum K(sub m), heat K(sub h) and salt K(sub s) as a function of two stability parameters: the Richardson number R(sub i) (stratification vs. shear) and the Turner number R(sub rho) (salinity gradient vs. temperature gradient). 2) to describe turbulent mixing below the mixed layer (ML), all previous models have adopted three adjustable "background diffusivities" for momentum, heat and salt. We propose a model that avoids such adjustable diffusivities. We assume that below the ML, the three diffusivities have the same functional dependence on R(sub i) and R(sub rho) as derived from the turbulence model. However, in order to compute R(sub i) below the ML, we use data of vertical shear due to wave-breaking measured by Gargett et al. The procedure frees the model from adjustable background diffusivities, and indeed we employ the same model throughout the entire vertical extent of the ocean. 3) in the local model, the turbulent diffusivities K(sub m,h,s) are given as analytical functions of R(sub i) and R(sub rho). 4) the model is used in an O-GCM and several results are presented to exhibit the effect of double diffusion processes. 5) the code is available upon request.
Banks, Gareth; Heise, Ines; Starbuck, Becky; Osborne, Tamzin; Wisby, Laura; Potter, Paul; Jackson, Ian J.; Foster, Russell G.; Peirson, Stuart N.; Nolan, Patrick M.
2015-01-01
The circadian system is entrained to the environmental light/dark cycle via retinal photoreceptors and regulates numerous aspects of physiology and behavior, including sleep. These processes are all key factors in healthy aging and show a gradual decline with age. Despite their importance, the exact mechanisms underlying this decline are yet to be fully understood. One of the most effective tools we have to understand the genetic factors underlying these processes is genetically inbred mouse strains. The most commonly used reference mouse strain is C57BL/6J, but recently, resources such as the International Knockout Mouse Consortium have started producing large numbers of mouse mutant lines on a pure genetic background, C57BL/6N. Considering the substantial genetic diversity between mouse strains, we expect there to be phenotypic differences, including differential effects of aging, in these and other strains. Such differences need to be characterized not only to establish how different mouse strains may model the aging process but also to understand how genetic background might modify age-related phenotypes. To ascertain the effects of aging on sleep/wake behavior, circadian rhythms, and light input, and whether these effects are mouse strain-dependent, we have screened C57BL/6J, C57BL/6N, C3H-HeH, and C3H-Pde6b+ mouse strains at 5 ages throughout their life span. Our data show that sleep, circadian, and light input parameters are all disrupted by the aging process. Moreover, we have cataloged a number of strain-specific aging effects, including the rate of cataract development, decline in the pupillary light response, and changes in sleep fragmentation and the proportion of time spent asleep. PMID:25179226
NASA Astrophysics Data System (ADS)
Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.
2017-12-01
Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has been degraded non-uniformly by environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. But building the wave velocity model of comprehensive small-size, high-contrast defect(s) is difficult because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of large-scale background variations of the engineered structure. To reduce computational cost and improve detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, computational power, and the imaging workflow itself, so that the procedure can be applied efficiently to geometrically complex 3D solids and waveguides.
Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications in regional scales as well.
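The cross-correlation at the heart of RTM can be sketched compactly. Below is a schematic zero-lag imaging condition with source-illumination normalization (a common variant; the array names and the normalization choice are assumptions, not the authors' code):

```python
import numpy as np

def crosscorr_image(src_wavefield, rcv_wavefield, eps=1e-12):
    """Zero-lag cross-correlation imaging condition: at every grid point,
    sum over time the product of the forward-propagated source wavefield
    and the back-propagated receiver wavefield, normalized by the source
    illumination. Both inputs have shape (n_timesteps, nz, nx)."""
    image = np.sum(src_wavefield * rcv_wavefield, axis=0)
    illumination = np.sum(src_wavefield * src_wavefield, axis=0)
    return image / (illumination + eps)
```

The normalization compensates for uneven source energy across the model, which is useful when the background has strong material contrasts.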
Learning Modifies Odor Mixture Processing to Improve Detection of Relevant Components
Chen, Jen-Yung; Marachlian, Emiliano; Assisi, Collins; Huerta, Ramon; Smith, Brian H.
2015-01-01
Honey bees have a rich repertoire of olfactory learning behaviors, and they therefore are an excellent model to study plasticity in olfactory circuits. Recent behavioral, physiological, and molecular evidence has suggested that the antennal lobe, the first relay of the olfactory system in insects and analogous to the olfactory bulb in vertebrates, is involved in associative and nonassociative olfactory learning. Here we use calcium imaging to reveal how responses across antennal lobe projection neurons change after association of an input odor with appetitive reinforcement. After appetitive conditioning to 1-hexanol, the representation of an odor mixture containing 1-hexanol becomes more similar to this odor and less similar to the background odor acetophenone. We then apply computational modeling to investigate how changes in synaptic connectivity can account for the observed plasticity. Our study suggests that experience-dependent modulation of inhibitory interactions in the antennal lobe aids perception of salient odor components mixed with behaviorally irrelevant background odors. PMID:25568113
Plasma Synthesis and Sintering of Advanced Ceramics
1990-09-15
CONTENTS: List of Tables; Objectives; Colloidal Plasma Processing: Concepts; Background (Ultrafine Particles; Colloidal Plasma). Excerpts: "... colloidal plasma processing of ceramics." "It is well known that ultrafine particles prepared in gas plasmas agglomerate ..." "There are well recognized advantages to using small particles in ceramic processing. The instantaneous densification ..."
NASA Astrophysics Data System (ADS)
Liu, Ruo-Yu; Murase, Kohta; Inoue, Susumu; Ge, Chong; Wang, Xiang-Yu
2018-05-01
Various observations are revealing the widespread occurrence of fast and powerful winds in active galactic nuclei (AGNs) that are distinct from relativistic jets, likely launched from accretion disks and interacting strongly with the gas of their host galaxies. During the interaction, strong shocks are expected to form that can accelerate nonthermal particles to high energies. Such winds have been suggested to be responsible for a large fraction of the observed extragalactic gamma-ray background (EGB) and the diffuse neutrino background, via the decay of neutral and charged pions generated in inelastic pp collisions between protons accelerated by the forward shock and the ambient gas. However, previous studies did not properly account for processes such as adiabatic losses that may reduce the gamma-ray and neutrino fluxes significantly. We evaluate the production of gamma rays and neutrinos by AGN-driven winds in detail by modeling their hydrodynamic and thermal evolution, including the effects of their two-temperature structure. We find that they can only account for less than ∼30% of the EGB flux, as otherwise the model would violate the independent upper limit derived from the diffuse isotropic gamma-ray background. If the neutrino spectral index is steep with Γ ≳ 2.2, a severe tension with the isotropic gamma-ray background would arise as long as the winds contribute more than 20% of the IceCube neutrino flux in the 10–100 TeV range. At energies ≳ 100 TeV, we find that the IceCube neutrino flux may still be accountable by AGN-driven winds if the spectral index is as small as Γ ∼ 2.0–2.1.
NASA Astrophysics Data System (ADS)
Mathur, R.; Kang, D.; Napelenok, S. L.; Xing, J.; Hogrefe, C.
2017-12-01
Air pollution reduction strategies for a region are complicated not only by the interplay of local emission sources and several complex physical, chemical, and dynamical processes in the atmosphere, but also by hemispheric background levels of pollutants. Contrasting changes in emission patterns across the globe (e.g., declining emissions in North America and Western Europe in response to the implementation of control measures, and increasing emissions across Asia due to economic and population growth) are resulting in heterogeneous changes in tropospheric chemical composition and are likely altering long-range transport impacts and consequently background pollution levels at receptor regions. To quantify these impacts, the WRF-CMAQ model is expanded to hemispheric scales and multi-decadal model simulations are performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the Northern Hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ, a first- and higher-order sensitivity calculation technique, is used to estimate the sensitivity of O3 to emissions from different source regions across the Northern Hemisphere. The seasonal variations in source region contributions to background O3 are then estimated from these sensitivity calculations and will be discussed. These source region sensitivities estimated from DDM are then combined with the multi-decadal simulations of O3 distributions and emissions trends to characterize the changing contributions of different source regions to background O3 levels across North America. This characterization of changing long-range transport contributions is critical for the design and implementation of tighter national air quality standards.
NASA Astrophysics Data System (ADS)
Huang, Z.; Jia, X.; Rubin, M.; Fougere, N.; Gombosi, T. I.; Tenishev, V.; Combi, M. R.; Bieler, A. M.; Toth, G.; Hansen, K. C.; Shou, Y.
2014-12-01
We study the plasma environment of the comet Churyumov-Gerasimenko, which is the target of the Rosetta mission, by performing large scale numerical simulations. Our model is based on BATS-R-US within the Space Weather Modeling Framework that solves the governing multifluid MHD equations, which describe the behavior of the cometary heavy ions, the solar wind protons, and electrons. The model includes various mass loading processes, including ionization, charge exchange, dissociative ion-electron recombination, as well as collisional interactions between different fluids. The neutral background used in our MHD simulations is provided by a kinetic Direct Simulation Monte Carlo (DSMC) model. We will simulate how the cometary plasma environment changes at different heliocentric distances.
Cultural Artifact Detection in Long Wave Infrared Imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric
2017-01-01
Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground-based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.
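A standard concrete instance of the second approach, modeling the background and flagging data that does not fit it, is a Mahalanobis-distance (RX-style) detector. The sketch below is illustrative and is not claimed to be the report's algorithm:

```python
import numpy as np

def rx_anomaly_scores(pixels):
    """RX-style detector: score each pixel vector by its Mahalanobis distance
    from the global background mean and covariance. `pixels` is an
    (n_pixels, n_bands) array; high scores flag data that does not fit the
    background model."""
    mu = pixels.mean(axis=0)
    cov = np.atleast_2d(np.cov(pixels, rowvar=False))
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(cov.shape[0]))  # regularized
    diff = pixels - mu
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
```

Thresholding the scores then separates candidate artifacts from background clutter.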
Land Ecological Security Evaluation of Underground Iron Mine Based on PSR Model
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Chen, Yong; Ruan, Jinghua; Hong, Qiang; Gan, Yong
2018-01-01
Iron ore mining provides an important strategic resource for the national economy, but it also causes serious ecological problems for the environment. This study summarizes the characteristic ecological problems of underground iron mines. Considering the mining process of an underground iron mine, we analyze the connections between mining production, resources, the environment, and the economic background. The paper proposes a land ecological security evaluation system and method for underground iron mines based on the Pressure-State-Response (PSR) model. An application to the Chengchao iron mine demonstrates the method's effectiveness and provides promising guidance for land ecological security evaluation.
Postglacial rebound with a non-Newtonian upper mantle and a Newtonian lower mantle rheology
NASA Technical Reports Server (NTRS)
Gasperini, Paolo; Yuen, David A.; Sabadini, Roberto
1992-01-01
A composite rheology is employed, consisting of both linear and nonlinear creep mechanisms connected by a 'transition' stress. Background stress due to geodynamical processes is included. For models with a non-Newtonian upper mantle overlying a Newtonian lower mantle, the temporal responses of the displacements can reproduce those of Newtonian models. The average effective viscosity profile under the ice load at the end of deglaciation turns out to be the crucial factor governing mantle relaxation. This can explain why simple Newtonian rheology has been successful in fitting uplift data over formerly glaciated regions.
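In a composite rheology of this kind, the total creep strain rate is commonly taken as the sum of a linear (Newtonian) term and a power-law (non-Newtonian) term, with the 'transition' stress being the stress at which the two mechanisms contribute equally. A dimensionless sketch (illustrative constants and function names, not the paper's calibrated values):

```python
def composite_strain_rate(stress, eta_linear, prefactor, n=3.0):
    """Total creep strain rate as the sum of a linear (Newtonian) term,
    stress / (2 * eta_linear), and a power-law (non-Newtonian) term,
    prefactor * stress**n. All quantities are dimensionless here."""
    return stress / (2.0 * eta_linear) + prefactor * stress**n

def effective_viscosity(stress, eta_linear, prefactor, n=3.0):
    """Effective viscosity of the composite: stress / (2 * total strain rate).
    At low stress it approaches eta_linear; at high stress the power-law
    mechanism dominates and the effective viscosity drops."""
    return stress / (2.0 * composite_strain_rate(stress, eta_linear,
                                                 prefactor, n))
```

With eta_linear = 0.5, prefactor = 1 and n = 3, the two mechanisms contribute equally at unit stress, which plays the role of the transition stress.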
Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation
2010-01-01
Carrier, Matthew J.; Ngodock, Hans
"... (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is ..."
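The idea behind diffusion-based correlation modeling is that applying a few steps of a diffusion operator to a delta function produces a smooth, Gaussian-like correlation shape; solving the diffusion implicitly (backward Euler) keeps this stable for any step size. A 1-D sketch under these assumptions (a dense solve for clarity; operational implementations would use tridiagonal or spectral solvers):

```python
import numpy as np

def implicit_diffusion(field, kappa_dt, n_steps=1):
    """Apply n_steps of implicit (backward Euler) 1-D diffusion:
    (I - kappa_dt * L) x_new = x_old, with L the 1-D Laplacian stencil
    (Dirichlet boundaries). Acting on a delta function, this produces a
    smooth, correlation-like shape and is stable for any step size."""
    n = len(field)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = np.eye(n) - kappa_dt * L
    x = field.astype(float)
    for _ in range(n_steps):
        x = np.linalg.solve(A, x)   # implicit step: solve rather than multiply
    return x
```

Applying this to a unit impulse gives a symmetric, positive, peaked response, which is the building block of the correlation operator.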
EvoBuild: A Quickstart Toolkit for Programming Agent-Based Models of Evolutionary Processes
NASA Astrophysics Data System (ADS)
Wagh, Aditi; Wilensky, Uri
2018-04-01
Extensive research has shown that one of the benefits of programming to learn about scientific phenomena is that it facilitates learning about mechanisms underlying the phenomenon. However, using programming activities in classrooms is associated with costs such as requiring additional time to learn to program or students needing prior experience with programming. This paper presents a class of programming environments that we call quickstart: Environments with a negligible threshold for entry into programming and a modest ceiling. We posit that such environments can provide benefits of programming for learning without incurring associated costs for novice programmers. To make this claim, we present a design-based research study conducted to compare programming models of evolutionary processes with a quickstart toolkit with exploring pre-built models of the same processes. The study was conducted in six seventh grade science classes in two schools. Students in the programming condition used EvoBuild, a quickstart toolkit for programming agent-based models of evolutionary processes, to build their NetLogo models. Students in the exploration condition used pre-built NetLogo models. We demonstrate that although students came from a range of academic backgrounds without prior programming experience, and all students spent the same number of class periods on the activities including the time students took to learn programming in this environment, EvoBuild students showed greater learning about evolutionary mechanisms. We discuss the implications of this work for design research on programming environments in K-12 science education.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
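For context, the classical Lomb-Scargle periodogram that Part 1 extends can be computed directly for irregularly sampled data with SciPy. A minimal sketch (synthetic series and all parameter values are illustrative):

```python
import numpy as np
from scipy.signal import lombscargle

# Irregularly sampled sinusoid plus noise, mimicking a palaeoclimate series.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))       # irregular sampling times
f_true = 0.2                                    # cycles per time unit
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                                   # remove the mean before LS

freqs = np.linspace(0.01, 1.0, 500)             # trial frequencies (cycles)
pgram = lombscargle(t, y, 2 * np.pi * freqs)    # lombscargle wants rad/unit
f_peak = freqs[np.argmax(pgram)]                # recovered dominant frequency
```

The wavelet scalogram of the paper plays the same role in the time-frequency plane, with the CARMA noise model replacing the usual white/red-noise null.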
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks
2010-01-01
Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows us to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation and shows that the proposed method allows us to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862
Matsuyama, Ryota; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya
2018-01-01
Background A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The previously immune fraction among the refugees could not be explicitly estimated, so we conducted an uncertainty analysis of the basic reproduction number, R0. Methods A renewal process model was devised to estimate R0 and the ascertainment rate of cases; the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty of the initially immune fraction, we employed a Latin Hypercube sampling (LHS) method. Results R0 ranged from 4.7 to 14.8, with a median estimate of 7.2. R0 was positively correlated with the ascertainment rate. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Discussion The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that vaccination coverage of 86% must be achieved to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp, in which background health status is poorly quantified. PMID:29629244
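The LHS step and the headline numbers can be sketched as follows: stratified sampling of the uncertain initially immune fraction with scipy.stats.qmc (the [0.1, 0.6] range is an illustrative assumption, not the paper's prior), plus the standard herd-immunity threshold 1 - 1/R0, which for the median R0 of 7.2 reproduces the quoted 86% coverage:

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of the initially immune fraction. The uniform
# range [0.1, 0.6] is an illustrative assumption, not the paper's prior.
sampler = qmc.LatinHypercube(d=1, seed=42)
unit = sampler.random(n=1000)                  # one point per 1/1000 stratum
immune_frac = qmc.scale(unit, l_bounds=[0.1], u_bounds=[0.6]).ravel()

# Herd-immunity threshold for the median estimate R0 = 7.2: the quoted
# critical vaccination coverage of about 86% follows from 1 - 1/R0.
R0 = 7.2
critical_coverage = 1.0 - 1.0 / R0             # approx. 0.861
```

Unlike plain Monte Carlo, LHS guarantees exactly one sample in each equal-probability stratum, which stabilizes the uncertainty analysis for a given sample size.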
Fröhlich, Christina; Paarmann, Kristin; Steffen, Johannes; Stenzel, Jan; Krohn, Markus; Heinze, Hans-Jochen; Pahnke, Jens
2013-03-01
Alzheimer's disease (AD) is by far the most common neurodegenerative disease. AD is histologically characterized not only by extracellular senile plaques and vascular deposits consisting of β-amyloid (Aβ) but also by accompanying neuroinflammatory processes involving the brain's microglia. The role of the microglia remains controversial; current evidence favors a protective function in disease progression. Recent findings by different research groups highlighted the importance of strain-specific and mitochondria-specific genomic variations in mouse models of cerebral β-amyloidosis. Here, we summarize our previously presented data and add new results that draw attention to strain-specific genomic alterations in the setting of APP transgenes. We present data from APP-transgenic mice in the commonly used C57BL/6J and FVB/N genomic backgrounds and show a direct influence on the kinetics of Aβ deposition and the activity of resident microglia. Plaque size, plaque deposition rate and the total amount of Aβ are highest in C57BL/6J mice as compared to the FVB/N genomic background, which can be explained at least partially by a reduced microglia activity towards amyloid deposits in the C57BL/6J strain.
Two-sample discrimination of Poisson means
NASA Technical Reports Server (NTRS)
Lampton, M.
1994-01-01
This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
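The conditional binomial construction described above is straightforward to implement. The sketch below is a hypothetical illustration rather than the paper's lookup-table implementation: it computes an exact two-sided significance by summing the binomial terms no more probable than the observed partition, and the function names and the two-sided tail convention are our assumptions.

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_two_sample_pvalue(a, b, exposure_a, exposure_b):
    """Exact two-sided p-value for the null hypothesis that counts a and b
    share a common Poisson arrival process, conditioning on N = a + b so
    that A ~ Binomial(N, f) with partition fraction f set by the exposures."""
    n = a + b
    f = exposure_a / (exposure_a + exposure_b)
    p_obs = binom_pmf(a, n, f)
    # Two-sided tail: all partitions no more probable than the observed one.
    return sum(binom_pmf(k, n, f) for k in range(n + 1)
               if binom_pmf(k, n, f) <= p_obs + 1e-12)

p_equal = poisson_two_sample_pvalue(5, 5, 1.0, 1.0)   # balanced counts
p_skew = poisson_two_sample_pvalue(15, 1, 1.0, 1.0)   # strongly unbalanced
```

For repeated use the tail sum would be replaced by the inverted cumulative binomial, as the paper does with its lookup tables.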
Pan, Guangbo; Xu, Youpeng; Yu, Zhihui; Song, Song; Zhang, Yuan
2015-05-01
Maintaining the health of river ecosystems is an essential ecological and environmental guarantee for regional sustainable development and one of the basic objectives of water resource management. With rapid urbanization, river health is deteriorating, especially in urban areas. River health evaluation is a complex process involving various natural and social components. Eight eco-hydrological indicators were selected to establish an evaluation system, and the variation of river health status against the background of urbanization was explored using an entropy-weight and matter-element model. The comprehensive correlative degrees of urban river health in Huzhou City in 2001, 2006 and 2010 were then calculated. The results indicated that the river health status of the study area was trending toward a pathological state, and that limiting factors (such as Shannon's diversity index and the agroforestry output growth rate) played an important role in river health. The variation of the maximum correlative degree could be classified into stationary, deterioration, deterioration-to-improvement, and improvement-to-deterioration status. River health deteriorated severely against the background of urbanization. Copyright © 2015 Elsevier Inc. All rights reserved.
Spatiotemporal models for the simulation of infrared backgrounds
NASA Astrophysics Data System (ADS)
Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang
1992-09-01
It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources; consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: given a real infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first- and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, with histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first-order statistics of the original image onto the synthetic one; the overall model is thus a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database had to include images of all types of backgrounds obtained at all times of the day and all times of the year, its size would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background.
This approach can greatly reduce the number of actual IR backgrounds that are required to produce a sufficiently accurate mathematical model for synthesizing a similar IR background for different times of the day. Original and synthetic IR backgrounds will be presented. Previous research in simulating IR backgrounds was performed by Strenzwilk, et al., Botkin, et al., and Rapp. The most recent work of Strenzwilk, et al. was based on the use of one-dimensional ARMA models for synthesizing the images. Their results were able to retain the global statistical and spectral behavior of the original image, but the synthetic image was not visually very similar to the original. The research presented in this paper is the result of an attempt to improve upon their results, and represents a significant improvement in quality over previously obtained results.
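The linear-system-plus-memoryless-nonlinearity structure of the model can be sketched in a few lines. This is an illustrative toy, not the authors' code: the kernel, image size, and the use of FFT-based circular convolution for the linear stage are our assumptions, and a random gamma field stands in for a real IR background image.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(kernel, excitation, reference):
    # Linear stage: FIR kernel applied by circular convolution via the FFT.
    H = np.fft.fft2(kernel, s=excitation.shape)
    field = np.real(np.fft.ifft2(H * np.fft.fft2(excitation)))
    # Memoryless nonlinearity: impose the reference image's first-order
    # statistics by rank-matching (exact histogram specification).
    order = np.argsort(field, axis=None)
    out = np.empty(field.size)
    out[order] = np.sort(reference, axis=None)
    return out.reshape(field.shape)

reference = rng.gamma(2.0, 1.0, size=(32, 32))   # stand-in for a real IR image
excitation = rng.normal(size=(32, 32))           # excitation function
kernel = np.outer([1, 2, 1], [1, 2, 1]) / 16.0   # small smoothing FIR kernel
synthetic = synthesize(kernel, excitation, reference)
```

The rank-matching step is an exact histogram specification, so the synthetic image reproduces the first-order statistics of the reference by construction, while the FIR kernel shapes its spectrum and edge sharpness.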
Detecting aseismic strain transients from seismicity data
Llenos, A.L.; McGuire, J.J.
2011-01-01
Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize these anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence model that describes aftershock sequences well and the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that models both aftershock activity and changes in background seismicity rate. We implement this model into a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M ≥ 1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
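The combined model evaluates, at any time, a background rate plus the aftershock contributions of all past events. As a hedged sketch (the parameter names and values are illustrative, and the rate-and-state coupling and spatial dependence are omitted), the ETAS part of such a conditional intensity looks like:

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m0=0.0):
    """ETAS conditional intensity at time t: background rate mu plus an
    Omori-Utsu power-law contribution from each prior event (t_i, m_i)."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

events = [(0.0, 3.0)]   # one M3 event at t = 0 (illustrative)
r1 = etas_intensity(1.0, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1)
r10 = etas_intensity(10.0, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1)
```

In the paper's method the background term mu is itself time-varying and is the quantity inverted, via data assimilation, for the underlying stressing rate.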
Inviscid Limit for Damped and Driven Incompressible Navier-Stokes Equations in ℝ²
NASA Astrophysics Data System (ADS)
Ramanah, D.; Raghunath, S.; Mee, D. J.; Rösgen, T.; Jacobs, P. A.
2007-08-01
Experiments to demonstrate the use of the background-oriented schlieren (BOS) technique in hypersonic impulse facilities are reported. BOS uses a simple optical set-up consisting of a structured background pattern, an electronic camera with a high shutter speed and a high intensity light source. The visualization technique is demonstrated in a small reflected shock tunnel with a Mach 4 conical nozzle, nozzle supply pressure of 2.2 MPa and nozzle supply enthalpy of 1.8 MJ/kg. A 20° sharp circular cone and a model of the MUSES-C re-entry body were tested. Images captured were processed using PIV-style image analysis to visualize variations in the density field. The shock angle on the cone measured from the BOS images agreed with theoretical calculations to within 0.5°. Shock standoff distances could be measured from the BOS image for the re-entry body. Preliminary experiments are also reported in higher enthalpy facilities where flow luminosity can interfere with imaging of the background pattern.
Quasi-normal modes from non-commutative matrix dynamics
NASA Astrophysics Data System (ADS)
Aprile, Francesco; Sanfilippo, Francesco
2017-09-01
We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single trace observables, we isolate the lowest quasi-normal mode, and we determine its frequencies as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.
Preston, Stephen D.; Alexander, Richard B.; Woodside, Michael D.
2011-01-01
The U.S. Geological Survey (USGS) recently completed assessments of stream nutrients in six major regions extending over much of the conterminous United States. SPARROW (SPAtially Referenced Regressions On Watershed attributes) models were developed for each region to explain spatial patterns in monitored stream nutrient loads in relation to human activities and natural resources and processes. The model information, reported by stream reach and catchment, provides contrasting views of the spatial patterns of nutrient source contributions, including those from urban (wastewater effluent and diffuse runoff from developed land), agricultural (farm fertilizers and animal manure), and specific background sources (atmospheric nitrogen deposition, soil phosphorus, forest nitrogen fixation, and channel erosion).
Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi
2017-01-01
Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types.We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO–PB Model which uses repeated survey occupancy data and also incorporates detection probability.The Integrated Model's performance was evaluated, using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior.PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. 
Unlike conventional SDMs, which have a restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial scale while still maintaining the underlying relationship between abundance and area.
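A minimal sketch of such an integrated likelihood can make the construction concrete. This is our own simplified illustration, not the authors' implementation: the log-linear intensity, the fixed known detection probability, and the quadrature approximation of the point-process integral are all assumptions.

```python
import math

def log_intensity(x, beta):
    """Log-linear species intensity: log lambda(s) = beta . x(s)."""
    return sum(b * xi for b, xi in zip(beta, x))

def joint_loglik(beta, pb_points, quad_points, quad_w, so_sites):
    """Joint log-likelihood of an integrated SO-PB model sharing one
    intensity surface. PB term: inhomogeneous Poisson point process, with
    the spatial integral approximated at weighted quadrature points.
    SO term: repeated-visit detection histories at surveyed sites."""
    ll = sum(log_intensity(x, beta) for x in pb_points)
    ll -= sum(w * math.exp(log_intensity(x, beta))
              for x, w in zip(quad_points, quad_w))
    det = 0.8   # assumed-known per-visit detection probability
    for x, area, n_visits, n_det in so_sites:
        lam = math.exp(log_intensity(x, beta))
        psi = 1.0 - math.exp(-lam * area)   # probability the site is occupied
        if n_det > 0:
            ll += (math.log(psi) + n_det * math.log(det)
                   + (n_visits - n_det) * math.log(1.0 - det))
        else:
            # Either unoccupied, or occupied but never detected.
            ll += math.log((1.0 - psi) + psi * (1.0 - det) ** n_visits)
    return ll

val = joint_loglik([0.5],
                   pb_points=[[1.0], [0.5]],
                   quad_points=[[0.0], [1.0]], quad_w=[0.5, 0.5],
                   so_sites=[([1.0], 1.0, 3, 2), ([0.0], 1.0, 3, 0)])
```

Maximising this joint objective over beta fits both data sources simultaneously; in practice the detection probability would itself be estimated from the repeated visits rather than fixed.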
Galileo spacecraft autonomous attitude determination using a V-slit star scanner
NASA Technical Reports Server (NTRS)
Mobasser, Sohrab; Lin, Shuh-Ren
1991-01-01
The autonomous attitude determination system of the Galileo spacecraft, consisting of a radiation-hardened star scanner and a processing algorithm, is presented. The algorithms applied in this system are sequential star identification and attitude estimation. The star scanner model is reviewed in detail, and the flight software parameters that must be updated frequently during flight, due to degradation of the scanner response and changes in the star background, are identified.
Aircraft vulnerability analysis by modeling and simulation
NASA Astrophysics Data System (ADS)
Willers, Cornelius J.; Willers, Maria S.; de Waal, Alta
2014-10-01
Infrared missiles pose a significant threat to civilian and military aviation. ManPADS missiles are especially dangerous in the hands of rogue and undisciplined forces. Yet, not all the launched missiles hit their targets; the miss being either attributable to misuse of the weapon or to missile performance restrictions. This paper analyses some of the factors affecting aircraft vulnerability and demonstrates a structured analysis of the risk and aircraft vulnerability problem. The aircraft-missile engagement is a complex series of events, many of which are only partially understood. Aircraft and missile designers focus on the optimal design and performance of their respective systems, often testing only in a limited set of scenarios. Most missiles react to the contrast intensity, but the variability of the background is rarely considered. Finally, the vulnerability of the aircraft depends jointly on the missile's performance and the doctrine governing the missile's launch. These factors are considered in a holistic investigation. The view direction, altitude, time of day, sun position, latitude/longitude and terrain determine the background against which the aircraft is observed. Especially high gradients in sky radiance occur around the sun and on the horizon. This paper considers uncluttered background scenes (uniform terrain and clear sky) and presents examples of background radiance at all view angles across a sphere around the sensor. A detailed geometrical and spatially distributed radiometric model is used to model the aircraft. This model provides the signature at all possible view angles across the sphere around the aircraft. The signature is determined in absolute terms (no background) and in contrast terms (with background). It is shown that the background significantly affects the contrast signature as observed by the missile sensor. 
A simplified missile model is constructed by defining the thrust and mass profiles, maximum seeker tracking rate, maximum guidance acceleration and seeker sensitivity. For the purpose of this investigation the aircraft is equipped with conventional pyrotechnic decoy flares and the missile has no counter-countermeasure means (security restrictions on open publication). This complete simulation is used to calculate the missile miss distance when the missile is launched from different locations around the aircraft. The miss distance data is then presented graphically, showing miss distance (aircraft vulnerability) as a function of launch direction and range. The aircraft vulnerability graph accounts for aircraft and missile characteristics, but does not account for missile deployment doctrine. A Bayesian network is constructed to fuse the doctrinal rules with the aircraft vulnerability data. The Bayesian network then provides the capability to evaluate the combined risk of missile launch and aircraft vulnerability. It is shown in this paper that it is indeed possible to predict aircraft vulnerability to missile attack through comprehensive modelling in a holistic process. By using the appropriate real-world models, this approach is used to evaluate the effectiveness of specific countermeasure techniques against specific missile threats. The use of a Bayesian network provides the means to fuse simulated performance data with more abstract doctrinal rules to provide a realistic assessment of the aircraft vulnerability.
Engineering stromal-epithelial interactions in vitro for ...
Background: Crosstalk between epithelial and stromal cells drives the morphogenesis of ectodermal organs during development and promotes normal mature adult epithelial tissue function. Epithelial-mesenchymal interactions (EMIs) have been examined using mammalian models, ex vivo tissue recombination, and in vitro co-cultures. Although these approaches have elucidated signaling mechanisms underlying morphogenetic processes and adult mammalian epithelial tissue function, they are limited by the availability of human tissue, low throughput, and human developmental or physiological relevance. Objectives: Bioengineering strategies to promote EMIs using human epithelial and mesenchymal cells have enabled the development of human in vitro models of adult epidermal and glandular tissues. In this review, we describe recent bioengineered models of human epithelial tissue and organs that can instruct the design of organotypic models of human developmental processes. Methods: We reviewed current bioengineering literature and here describe how bioengineered EMIs have enabled the development of human in vitro epithelial tissue models. Discussion: Engineered models to promote EMIs have recapitulated the architecture, phenotype, and function of adult human epithelial tissue, and similar engineering principles could be used to develop models of developmental morphogenesis. We describe how bioengineering strategies including bioprinting and spheroid culture could be implemented to
Background to new entrant safety fitness assurance process
DOT National Transportation Integrated Search
2000-03-01
This report presents the results of background research leading to the development of a New Entrant Safety Fitness Assurance Process, a prequalification and monitoring program for motor carriers entering interstate service. The New Entrant Safety Fit...
To BG or not to BG: Background Subtraction for EIT Coronal Loops
NASA Astrophysics Data System (ADS)
Beene, J. E.; Schmelz, J. T.
2003-05-01
One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.
Bruner-Tran, Kaylon L.; Mokshagundam, Shilpa; Herington, Jennifer L.; Ding, Tianbing; Osteen, Kevin G.
2018-01-01
Background: Although it has been more than a century since endometriosis was initially described in the literature, understanding the etiology and natural history of the disease has been challenging. However, the broad utility of murine and rat models of experimental endometriosis has enabled the elucidation of a number of potentially targetable processes which may otherwise promote this disease. Objective: To review a variety of studies utilizing rodent models of endometriosis to illustrate their utility in examining mechanisms associated with development and progression of this disease. Results: Use of rodent models of endometriosis has provided a much broader understanding of the risk factors for the initial development of endometriosis, the cellular pathology of the disease and the identification of potential therapeutic targets. Conclusion: Although there are limitations with any animal model, the variety of experimental endometriosis models that have been developed has enabled investigation into numerous aspects of this disease. Thanks to these models, our understanding of the early processes of disease development, the role of steroid responsiveness, inflammatory processes and the peritoneal environment has been advanced. More recent models have begun to shed light on how epigenetic alterations contribute to the molecular basis of this disease as well as the multiple comorbidities which plague many patients. Continued developments of animal models which aid in unraveling the mechanisms of endometriosis development provide the best opportunity to identify therapeutic strategies to prevent or regress this enigmatic disease.
Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling
NASA Astrophysics Data System (ADS)
McCullough, C.; Bettadpur, S. V.
2016-12-01
Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
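The effect of a full observation covariance can be illustrated with a generalized least-squares toy problem. This is our illustration, not the CSR processing chain: the AR(1)-style covariance and the two-parameter linear model are assumptions standing in for the GRACE observation equations.

```python
import numpy as np

rng = np.random.default_rng(1)

def gls(A, y, C):
    """Estimate x and its formal covariance for y = A x + e, e ~ N(0, C),
    by whitening with the Cholesky factor of the observation covariance C."""
    L = np.linalg.cholesky(C)
    Aw = np.linalg.solve(L, A)      # whitened design matrix
    yw = np.linalg.solve(L, y)      # whitened observations
    cov = np.linalg.inv(Aw.T @ Aw)  # formal parameter covariance
    return cov @ Aw.T @ yw, cov

# Toy problem: a linear trend observed in colored (AR(1)-like) noise.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.column_stack([np.ones(n), t])
i = np.arange(n)
C = 0.1 * 0.9 ** np.abs(np.subtract.outer(i, i))  # long-period correlated errors
y = A @ np.array([1.0, 2.0]) + np.linalg.cholesky(C) @ rng.normal(size=n)

x_gls, cov_gls = gls(A, y, C)                # full covariance (colored noise)
x_ols, cov_ols = gls(A, y, 0.1 * np.eye(n))  # naive white-noise assumption
```

Whitening by the Cholesky factor of C reduces the correlated-noise problem to ordinary least squares; comparing cov_gls with cov_ols shows how the independent-Gaussian assumption misstates the formal errors.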
Zhang, Jian-Hua; Böhme, Johann F
2007-11-01
In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.
3D molecular models of whole HIV-1 virions generated with cellPACK
Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Study of the CP-violating effects with the gg → H → τ⁺τ⁻ process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belyaev, N. L., E-mail: nbelyaev@cern.ch; Konoplich, R. V.
Study of the gg → H → τ⁺τ⁻ process was performed at Monte Carlo level within the framework of searching for CP-violating effects. The sensitivity of the chosen observables to the CP-parity of the Higgs boson was demonstrated for hadronic 1-prong τ decays (τ± → π±, ρ±). Monte Carlo samples for the gg → H → τ⁺τ⁻ process were generated, including parton hadronisation to final-state particles. This generation was performed for the Standard Model Higgs boson, the pseudoscalar Higgs boson, the Z → τ⁺τ⁻ background, and mixed CP-states of the Higgs boson.
2010-01-01
Background The challenge today is to develop a modeling and simulation paradigm that integrates structural, molecular and genetic data for a quantitative understanding of physiology and behavior of biological processes at multiple scales. This modeling method requires techniques that maintain a reasonable accuracy of the biological process and also reduce the computational overhead. This objective motivates the use of new methods that can transform the problem from energy- and affinity-based modeling to information-theory-based modeling. To achieve this, we transform all dynamics within the cell into a random event time, which is specified through an information domain measure like a probability distribution. This allows us to use the “in silico” stochastic event-based modeling approach to find the molecular dynamics of the system. Results In this paper, we present the discrete event simulation concept using the example of the signal transduction cascade triggered by extra-cellular Mg2+ concentration in the two-component PhoPQ regulatory system of Salmonella Typhimurium. We also present a model to compute the information domain measure of the molecular transport process by estimating the statistical parameters of the inter-arrival time between molecules/ions coming to a cell receptor as an external signal. This model transforms the diffusion process into the information theory measure of stochastic event completion time to get the distribution of the Mg2+ departure events. Using these molecular transport models, we next study the in-silico effects of this external trigger on the PhoPQ system. Conclusions Our results illustrate the accuracy of the proposed diffusion models in explaining the molecular/ionic transport processes inside the cell. Also, the proposed simulation framework can incorporate the stochasticity in cellular environments to a certain degree of accuracy.
We expect that this scalable simulation platform will be able to model more complex biological systems with reasonable accuracy to understand their temporal dynamics. PMID:21143785
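The core idea above, replacing explicit diffusion dynamics with the statistics of random event completion times, can be sketched with exponential inter-arrival times at a receptor. The memoryless distribution is an illustrative assumption; the paper estimates the inter-arrival distribution from a diffusion model rather than positing it.

```python
import random

random.seed(42)

def simulate_arrivals(rate, n_events):
    """Draw successive molecule arrival times at a receptor, treating each
    inter-arrival time as a random event completion time (exponential
    for a memoryless arrival process)."""
    t, times = 0.0, []
    for _ in range(n_events):
        t += random.expovariate(rate)   # next event completion time
        times.append(t)
    return times

arrivals = simulate_arrivals(rate=5.0, n_events=2000)
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
mean_gap = sum(gaps) / len(gaps)   # should approach 1/rate = 0.2
```

A discrete-event simulator then advances directly from one arrival to the next instead of integrating molecular trajectories, which is where the reduction in computational overhead comes from.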
Image Discrimination Models Predict Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1994-01-01
Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
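The simplest of the three predictors above, the RMS difference between target and background-only images, is easy to state in code. The sketch below uses synthetic 64x64 arrays with an arbitrary "vehicle" patch and mixing proportion (not the study's stimuli), and also shows the target-mixing step used to construct the stimuli.

```python
import numpy as np

def mix_images(background, original, alpha):
    """Blend the original (vehicle present) into the vehicle-removed
    background in proportion alpha, as in the stimulus construction."""
    return (1.0 - alpha) * background + alpha * original

def rms_difference(target, background):
    """Root-mean-square difference of the digital target and
    background-only images: the simplest detectability predictor."""
    diff = target.astype(float) - background.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

rng = np.random.default_rng(0)
background = rng.uniform(0, 255, size=(64, 64))
original = background.copy()
original[20:40, 20:40] += 40.0        # a hypothetical "vehicle" patch
target = mix_images(background, original, alpha=0.5)
print(rms_difference(target, background))
```

Lowering alpha weakens the target and shrinks the RMS predictor, matching the intuition that a more diluted mix is less detectable.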
A four phase development model for integrated care services in the Netherlands
Minkman, Mirella MN; Ahaus, Kees TB; Huijsman, Robbert
2009-01-01
Background Multidisciplinary and interorganizational arrangements for the delivery of coherent integrated care are being developed in a large number of countries. Although there are many integrated care programs worldwide, the process of developing these programs and interorganizational collaboration is described in the literature only to a limited extent. The purpose of this study is to explore how local integrated care services are developed in the Netherlands, and to conceptualize and operationalize a development model of integrated care. Methods The research is based on an expert panel study followed by a two-part questionnaire, designed to identify the development process of integrated care. Essential elements of integrated care, which were developed in a previous Delphi and Concept Mapping Study, were analyzed in relation to the development process of integrated care. Results Integrated care development can be characterized by four developmental phases: the initiative and design phase; the experimental and execution phase; the expansion and monitoring phase; and the consolidation and transformation phase. Different elements of integrated care have been identified in the various developmental phases. Conclusion The findings provide a descriptive model of the development process that integrated care services can undergo in the Netherlands. The findings have important implications for integrated care services, which can use the model as an instrument to reflect on their current practices. The model can be used to help identify improvement areas in practice. The model provides a framework for developing evaluation designs for integrated care arrangements. Further research is recommended to test the developed model in practice and to add international experiences. PMID:19261176
Analysis of x-ray hand images for bone age assessment
NASA Astrophysics Data System (ADS)
Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.
1990-09-01
In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying each of a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation, and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and a few false edges on the background. Hence the use of all available knowledge about the problem domain is needed to build a more general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and extracting the features needed in the classification process.
Compton Reflection in AGN with Simbol-X
NASA Astrophysics Data System (ADS)
Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.
2009-05-01
AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations of these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near the central engine in AGN, and will give insight into which sources are responsible for the Cosmic X-ray background at energies >20 keV.
[Attentional impairment after traumatic brain injury: assessment and rehabilitation].
Ríos-Lago, M; Muñoz-Céspedes, J M; Paúl-Lapedriza, N
Attention disorders are a major problem after traumatic brain injury, underlying deficits in other cognitive functions and in everyday activities, hindering the rehabilitation process and the possibility of return to work. Functional neuroimaging and neuropsychological assessment have informed theoretical models that consider attention a complex, non-unitary process. Although there are conceptual difficulties, it seems possible to establish a theoretical background to better define attentional impairments and to guide the rehabilitation process. The aim of the present study is to review some of the most important components involved in the assessment and rehabilitation of attentional impairments. We also propose an appropriate model for the design of individualized rehabilitation programs. Lastly, different approaches to rehabilitation are reviewed. Neuropsychological assessment should provide valuable strategies to better design cognitive rehabilitation programs. It is necessary to establish a link between basic and applied neuropsychology, in order to optimize the treatments for traumatic brain injury patients. It is also emphasized that well-defined cognitive targets and skills are required, given that nonspecific stimulation of cognitive processes (pseudorehabilitation) has been shown to be unsuccessful.
Nutrigenomics at the Interface of Aging, Lifespan, and Cancer Prevention
Riscuta, Gabriela
2016-01-01
The percentage of elderly people with associated age-related health deterioration, including cancer, has been increasing for decades. Among age-related diseases, the incidence of cancer has grown substantially, in part because of the overlap of some molecular pathways between cancer and aging. Studies with model organisms suggest that aging and age-related conditions are manipulable processes that can be modified by both genetic and environmental factors, including dietary habits. Variations in genetic backgrounds likely lead to differential responses to dietary changes and account for some of the inconsistencies found in the literature. The intricacies of the aging process, coupled with the interrelational role of bioactive food components on gene expression, make this review a complex undertaking. Nevertheless, intriguing evidence suggests that dietary habits can manipulate the aging process and/or its consequences and potentially may have unprecedented health benefits. The present review focuses on 4 cellular events: telomerase activity, bioenergetics, DNA repair, and oxidative stress. These processes are linked to both aging and cancer risk, and their alteration in animal models by selected food components is evident. PMID:27558581
Scarinci, Isabel C; Moore, Artisha; Benjamin, Regina; Vickers, Selwyn; Shikany, James; Fouad, Mona
2017-02-01
We describe the formulation and implementation of a participatory evaluation plan for three Transdisciplinary Collaborative Centers for Health Disparities Research funded by the National Institute on Minority Health and Health Disparities. Although different in scope of work, all three centers share a common goal of establishing sustainable centers in health disparities science in three priority areas - social determinants of health, men's health research, and health policy research. The logic model guides the process, impact, and outcome evaluation. Emphasis is placed on process evaluation in order to establish a "blueprint" that can guide other efforts as well as assure that activities are being implemented as planned. We have learned three major lessons in this process: (1) Significant engagement, participation, and commitment of all involved are critical for the evaluation process; (2) Having a "roadmap" (logic model) and "directions" (evaluation worksheets) is instrumental in getting members from different backgrounds to follow the same path; and (3) Participation of the evaluator in the leadership and core meetings facilitates continuous feedback. Copyright © 2016 Elsevier Ltd. All rights reserved.
Interagency Collaborative Team Model for Capacity Building to Scale-Up Evidence-Based Practice
Hurlburt, Michael; Aarons, Gregory A; Fettes, Danielle; Willging, Cathleen; Gunderson, Lara; Chaffin, Mark J
2015-01-01
Background System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children’s service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. Methods We describe the ICT model and present preliminary qualitative results from use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Results Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Conclusions Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit. PMID:27512239
Issues and progress in determining background ozone and particle concentrations
NASA Astrophysics Data System (ADS)
Pinto, J. P.
2011-12-01
Exposure to ambient ozone is associated with a variety of health outcomes ranging from mild breathing discomfort to mortality. For the purpose of health risk and policy assessments EPA evaluates the anthropogenic increase in ozone above background concentrations and has defined the North American (NA) background concentration of O3 as that which would occur in the U.S. in the absence of anthropogenic emissions of precursors in the U.S., Canada, and Mexico. Monthly average NA background ozone has been used to evaluate health risks, but EPA and state air quality managers must also estimate day specific ozone background levels for high ozone episodes as part of urban scale photochemical modeling efforts to support ozone regulatory programs. The background concentration of O3 is of more concern than other air pollutants because it typically represents a much larger fraction of observed O3 than do the backgrounds of other criteria pollutants (particulate matter (PM), CO, NO2, SO2). NA background cannot be determined directly from ambient monitoring data because of the influence of NA precursor emissions on formation of ozone within NA. Instead, estimates of NA background O3 have been based on GEOS-Chem using simulations in which NA anthropogenic precursor emissions are zeroed out. Thus, modeled NA background O3 includes contributions from natural sources of precursors (including CH4, NMVOCs, NOx, and CO) everywhere in the world, anthropogenic sources of precursors outside of NA, and downward transport of O3 from the stratosphere. Although monitoring data cannot determine NA background directly, measurements by satellites, aircraft, ozonesondes and surface monitors have proved to be highly useful for identifying sources of background O3 and for evaluating the performance of the GEOS-Chem model. 
Model-simulated NA background concentrations are strong functions of location and season, with large inter-day variability; values increase with elevation, are higher in spring than in summer, and tend to be highest in the Intermountain West during spring. Estimates of annual average NA background, and of other background definitions that have been considered, will be presented. Issues associated with modeling background concentrations for both health-risk assessments and for episodic regulatory air quality programs will be discussed, and proposals for new atmospheric measurements and model improvements needed to quantify background contributions to ozone more accurately will also be presented. The views expressed are those of the author and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
Novikov, V E; Ponamareva, N S
2007-01-01
The hydration (content of total, bound, and free water) and the activity of lipid peroxidation (LPO) processes in the brain have been studied in rats during the dynamics of traumatic brain injury (TBI). It is established that aminothiol-based antihypoxants such as bemithyl and amthizol in a dose of 25 mg/kg alleviate the changes induced by TBI. In particular, the drugs decrease the content of total and free water, increase the level of bound water, and inhibit LPO intensity in the brain. The effect of the drugs is more pronounced on the 4th and 7th day after TBI model induction.
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
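The equivalence described above, where the conditional (background-stratified) fit matches unconditional Poisson regression with explicit stratum indicators, can be checked numerically. Within each stratum, conditioning on the stratum total turns the Poisson likelihood into a multinomial one in which the stratum intercepts cancel. The sketch below uses invented toy cohort data and a log-linear rate model rate = exp(a_s + b*x); it illustrates the idea and is not the authors' software.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy cohort: two background strata; person-time PT, dose x, case counts y.
strata = np.array([0, 0, 1, 1])
PT = np.array([1000.0, 1200.0, 800.0, 900.0])
x = np.array([0.0, 1.0, 0.0, 1.0])
y = np.array([12, 25, 8, 20])

def conditional_nll(b):
    """Multinomial likelihood within each stratum: the stratum-specific
    intercepts drop out, leaving only the dose coefficient b."""
    nll = 0.0
    for s in np.unique(strata):
        m = strata == s
        w = PT[m] * np.exp(b * x[m])
        nll -= np.sum(y[m] * np.log(w / w.sum()))
    return nll

def unconditional_profile_nll(b):
    """Full Poisson likelihood with stratum intercepts profiled out
    analytically (each stratum's fitted total equals its observed total)."""
    nll = 0.0
    for s in np.unique(strata):
        m = strata == s
        w = PT[m] * np.exp(b * x[m])
        mu = y[m].sum() * w / w.sum()
        nll -= np.sum(y[m] * np.log(mu) - mu)
    return nll

b_cond = minimize_scalar(conditional_nll, bounds=(-3, 3), method="bounded").x
b_full = minimize_scalar(unconditional_profile_nll, bounds=(-3, 3), method="bounded").x
```

The two objectives differ only by terms that do not involve b, so the point estimates coincide, which is the computational advantage when the number of strata is large.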
The Development of a Qualitative Dynamic Attribute Value Model for Healthcare Institutes
Lee, Wan-I
2010-01-01
Background: Understanding customers has become an urgent topic for increasing competitiveness. The purpose of the study was to develop a qualitative dynamic attribute value model that provides insight into customers' values for healthcare institute managers, starting with an initial open-ended questionnaire survey to select participants purposefully. Methods: A total of 427 questionnaires were distributed in two hospitals in Taiwan (one district hospital with 635 beds and one academic hospital with 2495 beds) and 419 questionnaires were returned within nine weeks. Qualitative in-depth interviews were then applied to explore customers' perspectives on value and to build a model of partial differential equations. Results: This study identifies nine categories of value, including cost, equipment, physician background, physician care, environment, timing arrangement, relationship, brand image and additional value, to construct an objective network for customer value and a qualitative dynamic attribute value model, where the network shows the process by which value develops into loyalty via its effect on customer satisfaction, customer relationship, customer loyalty and healthcare service. Conclusion: One set predicts the customer relationship based on commitment, including service quality, communication and empathy. At the same time, customer loyalty, based on trust, involves buzz marketing, brand and image. Customer value in the current instance is useful for traversing original customer attributes and identifying customers across different service shares. PMID:23113034
Model independent inference of the expansion history and implications for the growth of structure
NASA Astrophysics Data System (ADS)
Joudaki, Shahab; Kaplinghat, Manoj; Keeley, Ryan; Kirkby, David
2018-06-01
We model the expansion history of the Universe as a Gaussian process and find constraints on the dark energy density and its low-redshift evolution using distances inferred from the Luminous Red Galaxy and Lyman-alpha data sets of the Baryon Oscillation Spectroscopic Survey, supernova data from the Joint Light-Curve Analysis sample, cosmic microwave background data from the Planck satellite, and a local measurement of the Hubble parameter from the Hubble Space Telescope (H0). Our analysis shows that the cosmic microwave background, Luminous Red Galaxy, Lyman-alpha, and Joint Light-Curve Analysis data are consistent with each other and with a ΛCDM cosmology, but the H0 data are inconsistent at moderate significance. Including the presence of dark radiation does not alleviate the H0 tension in our analysis. While some of these results have been noted previously, the strength here lies in that we do not assume a particular cosmological model. We calculate the growth of the gravitational potential in General Relativity corresponding to these general expansion histories and show that they are well approximated by Ω_m^0.55 given the current precision. We assess the prospects for upcoming surveys to measure deviations from ΛCDM using this model-independent approach.
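The Ω_m^0.55 approximation to the growth rate mentioned above is straightforward to evaluate for a flat ΛCDM expansion history. A minimal sketch (Ω_m0 = 0.3 is an illustrative assumption, not a fitted value from the paper):

```python
def omega_m(z, omega_m0=0.3):
    """Matter density parameter at redshift z in flat LambdaCDM."""
    a3 = (1.0 + z) ** 3
    return omega_m0 * a3 / (omega_m0 * a3 + (1.0 - omega_m0))

def growth_rate(z, omega_m0=0.3, gamma=0.55):
    """f(z) = dlnD/dlna approximated as Omega_m(z)**gamma;
    gamma ~ 0.55 is the General Relativity value."""
    return omega_m(z, omega_m0) ** gamma

# f is ~0.52 today and approaches 1 in the matter-dominated era.
print(growth_rate(0.0), growth_rate(9.0))
```

The same two functions work for any tabulated Ω_m(z) from a reconstructed expansion history, which is how a model-independent H(z) feeds into a growth prediction.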
Behavior analysis of video object in complicated background
NASA Astrophysics Data System (ADS)
Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang
2016-10-01
This paper aims to achieve robust behavior recognition of video objects in complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and method of behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.
NASA Astrophysics Data System (ADS)
Liu, L.; Huang, Q.; Wang, Y.
2012-12-01
Variations in the strength and frequency shift of the Schumann resonance (SR) of the electromagnetic (EM) field prior to some significant earthquakes have been reported by a number of researchers. As a robust physical phenomenon that constantly exists in the resonant cavity formed by the lithosphere-atmosphere-ionosphere system, irregular variations in SR parameters can naturally be regarded as potential precursory observables for forecasting earthquake occurrences. Schumann resonance of the EM field between the lithosphere and the ionosphere occurs because the space between the surface of the Earth and the conductive ionosphere acts as a closed waveguide. The cavity is naturally excited by electric currents generated by lightning. SR is the principal background in the electromagnetic spectrum at extremely low frequencies (ELF) between 3 and 69 Hz. We simulated the EM field in the lithosphere-ionosphere waveguide with a 2-dimensional (2D), cylindrical whole-earth model using a hybrid pseudo-spectral and finite difference time domain method. Considering seismogenesis as a fully coupled seismoelectric process, we simulate the seismic and EM waves in this 2D model. The excitation of SR in the background EM field is generated by the electric-current impulses due to lightning thunderstorms within the lowest 10 kilometers of the atmosphere. The diurnal variation and the latitude dependence of ion concentration in the ionosphere are included in the model. After the SR has reached the steady state, the impulse generated by the seismogenic process (pre-, co- and post-seismic) in the crust is introduced to assess the possible precursory effects on SR strength and frequency.
The modeling results explain the observed fact of why SR has a much more sensitive response to continental earthquakes, and much less response to oceanic events; the reason is simply due to the shielding effect of the conductive ocean that prevents effective radiation of the seismoelectric signals into the lithosphere- ionosphere waveguide.; Resonance cavity model formed by the lithosphere-atmosphere-ionosphere system (illustrative, not to the scale of the Earth).
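For orientation on the resonance frequencies discussed above: an idealized lossless Earth-ionosphere cavity has a closed-form eigenfrequency spectrum, while the real, lossy cavity resonates noticeably lower (the observed fundamental is near 7.8 Hz versus roughly 10.6 Hz for the ideal cavity). A minimal sketch of the textbook ideal-cavity formula:

```python
import math

C = 2.998e8   # speed of light, m/s
A = 6.371e6   # mean Earth radius, m

def schumann_mode(n):
    """Eigenfrequency of mode n for an ideal lossless Earth-ionosphere
    cavity: f_n = c / (2*pi*a) * sqrt(n*(n+1)). The lossy real cavity
    shifts these values downward."""
    return C / (2.0 * math.pi * A) * math.sqrt(n * (n + 1))

print([round(schumann_mode(n), 1) for n in (1, 2, 3)])
```

Full waveguide simulations like the 2D FDTD model described above are needed precisely because this ideal formula ignores ionospheric conductivity, its diurnal variation, and excitation by lightning.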
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Teyssier, Romain
2010-11-01
We present a set of cosmological simulations with radiative transfer in order to model the reionization history of the universe from z = 18 down to z = 6. Galaxy formation and the associated star formation are followed self-consistently with gas and dark matter dynamics using the RAMSES code, while radiative transfer is performed as a post-processing step using a moment-based method with the M1 closure relation in the ATON code. The latter has been ported to a multiple Graphics Processing Unit (GPU) architecture using the CUDA language together with the MPI library, resulting in an overall acceleration that allows us to tackle radiative transfer problems at a significantly higher resolution than previously reported: 10243 + 2 levels of refinement for the hydrodynamic adaptive grid and 10243 for the radiative transfer Cartesian grid. We reach a typical acceleration factor close to 100× when compared to the CPU version, allowing us to perform 1/4 million time steps in less than 3000 GPU hr. We observe good convergence properties between our different resolution runs for various volume- and mass-averaged quantities such as neutral fraction, UV background, and Thomson optical depth, as long as the effects of finite resolution on the star formation history are properly taken into account. We also show that the neutral fraction depends on the total mass density, in a way close to the predictions of photoionization equilibrium, as long as the effects of self-shielding are included in the background radiation model. Although our simulation suite has reached unprecedented mass and spatial resolution, we still fail to reproduce the z ~ 6 constraints on the neutral fraction of hydrogen and the intensity of the UV background. In order to account for unresolved density fluctuations, we have modified our chemistry solver with a simple clumping factor model.
Using our most spatially resolved simulation (12.5 Mpc h -1 with 10243 particles) to calibrate our subgrid model, we have resimulated our largest box (100 Mpc h -1 with 10243 particles) with the modified chemistry, successfully reproducing the observed level of neutral hydrogen in the spectra of high-redshift quasars. We however did not reproduce the average photoionization rate inferred from the same observations. We argue that this discrepancy could be partly explained by the fact that the average radiation intensity and the average neutral fraction depend on different regions of the gas density distribution, so that one quantity cannot be simply deduced from the other.
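The statement above that the neutral fraction tracks photoionization equilibrium, boosted by a clumping factor, can be sketched from the standard equilibrium balance Γ n_HI = α_B C n_e n_HII for pure hydrogen. The numbers below are order-of-magnitude IGM values chosen for illustration, not quantities calibrated to these simulations.

```python
import math

def neutral_fraction(n_H, Gamma, clumping=1.0, alpha_B=2.6e-13):
    """Equilibrium HI fraction x solving Gamma*x = alpha_B*C*n_H*(1-x)**2
    (pure hydrogen, case-B recombination coefficient at ~1e4 K, cgs units).
    Returns the physical root of the quadratic, which lies in [0, 1]."""
    A = alpha_B * clumping * n_H
    b = 2.0 * A + Gamma
    return (b - math.sqrt(b * b - 4.0 * A * A)) / (2.0 * A)

# In the highly ionized limit x ~ alpha_B*C*n_H/Gamma, so the neutral
# fraction scales roughly linearly with the clumping factor C.
x1 = neutral_fraction(n_H=7e-5, Gamma=1e-12)               # C = 1
x10 = neutral_fraction(n_H=7e-5, Gamma=1e-12, clumping=10.0)
```

This is why a subgrid clumping model raises the simulated neutral hydrogen level toward the quasar-spectrum constraints without changing the resolved density field.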
NASA Astrophysics Data System (ADS)
Pinty, Bernard; Andredakis, Ioannis; Clerici, Marco; Kaminski, Thomas; Taberner, Malcolm; Plummer, Stephen
2011-01-01
We present results from the application of an inversion method conducted using MODIS derived broadband visible and near-infrared surface albedo products. This contribution is an extension of earlier efforts to optimally retrieve land surface fluxes and associated two-stream model parameters based on the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The discussion focuses on products (based on the mean and one-sigma values of the Probability Distribution Functions (PDFs)) obtained during the summer and winter and highlight specific issues related to snowy conditions. This paper discusses the retrieved model parameters including the effective Leaf Area Index (LAI), the background brightness and the scattering efficiency of the vegetation elements. The spatial and seasonal changes exhibited by these parameters agree with common knowledge and underscore the richness of the high quality surface albedo data sets. At the same time, the opportunity to generate global maps of new products, such as the background albedo, underscores the advantages of using state of the art algorithmic approaches capable of fully exploiting accurate satellite remote sensing datasets. The detailed analyses of the retrieval uncertainties highlight the central role and contribution of the LAI, the main process parameter to interpret radiation transfer observations over vegetated surfaces. The posterior covariance matrix of the uncertainties is further exploited to quantify the knowledge gain from the ingestion of MODIS surface albedo products. The estimation of the radiation fluxes that are absorbed, transmitted and scattered by the vegetation layer and its background is achieved on the basis of the retrieved PDFs of the model parameters. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties.
This matrix is propagated to the radiation fluxes via the model’s Jacobian matrix of first derivatives. A definite asset of the JRC-TIP lies in its capability to control and ultimately relax a number of assumptions that are often implicit in traditional approaches. These features greatly help understand the discrepancies between the different data sets of land surface properties and fluxes that are currently available. Through a series of selected examples, the inverse procedure implemented in the JRC-TIP is shown to be robust, reliable and compliant with large scale processing requirements. Furthermore, this package ensures the physical consistency between the set of observations, the two-stream model parameters and radiation fluxes. It also documents the retrieval of associated uncertainties. The knowledge gained from the availability of remote sensing surface albedo products can be expressed in quantitative terms using a simple metric. This metric helps identify the geographical locations and periods of the year where the remote sensing products fail in reducing the uncertainty on the process model parameters as can be specified from current knowledge.
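The uncertainty-propagation step described above (a posterior parameter covariance pushed through the model's Jacobian of first derivatives) is a simple congruence product, C_flux = J C_param J^T. A toy numeric sketch with an invented 2x2 parameter covariance and Jacobian (not JRC-TIP values):

```python
import numpy as np

# Hypothetical 2-parameter example: posterior covariance of, say,
# (effective LAI, background albedo) propagated to two radiation fluxes.
C_param = np.array([[0.04, 0.01],
                    [0.01, 0.02]])    # posterior parameter covariance
J = np.array([[-0.30, 0.50],
              [0.20, 0.40]])          # d(flux_k)/d(param_j), first derivatives

C_flux = J @ C_param @ J.T            # propagated flux covariance
sigma_flux = np.sqrt(np.diag(C_flux)) # one-sigma flux uncertainties
```

The off-diagonal terms of C_flux carry the flux-flux correlations induced by shared parameter uncertainty, which is why the full matrix, not just the diagonal, matters when comparing derived flux products.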
Investigating Galactic Structure with COBE/DIRBE and Simulation
NASA Technical Reports Server (NTRS)
Cohen, Martin
1999-01-01
In this work I applied the current version of the SKY model of the point source sky to the interpretation of the diffuse all-sky emission observed by COBE/DIRBE (Cosmic Background Explorer Satellite/Diffuse Infrared Background Experiment). The goal was to refine the SKY model using the all-sky DIRBE maps of the Galaxy, so that a search could be made for an isotropic cosmic background. A "Faint Source Model" (FSM) was constructed to remove Galactic foreground stars from the ZSMA products. The FSM mimics SKY version 1, but it was inadequate for seeking cosmic background emission because of the sizeable residual emission left in the ZSMA products after this starlight subtraction. At this point I can only conclude that such models are currently inadequate to reveal a cosmic background. Even SKY5 yields the same disappointing result.
Time distributions of solar energetic particle events: Are SEPEs really random?
NASA Astrophysics Data System (ADS)
Jiggens, P. T. A.; Gabriel, S. B.
2009-10-01
Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models, with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and of future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present an analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compare them with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. The inherent Poisson assumptions of stationarity and event independence are investigated; it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory, indicating that events are not completely random: activity levels vary even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment be modified to incorporate long-term event dependency and short-term system memory.
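A simple diagnostic behind this kind of waiting-time analysis is the coefficient of variation (CV) of inter-event times: a stationary Poisson process gives exponential waiting times with CV ≈ 1, while clustered ("memoryful") sequences give CV > 1. The sketch below is our own minimal illustration of that diagnostic, not the Lévy-process fitting used in the paper; the cluster mixture rates are arbitrary.

```python
import random
import statistics

def waiting_time_cv(event_times):
    """Coefficient of variation of inter-event waiting times.
    A stationary Poisson process gives CV ~ 1; clustered event
    sequences give CV > 1."""
    waits = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    return statistics.stdev(waits) / statistics.mean(waits)

random.seed(1)

# Homogeneous Poisson process: exponential waiting times.
t, poisson_times = 0.0, []
for _ in range(5000):
    t += random.expovariate(1.0)
    poisson_times.append(t)
cv_poisson = waiting_time_cv(poisson_times)

# Clustered process: mostly short intra-cluster waits, occasional
# long inter-cluster gaps (an arbitrary two-rate mixture).
t, clustered_times = 0.0, []
for _ in range(5000):
    if random.random() < 0.9:
        t += random.expovariate(10.0)   # short intra-cluster wait
    else:
        t += random.expovariate(0.1)    # long inter-cluster gap
    clustered_times.append(t)
cv_clustered = waiting_time_cv(clustered_times)
```

Applied to an SEPE catalog, a CV well above 1 (as in the clustered sequence here) is one signature of the event clustering and "memory" the paper reports.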
Hand motion segmentation against skin colour background in breast awareness applications.
Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick
2004-01-01
Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. A hand is an essential tool used in breast self-examination, which needs to be detected and analysed during the process of breast palpation. However, the background of a woman's moving hand is her breast that has the same or similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by the lighting or brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without utilising any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify a breast but not a hand. First, an input image is transformed to YCbCr colour space, which is less sensitive to the lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Secondly, motion cues are used to segment the hand motion against the detected baseline. Desired segmentation results have been achieved and the robustness of this algorithm is demonstrated in this paper.
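The core of the approach above — transforming to YCbCr and scoring pixels with a 2-D Gaussian on the chrominance plane — can be sketched as follows. The BT.601 conversion is standard; the Gaussian mean and standard deviation values are illustrative placeholders, not the parameters of the paper (which would be learned from training data).

```python
import math

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_likelihood(r, g, b, mean=(103.0, 153.0), std=(8.0, 10.0)):
    """2-D Gaussian skin-colour likelihood on the (Cb, Cr) plane.
    Luma Y is ignored, which is what makes the score relatively
    insensitive to lighting/brightness.  mean/std are placeholders."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    zcb = (cb - mean[0]) / std[0]
    zcr = (cr - mean[1]) / std[1]
    return math.exp(-0.5 * (zcb * zcb + zcr * zcr))

skin_score = skin_likelihood(210, 150, 120)   # warm skin-like tone
blue_score = skin_likelihood(30, 60, 200)     # saturated blue
```

Thresholding this likelihood yields the skin mask (breast region) used as the baseline; motion cues then separate the similarly-coloured hand from it.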
von Stosch, Moritz; Davy, Steven; Francois, Kjell; Galvanauskas, Vytautas; Hamelink, Jan-Martijn; Luebbert, Andreas; Mayer, Martin; Oliveira, Rui; O'Kennedy, Ronan; Rice, Paul; Glassey, Jarka
2014-06-01
This report highlights the drivers, challenges, and enablers of hybrid modeling applications in the biopharmaceutical industry. It is a summary of an expert panel discussion among European academics and industrialists with relevant scientific and engineering backgrounds. Hybrid modeling is viewed in its broader sense, namely as the integration of different knowledge sources, in the form of parametric and nonparametric models, into a hybrid semi-parametric model, for instance the integration of fundamental and data-driven models. A brief description of the current state of the art and industrial uptake of the methodology is provided. The report concludes with a number of recommendations to facilitate further developments and a wider industrial application of this modeling approach. These recommendations are limited to further exploiting the benefits of this methodology within process analytical technology (PAT) applications in the biopharmaceutical industry. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mesoscopic Model — Advanced Simulation of Microforming Processes
NASA Astrophysics Data System (ADS)
Geißdörfer, Stefan; Engel, Ulf; Geiger, Manfred
2007-04-01
Continued miniaturization in many fields of forming technology implies the need for a better understanding of the effects occurring when scaling down from the conventional macroscopic scale to the microscale. At microscale, the material can no longer be regarded as a homogeneous continuum because of the presence of only a few grains in the deformation zone. This leads to a change in the material behaviour, resulting, among other effects, in a large scatter of forming results. A correlation between the integral flow stress of the workpiece and the scatter of the process factors on the one hand, and the mean grain size and its standard deviation on the other hand, has been observed in experiments. Conventional FE-simulation of scaled-down processes is not able to consider the observed size-effects, such as the actual reduction of the flow stress, the increasing scatter of the process factors and a local material flow different from that obtained in the case of macroparts. For that reason, a new simulation model has been developed that takes these size-effects into account. The present paper deals with the theoretical background of the new mesoscopic model and its characteristics, such as synthetic grain structure generation and the calculation of micro material properties based on conventional material properties. The simulation model is verified by carrying out various experiments with different mean grain sizes and grain structures but the same geometrical dimensions of the workpiece.
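Why few grains in the deformation zone produce a large scatter of the integral flow stress can be illustrated with a toy Monte Carlo sketch. Note the substitution: we use a generic Hall-Petch-type per-grain strength and a truncated normal grain-size distribution purely for illustration — these are not the calibrated relations of the mesoscopic model, and all parameter values are invented.

```python
import random
import statistics

def sample_flow_stress(n_grains, mean_d, std_d, sigma0=150.0, k=300.0):
    """Integral flow stress of a deformation zone containing only
    n_grains grains, each with its own Hall-Petch-type strength
    sigma = sigma0 + k / sqrt(d).  sigma0 [MPa], k [MPa*um^0.5] and
    the grain-size distribution are illustrative assumptions."""
    stresses = []
    for _ in range(n_grains):
        # grain size in um, truncated below to keep sizes physical
        d = max(5.0, random.gauss(mean_d, std_d))
        stresses.append(sigma0 + k / d ** 0.5)
    return statistics.mean(stresses)

random.seed(0)
# Micro part: ~4 grains in the zone; macro part: ~400 grains.
few  = [sample_flow_stress(4,   25.0, 8.0) for _ in range(400)]
many = [sample_flow_stress(400, 25.0, 8.0) for _ in range(400)]
scatter_few  = statistics.stdev(few)
scatter_many = statistics.stdev(many)
```

With only a handful of grains, the part-to-part scatter of the averaged flow stress is roughly an order of magnitude larger than in the macro case (it shrinks like 1/sqrt(n_grains)), which is the size-effect the mesoscopic model is built to capture.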
Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)
1996-01-01
This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.
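The threshold differences above are reported in decibels; with the usual 20·log10 amplitude convention for contrast, they translate into contrast ratios as sketched below (the conversion itself is standard; the specific 4 dB and 6 dB values are the ones quoted in the abstract).

```python
import math

def contrast_db(c1, c2):
    """Difference between two contrast thresholds in decibels,
    using the 20*log10 amplitude convention."""
    return 20.0 * math.log10(c1 / c2)

# A threshold 4 dB lower corresponds to a contrast ratio of ~0.63;
# a threshold 6 dB higher corresponds to a ratio of ~2.0.
ratio_4db_lower  = 10 ** (-4 / 20)
ratio_6db_higher = 10 ** (6 / 20)
```

So the runway background roughly halved the required contrast in the fovea and roughly doubled it in the parafovea relative to the uniform background.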
Global modeling of wall material migration following boronization in NSTX-U
NASA Astrophysics Data System (ADS)
Nichols, J. H.; Jaworski, M. A.; Skinner, C. H.; Bedoya, F.; Scotti, F.; Soukhanovskii, V. A.; Schmid, K.
2017-10-01
NSTX-U operated in 2016 with graphite plasma facing components, periodically conditioned with boron to improve plasma performance. Following each boronization, spectroscopic diagnostics generally observed a decrease in oxygen influx from the walls, and an in-vacuo material probe (MAPP) observed a corresponding decrease in surface oxygen concentration at the lower divertor. However, oxygen levels tended to return to a pre-boronization state following repeated plasma exposure. This behavior is interpretively modeled using the WallDYN mixed-material migration code, which couples local erosion and deposition processes with plasma impurity transport in a non-iterative, self-consistent manner that maintains overall material balance. A spatially inhomogeneous model of the thin films produced by the boronization process is presented. Plasma backgrounds representative of NSTX-U conditions are reconstructed from a combination of NSTX-U and NSTX datasets. Low-power NSTX-U fiducial discharges, which led to less apparent surface degradation than normal operations, are also modeled with WallDYN. Likely mechanisms driving the observed evolution of surface oxygen are examined, as well as remaining discrepancies between model and experiment and potential improvements to the model. Work supported by US DOE contract DE-AC02-09CH11466.
Three-dimensional laser radar modeling
NASA Astrophysics Data System (ADS)
Steinvall, Ove K.; Carlsson, Tomas
2001-09-01
Laser radars have the unique capability to give intensity and full 3-D images of an object, and Doppler lidars can give velocity and vibration characteristics of an object. These systems have many civilian and military applications, such as terrain modelling, depth sounding, object detection and classification, and object positioning. In order to derive the signal waveform from the object, one has to account for the laser pulse time characteristics; media effects such as atmospheric attenuation, turbulence and scattering properties; the target shape and reflection (BRDF); and speckle noise, together with the receiver and background noise. Finally, the type of waveform processing (peak detection, leading edge, etc.) is needed to model the sensor output to be compared with observations. We have developed a computer model of the performance of a 3-D laser radar. We give examples of signal waveforms generated for different model targets, calculated by integrating the laser beam profile in space and time over the target, including reflection characteristics, under different speckle and turbulence conditions. The results will be of help when designing and using new laser radar systems. The importance of different types of signal processing of the waveform for fulfilling performance goals is also shown.
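The waveform-generation idea can be sketched in its simplest form: superpose copies of the outgoing pulse, one per reflecting surface element, then apply peak detection. This minimal sketch assumes a Gaussian pulse and two idealized surfaces, and omits the atmospheric, speckle and noise terms the paper models; all numbers are illustrative.

```python
import math

def gaussian_pulse(t, t0, width):
    """Idealized Gaussian laser pulse centered at time t0 [ns]."""
    return math.exp(-0.5 * ((t - t0) / width) ** 2)

def waveform(times, surfaces, pulse_width):
    """Received waveform as the superposition of the outgoing pulse
    reflected by each (delay, reflectance) surface element; noise,
    speckle and media effects are omitted in this sketch."""
    return [sum(r * gaussian_pulse(t, delay, pulse_width)
                for delay, r in surfaces)
            for t in times]

# Two model surfaces 20 ns apart (e.g. a front edge and a weaker
# ground return behind it), sampled on a 0-100 ns grid.
surfaces = [(40.0, 1.0), (60.0, 0.5)]
times = [i * 0.1 for i in range(1000)]
w = waveform(times, surfaces, pulse_width=2.0)

# Peak detection: report the time of the strongest return.
peak_time = times[max(range(len(w)), key=w.__getitem__)]
```

Peak detection picks the stronger front surface here; a leading-edge detector applied to the same waveform would report an earlier range, which is why the choice of waveform processing matters for performance.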
NASA Astrophysics Data System (ADS)
Beretta, Giordano
2007-01-01
The words in a document are often supported, illustrated, and enriched by visuals. When color is used, some of it is used to define the document's identity and is therefore strictly controlled in the design process. The result of this design process is a "color specification sheet," which must be created for every background color. While in traditional publishing there are only a few backgrounds, in variable data publishing a larger number of backgrounds can be used. We present an algorithm that nudges the colors in a visual to be distinct from a background while preserving the visual's general color character.
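The nudging idea can be sketched as pushing a visual's color away from the background along the line joining them until a minimum separation is reached. This is only a geometric sketch in raw RGB with an arbitrary distance threshold — the paper's algorithm works with perceptual color differences and aims to preserve the visual's color character, which plain RGB distance does not capture.

```python
import math

def nudge_color(color, background, min_dist=60.0):
    """Push `color` away from `background` along the line joining
    them until their Euclidean RGB distance reaches min_dist.
    min_dist and the RGB metric are illustrative assumptions."""
    d = math.dist(color, background)
    if d >= min_dist:
        return color               # already distinct: leave unchanged
    if d == 0:                     # identical colors: no direction, brighten
        return tuple(min(255.0, c + min_dist) for c in color)
    scale = min_dist / d
    return tuple(max(0.0, min(255.0, b + (c - b) * scale))
                 for c, b in zip(color, background))

# A grey visual on a slightly darker grey background gets pushed apart.
nudged = nudge_color((120.0, 120.0, 120.0), (100.0, 100.0, 100.0))
```

In variable data publishing this check would run once per background in the color specification sheet, nudging only the visuals that fall too close to that particular background.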
The background in the experiment Gerda
NASA Astrophysics Data System (ADS)
Agostini, M.; Allardt, M.; Andreotti, E.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barnabé Heider, M.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; Cossavella, F.; Demidova, E. V.; Domula, A.; Egorov, V.; Falkenstein, R.; Ferella, A.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gotti, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Guthikonda, K. K.; Hampel, W.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Ioannucci, L.; Csáthy, J. Janicskó; Jochum, J.; Junker, M.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Liu, X.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Machado, A. A.; Majorovits, B.; Maneschg, W.; Nemchenok, I.; Nisi, S.; O'Shaughnessy, C.; Palioselitis, D.; Pandola, L.; Pelczar, K.; Pessina, G.; Pullia, A.; Riboldi, S.; Sada, C.; Salathe, M.; Schmitt, C.; Schreiner, J.; Schulz, O.; Schwingenheuer, B.; Schönert, S.; Shevchik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Strecker, H.; Tarka, M.; Ur, C. A.; Vasenko, A. A.; Volynets, O.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.
2014-04-01
The GERmanium Detector Array (Gerda) experiment at the Gran Sasso underground laboratory (LNGS) of INFN is searching for neutrinoless double beta (0νββ) decay of ⁷⁶Ge. The signature of the signal is a monoenergetic peak at 2039 keV, the Qββ value of the decay. To avoid bias in the signal search, the present analysis does not consider all those events that fall in a 40 keV wide region centered around Qββ. The main parameters needed for the analysis are described. A background model was developed to describe the observed energy spectrum. The model contains several contributions that are expected on the basis of material screening or that are established by the observation of characteristic structures in the energy spectrum. The model predicts a flat energy spectrum for the blinding window around Qββ, with a background index ranging from 17.6 to 23.8 × 10⁻³ cts/(keV kg yr). A part of the data not considered before has been used to test whether the predictions of the background model are consistent. The observed number of events in this energy region is consistent with the background model. The background at Qββ is dominated by close sources, mainly due to ⁴²K, ²¹⁴Bi, ²²⁸Th, ⁶⁰Co and α-emitting isotopes from the ²²⁶Ra decay chain. The individual fractions depend on the assumed locations of the contaminants. It is shown that, after removal of the known peaks, the energy spectrum can be fitted in an energy range of 200 keV around Qββ with a constant background. This gives a background index consistent with the full model, with uncertainties of the same size.
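The background index quoted above is just counts normalized by energy window, detector mass and live time, so converting between an index and an expected count is a one-line calculation. The exposure and window values below are illustrative, not Gerda's.

```python
def background_index(counts, window_kev, exposure_kg_yr):
    """Background index in cts/(keV kg yr): observed counts divided
    by energy window width times exposure (mass x live time)."""
    return counts / (window_kev * exposure_kg_yr)

def expected_counts(bi, window_kev, exposure_kg_yr):
    """Expected background counts in an energy window for a given
    background index."""
    return bi * window_kev * exposure_kg_yr

# Illustrative numbers only: a background index of 1.8e-2
# cts/(keV kg yr) over a 10 kg yr exposure predicts ~1.8 counts
# in a 10 keV window around the signal region.
n = expected_counts(1.8e-2, 10.0, 10.0)
```

Suppressing this number is what drives the search sensitivity: the fewer background counts expected at Qββ, the smaller the signal peak that can be excluded or claimed.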
Dual processing model of medical decision-making
2012-01-01
Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that a physician's beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may dramatically drop below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. 
Conclusions We have developed the first dual processing model of medical decision-making, which has the potential to enrich the current medical decision-making field, still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive versus default-interventionist theories). PMID:22943520
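The classic expected-utility threshold underlying this work is worth making concrete: treat when the probability of disease exceeds harm/(benefit + harm). The sketch below shows that prescriptive (system II) threshold plus a deliberately toy multiplicative "system I" modulation — the multiplicative form and the gamma parameter are our own illustration, not the paper's actual dual-processing equations.

```python
def treatment_threshold(benefit, harm):
    """Prescriptive (system II) therapeutic threshold from the
    classic threshold model: treat when P(disease) exceeds
    harm / (benefit + harm)."""
    return harm / (benefit + harm)

def biased_threshold(benefit, harm, gamma=1.0):
    """Toy illustration of a system-I modulation of the acting
    threshold: gamma < 1 lowers it (overtreatment), gamma > 1
    raises it (undertreatment).  The functional form is an
    assumption made for illustration only."""
    return min(1.0, max(0.0, gamma * treatment_threshold(benefit, harm)))

# Benefit 9x the harm -> normative threshold of 10%.
t = treatment_threshold(benefit=9.0, harm=1.0)
```

With benefit 9 and harm 1, system II prescribes treating above a 10% disease probability; any intuitive process that effectively acts at 5% (overtreatment) or 20% (undertreatment) departs from the normative threshold in exactly the directions the abstract describes.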
Shih, Justin; Fanyin-Martin, Ato; Taher, Edris; Chandran, Kartik
2017-01-01
Background. In Ghana, faecal sludge (FS) from on-site sanitation facilities is often discharged untreated into the environment, leading to significant insults to environmental and human health. Anaerobic digestion offers an attractive pathway for FS treatment with the concomitant production of energy in the form of methane. Another innovative option includes separating digestion into acidogenesis (production of volatile fatty acids (VFA)) and methanogenesis (production of methane), which could ultimately facilitate the production of an array of biofuels and biochemicals from the VFA. This work describes the development, implementation and modeling-based analysis of a novel multiphase anaerobic fermentation-digestion process aimed at FS treatment in Kumasi, Ghana. Methods. A pilot-scale anaerobic fermentation process was implemented at the Kumasi Metropolitan Assembly’s Oti Sanitary Landfill Site at Adanse Dompoase. The process consisted of six 10 m reactors in series, which were inoculated with bovine rumen and fed with faecal sludge obtained from public toilets. The performance of the fermentation process was characterized in terms of both aqueous and gaseous variables representing the conversion of influent organic carbon to VFA as well as CH4. Using the operating data, the first-ever process model for FS fermentation and digestion was developed and calibrated, based on the activated sludge model framework. Results and Conclusions. This work represents one of the first systematic efforts at integrated FS characterization and process modeling to enable anaerobic fermentation and digestion of FS. It is shown that owing to pre-fermentation of FS in public septage holding tanks, one could employ significantly smaller digesters (lower capital costs) or increased loading capabilities for FS conversion to biogas or VFA. 
Further, using the first-ever calibrated process model for FS fermentation and digestion presented herein, we expect improved and more mechanistically informed development and application of different process designs and configurations for global FS management practice. PMID:29528044
Matthies, M
2003-04-11
For the prevention of future damage from chemicals at large contaminated sites, all transfer pathways leading to the exposure of man and vulnerable ecosystems have to be taken into account. For organic contaminants, uptake into vegetation is the major entry route into the food chain. Lipophilic substances are taken up by roots but are not translocated with the transpiration stream. Atmospheric background concentrations have a significant impact on foliage contamination due to effective gaseous and particle deposition. Vegetables can also be contaminated after irrigation with contaminated water supplied by groundwater wells. By means of a multicompartment model, the various uptake processes into roots and foliage, as well as the transformation and translocation processes, are described, and the concentration pattern in the edible parts resulting from daily irrigation with methyl-t-butyl ether is simulated. The results demonstrate the advantage of a dynamic multicompartment model over the static environmental quality standard approach in terms of deriving possible exposure reduction measures for organic chemicals.
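The dynamic character of such a multicompartment model can be illustrated with a single-compartment sketch: a constant input flux balanced by first-order loss, integrated forward in time. This is a drastically reduced stand-in for the paper's multicompartment model — the deposition-flux parameterization, all parameter names and all values are illustrative assumptions.

```python
def simulate_foliage(c_air, v_dep, k_loss, days, dt=0.01):
    """One-compartment dynamic uptake sketch: a foliage burden driven
    by a constant atmospheric deposition flux (c_air * v_dep) and a
    first-order loss rate k_loss (lumping transformation and growth
    dilution), integrated with explicit Euler steps."""
    burden = 0.0
    t = 0.0
    while t < days:
        burden += dt * (c_air * v_dep - k_loss * burden)
        t += dt
    return burden

b = simulate_foliage(c_air=1.0, v_dep=0.2, k_loss=0.1, days=200.0)
steady_state = 1.0 * 0.2 / 0.1   # analytic steady state = input / k_loss
```

The transient approach to steady state is exactly what a static environmental quality standard cannot represent, and it is where time-resolved exposure reduction measures (e.g. harvesting intervals after irrigation) can be read off a dynamic model.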
Mean-variance model for portfolio optimization with background risk based on uncertainty theory
NASA Astrophysics Data System (ADS)
Zhai, Jia; Bai, Manying
2018-04-01
The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction costs, based on uncertainty theory. In the portfolio selection problem, the returns of securities and the liquidity of assets are treated as uncertain variables because of incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
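A deterministic toy version of the mean-variance problem with independently additive background risk can be sketched as below. Note the simplifications: ordinary numbers stand in for the paper's uncertain variables, liquidity and transaction costs are dropped, and all return/covariance numbers are invented.

```python
def portfolio_stats(w, means, cov, bg_var):
    """Mean and total variance of terminal wealth when an
    independently additive background risk of variance bg_var is
    added on top of the portfolio (crisp stand-in for the paper's
    uncertainty-theoretic formulation)."""
    mean = sum(wi * mi for wi, mi in zip(w, means))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(w) for j, wj in enumerate(w))
    return mean, var + bg_var

def best_weight(means, cov, bg_var, risk_aversion=3.0, steps=1000):
    """Grid search over two-asset weights maximizing
    mean - risk_aversion * variance."""
    best, best_u = 0.0, float("-inf")
    for k in range(steps + 1):
        w1 = k / steps
        m, v = portfolio_stats((w1, 1 - w1), means, cov, bg_var)
        u = m - risk_aversion * v
        if u > best_u:
            best, best_u = w1, u
    return best

means = (0.08, 0.03)
cov = ((0.04, 0.0), (0.0, 0.01))
w_no_bg = best_weight(means, cov, bg_var=0.0)
w_bg    = best_weight(means, cov, bg_var=0.02)
```

In this stripped-down setting the independent additive background risk only shifts the attainable frontier and leaves the optimal composition unchanged; the interesting effects in the paper arise precisely from the features dropped here (uncertain variables, liquidity, transaction costs).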
Factors influencing the spatial extent of mobile source air pollution impacts: a meta-analysis
Zhou, Ying; Levy, Jonathan I
2007-01-01
Background There has been growing interest among exposure assessors, epidemiologists, and policymakers in the concept of "hot spots", or more broadly, the "spatial extent" of impacts from traffic-related air pollutants. This review attempts to quantitatively synthesize findings about the spatial extent under various circumstances. Methods We include both the peer-reviewed literature and government reports, and focus on four significant air pollutants: carbon monoxide, benzene, nitrogen oxides, and particulate matter (including both ultrafine particle counts and fine particle mass). From the identified studies, we extracted information about significant factors that would be hypothesized to influence the spatial extent within the study, such as the study type (e.g., monitoring, air dispersion modeling, GIS-based epidemiological studies), focus on concentrations or health risks, pollutant under study, background concentration, emission rate, and meteorological factors, as well as the study's implicit or explicit definition of spatial extent. We supplement this meta-analysis with results from some illustrative atmospheric dispersion modeling. Results We found that pollutant characteristics and background concentrations best explained variability in previously published spatial extent estimates, with a modifying influence of local meteorology, once some extreme values based on health risk estimates were removed from the analysis. As hypothesized, inert pollutants with high background concentrations had the largest spatial extent (often demonstrating no significant gradient), and pollutants formed in near-source chemical reactions (e.g., nitrogen dioxide) had a larger spatial extent than pollutants depleted in near-source chemical reactions or removed through coagulation processes (e.g., nitrogen oxide and ultrafine particles). 
Our illustrative dispersion modeling demonstrated the complex interplay of spatial extent definitions, emission rates, background concentrations, and meteorological conditions on spatial extent estimates, even for non-reactive pollutants. Our findings indicate that, provided that a health risk threshold is not imposed, the spatial extent of impact for the mobile sources reviewed in this study is on the order of 100–400 m for elemental carbon or particulate matter mass concentration (excluding background concentration), 200–500 m for nitrogen dioxide, and 100–300 m for ultrafine particle counts. Conclusion First, to allow for meaningful comparisons across studies, it is important to state the definition of spatial extent explicitly, including the comparison method, threshold values, and whether background concentration is included. Second, the observation that the spatial extent is generally within a few hundred meters for highways or city roads demonstrates the need for high-resolution modeling near the source. Finally, our findings emphasize that policymakers should be able to develop reasonable estimates of the "zone of influence" of mobile sources, provided that they can clarify the pollutant of concern, the general site characteristics, and the underlying definition of spatial extent that they wish to utilize. PMID:17519039
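How a "spatial extent" estimate falls out of a dispersion calculation, and why its definition matters, can be sketched with a crude ground-level Gaussian plume. The linear dispersion-sigma coefficients, source strength, wind speed and 5% threshold below are all illustrative assumptions, not the review's inputs, so the resulting distance should not be compared to the 100–500 m ranges above.

```python
import math

def centerline_concentration(x, q=1.0, u=2.0):
    """Ground-level centerline concentration of a ground point
    source under a Gaussian plume, with crude linear dispersion
    sigmas (sigma_y = 0.08x, sigma_z = 0.06x); coefficients are
    illustrative, not fitted to any stability class or site."""
    sy, sz = 0.08 * x, 0.06 * x
    return q / (math.pi * u * sy * sz)

def spatial_extent(threshold_fraction=0.05, x_ref=10.0, x_max=5000.0):
    """Distance at which the source increment falls below a given
    fraction of its value at x_ref -- one explicit spatial extent
    definition among the several compared in the review."""
    c_ref = centerline_concentration(x_ref)
    x = x_ref
    while x < x_max and centerline_concentration(x) > threshold_fraction * c_ref:
        x += 1.0
    return x

extent = spatial_extent()
```

Changing `threshold_fraction`, adding a background concentration to the comparison, or switching to a health-risk threshold each changes `extent`, which is exactly why the review insists that the definition be stated explicitly.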
CERN IT Book Fair 2009 - Special talk by Bjarne Stroustrup: "The Design of C++0x"
Stroustrup, Bjarne
2018-05-24
A draft for a revised ISO C++ standard, C++0x, has been produced. The speaker will present the background of C++, its aims, the standards process (with opinions), some of the guiding design principles (with tiny code examples), and two case studies. The case studies are initialization (a general and uniform syntax and semantics for initializers in all contexts) and concurrency support facilities (memory model, threads, locks, futures).
A Markov Decision Process Model for the Optimal Dispatch of Military Medical Evacuation Assets
2014-03-27
further background on MEDEVAC and provides a review of pertinent literature. Section 3 provides a description of the problem for which we develop our...best medical evacuation system possible, for those who follow in your footsteps. Special thanks goes to my wife and two children for their...order to generate the computational results necessary to make this paper a success. Lastly, I would like to thank the US Army Medical Evacuation
Infrared monitoring of the Space Station environment
NASA Technical Reports Server (NTRS)
Kostiuk, Theodor; Jennings, Donald E.; Mumma, Michael J.
1988-01-01
The measurement and monitoring of infrared emission in the environment of the Space Station has a twofold importance: for the study of the phenomena themselves, and as an aid in planning and interpreting Station-based infrared experiments. Spectral measurements of the infrared component of the spacecraft glow will, along with measurements in other spectral regions, provide data necessary to fully understand and model the physical and chemical processes producing these emissions. The monitoring of the intensity of these emissions will provide background limits for Space Station based infrared experiments and permit the determination of optimum instrument placement and pointing direction. Continuous monitoring of temporal changes in the background radiation (glow) will also permit better interpretation of Station-based infrared earth sensing and astronomical observations. The primary processes producing infrared emissions in the Space Station environment are: (1) gas phase excitations of Station-generated molecules (e.g., CO2, H2O, organics...) by collisions with the ambient flux of mainly O and N2; and (2) molecular excitations and generation of new species by collisions of ambient molecules with Station surfaces. A list of the resulting species, transition energies, excitation cross sections and relevant time constants is provided. The modeled spectrum of the excited species occurs primarily at wavelengths shorter than 8 micrometers. Emissions at longer wavelengths may become important during rocket firings or in the presence of dust.