Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed in experimental studies since reverse correlation came into use. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here at optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
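The key difference between the two models compared in this study is the component superposition rule. A toy contrast, with illustrative shapes, values, and depth order (not the paper's generative model): a linear model adds image components, while an occlusive model lets the nearer component hide the one behind it.

```python
import numpy as np

def linear_superposition(components):
    """Standard sparse-coding/ICA assumption: components add linearly."""
    return np.sum(components, axis=0)

def occlusive_superposition(components, masks):
    """Occlusion: components listed later are nearer and overwrite earlier ones."""
    image = np.zeros_like(components[0])
    for comp, mask in zip(components, masks):
        image = np.where(mask, comp, image)
    return image
```

On overlapping patches the two rules diverge exactly where components overlap, which is the aspect of image statistics the occlusive model is built to capture.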
Phenomenology of wall-bounded Newtonian turbulence.
L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S
2006-01-01
We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains half of the total energy whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+=6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.
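Two of the model's log-law predictions are easy to state numerically. A minimal sketch, using the textbook log-law form and constants (k ≈ 0.41, B ≈ 5.2) purely for illustration:

```python
import numpy as np

def mean_velocity_plus(y_plus, k=0.41, B=5.2):
    """Log-law mean velocity profile in wall units, V+ = (1/k) ln(y+) + B."""
    return np.log(y_plus) / k + B

def energy_partition(total_ke):
    """Predicted split: streamwise 1/2, wall-normal 1/4, cross-stream 1/4."""
    return {"streamwise": 0.5 * total_ke,
            "wall_normal": 0.25 * total_ke,
            "cross_stream": 0.25 * total_ke}

def turbulent_velocity_plus(k=0.41):
    """Predicted turbulent velocity in wall units, v+ = 6 k."""
    return 6.0 * k
```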
A simple rule of thumb for elegant prehension.
Mon-Williams, M; Tresilian, J R
2001-07-10
Reaching out to grasp an object (prehension) is a deceptively elegant and skilled behavior. The movement prior to object contact can be described as having two components, the movement of the hand to an appropriate location for gripping the object, the "transport" component, and the opening and closing of the aperture between the fingers as they prepare to grip the target, the "grasp" component. The grasp component is sensitive to the size of the object, so that a larger grasp aperture is formed for wider objects; the maximum grasp aperture (MGA) is a little wider than the width of the target object and occurs later in the movement for larger objects. We present a simple model that can account for the temporal relationship between the transport and grasp components. We report the results of an experiment providing empirical support for our "rule of thumb." The model provides a simple, but plausible, account of a neural control strategy that has been the center of debate over the last two decades.
MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.
Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R
2009-09-01
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two-modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.
Division of Attention Relative to Response Between Attended and Unattended Stimuli.
ERIC Educational Resources Information Center
Kantowitz, Barry H.
Research was conducted to investigate two general classes of human attention models: early-selection models, which claim that attentional selection precedes memory and meaning-extraction mechanisms, and late-selection models, which posit the reverse. This research involved two components: (1) the development of simple, efficient, computer-oriented…
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator (PO) technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is a natural separation of relaxation and source terms, and it allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance
NASA Technical Reports Server (NTRS)
Woolley, Ryan C.
2014-01-01
The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
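The staging logic behind such a model can be sketched with the ideal rocket equation applied stage by stage. The delta-v budget, specific impulse, and structural fraction below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def stage_mass_ratio(dv, isp, g0=9.80665):
    """Ideal rocket equation: initial/final mass ratio for one burn."""
    return math.exp(dv / (isp * g0))

def two_stage_liftoff_mass(payload, dv_total=4000.0, isp=320.0, eps=0.12):
    """Gross lift-off mass for an even delta-v split; eps is each stage's
    structural mass fraction (dry stage mass / total stage mass)."""
    m = payload
    for _ in range(2):                       # size the upper stage first
        r = stage_mass_ratio(dv_total / 2.0, isp)
        # solve (m + m_stage) / (m + eps * m_stage) = r for the stage mass
        m_stage = m * (r - 1.0) / (1.0 - eps * r)
        m += m_stage
    return m
```

Minimizing lift-off mass in the paper's sense then amounts to optimizing the delta-v split and trajectory losses rather than assuming an even split as done here.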
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
Calibration of a simple and a complex model of global marine biogeochemistry
NASA Astrophysics Data System (ADS)
Kriest, Iris
2017-11-01
The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many of the data relating to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, which suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.
1979-01-01
The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and the computer simulation of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1991-01-01
Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
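The two-component control-law structure described above can be sketched in a few lines. The single-joint plant, the gains, and the fixed (non-adaptive) PID used here are illustrative stand-ins for the patent's adaptive scheme:

```python
class FeedforwardPlusPID:
    """Total command = model-based feedforward + performance-based PID feedback."""

    def __init__(self, kp=8.0, ki=2.0, kd=1.0, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def feedforward(self, q_des_accel, inertia=1.0):
        """Nominal torque from the known part of the manipulator dynamics."""
        return inertia * q_des_accel

    def feedback(self, q_des, q):
        """PID feedback complementing the nominal feedforward signal."""
        err = q_des - q
        self.integral += err * self.dt
        derr = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derr

    def command(self, q_des, q, q_des_accel):
        return self.feedforward(q_des_accel) + self.feedback(q_des, q)
```

The appeal of the split is that modeling errors left over by the feedforward term are absorbed by the feedback term, which is exactly the role the adaptive PID plays in the patented system.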
A reevaluation of the infrared-radio correlation for spiral galaxies
NASA Technical Reports Server (NTRS)
Devereux, Nicholas A.; Eales, Stephen A.
1989-01-01
The infrared-radio correlation has been reexamined for a sample of 237 optically bright spiral galaxies which range from 10^8 to 10^11 solar luminosities in far-infrared luminosity. The slope of the correlation is not unity. A simple model in which dust heating by both star formation and the interstellar radiation field contributes to the far-infrared luminosity can account for the nonunity slope. The model differs from previous two-component models, however, in that the relative contribution of the two components is independent of far-infrared color temperature, but is dependent on the far-infrared luminosity.
Feedback loops and temporal misalignment in component-based hydrologic modeling
NASA Astrophysics Data System (ADS)
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
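The paper's water-sediment test case can be reduced to a toy version: two single-box "components" exchange mass across a shared interface once per time step, mimicking loosely coupled models trading a boundary condition. The geometry and exchange coefficient k are illustrative:

```python
def run_coupled(c_water=1.0, c_sed=0.0, k=0.1, dt=0.1, steps=500):
    """Explicitly coupled diffusion across an interface; returns final concentrations."""
    for _ in range(steps):
        flux = k * (c_water - c_sed)   # interface flux from the shared boundary condition
        c_water -= flux * dt           # water component loses mass...
        c_sed += flux * dt             # ...sediment component gains it
    return c_water, c_sed
```

Because the flux leaves one component exactly as it enters the other, total mass is conserved to rounding error; the mass-balance errors discussed in the paper arise when the two components run on misaligned time steps and the exchanged boundary values must be interpolated.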
The Simple View of Reading: Assessment and Intervention
ERIC Educational Resources Information Center
Roberts, Jenny A.; Scott, Kathleen A.
2006-01-01
The Simple View of Reading (P. B. Gough & W. Tunmer, 1986; W. A. Hoover & P. B. Gough, 1990) provides a 2-component model of reading. Each of these 2 components, decoding and comprehension, is necessary for normal reading to occur. The Simple View of Reading provides a relatively transparent model that can be used by professionals not only to…
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the tasks, and we discuss the application of ICA to multiple subjects' PET images for exploring the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies were merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
Numerical modelling of strain in lava tubes
NASA Astrophysics Data System (ADS)
Merle, Olivier
The strain within lava tubes is described in terms of pipe flow. Strain is partitioned into three components: (a) two simple shear components acting from top to bottom and from side to side of a rectangular tube in transverse section; and (b) a pure shear component corresponding to vertical shortening in a deflating flow and horizontal compression in an inflating flow. The sense of shear of the two simple shear components is reversed on either side of a central zone of no shear. Results of numerical simulations of strain within lava tubes reveal a concentric pattern of flattening planes in section normal to the flow direction. The central node is a zone of low strain, which increases toward the lateral borders. Sections parallel to the flow show obliquity of the flattening plane to the flow axis, constituting an imbrication. The strain ellipsoid is generally of plane strain type, but can be of constriction or flattening type if thinning (i.e. deflating flow) or thickening (i.e. inflating flow) is superimposed on the simple shear regime. The strain pattern obtained from numerical simulation is then compared with several patterns recently described in natural lava flows. It is shown that the strain pattern revealed by AMS studies or crystal preferred orientations is remarkably similar to the numerical simulation. However, some departure from the model is found in AMS measurements. This may indicate inherited strain recorded during early stages of the flow or some limitation of the AMS technique.
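The sign reversal of the simple shear components about a central zone of no shear follows directly from a pipe-flow velocity profile. A minimal sketch using a Poiseuille-style profile across a tube of half-height h (an assumed profile; the paper's tubes are rectangular in section):

```python
def velocity(y, h=1.0, u_max=1.0):
    """Down-flow velocity at distance y from the tube axis, |y| <= h."""
    return u_max * (1.0 - (y / h) ** 2)

def shear_rate(y, h=1.0, u_max=1.0):
    """du/dy: the simple-shear component; zero on the axis and of
    opposite sign on either side of the central no-shear zone."""
    return -2.0 * u_max * y / h ** 2
```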
A computer model of context-dependent perception in a very simple world
NASA Astrophysics Data System (ADS)
Lara-Dammer, Francisco; Hofstadter, Douglas R.; Goldstone, Robert L.
2017-11-01
We propose the foundations of a computer model of scientific discovery that takes into account certain psychological aspects of human observation of the world. To this end, we simulate two main components of such a system. The first is a dynamic microworld in which physical events take place, and the second is an observer that visually perceives entities and events in the microworld. For reasons of space, this paper focuses only on the starting phase of discovery, which involves the relatively simple visual inputs of objects and collisions.
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
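An illustrative rate-equation sketch in the spirit of the enzymatic approach above (not the authors' exact scheme): DSBs (B) bind free repair enzyme (E) to form a complex (C) that resolves into repaired DNA (R). The delay introduced by complex formation is what produces the apparent fast and slow rejoining phases:

```python
def simulate_dsb(b0=100.0, e0=10.0, kon=0.01, koff=0.05, kfix=0.2,
                 dt=0.01, t_end=200.0):
    """Forward-Euler integration of B + E <-> C -> R + E; returns (B, C, R).
    All rate constants are illustrative, not fitted values."""
    B, C, R = b0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        E = e0 - C                      # free enzyme (total enzyme conserved)
        bind = kon * E * B
        dB = -bind + koff * C
        dC = bind - (koff + kfix) * C
        dR = kfix * C
        B += dB * dt
        C += dC * dt
        R += dR * dt
    return B, C, R
```

Because the derivatives sum to zero, total DNA (B + C + R) is conserved; early on the repair rate is limited by enzyme availability, giving the saturation-like kinetics the rate-equation description captures.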
Simple waves in a two-component Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Ivanov, S. K.; Kamchatnov, A. M.
2018-04-01
We study the dynamics of so-called simple waves in a two-component Bose-Einstein condensate. The evolution of the condensate is described by Gross-Pitaevskii equations which can be reduced for these simple wave solutions to a system of ordinary differential equations which coincide with those derived by Ovsyannikov for the two-layer fluid dynamics. We solve the Ovsyannikov system for two typical situations of large and small difference between interspecies and intraspecies nonlinear interaction constants. Our analytic results are confirmed by numerical simulations.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions they make or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
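A minimal EM sketch of such a two-component normal mixture over gene z-scores: component 0 is the theoretical null N(0, 1), while component 1, with free mean and standard deviation, absorbs the differentially expressed genes. Fixing the null at N(0, 1) is a simplifying assumption here, not necessarily the authors' exact parameterization:

```python
import numpy as np

def normal_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def fit_two_component(z, n_iter=200):
    """Return (pi0, mu1, sigma1, tau0); tau0[i] is P(gene i is null | z_i)."""
    pi0, mu1, sigma1 = 0.9, float(np.mean(z)), float(np.std(z))
    for _ in range(n_iter):
        f0 = pi0 * normal_pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * normal_pdf(z, mu1, sigma1)
        tau0 = f0 / (f0 + f1)              # E-step: posterior null probabilities
        w = 1.0 - tau0                     # M-step: reweighted alternative fit
        pi0 = float(np.mean(tau0))
        mu1 = float(np.sum(w * z) / np.sum(w))
        sigma1 = float(np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w)))
    return pi0, mu1, sigma1, tau0
```

Genes can then be ranked by tau0, the estimated posterior probability of being null, which is the quantity the method delivers.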
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests that common component processes were being tapped in the three tasks. The model was also fit to the distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
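A Monte-Carlo sketch of the one-boundary diffusion process: evidence accumulates at drift rate v with Gaussian noise until it reaches a single boundary a, and RT is decision time plus nondecision time Ter. The parameter values are illustrative, not the paper's fitted estimates:

```python
import numpy as np

def simulate_rts(v=1.5, a=1.0, ter=0.3, s=1.0, dt=0.002, n=500, seed=42):
    """Euler-Maruyama simulation of single-boundary first-passage RTs."""
    rng = np.random.default_rng(seed)
    sqrt_dt = np.sqrt(dt)
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < a:                     # accumulate until boundary crossing
            x += v * dt + s * sqrt_dt * rng.standard_normal()
            t += dt
        rts[i] = t + ter                 # add nondecision time
    return rts
```

The decision times follow an inverse-Gaussian (Wald) distribution with mean a/v, which yields the right-skewed RT distributions the model fits to each subject.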
Testing the uniqueness of mass models using gravitational lensing
NASA Astrophysics Data System (ADS)
Walls, Levi; Williams, Liliya L. R.
2018-06-01
The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, obtained by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure or a simpler elliptical structure. In other words, we test mass model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modeled by a simple elliptical mass profile. We used the publicly available lensing software Lensmodel to generate and numerically model gravitational lenses and “observed” image positions. We then compared “observed” and modeled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤ 0.05″ when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component, and hence mass modelling cannot differentiate between simple elliptical and more complex lenses.
Definitions: Health, Fitness, and Physical Activity.
ERIC Educational Resources Information Center
Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don
2000-01-01
This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…
ERIC Educational Resources Information Center
Savage, Robert; Burgos, Giovani; Wood, Eileen; Piquette, Noella
2015-01-01
The Simple View of Reading (SVR) describes Reading Comprehension as the product of distinct child-level variance in decoding (D) and linguistic comprehension (LC) component abilities. When used as a model for educational policy, distinct classroom-level influences of each of the components of the SVR model have been assumed, but have not yet been…
Light-meson masses in an unquenched quark model
NASA Astrophysics Data System (ADS)
Chen, Xiaoyun; Ping, Jialun; Roberts, Craig D.; Segovia, Jorge
2018-05-01
We perform a coupled-channels calculation of the masses of light mesons with the quantum numbers J^P = 1^-, (I, J) = 0, 1, by including qq̄ and (qq̄)^2 components in a nonrelativistic chiral quark model. The coupling between two- and four-quark configurations is realized through a ^3P_0 quark-pair creation model. With the usual form of this operator, the mass shifts are large and negative, an outcome which raises serious issues of validity for the quenched quark model. Herein, therefore, we introduce some improvements of the ^3P_0 operator in order to reduce the size of the mass shifts. By introducing two simple, physically well-motivated factors, the coupling between qq̄ and (qq̄)^2 components is weakened, producing mass shifts that are around 10%-20% of the hadron bare masses.
Statistical mechanics of homogeneous partly pinned fluid systems.
Krakoviack, Vincent
2010-12-01
The homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix, obtained by arresting randomly chosen particles in a one-component bulk fluid or in one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in the presence of, and in equilibrium with, the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, either relative to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.
Calculating the surface tension of binary solutions of simple fluids of comparable size
NASA Astrophysics Data System (ADS)
Zaitseva, E. S.; Tovbin, Yu. K.
2017-11-01
A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. LGM is applied as a tool of interpolation: the parameters of the model are corrected using experimental surface tension data. It is found that the average accuracy of describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.
Circadian Effects on Simple Components of Complex Task Performance
NASA Technical Reports Server (NTRS)
Clegg, Benjamin A.; Wickens, Christopher D.; Vieane, Alex Z.; Gutzwiller, Robert S.; Sebok, Angelia L.
2015-01-01
The goal of this study was to advance understanding and prediction of the impact of circadian rhythm on aspects of complex task performance during unexpected automation failures, and subsequent fault management. Participants trained on two tasks: a process control simulation, featuring automated support; and a multi-tasking platform. Participants then completed one task in a very early morning (circadian night) session, and the other during a late afternoon (circadian day) session. Small effects of time of day were seen on simple components of task performance, but impacts on more demanding components, such as those that occur following an automation failure, were muted relative to previous studies where circadian rhythm was compounded with sleep deprivation and fatigue. Circadian low participants engaged in compensatory strategies, rather than passively monitoring the automation. The findings and implications are discussed in the context of a model that includes the effects of sleep and fatigue factors.
Microburst vertical wind estimation from horizontal wind measurements
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1994-01-01
The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.
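The continuity-based idea described above can be illustrated with a toy estimate. This is a minimal sketch, not one of the vertical wind models actually tested in the study: it assumes incompressible flow and a horizontal divergence that is constant from the surface up to the altitude of interest, and the function name and numbers are illustrative.

```python
def vertical_wind_estimate(du_dx, dv_dy, altitude):
    """Estimate the vertical wind w (m/s) from horizontal wind gradients
    (1/s) via incompressible mass continuity, assuming the horizontal
    divergence is height-independent below the given altitude (m):
        dw/dz = -(du/dx + dv/dy)  =>  w(z) = -z * (du/dx + dv/dy)
    """
    return -altitude * (du_dx + dv_dy)

# Illustrative microburst outflow: 5e-3 1/s divergence at 300 m altitude
w = vertical_wind_estimate(3e-3, 2e-3, 300.0)  # about -1.5 m/s (a downdraft)
```

A Doppler sensor measures only the line-of-sight (largely horizontal) wind; a relation of this kind is what lets the horizontal measurement be converted into a vertical-wind contribution to the hazard estimate.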
Light-meson masses in an unquenched quark model
Chen, Xiaoyun; Ping, Jialun; Roberts, Craig D.; ...
2018-05-17
We perform a coupled-channels calculation of the masses of light mesons with quantum numbers I J^P, (I, J) = 0, 1, by including qq̄ and (qq̄)² components in a nonrelativistic chiral quark model. The coupling between two- and four-quark configurations is realized through a ³P₀ quark-pair creation model. With the usual form of this operator, the mass shifts are large and negative, an outcome which raises serious issues of validity for the quenched quark model. Therefore, we introduce some improvements of the ³P₀ operator in order to reduce the size of the mass shifts. By introducing two simple factors, physically well motivated, the coupling between qq̄ and (qq̄)² components is weakened, producing mass shifts that are around 10%-20% of hadron bare masses.
Fast trimers in a one-dimensional extended Fermi-Hubbard model
NASA Astrophysics Data System (ADS)
Dhar, A.; Törmä, P.; Kinnunen, J. J.
2018-04-01
We consider a one-dimensional two-component extended Fermi-Hubbard model with nearest-neighbor interactions and mass imbalance between the two species. We study the binding energy of trimers, various observables for detecting them, and expansion dynamics. We generalize the definition of the trimer gap to include the formation of different types of clusters originating from nearest-neighbor interactions. Expansion dynamics reveal rapidly propagating trimers, with speeds exceeding doublon propagation in the strongly interacting regime. We present a simple model for understanding this unique feature of the movement of the trimers, and we discuss the potential for experimental realization.
ERIC Educational Resources Information Center
Wong, Yu Ka
2017-01-01
Based on the Simple View of Reading model, this study examines the relationships among Chinese reading comprehension and its two componential processes, Chinese character reading and listening comprehension, in young learners of Chinese as a second language (CSL) using a longitudinal design. Using relevant measures, a sample of 142 senior primary…
Active-to-absorbing-state phase transition in an evolving population with mutation.
Sarkar, Niladri
2015-10-01
We study the active to absorbing phase transition (AAPT) in a simple two-component model system for a species and its mutant. We uncover the nontrivial critical scaling behavior and weak dynamic scaling near the AAPT that shows the significance of mutation and highlights the connection of this model with the well-known directed percolation universality class. Our model should be a useful starting point to study how mutation may affect extinction or survival of a species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott Stewart, D., E-mail: dss@illinois.edu; Hernández, Alberto; Lee, Kibaek
The estimation of pressure and temperature histories, which are required to understand chemical pathways in condensed-phase explosives during detonation, is discussed. We argue that estimates made from continuum models, calibrated by macroscopic experiments, are essential to inform modern, atomistic-based reactive chemistry simulations at detonation pressures and temperatures. We present easy-to-implement methods for general equations of state and arbitrarily complex chemical reaction schemes that can be used to compute reactive flow histories for the constant-volume process, the constant-energy process, and the expansion process on the Rayleigh line of a steady Chapman-Jouguet detonation. A brief review of the state of the art of two-component reactive flow models is given that highlights the Ignition and Growth model of Lee and Tarver [Phys. Fluids 23, 2362 (1980)] and the Wide-Ranging Equation of State model of Wescott, Stewart, and Davis [J. Appl. Phys. 98, 053514 (2005)]. We discuss evidence from experiments and reactive molecular dynamics simulations that motivates models with several components, instead of the two that have traditionally been used to describe the results of macroscopic detonation experiments. We present simplified examples of a formulation for a hypothetical explosive that uses simple (ideal) equation of state forms, with detailed comparisons. Then, we estimate pathways computed from two-component models of real explosive materials that have been calibrated with macroscopic experiments.
Mathematical Models of IABG Thermal-Vacuum Facilities
NASA Astrophysics Data System (ADS)
Doring, Daniel; Ulfers, Hendrik
2014-06-01
IABG in Ottobrunn, Germany, operates thermal-vacuum facilities of different sizes and complexities as a service for space-testing of satellites and components. One aspect of these tests is the qualification of the thermal control system that keeps all onboard components within their safe operating temperature band. As not all possible operation / mission states can be simulated within a sensible test time, usually a subset of important and extreme states is tested at TV facilities to validate the thermal model of the satellite, which is then used to model all other possible mission states. With advances in the precision of customer thermal models, simple assumptions about the test environment (e.g. everything black & cold, one solar constant of light from this side) are no longer sufficient, as real space simulation chambers do deviate from this ideal. For example, the mechanical adapters which support the spacecraft are usually not actively cooled. To enable IABG to provide a model that is sufficiently detailed and realistic for current system tests, Munich engineering company CASE developed ESATAN models for the two larger chambers. CASE has many years of experience in thermal analysis for space-flight systems and ESATAN. The two models represent the rather simple (and therefore very homogeneous) 3m-TVA and the extremely complex space simulation test facility and its solar simulator. The cooperation of IABG and CASE built up extensive knowledge of the facilities' thermal behaviour. This is the key to optimally supporting customers with their test campaigns in the future. The ESARAD part of the models contains all relevant information with regard to geometry (CAD data), surface properties (optical measurements) and solar irradiation for the sun simulator. The temperature of the actively cooled thermal shrouds is measured and mapped to the thermal mesh to create the temperature field in the ESATAN part as boundary conditions.
Both models comprise switches to easily establish multiple possible set-ups (e.g. exclude components like the motion system or enable / disable the solar simulator). Both models were validated by comparing calculated results (thermal balance temperatures for simple passive test articles) with measured temperatures generated in actual tests in these facilities. This paper presents information about the chambers, the modelling approach, properties of the models and their performance in the validation tests.
NASA Astrophysics Data System (ADS)
Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus
2017-07-01
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
A study of the electric field in an open magnetospheric model
NASA Technical Reports Server (NTRS)
Stern, D. P.
1973-01-01
Recently, Svalgaard and Heppner reported two separate features of the polar electromagnetic field that correlate with the dawn-dusk component of the interplanetary magnetic field. This work attempts to explain these findings in terms of properties of the open magnetosphere. The topology and qualitative properties of the open magnetosphere are first studied by means of a simple model, consisting of a dipole in a constant field. Many such properties are found to depend on the separation line, a curve connecting neutral points and separating different field line regimes. In the simple model it turns out that the electric field in the central polar cap tends to point from dawn to dusk for a wide variety of external fields, but, near the boundary of the polar cap, electric equipotentials are deformed into crescents.
Rhythmic behavior in a two-population mean-field Ising model
NASA Astrophysics Data System (ADS)
Collet, Francesca; Formentin, Marco; Tovazzi, Daniele
2016-10-01
Many real systems composed of a large number of interacting components, as, for instance, neural networks, may exhibit collective periodic behavior even though single components have no natural tendency to behave periodically. Macroscopic oscillations are indeed one of the most common self-organized behaviors observed in living systems. In the present paper we study some dynamical features of a two-population generalization of the mean-field Ising model, with the aim of investigating simple mechanisms capable of generating rhythms in large groups of interacting individuals. We show that the system may undergo a transition from a disordered phase, where the magnetization of each population fluctuates closely around zero, to a phase in which both populations display a macroscopic regular rhythm. In particular, there exists a region in the parameter space where having two groups of spins with inter- and intrapopulation interactions of different strengths suffices for the emergence of a robust periodic behavior.
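A minimal sketch of the kind of dynamics described: noiseless mean-field evolution of the two population magnetizations under Glauber-type relaxation, with an asymmetric inter-population coupling. The equations, parameter values, and function name here are illustrative assumptions for intuition, not the paper's exact model.

```python
import math

def simulate(beta=2.0, J_intra=1.0, J_inter=0.8, dt=0.01, steps=5000):
    """Euler-integrate noiseless mean-field dynamics for two coupled spin
    populations with antisymmetric inter-population coupling (a simple way
    to obtain non-gradient dynamics that can rotate rather than relax):
        dm1/dt = -m1 + tanh(beta * (J_intra*m1 - J_inter*m2))
        dm2/dt = -m2 + tanh(beta * (J_intra*m2 + J_inter*m1))
    Returns the trajectory of (m1, m2)."""
    m1, m2 = 0.5, 0.0
    traj = []
    for _ in range(steps):
        f1 = -m1 + math.tanh(beta * (J_intra * m1 - J_inter * m2))
        f2 = -m2 + math.tanh(beta * (J_intra * m2 + J_inter * m1))
        m1, m2 = m1 + dt * f1, m2 + dt * f2
        traj.append((m1, m2))
    return traj
```

Because each update is a convex combination of the current magnetization and a tanh term, both magnetizations remain in [-1, 1]; plotting the trajectory for suitable couplings shows the two order parameters chasing each other rather than settling.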
A Simple Text File for Curing Rainbow Blindness
NASA Technical Reports Server (NTRS)
Krylo, Robert; Tomlin, Marilyn; Seager, Michael
2008-01-01
This slide presentation reviews the use of a simple text file for working with large, multi-component thermal models that present a post-processing challenge. The complexity arises because temperatures for many components, each with different requirements, need to be examined, and because false-color temperature maps, or rainbows, provide only a qualitative assessment of results.
Radial Distribution Functions of Strongly Coupled Two-Temperature Plasmas
NASA Astrophysics Data System (ADS)
Shaffer, Nathaniel R.; Tiwari, Sanat Kumar; Baalrud, Scott D.
2017-10-01
We present tests of three theoretical models for the radial distribution functions (RDFs) in two-temperature strongly coupled plasmas. RDFs are useful in extending plasma thermodynamics and kinetic theory to strong coupling, but they are usually known only for thermal equilibrium or for approximate one-component model plasmas. Accurate two-component modeling is necessary to understand the impact of strong coupling on inter-species transport, e.g., ambipolar diffusion and electron-ion temperature relaxation. We demonstrate that the Seuferling-Vogel-Toeppfer (SVT) extension of the hypernetted chain equations not only gives accurate RDFs (as compared with classical molecular dynamics simulations), but also has a simple connection with the Yukawa OCP model. This connection gives a practical means to recover the structure of the electron background from knowledge of the ion-ion RDF alone. Using the model RDFs in Effective Potential Theory, we report the first predictions of inter-species transport coefficients of strongly coupled plasmas far from equilibrium. This work is supported by NSF Grant No. PHY-1453736, AFSOR Award No. FA9550-16-1-0221, and used XSEDE computational resources.
Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angus, G.W.; Gentile, G.; Diaferio, A.
2014-10-01
In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model where the modified gravitational field is produced only by the baryons and the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which were prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude for the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Vincent K., E-mail: vincent.shen@nist.gov; Siderius, Daniel W.
2014-06-28
Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called "breathing" of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.
State relations for a two-phase mixture of reacting explosives and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubota, Shiro; Saburi, Tei; Ogata, Yuji
2007-10-15
To assess the assumptions behind the two-phase mixture rule for reacting explosives, the shock-to-detonation transition process was calculated for high explosives using a finite difference method. An ignition and growth model and the Jones-Wilkins-Lee (JWL) equations of state were employed. The simple mixture rule assumes that the reacting explosive is a simple mixture of the reactant and product components. Four different assumptions, such as that of thermal equilibrium and isotropy, were adopted to calculate the pressure. The main purpose of this paper is to present the answer to the question of why the numerical results of shock-initiation are insensitive to the assumptions adopted. The equations of state for reactants and products were assessed by considering plots of the specific internal energy E and specific volume V. If the slopes of the constant-pressure lines for both components in the E-V plane are almost the same, it is demonstrated that the numerical results are insensitive to the assumptions adopted. We have found that the relation for the specific volumes of the two components can be approximately expressed by a single curve of the specific volume of the reactant vs that of the products. We discuss this relationship in terms of the results of the numerical simulation.
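For context, the JWL pressure form mentioned above is standard and straightforward to evaluate. The sketch below implements only the single-component p(V, E) relation; the parameter values used in the usage example are commonly quoted TNT-like numbers included purely as illustrative inputs, and the paper's mixture-rule assessment is not reproduced here.

```python
import math

def jwl_pressure(V, E, A, B, R1, R2, omega):
    """Jones-Wilkins-Lee pressure as a function of relative volume V
    (dimensionless) and specific internal energy E (in the same pressure
    units as A and B, per unit initial volume):
        p = A*(1 - w/(R1*V))*exp(-R1*V)
          + B*(1 - w/(R2*V))*exp(-R2*V) + w*E/V
    """
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Illustrative TNT-like product parameters (pressures in GPa)
p = jwl_pressure(1.0, 7.0, A=371.2, B=3.23, R1=4.15, R2=0.95, omega=0.3)
```

As the products expand (V grows), the exponential terms die off and the ideal-gas-like ωE/V term dominates, which is the qualitative behavior the E-V plane analysis in the abstract exploits.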
NASA Astrophysics Data System (ADS)
Tanaka, H. L.
2003-06-01
In this study, a numerical simulation of the Arctic Oscillation (AO) is conducted using a simple barotropic model that considers the barotropic-baroclinic interactions as the external forcing. The model is referred to as a barotropic S model since the external forcing is obtained statistically from the long-term historical data, solving an inverse problem. The barotropic S model has been integrated for 51 years under a perpetual January condition and the dominant empirical orthogonal function (EOF) modes in the model have been analyzed. The results are compared with the EOF analysis of the barotropic component of the real atmosphere based on the daily NCEP-NCAR reanalysis for 50 yr from 1950 to 1999. According to the result, the first EOF of the model atmosphere appears to be the AO, similar to the observation. The annular structure of the AO and the two centers of action at Pacific and Atlantic are simulated nicely by the barotropic S model. Therefore, the atmospheric low-frequency variabilities have been captured satisfactorily even by the simple barotropic model. The EOF analysis is further conducted on the external forcing of the barotropic S model. The structure of the dominant forcing shows the characteristics of synoptic-scale disturbances of zonal wavenumber 6 along the Pacific storm track. The forcing is induced by the barotropic-baroclinic interactions associated with baroclinic instability. The result suggests that the AO can be understood as the natural variability of the barotropic component of the atmosphere induced by the inherent barotropic dynamics, which is forced by the barotropic-baroclinic interactions. The fluctuating upscale energy cascade from planetary waves and synoptic disturbances to the zonal motion plays the key role in the excitation of the AO.
Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.
Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B
2012-06-01
This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.
NASA Technical Reports Server (NTRS)
Bregman, Joel N.; Hogg, David E.; Roberts, Morton S.
1992-01-01
Interstellar components of early-type galaxies are established by galactic type and luminosity in order to search for relationships between the different interstellar components and to test the predictions of theoretical models. Some of the data include observations of neutral hydrogen, carbon monoxide, and radio continuum emission. An alternative distance model which yields LX varies as LB sup 2.45, a relation which is in conflict with simple cooling flow models, is discussed. The dispersion of the X-ray luminosity about this regression line is unlikely to result from stripping. The striking lack of clear correlations between hot and cold interstellar components, taken together with their morphologies, suggests that the cold gas is a disk phenomenon while the hot gas is a bulge phenomenon, with little interaction between the two. The progression of galaxy type from E to Sa is not only a sequence of decreasing stellar bulge-to-disk ratio, but also of hot-to-cold-gas ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ninokata, H.; Deguchi, A.; Kawahara, A.
1995-09-01
A new void drift model for the subchannel analysis method is presented for the thermohydraulics calculation of two-phase flows in rod bundles, where the flow model uses a two-fluid formulation for the conservation of mass, momentum and energy. The void drift model is constructed based on experimental data obtained in a geometrically simple test section of two interconnected circular channels, using air-water as working fluids. The void drift force is assumed to be the origin of the void drift velocity components of the two-phase cross-flow in the gap area between two adjacent rods, and to overcome the momentum exchanges at the phase interface and the wall-fluid interface. This void drift force is implemented in the cross-flow momentum equations. Computational results have been successfully compared to available experimental data, including 3x3 rod bundle data.
A global model for steady state and transient S.I. engine heat transfer studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohac, S.V.; Assanis, D.N.; Baker, D.M.
1996-09-01
A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
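At steady state, a resistor-capacitor thermal network of the kind described reduces to a linear system in the node conductances (the capacitances only matter for transients). A minimal two-node sketch, where the topology, resistance values, and heat loads are all illustrative assumptions rather than anything from the paper:

```python
def steady_state(R01, R0a, R1a, Q0, Q1, T_amb):
    """Steady-state temperatures of a 2-node thermal network: nodes 0 and 1
    are linked to each other through resistance R01 (K/W) and to a fixed
    ambient temperature T_amb (degC) through R0a and R1a; Q0, Q1 are heat
    inputs (W). Solves the 2x2 conductance system G*T = Q by hand."""
    g01, g0a, g1a = 1.0 / R01, 1.0 / R0a, 1.0 / R1a
    # Conductance matrix [[a, b], [b, c]] and source vector (ambient folded in)
    a, b, c = g01 + g0a, -g01, g01 + g1a
    q0, q1 = Q0 + g0a * T_amb, Q1 + g1a * T_amb
    det = a * c - b * b
    return (c * q0 - b * q1) / det, (a * q1 - b * q0) / det

# Illustrative values: 10 W dissipated at node 0, ambient at 25 degC
T0, T1 = steady_state(R01=2.0, R0a=5.0, R1a=5.0, Q0=10.0, Q1=0.0, T_amb=25.0)
```

A useful sanity check on any such solve is energy balance: the total heat flowing into ambient through R0a and R1a must equal the total injected power.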
Personalized models of bones based on radiographic photogrammetry.
Berthonnaud, E; Hilmi, R; Dimnet, J
2009-07-01
Radiographic photogrammetry is applied to locate anatomical landmarks in space from their two projected images. The goal of this paper is to define a personalized geometric model of bones based solely on photogrammetric reconstructions. The personalized models of bones are obtained in two successive steps: their functional frameworks are first determined experimentally; then, the 3D bone representation results from modeling techniques. Each bone's functional framework is derived from direct measurements on two radiographic images. These images may be obtained using either perpendicular (spine and sacrum) or oblique incidences (pelvis and lower limb). Frameworks link together their functional axes and punctual landmarks. Each global bone volume is decomposed into several elementary components. Each volumic component is represented by simple geometric shapes. Volumic shapes are articulated to the patient's bone structure. The volumic personalization is obtained by best fitting the geometric model projections to their real images, using adjustable articulations. Examples are presented to illustrate the technique of personalizing bone volumes directly from the treatment of only two radiographic images. The chosen techniques for treating the data are then discussed. The 3D representation of bones completes, for clinical users, the information brought by radiographic images.
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on the natural coordinate formulation. The generalized coordinates are composed of the Cartesian coordinates of two points and the Cartesian components of two unit vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to extend the natural coordinate formulation to take the gravitational force and gravity gradient torque of a rigid body into account, a Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
Calculation of TIR Canopy Hot Spot and Implications for Earth Radiation Budget
NASA Technical Reports Server (NTRS)
Smith, J. A.; Ballard, J. R., Jr.
2000-01-01
Using a 3-D model for thermal infrared exitance and the Lowtran 7 atmospheric radiative transfer model, we compute the variation in brightness temperature with view direction and, in particular, the canopy thermal hot spot. We then perform a sensitivity analysis of surface energy balance components for a nominal case using a simple SVAT model given the uncertainty in canopy temperature arising from the thermal hot spot effect. Canopy thermal hot spot variations of two degrees C lead to differences of plus or minus 24% in the midday available energy.
Thermodynamics of Thomas-Fermi screened Coulomb systems
NASA Technical Reports Server (NTRS)
Firey, B.; Ashcroft, N. W.
1977-01-01
We obtain, in closed analytic form, estimates for the thermodynamic properties of classical fluids with pair potentials of Yukawa type, with special reference to dense, fully ionized plasmas with Thomas-Fermi or Debye-Hückel screening. We further generalize the hard-sphere perturbative approach used for similarly screened two-component mixtures, and demonstrate phase separation in this simple model of a liquid mixture of metallic helium and hydrogen.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable-oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We find that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
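The scheme described above can be sketched in a few lines: fit each component's density linearly in temperature from two measured points, then apply Kay's volume-fraction mixing rule to the blend. A minimal sketch, with illustrative density values (kg/m³) that are not the paper's data:

```python
def linear_density(T, T1, rho1, T2, rho2):
    """Density at temperature T from a straight line through two
    measured (T, rho) points for one component."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, v_biodiesel, bio_pts, diesel_pts):
    """Kay's mixing rule for a two-component blend:
    rho_blend = v*rho_bio(T) + (1 - v)*rho_diesel(T),
    with v the biodiesel volume fraction (0 to 1)."""
    rho_bio = linear_density(T, *bio_pts)
    rho_diesel = linear_density(T, *diesel_pts)
    return v_biodiesel * rho_bio + (1.0 - v_biodiesel) * rho_diesel

# Illustrative two-temperature data (T in K, rho in kg/m^3)
bio_pts = (288.15, 880.0, 318.15, 858.0)     # biodiesel
diesel_pts = (288.15, 835.0, 318.15, 814.0)  # petrodiesel

rho_b20 = blend_density(303.15, 0.20, bio_pts, diesel_pts)  # a "B20" blend
```

By construction the predicted blend density decreases linearly with temperature and increases with the biodiesel volume fraction, matching the trends reported in the abstract.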
The contribution of the diffuse light component to the topographic effect on remotely sensed data
NASA Technical Reports Server (NTRS)
Justice, C.; Holben, B.
1980-01-01
The topographic effect is measured by the difference in global radiance from inclined surfaces as a function of their orientation relative to the sensor position and light source. The shortwave radiant energy incident on a surface is composed of direct sunlight, scattered skylight, and light reflected from surrounding terrain; the latter two components are commonly known as the diffuse component. The contribution of the diffuse light component to the topographic effect was examined, and the significance of this diffuse component with respect to two direct radiance models was assessed. Diffuse and global spectral radiances were measured for a series of slopes and aspects of a uniform surface in the red and photographic-infrared parts of the spectrum, using a nadir-pointing, two-channel handheld radiometer. The diffuse light was found to produce a topographic effect that differed from the topographic effect for direct light, and that increased slightly with solar elevation and wavelength for the channels examined. The correlations between data derived from two simple direct radiance simulation models and the field data were not significantly affected when the diffuse component was removed from the radiances. For a 60 percent reflective surface, assuming no atmospheric path radiance, the diffuse-light topographic effect contributed a maximum range of 3 pixel values in simulated LANDSAT data across all aspects with slopes up to 30 degrees.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
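The first PCANet stage is simple enough to sketch directly: collect all k×k patches, subtract each patch's mean, and keep the leading principal components of the patch covariance as convolution filters. A minimal sketch of that stage only (random data stands in for training images, and the patch size and filter count are illustrative, not the paper's settings); the full PCANet cascades two such stages and then applies binary hashing and blockwise histograms:

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """Learn one PCANet-style filter bank: gather all k x k patches,
    remove each patch's mean, and keep the leading eigenvectors of the
    patch covariance as convolution filters."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())        # patch-mean removal
    X = np.asarray(patches)                         # (num_patches, k*k)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_filters]           # leading eigenvectors
    return top.T.reshape(n_filters, k, k)           # the filter bank

rng = np.random.default_rng(0)
imgs = rng.standard_normal((5, 16, 16))             # stand-in training images
bank = pca_filters(imgs, k=7, n_filters=8)
```

Because the filters are eigenvectors of a symmetric matrix, the bank is orthonormal; convolving an image with each filter gives the multistage feature maps that the later hashing and pooling stages consume.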
Analytical models for coupling reliability in identical two-magnet systems during slow reversals
NASA Astrophysics Data System (ADS)
Kani, Nickvash; Naeemi, Azad
2017-12-01
This paper follows previous works which investigated the strength of dipolar coupling in two-magnet systems. While those works focused on qualitative analyses, this manuscript elucidates reversal through dipolar coupling, culminating in analytical expressions for reversal reliability in identical two-magnet systems. The dipolar field generated by a mono-domain magnetic body can be represented by a tensor containing both longitudinal and perpendicular field components; this field changes orientation and magnitude based on the magnetization of neighboring nanomagnets. While the dipolar field does reduce to its longitudinal component at short time-scales, for slow magnetization reversals the simple longitudinal field representation greatly underestimates the range of parameters that ensure reliable coupling. For the first time, analytical models that map the geometric and material parameters required for reliable coupling in two-magnet systems are developed. It is shown that in biaxial nanomagnets the x̂- and ŷ-components of the dipolar field contribute to the coupling, while all three components contribute to the coupling between a pair of uniaxial magnets. The ratio of the longitudinal and perpendicular components of the dipolar field is also very important: if the perpendicular components in the dipolar tensor are too large, the nanomagnet pair may come to rest in an undesirable meta-stable state away from the free axis. The analytical models formulated in this manuscript map the minimum and maximum parameters for reliable coupling. Using these models, it is shown that only a very small range of material parameters can facilitate reliable coupling between perpendicular-magnetic-anisotropy nanomagnets; hence, in-plane nanomagnets are more suitable for coupled systems.
Three-dimensional axisymmetric sources for Majumdar-Papapetrou type spacetimes
NASA Astrophysics Data System (ADS)
García-Reyes, Gonzalo; Hernández-Gómez, Kevin A.
From Newtonian potential-density pairs, we construct three-dimensional axisymmetric relativistic sources for a Majumdar-Papapetrou type conformastatic spacetime. As simple examples, we build two families of relativistic thick disks from the first two Miyamoto-Nagai potential-density pairs used in Newtonian gravity to model flat galaxies, and a three-component relativistic model of a galaxy (bulge, disk and dark matter halo). We study the equatorial circular motion of test particles around such structures. The stability of the orbits under radial perturbations is also analyzed using an extension of the Rayleigh criterion. In all examples, the relativistic effects are analyzed and compared with the Newtonian approximation. The models are shown to satisfy all the energy conditions.
An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.
Ilias, Miroslav; Saue, Trond
2007-02-14
The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.
Two arm robot path planning in a static environment using polytopes and string stretching. Thesis
NASA Technical Reports Server (NTRS)
Schima, Francis J., III
1990-01-01
The two-arm robot path planning problem has been analyzed and reduced into simpler components. This thesis examines one component, in which two Puma-560 robot arms simultaneously hold a single object. The problem is to find a path between two points around obstacles that is relatively fast and minimizes distance. The thesis creates a structure on which to build an advanced path-planning algorithm that could ideally find the optimum path. An actual path-planning method is implemented which is simple though effective in most common situations; given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes, which allow rapid collision detection while still providing a representation adequate for path planning.
The molecular basis of ethylene signalling in Arabidopsis
NASA Technical Reports Server (NTRS)
Woeste, K.; Kieber, J. J.; Evans, M. L. (Principal Investigator)
1998-01-01
The simple gas ethylene profoundly influences plants at nearly every stage of growth and development. In the past ten years, a genetic approach based on the triple-response phenotype has been a powerful tool for investigating the molecular events that underlie these effects. Several fundamental elements of the pathway have been described: a receptor with homology to bacterial two-component histidine kinases (ETR1), elements of a MAP kinase cascade (CTR1) and a putative transcription factor (EIN3). Taken together, these elements can be assembled into a simple, linear model for ethylene signalling that accounts for most of the well-characterized ethylene-mediated responses.
Vapor mediated droplet interactions - models and mechanisms (Part 2)
NASA Astrophysics Data System (ADS)
Benusiglio, Adrien; Cira, Nate; Prakash, Manu
2014-11-01
When deposited on clean glass, a binary mixture of propylene glycol and water is energetically inclined to spread, as both pure liquids do. Instead, the mixture forms droplets stabilized by evaporation-induced surface tension gradients, giving them unique properties such as negligible hysteresis. When two of these special droplets are deposited several radii apart, they attract each other: the vapor from one droplet destabilizes the other, resulting in an attractive force that brings the droplets together. We present a flux-based model for droplet stabilization and a model that connects the vapor profile to the net force. These simple models capture the static and dynamic experimental trends, and our fundamental understanding of these droplets and their interactions allowed us to build autonomous fluidic machines.
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
Precipitation-centered Conceptual Model for Sub-humid Uplands in Lampasas Cut Plains, TX
NASA Astrophysics Data System (ADS)
Potter, S. R.; Tu, M.; Wilcox, B. P.
2011-12-01
Conceptual understandings of dominant hydrological processes, system interactions and feedbacks, and external forcings operating within catchments often defy simple definition and explanation, especially in catchments encompassing transition zones, degraded landscapes, and rapid development, and where climate forcings vary widely across time and space. However, it is precisely those areas for which understanding and knowledge are most needed to develop sustainable management strategies and counter past management blunders and failed restoration efforts. The cut plain of central Texas is one such area. Complex geographic and climatic factors lead to spatially and temporally variable precipitation, with frequent dry periods interrupted by intense, high-volume events. Fort Hood, an army post located in the southeast cut plain, contains landscapes ranging from highly degraded to nearly pristine, with a topography mainly comprising flat-topped mesas separated by broad U-shaped valleys. To understand the hydrology of the area and its responses to wet-dry cycles, we analyzed 4 years of streamflow and rainfall data from 8 catchments ranging in size from 1819 to 16,000 ha. Since aquifer recharge/discharge and surface stream-groundwater interactions are unimportant here, we hypothesized a simple conceptual model driven by precipitation and radiative forcings and having stormflow, baseflow, ET, and two hypothetical storage components. The key storage component was conceptualized as a buffer that was highly integrated with the ET component and exerted controls on baseflow; radiative energy controlled the flux from the buffer to ET. We used the conceptual model to construct a bimonthly hydrologic budget, which included buffer volumes and a deficit-surplus indicator. Through the analysis, we were led to speculate that buffer capacity plays a key role in these landscapes and that even relatively minor changes in capacity, due to soil compaction for example, might lead to ecological shifts. The model led us to other hypotheses concerning stormflow mechanisms and controls on baseflow, which we then tested against observations. It was instructive that such a simple model could lead to interesting new theories.
Estillore, Armando D; Morris, Holly S; Or, Victor W; Lee, Hansol D; Alves, Michael R; Marciano, Meagan A; Laskina, Olga; Qin, Zhen; Tivanski, Alexei V; Grassian, Vicki H
2017-08-09
Individual airborne sea spray aerosol (SSA) particles show diversity in their morphologies and water uptake properties that is highly dependent on the biological, chemical, and physical processes within the sea subsurface and the sea surface microlayer. In this study, hygroscopicity data for model systems of organic compounds of marine origin mixed with NaCl are compared with data for authentic SSA samples collected in an ocean-atmosphere facility, providing insights into SSA particle growth, phase transitions, and interactions with water vapor in the atmosphere. In particular, we combine single-particle morphology analyses using atomic force microscopy (AFM) with hygroscopic growth measurements to provide insights into particle hygroscopicity and surface microstructure. For the model systems, a range of simple and complex carbohydrates were studied, including glucose, maltose, sucrose, laminarin, sodium alginate, and lipopolysaccharides. The measured hygroscopic growth was compared with predictions from the Extended Aerosol Inorganics Model (E-AIM). It is shown that E-AIM describes the deliquescence transition and hygroscopic growth well at low organic mass ratios, but less well at high ratios, most likely due to the high organic volume fraction. AFM imaging reveals that the equilibrium morphology of the single-component organic particles is amorphous. When NaCl is mixed with the organics, the particles adopt a core-shell morphology, with a cubic NaCl core and the organics forming a shell, similar to what is observed for the authentic SSA samples. The observation of such core-shell morphologies is highly dependent on the salt-to-organic ratio and varies with the nature and solubility of the organic component. Additionally, single-particle organic volume fraction AFM analysis of NaCl:glucose and NaCl:laminarin mixtures shows that the salt-to-organic ratio in solution does not correspond exactly to that of individual particles, demonstrating diversity within the ensemble of particles produced even for a simple two-component system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammon, M.; Shalchi, A., E-mail: andreasm4@yahoo.com
2017-10-01
In several astrophysical applications one needs analytical forms of cosmic-ray diffusion parameters. Some examples are studies of diffusive shock acceleration and solar modulation. In the current article we explore perpendicular diffusion based on the unified nonlinear transport theory. While we focused on magnetostatic turbulence in Paper I, we included the effect of dynamical turbulence in Paper II of the series. In the latter paper we assumed that the temporal correlation time does not depend on the wavenumber. More realistic models have been proposed in the past, such as the so-called damping model of dynamical turbulence. In the present paper we derive analytical forms for the perpendicular diffusion coefficient of energetic particles in two-component turbulence for this type of time-dependent turbulence. We present new formulas for the perpendicular diffusion coefficient and we derive a condition for which the magnetostatic result is recovered.
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1995-12-31
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving inadequate. The result has been either costly loss of production due to AFC stalling or component failure, or larger-than-necessary capital investment due to overdesign. To allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities
ERIC Educational Resources Information Center
Graff, Richard B.; Green, Gina
2004-01-01
Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…
Theoretical and observational analysis of spacecraft fields
NASA Technical Reports Server (NTRS)
Neubauer, F. M.; Schatten, K. H.
1972-01-01
In order to investigate the nondipolar contributions of spacecraft magnetic fields, a simple magnetic field model is proposed, consisting of randomly oriented dipoles in a given volume. Two sets of formulas are presented for the rms multipole field components: for isotropic orientations of the dipoles at given positions, and for isotropic orientations of dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 m³ cube, together with individual examples computed numerically, show the following features. Beyond about 2 to 3 m from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the dipolar part's magnetic moment is approximated by an expression for equal magnetic moments or, generally, by the Pythagorean sum of the dipole moments. The radial component is generally greater than either transverse component, for the dipole portion as well as for the nondipolar field contributions.
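The claim that the far field of a cloud of dipoles is dominated by a single equivalent dipole (with moment equal to the vector sum of the individual moments) can be checked numerically. A hedged sketch, with random dipole positions and orientations in a 2 m cube and units chosen so μ0/4π = 1; none of these numbers come from the paper:

```python
import numpy as np

def dipole_field(m, r):
    """Field of a point dipole with moment m at displacement r from it
    (in units where mu0/4pi = 1)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * rhat * (m @ rhat) - m) / rn**3

rng = np.random.default_rng(1)
pos = rng.uniform(-1.0, 1.0, size=(20, 3))        # dipoles inside a 2 m cube
moments = rng.standard_normal((20, 3))
moments /= np.linalg.norm(moments, axis=1, keepdims=True)  # unit moments
m_net = moments.sum(axis=0)                        # vector sum of moments

def rel_error(dist):
    """Relative difference between the summed field of all dipoles and the
    field of a single equivalent dipole m_net at the cube centre, for an
    observer at distance dist along x."""
    obs = np.array([dist, 0.0, 0.0])
    B_sum = sum(dipole_field(m, obs - p) for m, p in zip(moments, pos))
    B_eq = dipole_field(m_net, obs)
    return np.linalg.norm(B_sum - B_eq) / np.linalg.norm(B_eq)
```

The equivalent-dipole approximation improves with distance, consistent with the abstract's observation that beyond a few metres the field is dominated by the dipole term.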
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and the sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach: more complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity also depends on data availability. Here, we use PERSiST, a simple, semi-distributed, dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature, using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for the parsimonious ones, showing that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can credibly simulate the flows and water-balance components needed for nutrient-flux modelling in large, data-poor basins.
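To illustrate the parsimonious end of this complexity spectrum, a single-store rainfall-runoff model can be written in a few lines. This is a generic bucket sketch, not the PERSiST code, and all parameter values are invented:

```python
def run_bucket(rain, pet, k=0.1, fc=50.0, s0=20.0):
    """Minimal single-bucket rainfall-runoff model.
    rain, pet : daily precipitation and potential ET series (mm/day)
    k  : linear-reservoir runoff rate constant (1/day)
    fc : field capacity (mm) below which no runoff is generated
    s0 : initial storage (mm)
    Returns the simulated daily runoff series (mm/day)."""
    s, q = s0, []
    for p, e in zip(rain, pet):
        s += p                          # rain fills the store
        s = max(s - e, 0.0)             # ET, limited by available storage
        excess = max(s - fc, 0.0)       # only storage above fc drains
        flow = k * excess               # linear reservoir
        s -= flow
        q.append(flow)
    return q

rain = [0, 30, 60, 0, 0, 10, 0, 0]      # mm/day, invented storm sequence
pet = [3] * 8                           # mm/day
flows = run_bucket(rain, pet)
```

Even this toy version reproduces the qualitative behaviour a parsimonious structure is asked to deliver: runoff peaks with the largest rain input and recedes exponentially afterwards, while total outflow never exceeds the water supplied.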
Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.
1977-08-01
The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.
NASA Technical Reports Server (NTRS)
Chien, C. H.; Swinson, W. F.; Turner, J. L.; Moslehy, F. A.; Ranson, W. F.
1980-01-01
A method for measuring the in-plane displacement of a rotating structure using two laser speckle photographs is described. From the displacement measurements one can calculate the strains and stresses due to a centrifugal load. The technique involves making two separate speckle photographs of a test model: one with the model loaded (rotating) and one with no load (stationary). A sandwich is constructed from the two speckle photographs, and data are recovered in a manner similar to that used with conventional speckle photography. The basic theory, experimental procedures, and data analysis for a simple rotating specimen are described. In addition, the measurement of in-plane surface displacement components of a deformed solid, and the application of coupled laser speckle interferometry and a boundary-integral solution technique to two-dimensional elasticity problems, are addressed.
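Once the in-plane displacement fields have been recovered from the sandwiched speckle photographs, strains follow from spatial derivatives. A hedged sketch of that post-processing step, using the small-strain definitions on a regular grid with synthetic displacement data (not the paper's measurements):

```python
import numpy as np

def plane_strains(u, v, dx, dy):
    """Small-strain components from in-plane displacement fields u(x, y)
    and v(x, y) sampled on a regular grid:
    eps_xx = du/dx, eps_yy = dv/dy, gamma_xy = du/dy + dv/dx.
    numpy.gradient returns [d/d(axis0), d/d(axis1)] = [d/dy, d/dx]."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    return du_dx, dv_dy, du_dy + dv_dx

# Synthetic check: a uniform biaxial stretch u = a*x, v = b*y
a, b = 1e-3, -4e-4
x = np.linspace(0.0, 1.0, 21)
y = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(x, y)                    # X varies along axis 1
exx, eyy, gxy = plane_strains(a * X, b * Y, x[1] - x[0], y[1] - y[0])
```

For this linear displacement field the finite differences are exact, so the recovered strains are constant (ε_xx = a, ε_yy = b, γ_xy = 0); in practice the differentiation would be applied to the noisy displacement maps from the speckle data.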
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
3D Boolean operations in virtual surgical planning.
Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun
2017-10-01
Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important for accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as surfaces extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells, BORES) with an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations and report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not handle singular edges or coplanar collisions, and created several defects. The proposed method, BORES, is efficient and appropriate for virtual surgical planning; moreover, it is simple and easy to implement. In future work, we will extend the method to handle non-colliding components.
Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo
2013-08-01
Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit, which does not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit direction or go/no-go selection. Tracking eye movements were similar in humans and monkeys but differed between the two tasks: during memory-based pursuit, latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and the time to reach peak velocity lengthened. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences in the contributions of extra-retinal and retinal components. This interpretation is supported by two further observations: (1) when the correct spot was popped out to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit; and (2) when spot motion was initially blanked during memory-based pursuit, pursuit components appeared in the correct direction. Our results show the importance of extra-retinal mechanisms, including priming effects and extra-retinal drive components, for initial pursuit during memory-based pursuit. Comparison with monkey studies on neuronal responses, together with model analysis, suggested possible pathways for the extra-retinal mechanisms.
Kinematic analysis of asymmetric folds in competent layers using mathematical modelling
NASA Astrophysics Data System (ADS)
Aller, J.; Bobillo-Ares, N. C.; Bastida, F.; Lisle, R. J.; Menéndez, C. O.
2010-08-01
Mathematical 2D modelling of asymmetric folds is carried out by applying a combination of different kinematic folding mechanisms: tangential longitudinal strain, flexural flow and homogeneous deformation. The main source of fold asymmetry is discovered to be due to the superimposition of a general homogeneous deformation on buckle folds that typically produces a migration of the hinge point. Forward modelling is performed mathematically using the software 'FoldModeler', by the superimposition of simple shear or a combination of simple shear and irrotational strain on initial buckle folds. The resulting folds are Ramsay class 1C folds, comparable to those formed by symmetric flattening, but with different length of limbs and layer thickness asymmetry. Inverse modelling is made by fitting the natural fold to a computer-simulated fold. A problem of this modelling is the search for the most appropriate homogeneous deformation to be superimposed on the initial fold. A comparative analysis of the irrotational and rotational deformations is made in order to find the deformation which best simulates the shapes and attitudes of natural folds. Modelling of recumbent folds suggests that optimal conditions for their development are: a) buckling in a simple shear regime with a sub-horizontal shear direction and layering gently dipping towards this direction; b) kinematic amplification due to superimposition of a combination of simple shear and irrotational strain with a sub-vertical maximum shortening direction for the latter component. The modelling shows that the amount of homogeneous strain necessary for the development of recumbent folds is much less when an irrotational strain component is superimposed at this stage that when the superimposed strain is only simple shear. 
In nature, the amount of the irrotational strain component probably increases during the development of the fold as a consequence of the increasing influence of the gravity due to the tectonic superimposition of rocks.
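The superimposition step described in this abstract can be illustrated with a short numerical sketch (not FoldModeler itself; the fold shape and strain values below are assumed purely for illustration). An area-preserving irrotational strain and a simple shear are applied to a sinusoidal buckle-fold profile; the hinge point (the profile's extremum) migrates laterally, producing the limb-length asymmetry discussed above.

```python
import math

def superimpose(points, gamma=0.0, stretch=1.0):
    """Apply a homogeneous deformation to fold-profile points: first an
    irrotational (pure) strain that stretches x by `stretch` and shortens y
    by 1/stretch (area-preserving), then a simple shear of strain `gamma`
    with a horizontal shear direction."""
    out = []
    for x, y in points:
        xp, yp = stretch * x, y / stretch   # irrotational component
        out.append((xp + gamma * yp, yp))   # simple shear component
    return out

# initial symmetric buckle fold: one wavelength of a sine curve
fold = [(x / 20.0, 0.5 * math.sin(2 * math.pi * x / 20.0)) for x in range(21)]

# superimposing shear displaces the hinge laterally, giving limbs of
# different lengths, as in the asymmetric folds modelled above
sheared = superimpose(fold, gamma=0.5, stretch=1.2)
```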
A gravitational lens candidate discovered with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Maoz, Dan; Bahcall, John N.; Schneider, Donald P.; Doxsey, Rodger; Bahcall, Neta A.; Filippenko, Alexei V.; Goss, W. M.; Lahav, Ofer; Yanny, Brian
1992-01-01
Evidence is reported for gravitational lensing of the high-redshift (z = 3.8) quasar 1208 + 101, observed as part of the Snapshot survey with the HST Planetary Camera. An HST V-band image, taken while guiding on gyroscopes, resolves the quasar into three point-source components, with the two fainter images having separations of 0.1 and 0.5 arcsec from the central bright component. A radio observation of the quasar with the VLA at 2 cm shows that, like most quasars at this redshift, 1208 + 101 is radio quiet. Based on positional information alone, the probability that the observed optical components are chance superpositions of Galactic stars is small, but not negligible. Analysis of a combined ground-based spectrum of all three components, using the relative brightnesses of the HST image, supports the lensing hypothesis. If all the components are lensed images of the quasar, the observed configuration cannot be reproduced by simple lens models.
A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid
2017-10-01
A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. Spatial distribution of the first mode indicated the greatest variations in DOISST occurred in the Persian Gulf. Also, the slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high level of accuracy, the enhanced RMEV can be applied to forecast DOISST in oceans where the poor forecasting performance and large computational time of other numerical models may limit their use.
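The reduced-order idea can be sketched as follows (a hedged illustration on synthetic data, not the DOISST record; plain power iteration stands in for whatever eigen-solver the authors used): the leading spatial mode of a space-time field is extracted, and a rank-1 reconstruction from that single mode already captures most of the variance when one pattern dominates.

```python
import math, random

def leading_mode(X, iters=200):
    """Leading spatial eigenvector of the covariance of X (rows = times,
    cols = locations), by power iteration; the temporal component is the
    projection of each snapshot onto that spatial pattern."""
    n = len(X[0])
    def cov_mul(v):                      # (X^T X) v without forming X^T X
        out = [0.0] * n
        for row in X:
            s = sum(r * w for r, w in zip(row, v))
            for j, r in enumerate(row):
                out[j] += s * r
        return out
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        v = cov_mul(v)
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
    temporal = [sum(r * c for r, c in zip(row, v)) for row in X]
    return v, temporal

# synthetic "SST anomaly" field: one dominant standing pattern plus noise
random.seed(1)
pattern = [math.sin(math.pi * j / 9) for j in range(10)]
X = [[math.cos(0.3 * t) * p + 0.05 * random.gauss(0, 1)
      for p in pattern] for t in range(60)]

spatial, temporal = leading_mode(X)
# rank-1 reconstruction from the first mode only
recon = [[a * c for c in spatial] for a in temporal]
```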
Coevolution of Glauber-like Ising dynamics and topology
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Fortunato, Santo; Castellano, Claudio
2009-11-01
We study the coevolution of a generalized Glauber dynamics for Ising spins with tunable threshold and of the graph topology where the dynamics takes place. This simple coevolution dynamics generates a rich phase diagram in the space of the two parameters of the model, the threshold and the rewiring probability. The diagram displays phase transitions of different types: spin ordering, percolation, and connectedness. At variance with traditional coevolution models, in which all spins of each connected component of the graph have equal value in the stationary state, we find that, for suitable choices of the parameters, the system may converge to a state in which spins of opposite sign coexist in the same component organized in compact clusters of like-signed spins. Mean field calculations enable one to estimate some features of the phase diagram.
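A toy version of such a coevolution rule can be written in a few lines. The update rule below (flip toward the local majority when the fraction of disagreeing neighbors exceeds a threshold, otherwise rewire a discordant edge with some probability) is an assumption for illustration, not necessarily the exact dynamics studied in the paper.

```python
import random

def coevolve(n=100, k=4, threshold=0.5, p_rewire=0.3, steps=5000, seed=0):
    """Toy spin-topology coevolution sketch (assumed update rule): pick a
    node; if the fraction of disagreeing neighbors exceeds `threshold`,
    either rewire one discordant edge to a like-signed node (probability
    p_rewire) or flip the node's spin."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    nbrs = [set() for _ in range(n)]
    while sum(len(s) for s in nbrs) < n * k:      # random graph, mean degree ~k
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            nbrs[a].add(b); nbrs[b].add(a)
    for _ in range(steps):
        i = rng.randrange(n)
        if not nbrs[i]:
            continue
        discordant = [j for j in nbrs[i] if spins[j] != spins[i]]
        if len(discordant) / len(nbrs[i]) > threshold:
            if rng.random() < p_rewire:
                j = rng.choice(discordant)
                candidates = [m for m in range(n)
                              if m != i and spins[m] == spins[i] and m not in nbrs[i]]
                if candidates:
                    nbrs[i].discard(j); nbrs[j].discard(i)
                    m = rng.choice(candidates)
                    nbrs[i].add(m); nbrs[m].add(i)
            else:
                spins[i] = -spins[i]
    return spins, nbrs

spins, nbrs = coevolve()
# count edges joining opposite spins: these shrink as the system orders
discord = sum(1 for i in range(100) for j in nbrs[i]
              if j > i and spins[j] != spins[i])
edges = sum(len(s) for s in nbrs) // 2
```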
Anomalous torsional tripling in the ν9 and ν10 CH3-deformation modes of ethane 12CH313CH3
NASA Astrophysics Data System (ADS)
Lattanzi, F.; di Lauro, C.
2017-12-01
We have investigated the anomalous torsional behavior in the coupled ν9 and ν10 vibrational fundamentals of 12CH313CH3, both states exhibiting a splitting into three components, instead of two, only in those rotational levels which are very close to resonance. We conclude that the intrinsic additional splitting, which occurs in the E-torsional components, for these two vibrational states is too small to be detected in the high resolution infrared spectrum, but it is substantially enhanced by their coupling. It is shown that this effect requires the simultaneous action of torsion-independent operators, such as Fermi-type and z-Coriolis, not allowed in the more symmetric isotopologue 12CH312CH3, and torsion-dependent operators, such as torsional-Coriolis, connecting the two vibrational states. Our conclusions lead to a simple model for the coupling of ν9 and ν10, with effective Fermi-type matrix elements W for the A-torsional components, and W ± w for the two pairs of E-torsional components. This causes the additional splitting in the E-pairs. This model is consistent with the mechanism causing the Coriolis-dependent decrease of the A-E torsional splitting in degenerate vibrational states. Exploratory calculations were performed making use of results from a normal mode analysis, showing that the effects predictable by the proposed model are of the correct order of magnitude compared to the observed features, with coupling parameter values reasonably consistent with those determined by the least squares fit of the observed transition wavenumbers.
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
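As background, the simplest of these forms, a one-step model, treats devolatilization as a single Arrhenius reaction, dV/dt = A·exp(−E/RT)·(V∞ − V). The sketch below integrates it under a constant heating rate; the kinetic constants are illustrative placeholders, not values fitted to the CPD model.

```python
import math

def one_step_yield(A=2.0e5, E=1.0e5, V_inf=0.5,
                   T0=300.0, heat_rate=1.0e4, T_end=1600.0, dt=1.0e-5):
    """Total volatiles yield V(T) from the one-step model
    dV/dt = A*exp(-E/(R*T))*(V_inf - V) under a constant heating rate,
    integrated with an explicit Euler step. Parameters are illustrative."""
    R = 8.314  # J/(mol K)
    V, T, t = 0.0, T0, 0.0
    history = []
    while T < T_end:
        k = A * math.exp(-E / (R * T))   # Arrhenius rate constant, 1/s
        V += k * (V_inf - V) * dt        # approach to the ultimate yield
        t += dt
        T = T0 + heat_rate * t
        history.append((T, V))
    return history

# yield-vs-temperature curve at a heating rate of 1e4 K/s
curve = one_step_yield()
```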
Robust encoding of stimulus identity and concentration in the accessory olfactory system.
Arnson, Hannah A; Holy, Timothy E
2013-08-14
Sensory systems represent stimulus identity and intensity, but in the neural periphery these two variables are typically intertwined. Moreover, stable detection may be complicated by environmental uncertainty; stimulus properties can differ over time and circumstance in ways that are not necessarily biologically relevant. We explored these issues in the context of the mouse accessory olfactory system, which specializes in detection of chemical social cues and infers myriad aspects of the identity and physiological state of conspecifics from complex mixtures, such as urine. Using mixtures of sulfated steroids, key constituents of urine, we found that spiking responses of individual vomeronasal sensory neurons encode both individual compounds and mixtures in a manner consistent with a simple model of receptor-ligand interactions. Although typical neurons did not accurately encode concentration over a large dynamic range, from population activity it was possible to reliably estimate the log-concentration of pure compounds over several orders of magnitude. For binary mixtures, simple models failed to accurately segment the individual components, largely because of the prevalence of neurons responsive to both components. By accounting for such overlaps during model tuning, we show that, from neuronal firing, one can accurately estimate log-concentration of both components, even when tested across widely varying concentrations. With this foundation, the difference of logarithms, log A - log B = log A/B, provides a natural mechanism to accurately estimate concentration ratios. Thus, we show that a biophysically plausible circuit model can reconstruct concentration ratios from observed neuronal firing, representing a powerful mechanism to separate stimulus identity from absolute concentration.
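The ratio-decoding idea can be sketched with a minimal receptor-occupancy model (all affinities, rate equations, and the grid decoder below are illustrative assumptions, not the fitted models of the paper): neurons respond to the summed occupancy of two compounds, and a population readout recovers the pair of log-concentrations, and hence their difference log A − log B.

```python
import random

def rates(log_a, log_b, neurons):
    """Firing of each model neuron to a binary mixture, with an assumed
    occupancy x = A/Ka + B/Kb and saturating response x/(1+x)."""
    a, b = 10 ** log_a, 10 ** log_b
    return [rmax * (a / ka + b / kb) / (1 + a / ka + b / kb)
            for rmax, ka, kb in neurons]

def decode(observed, neurons, grid):
    """Grid-search least-squares estimate of (log A, log B)."""
    best, best_err = None, float("inf")
    for la in grid:
        for lb in grid:
            pred = rates(la, lb, neurons)
            err = sum((p - o) ** 2 for p, o in zip(pred, observed))
            if err < best_err:
                best, best_err = (la, lb), err
    return best

rng = random.Random(4)
# population with varied affinities for the two compounds; many neurons
# respond to both, mirroring the overlap described in the recordings above
neurons = [(1.0, 10 ** rng.uniform(-7, -4), 10 ** rng.uniform(-7, -4))
           for _ in range(40)]
grid = [x / 4 for x in range(-28, -15)]   # log10 concentrations, -7 .. -4
la, lb = decode(rates(-5.0, -6.0, neurons), neurons, grid)
```

Because the decoder recovers both log-concentrations, the ratio log A − log B is preserved when the whole mixture is scaled, which is the mechanism the abstract highlights.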
NASA Astrophysics Data System (ADS)
Raffray, A. René; Federici, Gianfranco
1997-04-01
RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper is Part I of two complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses shedding light on and identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required for better understanding the consequences of high energy deposition transients, in particular for the ITER limiter case.
Monticello - A glass-rich howardite
NASA Technical Reports Server (NTRS)
Olsen, Edward J.; Dod, Bruce D.; Schmitt, Roman A.; Sipiera, Paul P.
1987-01-01
Monticello is a new howardite similar to Malvern in that it contains abundant (15 percent) glass fragments, which show a range of compositions from olivine-normative to quartz-normative. Like Kapoeta, it contains pyroxene grains that range up to highly magnesian compositions, Fs16. Because their pyroxenes are more magnesian than those occurring in diogenites, Monticello and Kapoeta are exceptions to the simple two-component mixing model in which howardites are considered to be mechanical mixtures of fragmented eucrites and diogenites. Monticello also contains clasts of what appear to be a cumulate eucrite and a noncumulate eucrite, as well as a radiating pyroxene chondrule from a chondrite. Monticello is a regolith breccia containing more evolved components than are usually considered in eucrite-diogenite genesis models. As such, it supports those models that involve reworking of a complex parent body crust rather than straightforward partial melting of primitive chondritic parent material.
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1997-01-01
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has either been a costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
Toward a molecular programming language for algorithmic self-assembly
NASA Astrophysics Data System (ADS)
Patitz, Matthew John
Self-assembly is the process whereby relatively simple components autonomously combine to form more complex objects. Nature exhibits self-assembly to form everything from microscopic crystals to living cells to galaxies. With a desire to both form increasingly sophisticated products and to understand the basic components of living systems, scientists have developed and studied artificial self-assembling systems. One such framework is the Tile Assembly Model introduced by Erik Winfree in 1998. In this model, simple two-dimensional square 'tiles' are designed so that they self-assemble into desired shapes. The work in this thesis consists of a series of results which build toward the future goal of designing an abstracted, high-level programming language for designing the molecular components of self-assembling systems which can perform powerful computations and form into intricate structures. The first two sets of results demonstrate self-assembling systems which perform infinite series of computations that characterize computably enumerable and decidable languages, and exhibit tools for algorithmically generating the necessary sets of tiles. In the next chapter, methods for generating tile sets which self-assemble into complicated shapes, namely a class of discrete self-similar fractal structures, are presented. Next, a software package for graphically designing tile sets, simulating their self-assembly, and debugging designed systems is discussed. Finally, a high-level programming language which abstracts much of the complexity and tedium of designing such systems, while preventing many of the common errors, is presented. The summation of this body of work presents a broad coverage of the spectrum of desired outputs from artificial self-assembling systems and a progression in the sophistication of tools used to design them. 
By creating a broader and deeper set of modular tools for designing self-assembling systems, we hope to increase the complexity which is attainable. These tools provide a solid foundation for future work in both the Tile Assembly Model and explorations into more advanced models.
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously the Exploration Concepts Branch of NASA Langley Research Center has developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression based mass estimating relationships. This past work was useful and greatly reduced design time, however its application area was very narrow in terms of being able to handle a large variety in structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA based JAVA processing procedures and associated process control classes coupled with the general utility of Loft and HSLoad make it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, wings, through full air and space vehicle general arrangements.
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Mathematical, numerical and experimental analysis of the swirling flow at a Kaplan runner outlet
NASA Astrophysics Data System (ADS)
Muntean, S.; Ciocan, T.; Susan-Resiga, R. F.; Cervantes, M.; Nilsson, H.
2012-11-01
The paper presents a novel mathematical model for a priori computation of the swirling flow at the Kaplan runner outlet. The model is an extension of the initial version developed by Susan-Resiga et al [1], to include the contributions of non-negligible radial velocity and of the variable rothalpy. Simple analytical expressions are derived for these additional data from three-dimensional numerical simulations of the Kaplan turbine. The final results, i.e. the velocity component profiles, are validated against experimental data at two operating points, with the same Kaplan runner blade opening but variable discharge.
Redefining plant functional types for forests based on plant traits
NASA Astrophysics Data System (ADS)
Wei, L.; Xu, C.; Christoffersen, B. O.; McDowell, N. G.; Zhou, H.
2016-12-01
Our ability to predict forest mortality is limited by the simple plant functional types (PFTs) in current generations of Earth System models (ESMs). For example, forests were formerly separated into PFTs based only on leaf form and phenology across different regions (arctic, temperate, and tropical areas) in the Community Earth System Model (CESM). This definition of PFTs ignored the large variation in vulnerability of species to drought and shade tolerance within each PFT. We redefined the PFTs for global forests based on plant traits including phenology, wood density, leaf mass per area, xylem-specific conductivity, and xylem pressure at 50% loss of conductivity. Species with similar survival strategies were grouped into the same PFT. New PFTs highlighted variation in vulnerability and physiological adaptation to drought and shade. New PFTs were better clustered than old ones in the two-dimensional plane of the first two principal components in a principal component analysis. We expect that the new PFTs will strengthen ESMs' ability to predict drought-induced mortality in the future.
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied on spectral data of a pharmaceutical formula containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping bands, including those of chlorpheniramine maleate, was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is a simple and rapid way requiring no preliminary separation steps and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
Inference and Explanation in Counterfactual Reasoning
ERIC Educational Resources Information Center
Rips, Lance J.; Edwards, Brian J.
2013-01-01
This article reports results from two studies of how people answer counterfactual questions about simple machines. Participants learned about devices that have a specific configuration of components, and they answered questions of the form "If component X had not operated [failed], would component Y have operated?" The data from these…
pyhector: A Python interface for the simple climate model Hector
Willner, Sven N.; Hartin, Corinne; Gieseke, Robert
2017-04-01
Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).
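For a sense of what such a simple climate model computes, here is a deliberately crude sketch (NOT Hector or pyhector; a constant airborne fraction and purely logarithmic forcing are strong simplifying assumptions made only for illustration) mapping an emissions time series to a CO2 concentration and temperature trajectory:

```python
import math

def toy_carbon_climate(emissions_gtc, airborne_frac=0.45,
                       c0_ppm=278.0, ecs=3.0):
    """Very reduced sketch in the spirit of simple climate models (not
    Hector itself): a constant airborne fraction converts fossil emissions
    (GtC/yr) into an atmospheric CO2 trajectory, and temperature follows
    logarithmic CO2 forcing scaled by the climate sensitivity `ecs`
    (K per CO2 doubling)."""
    ppm_per_gtc = 1.0 / 2.13          # ~2.13 GtC of atmospheric carbon per ppm
    co2, temps, c = [], [], c0_ppm
    for e in emissions_gtc:
        c += airborne_frac * e * ppm_per_gtc
        co2.append(c)
        temps.append(ecs * math.log(c / c0_ppm) / math.log(2.0))
    return co2, temps

# illustrative emissions ramp: 2 -> ~12 GtC/yr over 100 years
co2, temps = toy_carbon_climate([2 + 0.1 * t for t in range(100)])
```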
Simple two-electrode biosignal amplifier.
Dobrev, D; Neycheva, T; Mudrov, N
2005-11-01
A simple, cost-effective circuit for a two-electrode non-differential biopotential amplifier is proposed. It uses a 'virtual ground' transimpedance amplifier and a parallel RC network for input common mode current equalisation, while the signal input impedance preserves its high value. With this innovative interface circuit, a simple non-inverting amplifier fully emulates a high-CMRR differential amplifier. The amplifier's equivalent CMRR (typically 70-100 dB) is equal to the open loop gain of the operational amplifier used in the transimpedance interface stage. The circuit has a very simple structure and utilises a small number of popular components. The amplifier is intended for use in various two-electrode applications, such as Holter-type monitors, defibrillators, ECG monitors, biotelemetry devices etc.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
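The dependence of ratio reliability on the component coefficients of variation can be demonstrated with a small simulation (synthetic data with assumed parameters, not the study's empirical data sets): a low-CV denominator can be individually unreliable while the simple ratio's test-retest correlation still tracks the numerator's.

```python
import random, math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ratio_reliability(n=2000, cv_num=0.3, cv_den=0.05, noise=0.1, seed=2):
    """Illustrative simulation: each subject has true numerator/denominator
    scores; two 'trials' add independent multiplicative measurement noise.
    Returns test-retest correlations of the numerator, the denominator,
    and the simple ratio."""
    rng = random.Random(seed)
    num_t = [rng.gauss(100, 100 * cv_num) for _ in range(n)]
    den_t = [rng.gauss(70, 70 * cv_den) for _ in range(n)]
    def trial(true_scores):
        return [t * (1 + rng.gauss(0, noise)) for t in true_scores]
    n1, n2 = trial(num_t), trial(num_t)
    d1, d2 = trial(den_t), trial(den_t)
    r1 = [a / b for a, b in zip(n1, d1)]
    r2 = [a / b for a, b in zip(n2, d2)]
    return pearson(n1, n2), pearson(d1, d2), pearson(r1, r2)

r_num, r_den, r_ratio = ratio_reliability()
```

With these assumed parameters the denominator's small between-subject variation makes its own test-retest correlation poor, yet the ratio stays reliable because the numerator dominates its variance, which is one face of the component effect the abstract describes.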
Regression Models for Identifying Noise Sources in Magnetic Resonance Images
Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.
2009-01-01
Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
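The Rician measurement model at the core of this framework is easy to simulate (the snippet is a hedged illustration, not the authors' estimation code): magnitude voxels are the modulus of a complex Gaussian, so background regions follow a Rayleigh law with a positive mean rather than zero-mean noise.

```python
import math, random

def rician_samples(signal, sigma, n=20000, seed=3):
    """Draw magnitude-image intensities from the Rician model: real and
    imaginary channels are the true signal plus independent Gaussian
    noise, and the recorded value is the magnitude."""
    rng = random.Random(seed)
    return [math.hypot(signal + rng.gauss(0, sigma), rng.gauss(0, sigma))
            for _ in range(n)]

# at zero signal the Rician reduces to a Rayleigh: mean = sigma*sqrt(pi/2),
# so "background" voxels are biased upward rather than zero-mean
bg = rician_samples(0.0, 1.0)
hi = rician_samples(20.0, 1.0)   # high SNR: approximately Gaussian about 20
```

This bias at low SNR is precisely why a Rician regression model and its associated normal (high-SNR) approximations are treated separately.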
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1982-01-01
A simple expression for the capacitance C(V) associated with the transition region of a p-n junction under forward bias is derived by phenomenological reasoning. The treatment of C(V) is based on the conventional Shockley equations, and the resulting simpler expressions for C(V) are in general accord with previous analytical and numerical results. C(V) consists of two components resulting from changes in majority carrier concentration and from free hole and electron accumulation in the space-charge region. The space-charge region is conceived as the intrinsic region of an n-i-p structure for a space-charge region markedly wider than the extrinsic Debye lengths at its edges. This region is excited in the sense that the forward bias creates hole and electron densities orders of magnitude larger than those in equilibrium. The recent Shirts-Gordon (1979) modeling of the space-charge region using a dielectric response function is contrasted with the more conventional Schottky-Shockley modeling.
Russ, Stefanie
2014-08-01
It is shown that a two-component percolation model on a simple cubic lattice can explain an experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001); Sens. Actuators B 72, 239 (2001)], namely, that a network built up by a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed, where the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found to be in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial for obtaining this agreement.
Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement
NASA Astrophysics Data System (ADS)
Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.
2015-10-01
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 
3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
NASA Astrophysics Data System (ADS)
Temelkov, K. A.; Slaveeva, S. I.; Fedchenko, Yu I.; Chernogorova, T. P.
2018-03-01
Using the well-known Wassiljewa equation and a new simple method, the thermal conductivities of various 2- and 3-component gas mixtures were calculated and compared under gas-discharge conditions optimal for two prospective lasers excited in a nanosecond pulsed longitudinal discharge. By solving the non-stationary heat-conduction equation for electrons, a 2D numerical model was also developed for determination of the radial and temporal dependences of the electron temperature Te (r, t).
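The Wassiljewa mixture rule mentioned above has a compact form, λ_mix = Σᵢ xᵢλᵢ / Σⱼ xⱼAᵢⱼ. The sketch below uses the common Mason-Saxena approximation for the interaction coefficients Aᵢⱼ; the helium-neon property values are rough room-temperature literature numbers chosen for illustration, not the laser-mixture data of the paper.

```python
def wassiljewa(x, lam, M, eps=1.0):
    """Wassiljewa mixture thermal conductivity,
    lambda_mix = sum_i x_i*lam_i / sum_j x_j*A_ij,
    with the Mason-Saxena approximation for A_ij (eps ~ 1).
    x: mole fractions, lam: pure-gas conductivities, M: molar masses."""
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            r = (1 + (lam[i] / lam[j]) ** 0.5 * (M[j] / M[i]) ** 0.25) ** 2
            A[i][j] = eps * r / (8 * (1 + M[i] / M[j])) ** 0.5
    return sum(x[i] * lam[i] / sum(x[j] * A[i][j] for j in range(n))
               for i in range(n))

# illustrative He-Ne binary (conductivities in W/m-K, molar masses in g/mol)
lam_mix = wassiljewa(x=[0.5, 0.5], lam=[0.152, 0.049], M=[4.0, 20.2])
```

Note that A_ii = 1 by construction, so the expression collapses to the pure-component conductivity at either composition limit.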
Camargo, Manuel; Téllez, Gabriel
2008-04-07
The renormalized charge of a simple two-dimensional model of colloidal suspension was determined by solving the hypernetted chain approximation and Ornstein-Zernike equations. In the infinite-dilution limit, the asymptotic behavior of the correlation functions was used to define the effective interactions between the components of the system, and these effective interactions were compared to those derived from the Poisson-Boltzmann theory. The results we obtained show that, in contrast to the mean-field theory, the renormalized charge does not saturate, but exhibits a maximum value and then decays monotonically as the bare charge increases. The results also suggest that beyond the counterion layer near the macroion surface, the ionic cloud is not a diffuse layer which can be handled by means of the linearized theory, as the two-state model claims; rather, a more complex structure is established by the correlations between microions.
Effect of inlet conditions for numerical modelling of the urban boundary layer
NASA Astrophysics Data System (ADS)
Gnatowska, Renata
2018-01-01
The paper presents the numerical results obtained with the use of the ANSYS FLUENT commercial code for analysing the flow structure around two rectangular inline surface-mounted bluff bodies immersed in a boundary layer. The effects of the inflow boundary layer for the accuracy of the numerical modelling of the flow field around a simple system of objects are described. The analysis was performed for two concepts. In the former case, the inlet velocity profile was defined using the power law, whereas the kinetic and dissipation energy was defined from the equations according to Richards and Hoxey [1]. In the latter case, the inlet conditions were calculated for the flow over the rough area composed of the rectangular components.
Analysis of Wind Tunnel Longitudinal Static and Oscillatory Data of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.; Curry, Timothy J.; Brandon, Jay M.
1997-01-01
Static and oscillatory wind tunnel data are presented for a 10-percent-scale model of an F-16XL aircraft. Static data include the effect of angle of attack, sideslip angle, and control surface deflections on aerodynamic coefficients. Dynamic data from small-amplitude oscillatory tests are presented at nominal values of angle of attack between 20 and 60 degrees. Model oscillations were performed at five frequencies from 0.6 to 2.9 Hz and one amplitude of 5 degrees. A simple harmonic analysis of the oscillatory data provided Fourier coefficients associated with the in-phase and out-of-phase components of the aerodynamic coefficients. A strong dependence of the oscillatory data on frequency led to the development of models with unsteady terms in the form of indicial functions. Two models expressing the variation of the in-phase and out-of-phase components with angle of attack and frequency were proposed and their parameters estimated from measured data.
Analysis of seismograms from a downhole array in sediments near San Francisco Bay
Joyner, William B.; Warrick, Richard E.; Oliver, Adolph A.
1976-01-01
A four-level downhole array of three-component instruments was established on the southwest shore of San Francisco Bay to monitor the effect of the sediments on low-amplitude seismic ground motion. The deepest instrument is at a depth of 186 meters, two meters below the top of the Franciscan bedrock. Earthquake data from regional distances (29 km ≤ Δ ≤ 485 km) over a wide range of azimuths are compared with the predictions of a simple plane-layered model with material properties independently determined. Spectral ratios between the surface and bedrock computed for the one horizontal component of motion that was analyzed agree rather well with the model predictions; the model predicts the frequencies of the first three peaks within 10 percent in most cases and the height of the peaks within 50 percent in most cases. Surface time histories computed from the theoretical model predict the time variations of amplitude and frequency content reasonably well, but correlations of individual cycles cannot be made between observed and predicted traces.
A simple geometrical model describing shapes of soap films suspended on two rings
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.
2016-09-01
We measured and analysed the stability of two types of soap films suspended on two rings using a simple model based on conical frusta, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced very well the known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and the known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
Chatterjee, Abhijit; Vlachos, Dionisios G
2007-07-21
While recently derived continuum mesoscopic equations successfully bridge the gap between microscopic and macroscopic physics, so far they have been derived only for simple lattice models. In this paper, general deterministic continuum mesoscopic equations are derived rigorously via nonequilibrium statistical mechanics to account for multiple interacting surface species and multiple processes on multiple site types and/or different crystallographic planes. Adsorption, desorption, reaction, and surface diffusion are modeled. It is demonstrated that contrary to conventional phenomenological continuum models, microscopic physics, such as the interaction potential, determines the final form of the mesoscopic equation. Models of single component diffusion and binary diffusion of interacting particles on single-type site lattice and of single component diffusion on complex microporous materials' lattices consisting of two types of sites are derived, as illustrations of the mesoscopic framework. Simplification of the diffusion mesoscopic model illustrates the relation to phenomenological models, such as the Fickian and Maxwell-Stefan transport models. It is demonstrated that the mesoscopic equations are in good agreement with lattice kinetic Monte Carlo simulations for several prototype examples studied.
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of large-scale laser devices; their load capacity is directly related to the output capacity of the device and depends on many factors. By digitizing the factors that govern component load capacity into a damage-parameters database, a scientific, data-supported basis for assessing the load capacity of optical components is provided. Using business-process analysis and a model-driven approach, an information model for component damage parameters and a database system were established. Application of the system shows that it meets the business-process and data-management requirements of optical-component damage testing, that component parameters are flexible and configurable, and that the system is simple and easy to use, improving the efficiency of optical-component damage testing.
Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser
NASA Technical Reports Server (NTRS)
Monson, D. J.
1977-01-01
The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
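In the simple model described above, each grouped vibrational mode obeys a relaxation equation of Landau-Teller form. The sketch below integrates one such equation with forward Euler; the initial energy, equilibrium value, and relaxation time are dimensionless illustrative numbers, not actual CO2 kinetics.

```python
import math

def landau_teller(e0, e_eq, tau, dt, steps):
    """Forward-Euler integration of de/dt = (e_eq - e)/tau, the relaxation
    equation governing each grouped vibrational mode in the simple model."""
    e, hist = e0, [e0]
    for _ in range(steps):
        e += dt * (e_eq - e) / tau
        hist.append(e)
    return hist

# Illustrative case: a mode starting at e0 = 1 relaxing toward e_eq = 0
# with tau = 1; after t = 1 the energy should be close to exp(-1).
hist = landau_teller(1.0, 0.0, 1.0, 0.01, 100)
```

A full gasdynamic-laser calculation couples two such equations (one per grouped mode) to the flow equations through a temperature-dependent tau, which is where the simple and rigorous models diverge at high temperature.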
Development and characterisation of a novel three-dimensional inter-kingdom wound biofilm model.
Townsend, Eleanor M; Sherry, Leighann; Rajendran, Ranjith; Hansom, Donald; Butcher, John; Mackay, William G; Williams, Craig; Ramage, Gordon
2016-11-01
Chronic diabetic foot ulcers are frequently colonised and infected by polymicrobial biofilms that ultimately prevent healing. This study aimed to create a novel in vitro inter-kingdom wound biofilm model on complex hydrogel-based cellulose substrata to test commonly used topical wound treatments. Inter-kingdom triadic biofilms composed of Candida albicans, Pseudomonas aeruginosa, and Staphylococcus aureus were shown to be quantitatively greater in this model compared to a simple substratum when assessed by conventional culture, metabolic dye, and live/dead qPCR. These biofilms were both structurally complex and compositionally dynamic in response to topical therapy: when treated with either chlorhexidine or povidone iodine, principal component analysis revealed that the 3-D cellulose model was minimally impacted compared to the simple substratum model. This study highlights the importance of biofilm substratum and inclusion of relevant polymicrobial and inter-kingdom components, as these impact penetration and efficacy of topical antiseptics.
The effects of magnetic B(y) component on geomagnetic tail equilibria
NASA Technical Reports Server (NTRS)
Hilmer, Robert V.; Voigt, Gerd-Hannes
1987-01-01
A two-dimensional linear magnetohydrostatic model of the magnetotail is developed here in order to investigate the effects of a significant B(y) component on the configuration of magnetotail equilibria. It is concluded that the enhanced B(y) values must be an essential part of the quiet magnetotail and do not result from a simple intrusion of the IMF. The B(y) field consists of a constant background component plus a nonuniform field existing only in the plasma sheet, where it is dependent on the plasma parameter beta and the strength of the magnetic B(z) component. B(y) is strongest at the neutral sheet and decreases monotonically in the + or - z direction, reaching a constant tail lobe value at the plasma sheet boundaries. The presence of a significant positive B(y) component produces currents, including field-aligned currents, that flow through the equatorial plane and toward and away from Earth in the northern and southern halves of the plasma sheet, respectively.
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease of general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
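The two-part models in category (VI) rest on a simple decomposition of the mean: the probability of any cost times the mean cost among those with a positive cost. The sketch below shows that identity with invented numbers; in applied work each factor is modelled separately (e.g. a logistic regression for any use and a gamma GLM for cost given use).

```python
def two_part_mean(costs):
    """Two-part view of mean cost: Pr(cost > 0) times the mean cost among
    users. As a plain estimator this is an algebraic identity with the
    sample mean; the two factors are what get modelled separately."""
    users = [c for c in costs if c > 0]
    if not users:
        return 0.0
    p_any = len(users) / len(costs)
    return p_any * (sum(users) / len(users))

# Hypothetical trial arm with excess zeros and a heavy right tail
costs = [0, 0, 0, 0, 0, 0, 120, 340, 560, 9800]
mean_cost = two_part_mean(costs)
```

The value of the decomposition is not the point estimate itself but that the zero mass and the skewed positive part each get a distribution suited to them.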
Computer modeling and simulation in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrory, R.L.; Verdon, C.P.
1989-03-01
The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.
Design and fabrication of a hybrid maglev model employing PML and SML
NASA Astrophysics Data System (ADS)
Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.
2017-10-01
A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system offering both passive stability and simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee the stability. In order to realize different working gaps of the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses excellent advantages of heavy-load ability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.
The heliocentric evolution of cometary infrared spectra - Results from an organic grain model
NASA Technical Reports Server (NTRS)
Chyba, Christopher F.; Sagan, Carl; Mumma, Michael J.
1989-01-01
An emission feature peaking near 3.4 microns that is typical of C-H stretching in hydrocarbons and which fits a simple, two-component thermal emission model for dust in the cometary coma, has been noted in observations of Comets Halley and Wilson. A noteworthy consequence of this modeling is that, at about 1 AU, emission features at wavelengths longer than 3.4 microns come to be 'diluted' by continuum emission. A quantitative development of the model shows it to agree with observational data for Comet Halley for certain, plausible values of the optical constants; the observed heliocentric evolution of the 3.4-micron feature thereby furnishes information on the composition of the comet's organic grains.
Modeling and Analysis of Ultrarelativistic Heavy Ion Collisions
NASA Astrophysics Data System (ADS)
McCormack, William; Pratt, Scott
2014-09-01
High-energy collisions of heavy ions, such as gold, copper, or uranium, serve as an important means of studying quantum chromodynamic matter. When relativistic nuclei collide, a hot, energetic fireball of dissociated partonic matter is created; this super-hadronic matter is believed to be the quark gluon plasma (QGP), which is theorized to have comprised the universe immediately following the big bang. As the fireball expands and cools, it reaches freeze-out temperatures, and quarks hadronize into baryons and mesons. To characterize this super-hadronic matter, one can use balance functions, a means of studying correlations due to local charge conservation. In particular, the simple model used in this research assumed two waves of localized charge-anticharge production, with an abrupt transition from the QGP stage to hadronization. Balance functions were constructed as the sum of these two charge production components, and four parameters were manipulated to match the model's output with experimental data taken from the STAR Collaboration at RHIC. Results show that the chemical composition of the super-hadronic matter is consistent with that of a thermally equilibrated QGP. An MSU REU Project.
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
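For contrast with the max-margin Bayesian approach proposed in the paper, the standard count-data baseline is plain maximum-likelihood Poisson regression, which fits in a few lines. The metric values and defect counts below are invented, and simple gradient ascent stands in for the usual IRLS solver.

```python
import math

def poisson_regression(X, y, lr=0.02, epochs=8000):
    """Maximum-likelihood Poisson regression with a log link, fitted by
    gradient ascent: lambda_i = exp(w . x_i), and the gradient of the
    log-likelihood is sum_i (y_i - lambda_i) x_i."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            lam = math.exp(sum(a * b for a, b in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (yi - lam) * xj
        w = [wj + lr * gj / len(y) for wj, gj in zip(w, grad)]
    return w

# Hypothetical defect counts against one scaled software metric
# (first column is an intercept term).
X = [[1.0, 0.0], [1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]]
y = [1, 2, 3, 7, 12]
w = poisson_regression(X, y)
```

This baseline has no mechanism for uncertainty in the coefficients, which is precisely the gap the fully Bayesian treatment in the paper is meant to fill.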
Modular Bundle Adjustment for Photogrammetric Computations
NASA Astrophysics Data System (ADS)
Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.
2018-05-01
In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This enables simple replacement of components to e.g. implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013) based on the Photogrammetric and Computer Vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
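The chain-rule modularisation can be demonstrated numerically: compose two toy "modules" (a fixed rotation and a pinhole-style projection, both invented here, not DBAT's actual steps) and check that the product of the per-step Jacobians equals the Jacobian of the composition.

```python
import math

def num_jac(func, x, h=1e-6):
    """Central-difference Jacobian of func at x; rows are outputs,
    columns are inputs."""
    cols = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        cols.append([(a - b) / (2 * h) for a, b in zip(func(xp), func(xm))])
    return [list(row) for row in zip(*cols)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def g(x):
    """Toy rotation module: rotate (x0, x1) by a fixed angle, keep x2."""
    c, s = math.cos(0.3), math.sin(0.3)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1], x[2]]

def f(y):
    """Toy pinhole-style projection module: (y0/y2, y1/y2)."""
    return [y[0] / y[2], y[1] / y[2]]

x = [0.2, -0.1, 2.0]
J_chain = matmul(num_jac(f, g(x)), num_jac(g, x))   # chain-rule product
J_direct = num_jac(lambda v: f(g(v)), x)            # Jacobian of composition
```

Swapping g for a different rotation or projection module leaves the chain-rule assembly unchanged, which is exactly the property that makes the modular bundle adjustment convenient.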
Large liquid rocket engine transient performance simulation system
NASA Technical Reports Server (NTRS)
Mason, J. R.; Southwick, R. D.
1989-01-01
Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period, specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate as to whether such complex models are required exists. Here an assessment of the predictive performance between two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Foreground Bias from Parametric Models of Far-IR Dust Emission
NASA Technical Reports Server (NTRS)
Kogut, A.; Fixsen, D. J.
2016-01-01
We use simple toy models of far-IR dust emission to estimate the accuracy to which the polarization of the cosmic microwave background can be recovered using multi-frequency fits, if the parametric form chosen for the fitted dust model differs from the actual dust emission. Commonly used approximations to the far-IR dust spectrum yield CMB residuals comparable to or larger than the sensitivities expected for the next generation of CMB missions, despite fitting the combined CMB plus foreground emission to precision 0.1 percent or better. The Rayleigh-Jeans approximation to the dust spectrum biases the fitted dust spectral index by Δβ_d = 0.2 and the inflationary B-mode amplitude by Δr = 0.03. Fitting the dust to a modified blackbody at a single temperature biases the best-fit CMB by Δr > 0.003 if the true dust spectrum contains multiple temperature components. A 13-parameter model fitting two temperature components reduces this bias by an order of magnitude if the true dust spectrum is in fact a simple superposition of emission at different temperatures, but fails at the level Δr = 0.006 for dust whose spectral index varies with frequency. Restricting the observing frequencies to a narrow region near the foreground minimum reduces these biases for some dust spectra but can increase the bias for others. Data at THz frequencies surrounding the peak of the dust emission can mitigate these biases while providing a direct determination of the dust temperature profile.
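The competing parametric forms are easy to write down. Below, a single-temperature modified blackbody and its Rayleigh-Jeans approximation; the frequencies, temperature, spectral index, and the 353 GHz pivot are illustrative values, not the paper's fit results.

```python
import math

H = 6.626e-34   # Planck constant, J s
K = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s

def planck(nu, T):
    """Planck function B_nu(T)."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (K * T))

def rayleigh_jeans(nu, T):
    """Low-frequency limit of the Planck function: 2 nu^2 k T / c^2."""
    return 2 * nu**2 * K * T / C**2

def modified_bb(nu, T, beta, tau0=1.0, nu0=353e9):
    """Single-temperature modified blackbody: tau0 (nu/nu0)^beta B_nu(T)."""
    return tau0 * (nu / nu0)**beta * planck(nu, T)

# For 20 K dust the RJ form already errs by ~12% at 100 GHz and ~70% at
# 857 GHz — the kind of mismatch that leaks into the fitted CMB component.
ratio_100 = planck(100e9, 20.0) / rayleigh_jeans(100e9, 20.0)
ratio_857 = planck(857e9, 20.0) / rayleigh_jeans(857e9, 20.0)
```

A multi-temperature spectrum is then a weighted sum of such terms, which is why a single-temperature fit leaves a residual even when each component is itself a perfect modified blackbody.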
NASA Astrophysics Data System (ADS)
Shamberger, Patrick J.; Garcia, Michael O.
2007-02-01
Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kilauea Volcano’s Puu Oo eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m3), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Puu Oo vent, which grew from ~3 to ~10-12 million m3. These volumetric differences are interpreted as indicating that mixing occurred first in a larger, deeper reservoir before the magma was injected into the overlying smaller reservoir.
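The simplest constant-volume, constant-flux version of such a box-mixing model has a closed form, and the reservoir volume can be recovered from a mixing time series by a crude grid search. The flux, volume, and sampling times below are loosely inspired by the quoted figures but are illustrative stand-ins, not the paper's non-linear least squares optimisation.

```python
import math

def mixing_fraction(t, Q, V):
    """Fraction of the new (influx) component in a well-stirred reservoir of
    constant volume V fed and drained at constant flux Q, starting from pure
    resident magma: f(t) = 1 - exp(-Q t / V)."""
    return 1.0 - math.exp(-Q * t / V)

def fit_volume(times, observed, Q, candidates):
    """Pick the candidate reservoir volume whose mixing curve best matches
    the observed mixing fractions (least squares over a grid)."""
    def sse(V):
        return sum((mixing_fraction(t, Q, V) - f) ** 2
                   for t, f in zip(times, observed))
    return min(candidates, key=sse)

# Synthetic, noise-free example: a 46e6 m3 reservoir fed at 0.3e6 m3/day
times = list(range(0, 400, 20))                      # days
observed = [mixing_fraction(t, 0.3e6, 46e6) for t in times]
V_hat = fit_volume(times, observed, 0.3e6, [v * 1e6 for v in range(20, 81)])
```

With noiseless data the grid search recovers the generating volume exactly; the half-mixing time V ln2 / Q is a handy check on any fitted pair of flux and volume.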
Measurement analysis of two radials with a common-origin point and its application.
Liu, Zhenyao; Yang, Jidong; Zhu, Weiwei; Zhou, Shang; Tan, Xuanping
2017-08-01
In spectral analysis, a chemical component is usually identified by its characteristic spectra, especially the peaks. If two components have overlapping spectral peaks, they are generally considered to be indiscriminate in current analytical chemistry textbooks and related literature. However, if the intensities of the overlapping major spectral peaks are additive, and have different rates of change with respect to variations in the concentration of the individual components, a simple method, named the 'common-origin ray', for the simultaneous determination of two components can be established. Several case studies highlighting its applications are presented. Copyright © 2017 John Wiley & Sons, Ltd.
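If the overlapping peak intensities are additive and the two components' sensitivities differ between two measurements, the concentrations follow from a 2x2 linear system. The sensitivity coefficients and concentrations below are invented illustration values, not data from the paper.

```python
def solve_two_components(K, I):
    """Solve K @ c = I for two components by Cramer's rule, where K[m][i]
    is the sensitivity (rate of intensity change per unit concentration) of
    measurement m to component i, and I[m] is the measured additive
    intensity."""
    (a, b), (c, d) = K
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("sensitivities are not independent; cannot separate")
    return ((I[0] * d - b * I[1]) / det, (a * I[1] - c * I[0]) / det)

# Round trip with made-up sensitivities and true concentrations (0.5, 2.0)
K = [[2.0, 1.0], [1.0, 3.0]]
I = [2.0 * 0.5 + 1.0 * 2.0, 1.0 * 0.5 + 3.0 * 2.0]
c1, c2 = solve_two_components(K, I)
```

The determinant condition is the algebraic face of the requirement in the abstract that the two components' intensities change at different rates.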
NASA Astrophysics Data System (ADS)
Frisquet, Benoit; Kibler, Bertrand; Morin, Philippe; Baronio, Fabio; Conforti, Matteo; Millot, Guy; Wabnitz, Stefan
2016-02-01
Photonics makes it possible to develop simple lab experiments that mimic water rogue wave generation phenomena, as well as relativistic gravitational effects such as event horizons, gravitational lensing and Hawking radiation. The basis for analog gravity experiments is light propagation through an effective moving medium obtained via the nonlinear response of the material. So far, analogue gravity kinematics was reproduced in scalar optical wave propagation test models. Multimode and spatiotemporal nonlinear interactions exhibit a rich spectrum of excitations, which may substantially expand the range of rogue wave phenomena, and lead to novel space-time analogies, for example with multi-particle interactions. By injecting two colliding and modulated pumps with orthogonal states of polarization in a randomly birefringent telecommunication optical fiber, we provide the first experimental demonstration of an optical dark rogue wave. We also introduce the concept of multi-component analog gravity, whereby localized spatiotemporal horizons are associated with the dark rogue wave solution of the two-component nonlinear Schrödinger system.
SCBUCKLE user's manual: Buckling analysis program for simple supported and clamped panels
NASA Technical Reports Server (NTRS)
Cruz, Juan R.
1993-01-01
The program SCBUCKLE calculates the buckling loads and mode shapes of cylindrically curved, rectangular panels. The panel is assumed to have no imperfections. SCBUCKLE is capable of analyzing specially orthotropic symmetric panels (i.e., A(sub 16) = A(sub 26) = 0.0, D(sub 16) = D(sub 26) = 0.0, B(sub ij) = 0.0). The analysis includes first-order transverse shear theory and is capable of modeling sandwich panels. The analysis supports two types of boundary conditions: either simply supported or clamped on all four edges. The panel can be subjected to linearly varying normal loads N(sub x) and N(sub y) in addition to a constant shear load N(sub xy). The applied loads can be divided into two parts: a preload component; and a variable (eigenvalue-dependent) component. The analysis is based on the modified Donnell's equations for shallow shells. The governing equations are solved by Galerkin's method.
Narrow-field imaging of the lunar sodium exosphere
NASA Technical Reports Server (NTRS)
Stern, S. Alan; Flynn, Brian C.
1995-01-01
We present the first results of a new technique for imaging the lunar Na atmosphere. The technique employs high resolution, a narrow bandpass, and specific observing geometry to suppress scattered light and image lunar atmospheric Na I emission down to approximately 50 km altitude. Analysis of four latitudinally dispersed images shows that the lunar Na atmosphere exhibits interesting latitudinal and radial dependencies. Application of a simple Maxwellian collisionless exosphere model indicates that: (1) at least two thermal populations are required to adequately fit the sodium's radial intensity behavior, and (2) the fractional abundances and temperatures of the two components vary systematically with latitude. We conclude that both cold (barometric) and hot (suprathermal) Na may coexist in the lunar atmosphere, either as distinct components or as elements of a continuum of populations ranging in temperature from the local surface temperature up to or exceeding escape energies.
Formation of a disordered solid via a shock-induced transition in a dense particle suspension
NASA Astrophysics Data System (ADS)
Petel, Oren E.; Frost, David L.; Higgins, Andrew J.; Ouellet, Simon
2012-02-01
Shock wave propagation in multiphase media is typically dominated by the relative compressibility of the two components of the mixture. The difference in the compressibility of the components results in a shock-induced variation in the effective volume fraction of the suspension tending toward the random-close-packing limit for the system, and a disordered solid can take form within the suspension. The present study uses a Hugoniot-based model to demonstrate this variation in the volume fraction of the solid phase as well as a simple hard-sphere model to investigate the formation of disordered structures within uniaxially compressed model suspensions. Both models are discussed in terms of available experimental plate impact data in dense suspensions. Through coordination number statistics of the mesoscopic hard-sphere model, comparisons are made with the trends of the experimental pressure-volume fraction relationship to illustrate the role of these disordered structures in the bulk properties of the suspensions. A criterion for the dynamic stiffening of suspensions under high-rate dynamic loading is suggested as an analog to quasi-static jamming based on the results of the simulations.
Exploratory studies into seasonal flow forecasting potential for large lakes
NASA Astrophysics Data System (ADS)
Sene, Kevin; Tych, Wlodek; Beven, Keith
2018-01-01
In seasonal flow forecasting applications, one factor which can help predictability is a significant hydrological response time between rainfall and flows. On account of storage influences, large lakes therefore provide a useful test case although, due to the spatial scales involved, there are a number of modelling challenges related to data availability and understanding the individual components in the water balance. Here some possible model structures are investigated using a range of stochastic regression and transfer function techniques with additional insights gained from simple analytical approximations. The methods were evaluated using records for two of the largest lakes in the world - Lake Malawi and Lake Victoria - with forecast skill demonstrated several months ahead using water balance models formulated in terms of net inflows. In both cases slight improvements were obtained for lead times up to 4-5 months from including climate indices in the data assimilation component. The paper concludes with a discussion of the relevance of the results to operational flow forecasting systems for other large lakes.
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and requires practically no calibration, making it a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies in which synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with AMC (antecedent moisture conditions). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
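For reference, the SCS-CN transformation from total rainfall to effective rainfall mentioned above follows the standard curve-number formula; a minimal sketch (function and parameter names are illustrative, not taken from the paper):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Effective rainfall (runoff depth, mm) from total rainfall via SCS-CN.

    p_mm: total storm rainfall depth (mm)
    cn: curve number (0 < cn <= 100), reflecting prior-to-storm conditions
    ia_ratio: initial abstraction as a fraction of max retention (commonly 0.2)
    """
    s = 25400.0 / cn - 254.0   # maximum potential retention (mm)
    ia = ia_ratio * s          # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

In a probabilistic implementation like the one described, the curve number itself would be drawn from a distribution over antecedent moisture conditions rather than held fixed.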
A simple, gravimetric method to quantify inorganic carbon in calcareous soils
USDA-ARS?s Scientific Manuscript database
Total carbon (TC) in calcareous soils has two components: inorganic carbon (IC) as calcite and or dolomite and organic carbon (OC) in the soil organic matter. The IC must be measured and subtracted from TC to obtain OC. Our objective was to develop a simple gravimetric technique to quantify IC. Th...
Multi-component sorption of Pb(II), Cu(II) and Zn(II) onto low-cost mineral adsorbent.
Prasad, Murari; Xu, Huan-yan; Saxena, Sona
2008-06-15
Multi-component sorption studies were carried out for the attenuation of divalent heavy metal cations (Pb2+, Cu2+ and Zn2+) by a low-cost mineral adsorbent from aqueous solution. Kinetic and equilibrium batch-type sorption experiments were conducted under variable multi-component conditions using low-grade (<12% P2O5) phosphate rock. The percentage removal of the heavy metal species increases with decreasing initial metal concentration and particle size. The equilibrium data were described reasonably by the Freundlich model, but the Langmuir model was more appropriate; the fixation capacities obtained at room temperature for Pb2+, Cu2+ and Zn2+ were 227.2, 769.2 and 666.6 micromol g(-1), respectively. Two simple kinetic models were tested to investigate the adsorption mechanism. First-order rate constants were found to be nearly constant across all metal concentrations. The adsorption capacity of low-grade phosphate rock is lower in the multi-component system than in the single-component system, owing to ionic interactions. The X-ray powder diffraction (XRPD) technique was used to confirm the formation of new metal phases following surface complexation. The used adsorbents were converted into a value-added product by applying an innovative zero-waste concept, solving the spent-adsorbent disposal problem and thus protecting the environment.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Cutting the Composite Gordian Knot: Untangling the AGN-Starburst Threads in Single Aperture Spectra
NASA Astrophysics Data System (ADS)
Flury, Sophia; Moran, Edward C.
2018-01-01
Standard emission line diagnostics are able to segregate star-forming galaxies and Seyfert nuclei, and it is often assumed that ambiguous emission-line galaxies falling between these two populations are “composite” objects exhibiting both types of photoionization. We have developed a method that predicts the most probable H II and AGN components that could plausibly explain the “composite”-class objects solely on the basis of their SDSS spectra. The majority of our analysis is driven by empirical relationships revealed by SDSS data rather than theoretical models founded on assumptions. To verify our method, we have compared the predictions of our model with publicly released IFU data from the S7 survey and find that composite objects are not in fact a simple linear combination of the two types of emission. The data reveal a key component in the mixing sequence: geometric dilution of the ionizing radiation which powers the NLR of the active nucleus. When accounting for this effect, our model is successful when applied to several composite-class galaxies. Some objects, however, appear to be at variance with the predicted results, suggesting they may not be powered by black hole accretion.
Dynamic Considerations for Control of Closed Life Support Systems
NASA Technical Reports Server (NTRS)
Babcock, P. S.; Auslander, D. M.; Spear, R. C.
1985-01-01
Reliability of closed life support systems depends on their ability to continue supplying the crew's needs during perturbations and equipment failures. The dynamic considerations interact with the basic static design through the sizing of storages, the specification of excess capacities in processors, and the choice of system initial state. A very simple system flow model was used to examine the possibilities for system failure even when there is sufficient storage to buffer the immediate effects of the perturbation. Two control schemes are shown which have different dynamic consequences in response to component failures.
Symmetry rules for the indirect nuclear spin-spin coupling tensor revisited
NASA Astrophysics Data System (ADS)
Buckingham, A. D.; Pyykkö, P.; Robert, J. B.; Wiesenfeld, L.
The symmetry rules of Buckingham and Love (1970), relating the number of independent components of the indirect spin-spin coupling tensor J to the symmetry of the nuclear sites, are shown to require modification if the two nuclei are exchanged by a symmetry operation. In that case, the anti-symmetric part of J does not transform as a second-rank polar tensor under symmetry operations that interchange the coupled nuclei and may be called an anti-tensor. New rules are derived and illustrated by simple molecular models.
Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design
NASA Technical Reports Server (NTRS)
Li, Wu; Robinson, Jay
2016-01-01
This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.
Schellman, J A
1990-08-31
The properties of a simple model for solvation in mixed solvents are explored in this paper. The model is based on the supposition that solvent replacement is a simple one-for-one substitution reaction at macromolecular sites which are independent of one another. This leads to a new form for the binding polynomial in which all terms are associated with ligand interchange rather than ligand addition. The principal solvent acts as one of the ligands. Thermodynamic analysis then shows that thermodynamic binding (i.e., selective interaction) depends on the properties of K' - 1, whereas stoichiometric binding (site occupation) depends on K'. K' is a 'practical' interchange equilibrium constant given by (f3/f1)K, where K is the true equilibrium constant for the interchange of components 3 and 1 on the site and f3 and f1 denote their respective activity coefficients on the mole fraction scale. Values of K' less than unity lead to negative selective interaction. It is selective interaction, and not occupation number, which determines the thermodynamic effects of solvation. When K' > 100 on the mole fraction scale or K' > 2 on the molality scale (in water), the differences between stoichiometric binding and selective interaction become less than 1%. The theory of this paper is therefore necessary only for very weak binding constants. When K' - 1 is small, large concentrations of the added solvent component are required to produce a thermodynamic effect. Under these circumstances the isotherms for the selective interaction and for the excess (or transfer) free energy are strongly dependent on the behavior of the activity coefficients of both solvent components. Two classes of behavior are described depending on whether the components display positive or negative deviations from Raoult's law. Examples which are discussed are aqueous solutions of urea and guanidinium chloride for positive deviations and of sucrose and glucose for negative deviations.
Examination of the few studies which have been reported in the literature shows that most of the qualitative features of the stabilization of proteins by sugars and their destabilization by urea and guanidinium chloride are faithfully represented with the model. This includes maxima in the free energy of stabilization and destabilization, decreased and zero selective interaction at high concentrations, etc. These phenomena had no prior explanation. Deficiencies in the model as a representation of solvation in aqueous solution are discussed in the appendix.
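In the notation of the abstract, the central quantities can be summarized compactly (the one-site occupation form below is an assumed schematic illustration, not quoted from the paper):

```latex
% Practical interchange constant, with f_1, f_3 the mole-fraction
% activity coefficients of principal solvent (1) and cosolvent (3):
K' = \frac{f_3}{f_1}\,K
% Assumed one-site interchange form: each independent site is occupied
% by either component 1 or component 3, giving occupation
\nu = \frac{K' x_3}{x_1 + K' x_3}
% Selective (thermodynamic) interaction is governed by K' - 1:
% K' = 1 gives no selective interaction; K' < 1 gives
% negative selective interaction.
```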
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
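The proposed bilinear interpolation of head values can be sketched generically as follows (grid handling and names are illustrative, not taken from the model code):

```python
def bilinear_head(h00, h10, h01, h11, tx, ty):
    """Bilinearly interpolate head between four surrounding cell centers.

    h00, h10, h01, h11: head at the cell centers forming the corners of the
        enclosing rectangle (00 = lower-left, 10 = lower-right, etc.)
    tx, ty: fractional position of the perimeter point within that
        rectangle, each in [0, 1]
    """
    # Linear interpolation along x on both rows, then along y between rows
    h_bottom = (1.0 - tx) * h00 + tx * h10
    h_top = (1.0 - tx) * h01 + tx * h11
    return (1.0 - ty) * h_bottom + ty * h_top
```

The linear interpolation of flow components along a perimeter segment is the one-dimensional special case of the same formula.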
ERIC Educational Resources Information Center
Bonifacci, Paola; Tobia, Valentina
2017-01-01
The present study evaluated which components within the simple view of reading model better predicted reading comprehension in a sample of bilingual language-minority children exposed to Italian, a highly transparent language, as a second language. The sample included 260 typically developing bilingual children who were attending either the first…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Läsker, Ronald; Van de Ven, Glenn; Ferrarese, Laura, E-mail: laesker@mpia.de
2014-01-01
In an effort to secure, refine, and supplement the relation between central supermassive black hole masses, M_•, and the bulge luminosities of their host galaxies, L_bul, we obtained deep, high spatial resolution K-band images of 35 nearby galaxies with securely measured M_•, using the wide-field WIRCam imager at the Canada-France-Hawaii Telescope. A dedicated data reduction and sky subtraction strategy was adopted to estimate the brightness and structure of the sky, a critical step when tracing the light distribution of extended objects in the near-infrared. From the final image product, bulge and total magnitudes were extracted via two-dimensional profile fitting. As a first-order approximation, all galaxies were modeled using a simple Sérsic-bulge+exponential-disk decomposition. However, we found that such models did not adequately describe the structure that we observed in a large fraction of our sample galaxies, which often include cores, bars, nuclei, inner disks, spiral arms, rings, and envelopes. In such cases, we adopted profile modifications and/or more complex models with additional components. The derived bulge magnitudes are very sensitive to the details and number of components used in the models, although total magnitudes remain almost unaffected. Usually, but not always, the luminosities and sizes of the bulges are overestimated when a simple bulge+disk decomposition is adopted in lieu of a more complex model. Furthermore, we found that some spheroids are not well fit when the ellipticity of the Sérsic model is held fixed. This paper presents the details of the image processing and analysis, while we discuss how model-induced biases and systematics in bulge magnitudes impact the M_•-L_bul relation in a companion paper.
Estimation method of finger tapping dynamics using simple magnetic detection system
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo
2010-05-01
We have developed a simple method for estimating a finger tapping dynamics model, to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the ratio of acceleration to velocity of the mechanical impedance parameters, we estimated the resistance (friction coefficient) and compliance (stiffness). We found two dynamics models, for the maximum open position and the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance, and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, while the flexor friction resistance is the main component in the tap position. We conclude that our estimation method makes it possible to understand the tapping dynamics.
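The abstract does not give the estimator in closed form; as a generic illustration of recovering a friction coefficient and stiffness from sampled kinematics, a linear least-squares fit of a second-order model can be used (the model form and all names here are assumptions for this sketch, not the authors' method):

```python
import numpy as np

def fit_friction_stiffness(x, v, a, mass=1.0):
    """Least-squares estimate of friction B and stiffness K in the
    free-response model  mass*a + B*v + K*x = 0,
    given sampled position x, velocity v and acceleration a."""
    regressors = np.column_stack([v, x])   # mass*a = -B*v - K*x
    target = -mass * np.asarray(a)
    (B, K), *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return B, K
```

With real tapping data, x, v and a would come from the magnetic detection system; here the fit simply inverts the assumed impedance model.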
A simple microstructure return model explaining microstructure noise and Epps effects
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2014-01-01
We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
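The microstructure noise effect is easy to reproduce in a toy simulation: a random-walk efficient price observed through a bid-ask bounce makes realized variance shrink as the sampling interval grows. A minimal sketch, far simpler than the authors' ARFIMA-based model and with illustrative parameters:

```python
import random

def simulate_prices(n_ticks=20000, sigma=0.01, spread=0.05, seed=1):
    """Random-walk efficient price observed through a bid-ask bounce."""
    random.seed(seed)
    efficient, observed = 0.0, []
    for _ in range(n_ticks):
        efficient += random.gauss(0.0, sigma)
        bounce = random.choice([-1.0, 1.0]) * spread / 2.0  # bid or ask hit
        observed.append(efficient + bounce)
    return observed

def realized_variance(prices, step):
    """Sum of squared returns sampled every `step` ticks."""
    sampled = prices[::step]
    return sum((b - a) ** 2 for a, b in zip(sampled, sampled[1:]))

# The bounce adds a roughly constant amount per sampled return, so coarser
# sampling (fewer returns) gives a smaller total realized variance, matching
# the negative return correlations induced by component (ii).
```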
Modeling polar cap F-region patches using time varying convection
NASA Technical Reports Server (NTRS)
Sojka, J. J.; Bowline, M. D.; Schunk, R. W.; Decker, D. T.; Valladares, C. E.; Sheehan, R.; Anderson, D. N.; Heelis, R. A.
1993-01-01
Creation of polar cap F-region patches is simulated for the first time using two independent physical models of the high-latitude ionosphere. The patch formation is achieved by temporally varying the magnetospheric electric field (ionospheric convection) input to the models. The imposed convection variations are comparable to changes in the convection that result from changes in the B(y) IMF component for southward IMF. Solar maximum-winter simulations show that simple changes in the convection pattern lead to significant changes in the polar cap plasma structuring. Specifically, in winter, as enhanced dayside plasma convects into the polar cap to form the classic tongue-of-ionization, the convection changes produce density structures that are indistinguishable from the observed patches.
Probing the exchange statistics of one-dimensional anyon models
NASA Astrophysics Data System (ADS)
Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis
2018-05-01
We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show as well that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.
NASA Astrophysics Data System (ADS)
Russ, Stefanie
2014-08-01
It is shown that a two-component percolation model on a simple cubic lattice can explain experimentally observed behavior [Savage et al., Sens. Actuators B 79, 17 (2001), 10.1016/S0925-4005(01)00843-7; Sens. Actuators B 72, 239 (2001), 10.1016/S0925-4005(00)00676-6], namely, that a network built up from a mixture of sintered nanocrystalline semiconducting n and p grains can exhibit selective behavior, i.e., respond with a resistance increase when exposed to a reducing gas A and with a resistance decrease in response to another reducing gas B. To this end, a simple model is developed in which the n and p grains are simulated by overlapping spheres, based on realistic assumptions about the gas reactions on the grain surfaces. The resistance is calculated by random walk simulations with nn, pp, and np bonds between the grains, and the results are found to be in very good agreement with the experiments. Contrary to former assumptions, the np bonds are crucial to obtaining this agreement.
Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria
Hui, Sheng; Silverman, Josh M; Chen, Stephen S; Erickson, David W; Basan, Markus; Wang, Jilong; Hwa, Terence; Williamson, James R
2015-01-01
A central aim of cell biology is to understand the strategy of gene expression in response to the environment. Here, we study the gene expression response to metabolic challenges in exponentially growing Escherichia coli using mass spectrometry. Despite enormous complexity in the details of the underlying regulatory network, we find that the proteome partitions into several coarse-grained sectors, with each sector's total mass abundance exhibiting positive or negative linear relations with the growth rate. The growth rate-dependent components of the proteome fractions comprise about half of the proteome by mass, and their mutual dependencies can be characterized by a simple flux model involving only two effective parameters. The success and apparent generality of this model arise from tight coordination between proteome partition and metabolism, suggesting a principle for resource allocation in the proteome economy of the cell. This strategy of global gene regulation should serve as a basis for future studies on gene expression and the construction of synthetic biological circuits. Coarse graining may be an effective approach to derive predictive phenomenological models for other ‘omics’ studies. PMID:25678603
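The linear sector-fraction-versus-growth-rate relations described above can be illustrated with a small least-squares fit. The sketch below is a minimal illustration in Python; the data points are purely illustrative stand-ins, not measured proteome fractions from the study.

```python
import numpy as np

# Fit the linear relation phi = phi_0 + slope * lambda between a proteome
# sector's mass fraction (phi) and the growth rate (lambda), as in the
# abstract. The data points below are illustrative, not measured values.
growth_rate = np.array([0.4, 0.7, 1.0, 1.3, 1.6])          # 1/h
sector_fraction = np.array([0.08, 0.12, 0.16, 0.20, 0.24])  # mass fraction

slope, intercept = np.polyfit(growth_rate, sector_fraction, 1)
print(f"phi = {intercept:.3f} + {slope:.3f} * lambda")
```

A positive slope corresponds to a sector (such as the ribosomal one) whose share of the proteome grows with growth rate; a negative slope to a sector that shrinks.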
A comparison of Fick and Maxwell-Stefan diffusion formulations in PEMFC gas diffusion layers
NASA Astrophysics Data System (ADS)
Lindstrom, Michael; Wetton, Brian
2017-01-01
This paper explores the mathematical formulations of Fick and Maxwell-Stefan diffusion in the context of polymer electrolyte membrane fuel cell cathode gas diffusion layers. The simple Fick law with a diagonal diffusion matrix is an approximation of Maxwell-Stefan. Formulations of diffusion combined with mass-averaged Darcy flow are considered for three-component gas mixtures. For this application, the formulations can be compared computationally in a simple, one-dimensional setting. Despite the models' seemingly different structure, it is observed that the predictions of the two formulations are very similar on the cathode when air is used as the oxidant. The two formulations give quite different results when the nitrogen in the air oxidant is replaced by helium (this is often done as a diagnostic for fuel cell designs). They also give quite different results for the anode with a dilute hydrogen stream. These results give direction as to when Maxwell-Stefan diffusion, which is more complicated to implement computationally in many codes, should be used in fuel cell simulations.
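For reference, the two formulations being compared can be written in their standard textbook forms, with molar fluxes N_i, mole fractions x_i, and total molar concentration c (the notation here is the usual convention, not taken from the paper itself):

```latex
% Fick's law with a diagonal diffusion matrix (species decoupled):
N_i = -c\, D_i \nabla x_i
% Maxwell-Stefan (all species pairs implicitly coupled):
\nabla x_i = \sum_{j \neq i} \frac{x_i N_j - x_j N_i}{c\, \mathcal{D}_{ij}}
```

The Fick form decouples the species, which is why it is simpler to implement; the Maxwell-Stefan form couples all fluxes through the binary diffusivities \mathcal{D}_{ij}, which is why the two can diverge for mixtures like hydrogen or helium where those diffusivities differ strongly.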
Just, Marcel Adam; Wang, Jing; Cherkassky, Vladimir L
2017-08-15
Although it has been possible to identify individual concepts from a concept's brain activation pattern, there have been significant obstacles to identifying a proposition from its fMRI signature. Here we demonstrate the ability to decode individual prototype sentences from readers' brain activation patterns, by using theory-driven regions of interest and semantic properties. It is possible to predict the fMRI brain activation patterns evoked by propositions and words which are entirely new to the model with reliably above-chance rank accuracy. The two core components implemented in the model that reflect the theory were the choice of intermediate semantic features and the brain regions associated with the neurosemantic dimensions. This approach also predicts the neural representation of object nouns across participants, studies, and sentence contexts. Moreover, we find that the neural representation of an agent-verb-object proto-sentence is more accurately characterized by the neural signatures of its components as they occur in a similar context than by the neural signatures of these components as they occur in isolation. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly adopted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers the balancing of a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balancing problem: minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
ROSAT PSPC observations of NGC 7469 and Ark 120
NASA Technical Reports Server (NTRS)
Brandt, W. N.; Fabian, A. C.; Nandra, K.; Tsuruta, S.
1993-01-01
We present spatial, temporal and spectral analyses of ROSAT Position Sensitive Proportional Counter (PSPC) observations of the Seyfert 1 galaxies NGC 7469 and Ark 120. Both of these sources show evidence for excess emission and more complex 0.1-2.5 keV spectra than are predicted by simple extrapolations of higher-energy power laws. We find that the spectrum of NGC 7469 can be explained by models that have secondary power-law, secondary bremsstrahlung, secondary blackbody or emission-line components. We find evidence for 0.1-2.5 keV intensity variability of NGC 7469. The spectrum of Ark 120 is better described by models with secondary continuum components than by models with sharper spectral features. We discuss the agreement between X-ray and ultraviolet observations of these sources and examine the observations in the context of accretion disc reflection models. The inner parts of discs are likely to be reflective below approximately 0.24 keV, and this reflectivity complicates simple models of the soft excess.
Ontology and modeling patterns for state-based behavior representation
NASA Technical Reports Server (NTRS)
Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.
2015-01-01
This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
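The two-component decomposition described above (a long-term trend plus a dominant 12-h fluctuation) can be sketched with ordinary least squares on a synthetic series. The Python sketch below is a minimal illustration: the data are synthetic, not SNMP measurements, and plain OLS projection stands in for the paper's wavelet MRA and ARIMA machinery.

```python
import numpy as np

# Synthetic stand-in for inter-PoP aggregate demand: a slow linear trend
# plus a 12-h periodic fluctuation, sampled hourly over two weeks.
t = np.arange(0, 24 * 14)
trend = 100 + 0.05 * t                       # long-term growth (arbitrary units)
periodic = 10 * np.sin(2 * np.pi * t / 12)   # 12-h fluctuation
demand = trend + periodic

# Fit the long-term trend with ordinary least squares ...
a, b = np.polyfit(t, demand, 1)
# ... then recover the 12-h component by projecting the residual
# onto sine and cosine at that period.
resid = demand - (a * t + b)
s = np.sin(2 * np.pi * t / 12)
c = np.cos(2 * np.pi * t / 12)
amp = np.hypot(2 * (resid @ s) / len(t), 2 * (resid @ c) / len(t))

print(f"trend slope per hour: {a:.3f}, 12-h amplitude: {amp:.2f}")
```

In the paper's methodology the two recovered components (trend and 12-h fluctuation) are each forecast forward with low-order ARIMA models; the decomposition step above is the part this sketch illustrates.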
Thermodynamics of Yukawa fluids near the one-component-plasma limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrapak, Sergey A.; Aix-Marseille-Université, CNRS, Laboratoire PIIM, UMR 7345, 13397 Marseille Cedex 20; Semenov, Igor L.
Thermodynamics of weakly screened (near the one-component-plasma limit) Yukawa fluids in two and three dimensions is analyzed in detail. It is shown that the thermal component of the excess internal energy of these fluids, when expressed in terms of the properly normalized coupling strength, exhibits the scaling pertinent to the corresponding one-component-plasma limit (the scalings differ considerably between the two- and three-dimensional situations). This provides us with a simple and accurate practical tool to estimate thermodynamic properties of weakly screened Yukawa fluids. Particular attention is paid to the two-dimensional fluids, for which several important thermodynamic quantities are calculated to illustrate the application of the approach.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated, so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
ERIC Educational Resources Information Center
Brocki, Karin C.; Eninger, Lilianne; Thorell, Lisa B.; Bohlin, Gunilla
2010-01-01
The present study, including children at risk for developing Attention Deficit Hyperactivity Disorder (ADHD), examined the idea that complex executive functions (EFs) build upon more simple ones. This notion was applied in the study of longitudinal interrelations between core EF components--simple and complex inhibition, selective attention, and…
Learning from physics-based earthquake simulators: a minimal approach
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2017-04-01
Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb failure function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
NASA Astrophysics Data System (ADS)
Perera, Indika U.; Narendran, Nadarajah; Terentyeva, Valeria
2018-04-01
This study investigated the thermal properties of three-dimensional (3-D) printed components with the potential to be used for thermal management in light-emitting diode (LED) applications. Commercially available filament materials with and without a metal filler were characterized with changes to the print orientation. 3-D printed components with an in-plane orientation had >30% better effective thermal conductivity compared with components printed in a cross-plane orientation. A finite-element analysis was used to understand the effective thermal conductivity changes in the 3-D printed components. A simple thermal resistance model was used to estimate the effective thermal conductivity required for the 3-D printed components to be a viable alternative in LED thermal management applications.
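For steady 1-D conduction through a printed slab, the simple thermal resistance model mentioned above reduces to R = L/(kA), so an effective conductivity follows directly from a heat flow and a temperature drop. A minimal sketch, with all numbers assumed for illustration rather than taken from the study's measurements:

```python
# Steady 1-D conduction through a printed slab: thermal resistance
# R = L / (k * A), so the effective conductivity follows from a known
# heat flow and measured temperature drop: k_eff = Q * L / (A * dT).
# All numbers below are illustrative assumptions, not measured data.
Q = 5.0      # heat flow through the sample, W
L = 0.004    # sample thickness, m
A = 0.0004   # cross-sectional area, m^2 (2 cm x 2 cm)
dT = 10.0    # temperature drop across the sample, K

k_eff = Q * L / (A * dT)   # W/(m*K)
print(f"effective thermal conductivity: {k_eff:.2f} W/(m*K)")
```

Rearranging the same relation for a target junction-to-ambient temperature rise gives the required effective conductivity the abstract refers to.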
Development of single cell lithium ion battery model using Scilab/Xcos
NASA Astrophysics Data System (ADS)
Arianto, Sigit; Yunaningsih, Rietje Y.; Astuti, Edi Tri; Hafiz, Samsul
2016-02-01
In this research, a lithium battery model, as a component in a simulation environment, was developed and implemented using the Scicos/Xcos graphical programming language. The tool used was in fact Xcos, a variant of Scicos embedded in Scilab. The equivalent circuit used in modeling the battery was the Double Polarization (DP) model, which consists of one open-circuit voltage (VOC), one internal resistance (Ri), and two parallel RC circuits. The parameters of the battery were extracted using Hybrid Pulse Power Characterization (HPPC) testing. In this experiment, the DP electrical circuit model was used to describe the lithium battery's dynamic behavior. The results of the simulation were validated against the experimental results. Using simple error analysis, the largest error was found to be 0.275 V, occurring mostly at the low end of the state of charge (SOC).
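The Double Polarization circuit described above can be integrated with a few lines of code. The Python sketch below is a minimal illustration of the circuit equations only; the parameter values are assumed round numbers, not the HPPC-fitted values from the paper.

```python
# Double Polarization (DP) equivalent circuit: an open-circuit voltage VOC,
# a series resistance RI, and two parallel RC branches. Parameter values
# below are illustrative, not the HPPC-fitted values from the paper.
VOC, RI = 3.7, 0.05        # V, ohm
R1, C1 = 0.02, 1000.0      # ohm, F: electrochemical polarization branch
R2, C2 = 0.04, 10000.0     # ohm, F: concentration polarization branch

def terminal_voltage(current, t_end, dt=1.0):
    """Terminal voltage after discharging at `current` amps for t_end seconds."""
    v1 = v2 = 0.0
    for _ in range(int(t_end / dt)):
        # forward-Euler update of each RC branch: dVk/dt = -Vk/(Rk*Ck) + I/Ck
        v1 += dt * (-v1 / (R1 * C1) + current / C1)
        v2 += dt * (-v2 / (R2 * C2) + current / C2)
    return VOC - current * RI - v1 - v2

print(f"{terminal_voltage(1.0, 60.0):.4f} V")
```

The two RC branches relax on different time constants (R1*C1 and R2*C2), which is what lets the DP model capture both the fast and slow parts of the voltage response seen in pulse tests.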
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willner, Sven N.; Hartin, Corinne; Gieseke, Robert
Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015), which is developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).
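The kind of carbon-pool bookkeeping a simple climate model performs can be illustrated with a toy one-pool atmosphere. This sketch is emphatically not Hector's actual multi-pool carbon cycle; the sink fraction is an assumed round number, and only the PgC-to-ppm conversion is a standard constant.

```python
# Toy one-pool atmosphere: annual emissions add carbon, and a fixed fraction
# is taken up by land/ocean sinks. An illustrative sketch of simple climate
# model bookkeeping, NOT Hector's actual multi-pool carbon cycle.
PGC_PER_PPM = 2.13     # PgC of carbon per ppm of atmospheric CO2
SINK_FRACTION = 0.55   # fraction of emissions absorbed each year (assumed)

def atmospheric_co2(emissions_pgc_per_yr, years, c0_ppm=280.0):
    """Atmospheric CO2 after `years` of constant annual emissions."""
    c = c0_ppm
    for _ in range(years):
        airborne_pgc = emissions_pgc_per_yr * (1.0 - SINK_FRACTION)
        c += airborne_pgc / PGC_PER_PPM
    return c

print(f"{atmospheric_co2(10.0, 100):.1f} ppm")
```

Hector refines every step of this picture: the sink uptake is computed from explicit terrestrial and ocean pools rather than a fixed fraction, and the emissions come from scenario time series such as the bundled RCPs.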
ERIC Educational Resources Information Center
Schenk, Robert E.
Intended for use with college students in introductory macroeconomics or American economic history courses, these two computer simulations of two basic macroeconomic models--a simple Keynesian-type model and a quantity-theory-of-money model--present largely incompatible explanations of the Great Depression. Written in Basic, the simulations are…
Using McStas for modelling complex optics, using simple building bricks
NASA Astrophysics Data System (ADS)
Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim
2011-04-01
The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for the design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete, internal description of all those parts, e.g. a guide component including all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow “components inside components” or meta-components, allowing the functionality of several simple components to be combined to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We will here show that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples, including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.
NASA Technical Reports Server (NTRS)
Maekawa, S.; Lin, Y. K.
1977-01-01
The interaction between a turbulent flow and certain types of structures which respond to its excitation is investigated. One-dimensional models were used to develop the basic ideas, which were then applied to a second model resembling the fuselage construction of an aircraft. In the two-dimensional case a simple membrane, with a small random variation in the membrane tension, was used. A decaying turbulence was constructed by superposing infinitely many components, each of which is convected as a frozen pattern at a different velocity. Structure-turbulence interaction results are presented in terms of the spectral densities of the structural response and the perturbation Reynolds stress in the fluid in the vicinity of the interface.
NASA Astrophysics Data System (ADS)
Perez, R. J.; Shevalier, M.; Hutcheon, I.
2004-05-01
Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to its extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a non-linear system of equations composed of modified Henry's Law constants (HLCs) and gas fugacities, assuming binary mixtures. HLCs are a function of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data on vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure-water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions, which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable within 0 to 250 °C, 1 to 150 bars, salinities up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
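The Setchenow correction referred to above is a one-line salting-out adjustment applied to the pure-water solubility. A minimal sketch in Python; the coefficient value and the helper function name are illustrative assumptions, not taken from the paper's calibration.

```python
# Setchenow (salting-out) correction: log10(S_water / S_brine) = k_s * m,
# where m is the salt molality and k_s an empirical Setchenow coefficient.
# The value of K_S and the helper below are illustrative assumptions.
K_S = 0.12   # assumed Setchenow coefficient for a gas in NaCl brine

def brine_solubility(s_water, molality):
    """Gas solubility in brine, given its pure-water solubility."""
    return s_water / 10.0 ** (K_S * molality)

# Salting-out: the gas is less soluble in a 2 molal brine than in pure water.
print(f"{brine_solubility(1.0, 2.0):.3f}")
```

Because the salinity dependence factors out this way, only the pure-water Henry's Law constant needs the full pressure-temperature treatment, which is the simplification the abstract highlights.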
Reconfigurable paramagnetic microswimmers: Brownian motion affects non-reciprocal actuation.
Du, Di; Hilou, Elaa; Biswal, Sibani Lisa
2018-05-09
Swimming at low Reynolds number is typically dominated by a large viscous drag, therefore microscale swimmers require non-reciprocal body deformation to generate locomotion. Purcell described a simple mechanical swimmer at the microscale consisting of three rigid components connected together with two hinges. Here we present a simple microswimmer consisting of two rigid paramagnetic particles with different sizes. When placed in an eccentric magnetic field, this simple microswimmer exhibits non-reciprocal body motion and its swimming locomotion can be directed in a controllable manner. Additional components can be added to create a multibody microswimmer, whereby the particles act cooperatively and translate in a given direction. For some multibody swimmers, the stochastic thermal forces fragment the arm, which therefore modifies the swimming strokes and changes the locomotive speed. This work offers insight into directing the motion of active systems with novel time-varying magnetic fields. It also reveals that Brownian motion not only affects the locomotion of reciprocal swimmers that are subject to the Scallop theorem, but also affects that of non-reciprocal swimmers.
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic, time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power-law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
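The power-law creep equation used by the simple model is commonly written in Norton form, with the creep strain rate proportional to stress raised to a power and an Arrhenius temperature factor. A minimal sketch with assumed material constants (the values below are illustrative, not from the paper):

```python
import math

# Norton power-law creep: strain_rate = A * sigma^n * exp(-Q / (R*T)).
# Material constants below are assumed for illustration only.
A = 1.0e-20    # rate coefficient (assumed; units chosen for stress in MPa)
N_EXP = 5.0    # stress exponent (assumed)
Q = 300.0e3    # activation energy, J/mol (assumed)
R_GAS = 8.314  # universal gas constant, J/(mol*K)

def creep_rate(stress_mpa, temp_k):
    """Steady-state creep strain rate at a given stress and temperature."""
    return A * stress_mpa ** N_EXP * math.exp(-Q / (R_GAS * temp_k))

# Creep accelerates strongly with both stress and temperature:
print(creep_rate(100.0, 1000.0), creep_rate(100.0, 1100.0))
```

The strong stress exponent and Arrhenius factor are what make strain concentrations in hot sections so damaging: a modest local rise in stress or temperature multiplies the creep rate many times over.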
NASA Astrophysics Data System (ADS)
Thomas, R.; Prentice, I. C. C.; Graven, H. D.
2016-12-01
A simple model for gross primary production (GPP), the P-model, is used to analyse the recent increase in the amplitude of the seasonal cycle of CO2 (ASC) at high northern latitudes. Current terrestrial biosphere models and Earth System Models generally underestimate the observed increase in ASC since 1960. The increased ASC is primarily driven by an increase in net primary productivity (NPP), rather than respiration, so models are likely underestimating increases in NPP. In a recent study of process-based terrestrial biosphere models from the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), we showed that the concept of light-use efficiency can be used to separate modelled NPP changes into structural and physiological components (Thomas et al, 2016). The structural component (leaf area) can be tested against observations of greening, while the physiological component (light-use efficiency) is an emergent model property. The analysis suggests that current models are capturing the increases in vegetation greenness, but underestimating the increases in light-use efficiency and NPP. We test this hypothesis using the P-model, which explicitly uses greenness data and includes the effects of rising CO2 and climate change. In the P-model, GPP is calculated using only a few equations, which are based on a strong empirical and theoretical framework, and vegetation is not separated into plant functional types. The model is driven by observed greenness, CO2, temperature and vapour pressure, and modelled photosynthetically active radiation at a monthly time-step. Photosynthetic assimilation is based on two key assumptions: the co-limitation hypothesis (electron transport- and Rubisco-limited photosynthetic rates are equal), and the least-cost hypothesis (optimal ci:ca ratio), and is limited by modelled soil moisture. 
We present simulated changes in GPP over the satellite period (1982-2011) in the P-model, and assess the associated changes in light-use efficiency and ASC. Our results have implications for the attribution of drivers of ecosystem change and the formulation of prognostic and diagnostic biosphere models. Thomas, R. T. et al. 2016, CO2 and greening observations indicate increasing light-use efficiency in Northern terrestrial ecosystems, Geophys Res Lett, in review.
A class of spherical, truncated, anisotropic models for application to globular clusters
NASA Astrophysics Data System (ADS)
de Vita, Ruggero; Bertin, Giuseppe; Zocchi, Alice
2016-05-01
Recently, a class of non-truncated, radially anisotropic models (the so-called f(ν)-models), originally constructed in the context of violent relaxation and the modelling of elliptical galaxies, has been found to possess interesting qualities in relation to observed and simulated globular clusters. In view of new applications to globular clusters, we improve this class of models along two directions. To make them more suitable for the description of small stellar systems hosted by galaxies, we introduce a "tidal" truncation by means of a procedure that guarantees full continuity of the distribution function. The new fT(ν)-models are shown to provide a better fit to the observed photometric and spectroscopic profiles for a sample of 13 globular clusters studied earlier by means of non-truncated models; interestingly, the best-fit models also perform better with respect to the radial-orbit instability. Then, we design a flexible but simple two-component family of truncated models to study the separate issues of mass segregation and multiple populations. We do not aim at a fully realistic description of globular clusters to compete with the description currently obtained by means of dedicated simulations. The goal here is to identify the simplest models, that is, those with the smallest number of free parameters, that still have the capacity to provide a reasonable description of clusters that are evidently beyond the reach of one-component models. With this tool, we aim at identifying the key factors that characterize mass segregation or the presence of multiple populations. To reduce the relevant parameter space, we formulate a few physical arguments based on recent observations and simulations. A first application to two well-studied globular clusters is briefly described and discussed.
Ensemble method for dengue prediction.
Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan
2018-01-01
In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for the transmission season and the peak height for Iquitos, Peru.
Dual differential interferometer for measurements of broadband surface acoustic waves
NASA Technical Reports Server (NTRS)
Turner, T. M.; Claus, R. O.
1981-01-01
A simple dual interferometer which uses two pairs of orthogonally polarized optical beams to measure both the amplitude and direction of propagation of broadband ultrasonic surface waves is described. Each pair of focused laser probe beams is used in a separate wideband differential interferometer to independently detect the component of surface wave motion along one direction on the surface. By combining the two output signals corresponding to both components, the two-dimensional surface profile and its variation as a function of time are determined.
Engineering cancer microenvironments for in vitro 3-D tumor models
Asghar, Waseem; El Assal, Rami; Shafiee, Hadi; Pitteri, Sharon; Paulmurugan, Ramasamy; Demirci, Utkan
2017-01-01
The natural microenvironment of tumors is composed of extracellular matrix (ECM), blood vasculature, and supporting stromal cells. The physical characteristics of ECM as well as the cellular components play a vital role in controlling cancer cell proliferation, apoptosis, metabolism, and differentiation. To mimic the tumor microenvironment outside the human body for drug testing, two-dimensional (2-D) and murine tumor models are routinely used. Although these conventional approaches are employed in preclinical studies, they still present challenges. For example, murine tumor models are expensive and difficult to adopt for routine drug screening. On the other hand, 2-D in vitro models are simple to perform, but they do not recapitulate natural tumor microenvironment, because they do not capture important three-dimensional (3-D) cell–cell, cell–matrix signaling pathways, and multi-cellular heterogeneous components of the tumor microenvironment such as stromal and immune cells. The three-dimensional (3-D) in vitro tumor models aim to closely mimic cancer microenvironments and have emerged as an alternative to routinely used methods for drug screening. Herein, we review recent advances in 3-D tumor model generation and highlight directions for future applications in drug testing. PMID:28458612
NASA Astrophysics Data System (ADS)
Vespe, Francesco; Benedetto, Catia
2013-04-01
The huge amount of GPS Radio Occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X, etc., has greatly encouraged research into new algorithms for extracting humidity, temperature and pressure profiles of the atmosphere with ever greater precision. Concerning humidity profiles, two different approaches have been widely tested and applied in recent years: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component, and humidity is finally derived from the wet refractivity. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of the "Simple" methods is that they are not affected by biases due to the background models. We have previously proposed the BPV approach to retrieve humidity; it can be classified among the "Simple" methods. The BPV approach works with dry atmospheric CIRA-Q models, which depend on latitude, DoY and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a nonlinear least-squares fashion, fitting the RO observed bending angles through the stratosphere. The BPV, like all the other "Simple" methods, has as a drawback the unphysical occurrence of negative "humidity". We therefore propose to apply a modulated weighting of the fit residuals to minimize the effects of this problem. After a proper tuning of the approach, we plan to present the results of the validation.
Molecular-dynamics simulation of mutual diffusion in nonideal liquid mixtures
NASA Astrophysics Data System (ADS)
Rowley, R. L.; Stoker, J. M.; Giles, N. F.
1991-05-01
The mutual-diffusion coefficients, D12, of n-hexane, n-heptane, and n-octane in chloroform were modeled using equilibrium molecular-dynamics (MD) simulations of simple Lennard-Jones (LJ) fluids. Pure-component LJ parameters were obtained by comparison of simulations to experimental self-diffusion coefficients. While values of “effective” LJ parameters are not expected to simulate accurately diverse thermophysical properties over a wide range of conditions, it was recently shown that effective parameters obtained from pure self-diffusion coefficients can accurately model mutual diffusion in ideal liquid mixtures. In this work, similar simulations are used to model diffusion in nonideal mixtures. The same combining rules used in the previous study for the cross-interaction parameters were found to be adequate to represent the composition dependence of D12. The effect of alkane chain length on D12 is also correctly predicted by the simulations. A commonly used assumption in empirical correlations of D12, that its kinetic portion is a simple compositional average of the intradiffusion coefficients, is inconsistent with the simulation results. In fact, the value of the kinetic portion of D12 was often outside the range of values bracketed by the two intradiffusion coefficients for the nonideal system modeled here.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflates their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, and hence to compare the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
Spectral modelling of multicomponent landscapes in the Sahel
NASA Technical Reports Server (NTRS)
Hanan, N. P.; Prince, S. D.; Hiernaux, P. H. Y.
1991-01-01
Simple additive models are used to examine the influence of differing soil types on the spatial average spectral reflectance and normalized difference vegetation index (NDVI). The spatial average NDVI is shown to be a function of the brightness (red plus near-infrared reflectances), the NDVI, and the fractional cover of the components. In landscapes where soil and vegetation can be considered the only components, the NDVI-brightness model can be inverted to obtain the NDVI of vegetation. The red and near-infrared component reflectances of soil and vegetation are determined on the basis of aerial photoradiometer data from Mali. The relationship between the vegetation component NDVI and plant cover is found to be better than between the NDVI of the entire landscape and plant cover. It is concluded that the usefulness of this modeling approach depends on the existence of clearly distinguishable landscape components.
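The two-component additive model and its inversion for the vegetation NDVI can be sketched directly: the mixture NDVI is exactly the brightness-weighted average of the component NDVIs, so given the soil reflectances and fractional cover, the vegetation NDVI can be solved for. The reflectance values in the example are hypothetical, not the Mali photoradiometer data.

```python
# Two-component (soil + vegetation) linear mixture model for NDVI, and its
# inversion for the vegetation-component NDVI. All reflectances illustrative.

def mix_reflectance(f_veg, red_veg, nir_veg, red_soil, nir_soil):
    """Area-weighted average red and NIR reflectance of a soil/vegetation scene."""
    red = f_veg * red_veg + (1 - f_veg) * red_soil
    nir = f_veg * nir_veg + (1 - f_veg) * nir_soil
    return red, nir

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def vegetation_ndvi(ndvi_avg, f_veg, red_soil, nir_soil, b_veg):
    """Invert the brightness-weighted NDVI mixture for the vegetation NDVI.

    ndvi_avg = (f_v*B_v*N_v + f_s*B_s*N_s) / (f_v*B_v + f_s*B_s),
    where B = red + NIR (brightness) and N = NDVI of each component.
    """
    b_soil = red_soil + nir_soil
    n_soil = ndvi(red_soil, nir_soil)
    f_soil = 1 - f_veg
    total_b = f_veg * b_veg + f_soil * b_soil
    return (ndvi_avg * total_b - f_soil * b_soil * n_soil) / (f_veg * b_veg)
```

Because red and NIR mix linearly, the inversion is exact when the component brightnesses and cover fraction are known.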
Statistics of Shared Components in Complex Component Systems
NASA Astrophysics Data System (ADS)
Mazzolini, Andrea; Gherardi, Marco; Caselle, Michele; Cosentino Lagomarsino, Marco; Osella, Matteo
2018-04-01
Many complex systems are modular. Such systems can be represented as "component systems," i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf's law. Such "laws" affect in a general and nontrivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from large-scale data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the "core" genome in bacteria.
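The null model described above, realizations assembled by random draws from a universe of components with heterogeneous (Zipf-like) abundances, can be sketched in a few lines. Universe size, realization size, and the Zipf exponent below are illustrative assumptions, not the paper's fitted values.

```python
# Toy random-sampling null model for shared-component statistics: build
# "realizations" (e.g. LEGO sets) by drawing from a Zipf-weighted universe,
# then tabulate how many realizations each component occurs in.
import random
from collections import Counter

def null_model_occurrence(n_universe=500, n_sets=50, set_size=40,
                          zipf_exp=1.0, seed=0):
    rng = random.Random(seed)
    # Zipf-like abundance: weight of component k proportional to 1 / (k+1)^exp.
    weights = [1.0 / (k + 1) ** zipf_exp for k in range(n_universe)]
    occurrence = Counter()
    for _ in range(n_sets):
        # A realization is the set of distinct components in `set_size` draws.
        draws = set(rng.choices(range(n_universe), weights=weights, k=set_size))
        occurrence.update(draws)
    return occurrence  # occurrence[c] = number of realizations containing c

occ = null_model_occurrence()
# High-abundance (low-index) components end up shared by nearly all sets,
# giving the heavy-tailed occurrence distribution discussed in the abstract.
```

Deviations of empirical occurrence counts from this baseline are what the authors use to flag architectural features such as the bacterial "core" genome.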
User assessment of smoke-dispersion models for wildland biomass burning.
Steve Breyfogle; Sue A. Ferguson
1996-01-01
Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, Vikram; Harrison, Fiona A.; Walton, Dominic J.
We present results for two ultraluminous X-ray sources (ULXs), IC 342 X-1 and IC 342 X-2, using two epochs of XMM-Newton and NuSTAR observations separated by ∼7 days. We observe little spectral or flux variability above 1 keV between epochs, with unabsorbed 0.3-30 keV luminosities of 1.04(+0.08, −0.06) × 10^40 erg s^−1 for IC 342 X-1 and 7.40 ± 0.20 × 10^39 erg s^−1 for IC 342 X-2, so that both were observed in a similar, luminous state. Both sources have a high absorbing column in excess of the Galactic value. Neither source has a spectrum consistent with a black hole binary in the low/hard state, and both ULXs exhibit strong curvature in their broadband X-ray spectra. This curvature rules out models that invoke a simple reflection-dominated spectrum with a broadened iron line and no cutoff in the illuminating power-law continuum. The X-ray spectrum of IC 342 X-1 can be characterized by a soft disk-like blackbody component at low energies and a cool, optically thick Comptonization continuum at high energies, but a unique physical interpretation of the spectral components remains challenging. The broadband spectrum of IC 342 X-2 can be fit by either a hot (3.8 keV) accretion disk or a Comptonized continuum with no indication of a seed photon population. Although the seed photon component may be masked by soft excess emission unlikely to be associated with the binary system, combined with the high absorption column, it is more plausible that the broadband X-ray emission arises from a simple thin blackbody disk component. Secure identification of the origin of the spectral components in these sources will likely require broadband spectral variability studies.
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows an impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity for the two cases, one in which sand is the smaller particle and one in which it is the larger particle, can be modeled independently from the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
Furthermore, elastic properties are obtainable via the general Hashin-Shtrikman-Walpole bounds. The results predicted by this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
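The final permeability step, a Kozeny-Carman estimate from effective porosity and effective grain size, can be sketched as below. The effective-grain-size formula shown (a specific-surface-weighted harmonic mean) is a common simplification standing in for the paper's weighted-average scheme, and all numbers are illustrative.

```python
# Kozeny-Carman permeability from effective porosity and effective grain size.
# The grain-size mixing rule and the example fractions are illustrative only.

def kozeny_carman(phi_eff, d_eff):
    """Permeability (m^2) from effective porosity and grain diameter (m).

    Classic form: k = d^2 * phi^3 / (180 * (1 - phi)^2).
    """
    return d_eff ** 2 * phi_eff ** 3 / (180.0 * (1.0 - phi_eff) ** 2)

def effective_grain_size(fractions, diameters):
    """Harmonic (specific-surface-weighted) mean grain diameter of a mixture."""
    return sum(fractions) / sum(f / d for f, d in zip(fractions, diameters))

# Gravel-sand-clay example (volume fractions sum to 1; diameters in metres):
d_eff = effective_grain_size([0.05, 0.80, 0.15], [1e-2, 2e-4, 1e-6])
k = kozeny_carman(phi_eff=0.25, d_eff=d_eff)
```

Note how the harmonic mean is dominated by the fine (clay) fraction, which is why a few percent of gravel changes the permeability far less through d_eff than through the porosity terms.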
Raymer, James; Abel, Guy J.; Rogers, Andrei
2012-01-01
Population projection models that introduce uncertainty are a growing subset of projection models in general. In this paper, we focus on the importance of decisions made with regard to the model specifications adopted. We compare the forecasts and prediction intervals associated with four simple regional population projection models: an overall growth rate model, a component model with net migration, a component model with in-migration and out-migration rates, and a multiregional model with destination-specific out-migration rates. Vector autoregressive models are used to forecast future rates of growth, birth, death, net migration, in-migration and out-migration, and destination-specific out-migration for the North, Midlands and South regions in England. They are also used to forecast different international migration measures. The base data represent a time series of annual data provided by the Office for National Statistics from 1976 to 2008. The results illustrate how both the forecasted subpopulation totals and the corresponding prediction intervals differ for the multiregional model in comparison to other simpler models, as well as for different assumptions about international migration. The paper ends with a discussion of our results and possible directions for future research. PMID:23236221
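Two of the simpler projection models being compared can be written down directly: an overall growth-rate model and a component model with births, deaths, and net migration. The rates and migration figures below are made up for illustration and are not the ONS data.

```python
# Two simple population projection models from the comparison above,
# with illustrative (not ONS) rates.

def growth_rate_projection(pop, r, years):
    """Overall growth-rate model: P(t+1) = P(t) * (1 + r)."""
    out = [pop]
    for _ in range(years):
        out.append(out[-1] * (1 + r))
    return out

def component_projection(pop, birth_rate, death_rate, net_migration, years):
    """Component model: P(t+1) = P(t) + births - deaths + net migration."""
    out = [pop]
    for _ in range(years):
        p = out[-1]
        out.append(p + p * birth_rate - p * death_rate + net_migration)
    return out
```

The multiregional model in the paper extends the second form by replacing the single net-migration term with destination-specific out-migration flows between regions.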
Limiting factors in atomic resolution cryo electron microscopy: No simple tricks
Zhang, Xing; Zhou, Z. Hong
2013-01-01
To bring cryo electron microscopy (cryoEM) of large biological complexes to atomic resolution, several factors – in both cryoEM image acquisition and 3D reconstruction – that may be neglected at low resolution become significantly limiting. Here we present thorough analyses of four limiting factors: (a) electron-beam tilt, (b) inaccurate determination of defocus values, (c) focus gradient through particles, and (d) particularly for large particles, dynamic (multiple) scattering of electrons. We also propose strategies to cope with these factors: (a) the divergence and direction tilt components of electron-beam tilt could be reduced by maintaining parallel illumination and by using a coma-free alignment procedure, respectively. Moreover, the effect of all beam tilt components, including spiral tilt, could be eliminated by use of a spherical aberration corrector. (b) More accurate measurement of defocus value could be obtained by imaging areas adjacent to the target area at high electron dose and by measuring the image shift induced by tilting the electron beam. (c) Each known Fourier coefficient in the Fourier transform of a cryoEM image is the sum of two Fourier coefficients of the 3D structure, one on each of two curved ‘characteristic surfaces’ in 3D Fourier space. We describe a simple model-based iterative method that could recover these two Fourier coefficients on the two characteristic surfaces. (d) The effect of dynamic scattering could be corrected by deconvolution of a transfer function. These analyses and our proposed strategies offer useful guidance for future experimental designs targeting atomic resolution cryoEM reconstruction. PMID:21627992
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
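The abstract's neuron modifies the Hebbian rule with an averaged postsynaptic term and avoids explicit decay. As a runnable point of reference, the sketch below uses Oja's rule, the standard decay-stabilized Hebbian principal-component learner; it is explicitly not the paper's algorithm, but it shows the behavior both rules converge to: extraction of the first principal component.

```python
# Oja's rule: a standard stabilized Hebbian learner that converges to the
# first principal component of zero-mean data. Shown for comparison only;
# the paper's rule uses averaged postsynaptic activity and no decay term.
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
    """Learn the first principal component of zero-mean data X (n_samples, dim)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                  # postsynaptic activity
            w += lr * y * (x - y * w)  # Hebbian term with Oja's decay
    return w / np.linalg.norm(w)

# Synthetic zero-mean data with dominant variance along (1, 1)/sqrt(2):
rng = np.random.default_rng(1)
t = rng.normal(size=500)
X = np.column_stack([t, t]) + 0.1 * rng.normal(size=(500, 2))
w = oja_first_pc(X)
# w aligns (up to sign) with the leading eigenvector of the data covariance.
```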
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
RL10A-3-3A Rocket Engine Modeling Project
NASA Technical Reports Server (NTRS)
Binder, Michael; Tomsik, Thomas; Veres, Joseph P.
1997-01-01
Two RL10A-3-3A rocket engines comprise the main propulsion system for the Centaur upper stage vehicle. Centaur is used with both Titan and Atlas launch vehicles, carrying military and civilian payloads from high altitudes into orbit and beyond. The RL10 has delivered highly reliable service for the past 30 years. Recently, however, there have been two in-flight failures which have refocused attention on the RL10. This heightened interest has sparked a desire for an independent RL10 modeling capability within NASA and the Air Force. Pratt & Whitney, which presently has the most detailed model of the RL10, also sees merit in having an independent model which could be used as a cross-check with their own simulations. The Space Propulsion Technology Division (SPTD) at the NASA Lewis Research Center has developed a computer model of the RL10A-3-3A. A project team was formed, consisting of experts in the areas of turbomachinery, combustion, and heat transfer. The overall goal of the project was to provide a model of the entire RL10 rocket engine for government use. In the course of the project, the major engine components were modeled using a combination of simple correlations and detailed component analysis tools (computer codes). The results of these component analyses were verified with data provided by Pratt & Whitney. Selected modeling results and test data curves were then integrated to form the RL10 engine system model. The purpose of this report is to introduce the reader to the RL10 rocket engine and to describe the engine system model. The RL10 engine and its application to U.S. launch vehicles are described first, followed by a summary of the SPTD project organization, goals, and accomplishments. Simulated output from the system model is shown in comparison with test and flight data for start transient, steady state, and shut-down transient operations.
Detailed descriptions of all component analyses, including those not selected for integration with the system model, are included as appendices.
NASA Technical Reports Server (NTRS)
Mclaughlin, W. I.; Lundy, S. A.; Ling, H. Y.; Stroberg, M. W.
1980-01-01
The coverage of the celestial sphere or the surface of the earth with a narrow-field instrument onboard a satellite can be described by a set of swaths on the sphere. A transect is a curve on this sphere constructed to sample the coverage. At each point on the transect the number of times that the field-of-view of the instrument has passed over the point is recorded. This information is conveniently displayed as an integer-valued histogram over the length of the transect. The effectiveness of the transect method for a particular observing plan and the best placement of the transects depend upon the structure of the set of observations. Survey missions are usually characterized by a somewhat parallel alignment of the instrument swaths. Using autocorrelation and cross-correlation functions among the histograms, the structure of a survey has been decomposed into two components, and each is illustrated by a simple mathematical model. The complex, all-sky survey to be performed by the Infrared Astronomical Satellite (IRAS) is synthesized in some detail utilizing the objectives and constraints of that mission. It is seen that this survey possesses the components predicted by the simple models, and this information is useful in characterizing the properties of the IRAS survey and the placement of the transects as a function of celestial latitude and certain structural properties of the coverage.
A curious case of the accretion-powered X-ray pulsar GX 1+4
NASA Astrophysics Data System (ADS)
Jaisawal, Gaurava K.; Naik, Sachindra; Gupta, Shivangi; Chenevez, Jérôme; Epili, Prahlad
2018-04-01
We present detailed spectral and timing studies using a NuSTAR observation of GX 1+4 in October 2015 during an intermediate intensity state. The measured spin period of 176.778 s is found to be one of the highest values since its discovery. In contrast to a broad sinusoidal-like pulse profile, a peculiar sharp peak is observed in profiles below ˜25 keV. The profiles at higher energies are found to be significantly phase-shifted compared to the soft X-ray profiles. Broadband energy spectra of GX 1+4, obtained from NuSTAR and Swift observations, are described with various continuum models. Among these, a two-component model consisting of a bremsstrahlung and a blackbody component is found to best fit the phase-averaged and phase-resolved spectra. Physical models are also used to investigate the emission mechanism in the pulsar, which allows us to estimate the magnetic field strength to be in the ˜(5-10) × 10^12 G range. Phase-resolved spectroscopy of the NuSTAR observation shows a strong blackbody emission component in a narrow pulse phase range. This component is interpreted as the origin of the peculiar peak in the pulse profiles below ˜25 keV. The size of the emitting region is calculated to be ˜400 m. The bremsstrahlung component is found to dominate in hard X-rays and explains the nature of the simple profiles at high energies.
Test Driven Development of a Parameterized Ice Sheet Component
NASA Astrophysics Data System (ADS)
Clune, T.
2011-12-01
Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need to have simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
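The test-first pattern the abstract describes, comparing an implementation against a simple closed-form expression, looks like this in miniature. The growth function is a made-up stand-in, not the author's snowflake or ice sheet code.

```python
# TDD in miniature: the test (written first) pins the implementation to a
# closed-form expression for a trivial case. `grow` is a hypothetical stand-in.

def test_growth_matches_closed_form():
    # Closed form for the trivial case: constant growth rate r over n steps.
    r, n, m0 = 0.1, 5, 1.0
    expected = m0 * (1 + r) ** n
    assert abs(grow(m0, r, n) - expected) < 1e-12

def grow(mass, rate, steps):
    """Implementation written after (and driven by) the test above."""
    for _ in range(steps):
        mass *= 1 + rate
    return mass

test_growth_matches_closed_form()
```

The hard part flagged in the abstract is finding such closed forms (and realistic error tolerances) for genuinely numerical kernels, where no exact reference exists.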
Problem-Solving Test: Submitochondrial Localization of Proteins
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
Mitochondria are surrounded by two membranes (outer and inner mitochondrial membrane) that separate two mitochondrial compartments (intermembrane space and matrix). Hundreds of proteins are distributed among these submitochondrial components. A simple biochemical/immunological procedure is described in this test to determine the localization of…
Monte Carlo simulation of two-component bilayers: DMPC/DSPC mixtures.
Sugár, I P; Thompson, T E; Biltonen, R L
1999-01-01
In this paper, we describe a relatively simple lattice model of a two-component, two-state phospholipid bilayer. Application of Monte Carlo methods to this model permits simulation of the observed excess heat capacity versus temperature curves of dimyristoylphosphatidylcholine (DMPC)/distearoylphosphatidylcholine (DSPC) mixtures as well as the lateral distributions of the components and properties related to these distributions. The analysis of the bilayer energy distribution functions reveals that the gel-fluid transition is a continuous transition for DMPC, DSPC, and all DMPC/DSPC mixtures. A comparison of the thermodynamic properties of DMPC/DSPC mixtures with the configurational properties shows that the characteristic temperatures of the configurational properties correlate well with the maxima in the excess heat capacity curves rather than with the onset and completion temperatures of the gel-fluid transition. In the gel-fluid coexistence region, we also found excellent agreement between the threshold temperatures at different system compositions detected in fluorescence recovery after photobleaching experiments and the temperatures at which the percolation probability of the gel clusters is 0.36. At every composition, the calculated mole fraction of gel state molecules at the fluorescence recovery after photobleaching threshold is 0.34 and, at the percolation threshold of gel clusters, it is 0.24. The percolation threshold mole fraction of gel or fluid lipid depends on the packing geometry of the molecules and the interchain interactions. However, it is independent of temperature, system composition, and state of the percolating cluster. PMID:10096905
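A drastically reduced sketch of the Monte Carlo machinery: one lipid species with two states (gel/fluid) on a periodic square lattice, sampled with the Metropolis rule. The energy parameters are arbitrary illustrations; the paper's model adds a second lipid component and experimentally constrained interaction parameters.

```python
# Metropolis Monte Carlo on a one-component, two-state (gel/fluid) lattice.
# d_e is an arbitrary gel-fluid energy gap; j penalizes unlike neighbours.
import math
import random

def metropolis_two_state(L=16, temp=1.0, d_e=1.0, j=0.5, steps=20000, seed=0):
    """Each site is 0 (gel) or 1 (fluid). Returns the final fluid fraction."""
    rng = random.Random(seed)
    state = [[0] * L for _ in range(L)]

    def energy_delta(i, k):
        s = state[i][k]
        unlike = sum(1 for di, dk in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if state[(i + di) % L][(k + dk) % L] != s)
        # Flipping the site swaps its like and unlike bonds and toggles the
        # state-energy term d_e (paid when entering the fluid state).
        return (1 - 2 * s) * d_e + j * ((4 - unlike) - unlike)

    for _ in range(steps):
        i, k = rng.randrange(L), rng.randrange(L)
        dE = energy_delta(i, k)
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            state[i][k] = 1 - state[i][k]
    return sum(map(sum, state)) / (L * L)
```

Sweeping `temp` and recording the energy histogram is the (much simplified) analogue of the excess heat capacity curves analyzed in the paper; cluster analysis of `state` would give the percolation quantities.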
NASA Astrophysics Data System (ADS)
Davis, Joshua R.; Giorgis, Scott
2014-11-01
We describe a three-part approach for modeling shape preferred orientation (SPO) data of spheroidal clasts. The first part consists of criteria to determine whether a given SPO and clast shape are compatible. The second part is an algorithm for randomly generating spheroid populations that match a prescribed SPO and clast shape. In the third part, numerical optimization software is used to infer deformation from spheroid populations, by finding the deformation that returns a set of post-deformation spheroids to a minimally anisotropic initial configuration. Two numerical experiments explore the strengths and weaknesses of this approach, while giving information about the sensitivity of the model to noise in data. In monoclinic transpression of oblate rigid spheroids, the model is found to constrain the shortening component but not the simple shear component. This modeling approach is applied to previously published SPO data from the western Idaho shear zone, a monoclinic transpressional zone that deformed a feldspar megacrystic gneiss. Results suggest at most 5 km of shortening, as well as pre-deformation SPO fabric. The shortening estimate is corroborated by a second model that assumes no pre-deformation fabric.
Acoustic Shielding for a Model Scale Counter-rotation Open Rotor
NASA Technical Reports Server (NTRS)
Stephens, David B.; Envia, Edmane
2012-01-01
The noise shielding benefit of installing an open rotor above a simplified wing or tail is explored experimentally. The test results provide both a benchmark data set for validating shielding prediction tools and an opportunity for a system level evaluation of the noise reduction potential of propulsion noise shielding by an airframe component. A short barrier near the open rotor was found to provide up to 8.5 dB of attenuation at some directivity angles, with tonal sound particularly well shielded. Predictions from two simple shielding theories were found to overestimate the shielding benefit.
NASA Astrophysics Data System (ADS)
Stewart, Michael K.; Morgenstern, Uwe; Gusyev, Maksym A.; Małoszewski, Piotr
2017-09-01
Kirchner (2016a) demonstrated that aggregation errors due to spatial heterogeneity, represented by two homogeneous subcatchments, could cause severe underestimation of the mean transit times (MTTs) of water travelling through catchments when simple lumped parameter models were applied to interpret seasonal tracer cycle data. Here we examine the effects of such errors on the MTTs and young water fractions estimated using tritium concentrations in two-part hydrological systems. We find that MTTs derived from tritium concentrations in streamflow are just as susceptible to aggregation bias as those from seasonal tracer cycles. Likewise, groundwater wells or springs fed by two or more water sources with different MTTs will also have aggregation bias. However, the transit times over which the biases are manifested are different because the two methods are applicable over different time ranges, up to 5 years for seasonal tracer cycles and up to 200 years for tritium concentrations. Our virtual experiments with two water components show that the aggregation errors are larger when the MTT differences between the components are larger and the amounts of the components are each close to 50 % of the mixture. We also find that young water fractions derived from tritium (based on a young water threshold of 18 years) are almost immune to aggregation errors as were those derived from seasonal tracer cycles with a threshold of about 2 months.
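The aggregation bias described above can be illustrated with a minimal sketch, assuming exponential transit-time distributions for the two subcatchments and a naive single-store fit to the mixed seasonal tracer cycle (the paper's tritium calculations are more involved; all numbers below are illustrative):

```python
import math

def inferred_mtt_from_mixture(taus, weights, period_years=1.0):
    """Aggregation-bias demo for a two-component catchment.

    Each component has an exponential transit-time distribution of mean
    tau; its seasonal-tracer transfer function is H(tau) = 1/(1 + i*w*tau)
    with w = 2*pi/period.  Mixing streamflow mixes H linearly, and a naive
    single-store fit recovers an apparent tau from |H_mix| alone.
    """
    w = 2.0 * math.pi / period_years
    H_mix = sum(wt / (1.0 + 1j * w * t) for wt, t in zip(weights, taus))
    amp = abs(H_mix)
    # invert |H| = 1/sqrt(1 + (w*tau)^2) for the apparent tau
    return math.sqrt(1.0 / amp ** 2 - 1.0) / w

# 50/50 mix of a 0.5-year and a 20-year store: true mean is 10.25 years,
# but the apparent MTT from the mixed cycle is roughly 1 year.
true_mean = 0.5 * 0.5 + 0.5 * 20.0
apparent = inferred_mtt_from_mixture([0.5, 20.0], [0.5, 0.5])
```

The severe underestimate mirrors Kirchner's result: the short-transit component dominates the surviving seasonal amplitude, so the fit is blind to the old water.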
Snowmelt-runoff Model Utilizing Remotely-sensed Data
NASA Technical Reports Server (NTRS)
Rango, A.
1985-01-01
Remotely sensed snow cover information is the critical data input for the Snowmelt-Runoff Model (SRM), which was developed to simulate discharge from mountain basins where snowmelt is an important component of runoff. Of simple structure, the model requires only input of temperature, precipitation, and snow-covered area. SRM was run successfully on two widely separated basins. The simulations on the Kings River basin are significant because of the large basin area (4000 sq km) and the adequate performance in the most extreme drought year of record (1976). The performance of SRM on the Okutadami River basin was important because it was accomplished with minimum snow cover data available. Tables show: optimum and minimum conditions for model application; basin sizes and elevations where SRM was applied; and SRM strengths and weaknesses. Graphs show results of discharge simulation.
Wearden, J H; Lejeune, Helga
2006-02-28
The article deals with response rates (mainly running and peak or terminal rates) on simple and on some mixed-FI schedules and explores the idea that these rates are determined by the average delay of reinforcement for responses occurring during the response periods that the schedules generate. The effects of reinforcement delay are assumed to be mediated by a hyperbolic delay of reinforcement gradient. The account predicts that (a) running rates on simple FI schedules should increase with increasing rate of reinforcement, in a manner close to that required by Herrnstein's equation, (b) improving temporal control during acquisition should be associated with increasing running rates, (c) two-valued mixed-FI schedules with equiprobable components should produce complex results, with peak rates sometimes being higher on the longer component schedule, and (d) that effects of reinforcement probability on mixed-FI should affect the response rate at the time of the shorter component only. All these predictions were confirmed by data, although effects in some experiments remain outside the scope of the model. In general, delay of reinforcement as a determinant of response rate on FI and related schedules (rather than temporal control on such schedules) seems a useful starting point for a more thorough analysis of some neglected questions about performance on FI and related schedules.
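The core of the account above, a hyperbolic delay-of-reinforcement gradient averaged over the response period, can be sketched as follows (the gradient parameter k and the response-period onset are assumed values, not the paper's fits):

```python
def mean_hyperbolic_value(fi_seconds, k=0.2, start_fraction=0.5, n=1000):
    """Mean hyperbolic reinforcement value over an FI response period.

    Responses occur from start_fraction*FI until reinforcement at FI.
    Each response at time t is weighted by the hyperbolic gradient
    V(D) = 1 / (1 + k*D), where D = FI - t is its delay to reinforcement.
    The mean V, taken as proportional to running rate, is returned.
    Illustrative parameters only.
    """
    t0 = start_fraction * fi_seconds
    total = 0.0
    for i in range(n):
        # midpoint rule over the response period
        t = t0 + (i + 0.5) * (fi_seconds - t0) / n
        delay = fi_seconds - t
        total += 1.0 / (1.0 + k * delay)
    return total / n
```

Consistent with prediction (a), the mean gradient value, and hence the predicted running rate, is higher on shorter FI schedules (higher reinforcement rates), since responses there sit closer to reinforcement.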
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
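A minimal sketch of the building-block idea, assuming made-up linear response coefficients and a single ice-sheet temperature threshold (the framework itself is richer and its parameters deeply uncertain), might look like:

```python
def project_sea_level(temps, dt=1.0, alpha=1.5, beta=0.5,
                      T_crit=2.0, gamma=3.0):
    """Sum of sea-level 'building blocks' over a warming pathway.

    temps: warming above preindustrial (K) for each time step of dt years.
    Thermal expansion (alpha) and glaciers/small ice caps (beta) respond
    linearly, in mm/yr per K; an ice-sheet block adds gamma mm/yr per K
    only above the threshold T_crit, giving the threshold behaviour
    described above.  All coefficient values are illustrative assumptions.
    """
    s = 0.0
    for T in temps:
        s += dt * alpha * T                # thermal expansion block
        s += dt * beta * T                 # glaciers / small ice caps block
        if T > T_crit:                     # ice-sheet threshold block
            s += dt * gamma * (T - T_crit)
    return s                               # total rise in mm
```

Because each block is a cheap closed-form update, large ensembles over the coefficient ranges are trivial to run, which is the point of the framework's simplicity.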
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
Models of classical and recurrent novae
NASA Technical Reports Server (NTRS)
Friedjung, Michael; Duerbeck, Hilmar W.
1993-01-01
The behavior of novae may be divided roughly into two separate stages: quiescence and outburst. However, at closer inspection, both stages cannot be separated. It should be attempted to explain features in both stages with a similar model. Various simple models to explain the observed light and spectral observations during post optical maximum activity are conceivable. In instantaneous ejection models, all or nearly all material is ejected in a time that is short compared with the duration of post optical maximum activity. Instantaneous ejection type 1 models are those where the ejected material is in a fairly thin shell, the thickness of which remains small. In the instantaneous ejection type 2 model ('Hubble Flow'), a thick envelope is ejected instantaneously. This envelope remains thick as different parts have different velocities. Continued ejection models emphasize the importance of winds from the nova after optical maximum. Ejection is supposed to occur from one of the components of the central binary, and one can imagine a general swelling of one of the components, so that something resembling a normal, almost stationary, stellar photosphere is observed after optical maximum. The observed characteristics of recurrent novae in general are rather different from those of classical novae, thus, models for these stars need not be the same.
[CI] and CO in local galaxies from the Beyond the Peak Project
NASA Astrophysics Data System (ADS)
Crocker, Alison F.; Pellegrini, E. W.; Smith, J. T.; Beyond The Peak Team
2014-01-01
From simple plane-parallel photodissociation region (PDR) models, neutral carbon ([CI]) is predicted to exist in a thin layer between C+ and CO on the surface of molecular clouds (e.g., Hollenbach & Tielens 1999; Kaufman et al. 1999). However, observations of the Milky Way and the Magellanic Clouds indicate that [CI] may instead be a better tracer of the entire cold gas reservoir, often very well correlated with emission from 12CO(1-0) or 13CO(1-0) (e.g., Keene et al. 1996; Bolatto et al. 2000; Shimajiri et al. 2013). Here, we present the observed [CI] fluxes from the Beyond the Peak sample of 22 nearby galaxies observed with the Herschel FTS spectrometer. We first attempt to model all CO transitions and the [CI] lines as a single plane-parallel PDR, but this fails in all cases. Instead, a two-component PDR model is able to fit the CO SLED of nearly all galaxies. We investigate correlations of the [CI] fluxes and line ratio with properties of the cooler component determined from the PDR fits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crull, E W; Brown Jr., C G; Perkins, M P
2008-07-30
For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model using a laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated and not a bare wire, and there are perhaps fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).
Hidden patterns of reciprocity.
Syi
2014-03-21
Reciprocity can help the evolution of cooperation. To model both types of reciprocity, we need the concept of strategy. In the case of direct reciprocity there are four second-order action rules (Simple Tit-for-tat, Contrite Tit-for-tat, Pavlov, and Grim Trigger), which are able to promote cooperation. In the case of indirect reciprocity the key component of cooperation is the assessment rule. There are, again, four elementary second-order assessment rules (Image Scoring, Simple Standing, Stern Judging, and Shunning). The eight concepts can be formalized in an ontologically thin way: we need only an action predicate and a value function, two agent concepts, and the constant of goodness. The formalism helps us to discover that the action and assessment rules can be paired, and that they show the same patterns. The logic of these patterns can be interpreted with the concept of punishment that has an inherent paradoxical nature. Copyright © 2013 Elsevier Ltd. All rights reserved.
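A first-order simplification of some of the action rules named above can be sketched in a few lines (the paper's rules are second-order, conditioning also on standing/reputation, so this is only an illustration of the strategy concept):

```python
# Each action rule maps (own last move, partner's last move) -> next move.
# True = cooperate, False = defect.

def tit_for_tat(own, opp):
    """Copy the partner's previous move."""
    return opp

def pavlov(own, opp):
    """Win-stay, lose-shift: cooperate iff both played the same last round."""
    return own == opp

def make_grim():
    """Grim Trigger needs memory: defect forever after any defection."""
    state = {"betrayed": False}
    def grim(own, opp):
        if not opp:
            state["betrayed"] = True
        return not state["betrayed"]
    return grim

def play(rule_a, rule_b, rounds=10):
    """Iterate a Prisoner's Dilemma between two action rules,
    both opening with cooperation; returns the move history."""
    a, b = True, True
    history = [(a, b)]
    for _ in range(rounds - 1):
        a, b = rule_a(a, b), rule_b(b, a)
        history.append((a, b))
    return history
```

Tit-for-tat against Pavlov, both opening cooperatively, locks into mutual cooperation, which is the cooperation-promoting behaviour the abstract refers to.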
NASA Astrophysics Data System (ADS)
Goteti, G.; Kaheil, Y. H.; Katz, B. G.; Li, S.; Lohmann, D.
2011-12-01
In the United States, government agencies as well as the National Flood Insurance Program (NFIP) use flood inundation maps associated with the 100-year return period (base flood elevation, BFE), produced by the Federal Emergency Management Agency (FEMA), as the basis for flood insurance. A credibility check of the flood risk hydraulic models, often employed by insurance companies, is their ability to reasonably reproduce FEMA's BFE maps. We present results from the implementation of a flood modeling methodology aimed towards reproducing FEMA's BFE maps at a very fine spatial resolution using a computationally parsimonious, yet robust, hydraulic model. The hydraulic model used in this study has two components: one for simulating flooding of the river channel and adjacent floodplain, and the other for simulating flooding in the remainder of the catchment. The first component is based on a 1-D wave propagation model, while the second component is based on a 2-D diffusive wave model. The 1-D component captures the flooding from large-scale river transport (including upstream effects), while the 2-D component captures the flooding from local rainfall. The study domain consists of the contiguous United States, hydrologically subdivided into catchments averaging about 500 km2 in area, at a spatial resolution of 30 meters. Using historical daily precipitation data from the Climate Prediction Center (CPC), the precipitation associated with the 100-year return period event was computed for each catchment and was input to the hydraulic model. Flood extent from the FEMA BFE maps is reasonably replicated by the 1-D component of the model (riverine flooding). FEMA's BFE maps only represent the riverine flooding component and are unavailable for many regions of the USA. However, this modeling methodology (1-D and 2-D components together) covers the entire contiguous USA. This study is part of a larger modeling effort from Risk Management Solutions (RMS) to estimate flood risk associated with extreme precipitation events in the USA. Towards this greater objective, state-of-the-art models of flood hazard and stochastic precipitation are being implemented over the contiguous United States. Results from the successful implementation of the modeling methodology will be presented.
ERIC Educational Resources Information Center
van der Linden, Wim J.
Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…
NASA Technical Reports Server (NTRS)
Conel, James E.; Vandenbosch, Jeannette; Grove, Cindy I.
1993-01-01
We used the Kubelka-Munk theory of diffuse spectral reflectance in layers to analyze influences of multiple chemical components in leaves. As opposed to empirical approaches to estimation of plant chemistry, the full spectral resolution of laboratory reflectance data was retained in an attempt to estimate lignin or other constituent concentrations from spectral band positions. A leaf water reflectance spectrum was derived from theoretical mixing rules, reflectance observations, and calculations from theory of intrinsic k- and s-functions. Residual reflectance bands were then isolated from spectra of fresh green leaves. These proved hard to interpret for composition in terms of simple two-component mixtures such as lignin and cellulose. We next investigated spectral and dilution influences of other possible components (starch, protein). These components, among others, added to cellulose in hypothetical mixtures, produce band displacements similar to lignin, but will disguise by dilution the actual abundance of lignin present in a multicomponent system. This renders interpretation of band positions problematical. Knowledge of end-members and their spectra, and a more elaborate mixture analysis procedure, may be called for. Good atmospheric and instrumental observing conditions, and knowledge thereof, are required to retrieve the subtle reflectance variations expected in spectra of green vegetation.
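The Kubelka-Munk machinery used above reduces, for an optically thick layer, to the remission function F(R) = (1-R)^2/(2R) = k/s, with component mixing often idealized as linear in k/s. A minimal sketch (the linear mixing rule is the idealization the paper probes, and real leaf spectra deviate from it):

```python
import math

def remission(R):
    """Kubelka-Munk remission function F(R) = (1-R)^2 / (2R) = k/s
    for an optically thick layer of diffuse reflectance R (0 < R < 1)."""
    return (1.0 - R) ** 2 / (2.0 * R)

def mixture_remission(ks_values, weights):
    """Idealized linear mixing rule: k/s of a mixture as the weighted sum
    of component k/s values."""
    return sum(w * ks for w, ks in zip(weights, ks_values))

def reflectance_from_ks(ks):
    """Invert F(R) = ks:  R = 1 + ks - sqrt(ks^2 + 2*ks)."""
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)
```

Applying `remission` band-by-band to a measured spectrum, mixing in k/s space, and inverting back with `reflectance_from_ks` is the dilution effect described above: a strongly absorbing diluent shifts apparent band positions of the constituent of interest.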
Description of the US Army small-scale 2-meter rotor test system
NASA Technical Reports Server (NTRS)
Phelps, Arthur E., III; Berry, John D.
1987-01-01
A small-scale powered rotor model was designed for use as a research tool in the exploratory testing of rotors and helicopter models. The model, which consists of a 29 hp rotor drive system, a four-blade fully articulated rotor, and a fuselage, was designed to be simple to operate and maintain in wind tunnels of moderate size and complexity. Two six-component strain-gauge balances are used to provide independent measurement of the rotor and fuselage aerodynamic loads. Commercially available standardized hardware and equipment were used to the maximum extent possible, and specialized parts were designed so that they could be fabricated by normal methods without using highly specialized tooling. The model was used in a hover test of three rotors having different planforms and in a forward flight investigation of a 21-percent-scale model of a U.S. Army scout helicopter equipped with a mast-mounted sight.
Systems analysis techniques for annual cycle thermal energy storage solar systems
NASA Astrophysics Data System (ADS)
Baylin, F.
1980-07-01
Community-scale annual cycle thermal energy storage solar systems are options for building heat and cooling. A variety of approaches are feasible in modeling ACTES solar systems. The key parameter in such efforts, average collector efficiency, is examined, followed by several approaches for simple and effective modeling. Methods are also examined for modeling building loads for structures based on both conventional and passive architectural designs. Two simulation models for sizing solar heating systems with annual storage are presented. Validation is presented by comparison with the results of a study of seasonal storage systems based on SOLANSIM, an hour-by-hour simulation. These models are presently used to examine the economic trade-off between collector field area and storage capacity. Programs directed toward developing other system components such as improved tanks and solar ponds or design tools for ACTES solar systems are examined.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Mahanama, Sarith P.
2012-01-01
The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs) arguably reflect a lot of guesswork given the lack of contemporaneous evaporation and soil moisture observations at the spatial scales represented by regional and global models. The inherent soil moisture-runoff relationships used in the LSMs are also of uncertain accuracy. Evaluating these relationships is difficult but crucial given that they have a major impact on how the land component contributes to hydrological and meteorological variability within the climate system. The relationships, it turns out, can be examined efficiently and effectively with a simple water balance model framework. The simple water balance model, driven with multi-decadal observations covering the conterminous United States, shows how different prescribed relationships lead to different manifestations of hydrological variability, some of which can be compared directly to observations. Through the testing of a wide suite of relationships, the simple model provides estimates for the underlying relationships that operate in nature and that should be operating in LSMs. We examine the relationships currently used in a number of different LSMs in the context of the simple water balance model results and make recommendations for potential first-order improvements to these LSMs.
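The kind of simple water balance model described above can be sketched as a daily bucket update with prescribed moisture-evaporation and moisture-runoff relationships (the linear and quadratic forms below are assumptions for illustration; the paper's point is precisely to test many such forms):

```python
def step_water_balance(W, P, PET, Wmax=500.0, dt=1.0):
    """One daily step of a minimal bucket water balance (units: mm, days).

    Assumed prescribed relationships: evaporation efficiency
    beta(W) = W/Wmax (linear) and runoff ratio rho(W) = (W/Wmax)**2
    (convex).  Returns (new storage, evaporation, runoff).
    """
    w = W / Wmax
    E = PET * w          # evaporation limited by soil moisture
    Q = P * w ** 2       # runoff fraction grows with wetness
    W_new = W + dt * (P - E - Q)
    return min(max(W_new, 0.0), Wmax), E, Q
```

Driving this with observed P and PET and swapping the two prescribed functions is the experiment the abstract describes: different choices yield different runoff and evaporation variability to compare against observations.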
Butterfly valve in a virtual environment
NASA Astrophysics Data System (ADS)
Talekar, Aniruddha; Patil, Saurabh; Thakre, Prashant; Rajkumar, E.
2017-11-01
Assembly of components is one of the processes involved in product design and development. The present paper deals with the assembly of simple butterfly valve components in a virtual environment. The assembly has been carried out using virtual reality software by trial-and-error methods. The parts are modelled using parametric software (SolidWorks), meshed accordingly, and then imported into the virtual environment for assembly.
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
A simple model for strong ground motions and response spectra
Safak, Erdal; Mueller, Charles; Boatwright, John
1988-01-01
A simple model for the description of strong ground motions is introduced. The model shows that response spectra can be estimated by using only four parameters of the ground motion, the RMS acceleration, effective duration and two corner frequencies that characterize the effective frequency band of the motion. The model is windowed band-limited white noise, and is developed by studying the properties of two functions, cumulative squared acceleration in the time domain, and cumulative squared amplitude spectrum in the frequency domain. Applying the methods of random vibration theory, the model leads to a simple analytical expression for the response spectra. The accuracy of the model is checked by using the ground motion recordings from the aftershock sequences of two different earthquakes and simulated accelerograms. The results show that the model gives a satisfactory estimate of the response spectra.
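The four-parameter construction described above can be sketched as follows: stationary noise confined to the band between the two corner frequencies, scaled to the target RMS acceleration over the effective duration, then windowed (a random-phase cosine sum is one common way to realize band-limited white noise; the discretization choices here are assumptions):

```python
import math
import random

def band_limited_white_noise(a_rms, duration, f1, f2, dt=0.01, seed=0):
    """Synthetic accelerogram from the model's four parameters:
    RMS acceleration a_rms, effective duration, and corner
    frequencies f1..f2 (Hz).

    Built as a sum of 50 equal-amplitude, random-phase cosines spanning
    f1..f2, then rescaled so the sample RMS equals a_rms over the
    boxcar window of length `duration`.
    """
    rng = random.Random(seed)
    n = int(duration / dt)
    freqs = [f1 + (f2 - f1) * k / 49.0 for k in range(50)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    a = [sum(math.cos(2.0 * math.pi * f * i * dt + p)
             for f, p in zip(freqs, phases))
         for i in range(n)]
    rms = math.sqrt(sum(x * x for x in a) / n)
    scale = a_rms / rms
    return [x * scale for x in a]
```

Feeding such records through a single-degree-of-freedom oscillator and taking peak responses would reproduce, by simulation, the response spectra that the paper estimates analytically via random vibration theory.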
Learning in Structured Connectionist Networks
1988-04-01
the structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. ... Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive
Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models
NASA Astrophysics Data System (ADS)
Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.
2017-02-01
Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]-[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]-[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]-[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.
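The roles of the two critical parameters, SFE and the outflow mass-loading factor, can be illustrated with a minimal one-zone sketch (this is not flexCE; the yield, recycling fraction, and inflow values below are illustrative assumptions):

```python
def evolve_one_zone(sfe=1.0, eta=2.5, yield_o=0.015, r=0.4,
                    inflow=1.0, t_end=12.0, dt=0.001):
    """Minimal one-zone chemical evolution model (instantaneous recycling).

    SFR = sfe * M_gas.  Gas gains metal-free inflow and loses
    (1 + eta - r) * SFR to locked-up stars and outflow, where eta is the
    outflow mass-loading factor and r the recycled fraction.  The
    metallicity approaches the equilibrium yield_o / (1 + eta - r),
    which is how eta sets the equilibrium abundance.  Forward Euler.
    """
    m_gas, m_z = 1.0, 0.0
    t = 0.0
    while t < t_end:
        sfr = sfe * m_gas
        z = m_z / m_gas
        m_gas += dt * (inflow - (1.0 + eta - r) * sfr)
        m_z += dt * (yield_o * sfr - z * (1.0 + eta - r) * sfr)
        t += dt
    return m_z / m_gas
```

Raising sfe shortens the time to reach a given abundance (shifting the knee in [O/Fe]-[Fe/H]), while raising eta lowers the equilibrium abundance, which is the trade-off the abstract highlights.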
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
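The two equations of the adaptive exponential integrate-and-fire (AdEx) model are easy to integrate directly; the forward-Euler sketch below uses plausible tonic-spiking parameter values, not the paper's fitted cortical values:

```python
import math

def adex(I=500.0, t_end=500.0, dt=0.01,
         C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DT=2.0,
         a=2.0, tau_w=30.0, b=0.0, Vr=-58.0, Vpeak=0.0):
    """Forward-Euler AdEx under step current I (units: pF, nS, mV, ms, pA).

        C dV/dt     = -gL (V - EL) + gL * DT * exp((V - VT)/DT) - w + I
        tau_w dw/dt = a (V - EL) - w
        if V > Vpeak:  V -> Vr,  w -> w + b   (spike and reset)

    Returns the spike count over t_end.  Parameter values are assumed
    illustrative choices for tonic spiking.
    """
    V, w, spikes = EL, 0.0, 0
    t = 0.0
    while t < t_end:
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > Vpeak:               # spike detected: reset V, jump adaptation
            V = Vr
            w += b
            spikes += 1
        t += dt
    return spikes
```

Varying (a, b, tau_w, Vr) moves the model across the firing-pattern phase diagram described above, e.g. b > 0 with a depolarized Vr tends toward bursting, while the values here give regular tonic spiking.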
CLUSTERnGO: a user-defined modelling platform for two-stage clustering of time-series data.
Fidaner, Işık Barış; Cankorur-Cetinkaya, Ayca; Dikicioglu, Duygu; Kirdar, Betul; Cemgil, Ali Taylan; Oliver, Stephen G
2016-02-01
Simple bioinformatic tools are frequently used to analyse time-series datasets regardless of their ability to deal with transient phenomena, limiting the meaningful information that may be extracted from them. This situation requires the development and exploitation of tailor-made, easy-to-use and flexible tools designed specifically for the analysis of time-series datasets. We present a novel statistical application called CLUSTERnGO, which uses a model-based clustering algorithm that fulfils this need. The algorithm comprises two components: Component 1 constructs a Bayesian non-parametric model (an Infinite Mixture of Piecewise Linear Sequences), and Component 2 applies a novel clustering methodology (Two-Stage Clustering). The software can also assign biological meaning to the identified clusters using an appropriate ontology. It applies multiple hypothesis testing to report the significance of these enrichments. The algorithm has a four-phase pipeline. The application can be executed using either command-line tools or a user-friendly Graphical User Interface. The latter has been developed to address the needs of both specialist and non-specialist users. We use three diverse test cases to demonstrate the flexibility of the proposed strategy. In all cases, CLUSTERnGO not only outperformed existing algorithms in assigning unique GO term enrichments to the identified clusters, but also revealed novel insights regarding the biological systems examined, which were not uncovered in the original publications. The C++ and QT source codes, the GUI applications for Windows, OS X and Linux operating systems and user manual are freely available for download under the GNU GPL v3 license at http://www.cmpe.boun.edu.tr/content/CnG. sgo24@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Medlyn, B.; Jiang, M.; Zaehle, S.
2017-12-01
There is now ample experimental evidence that the response of terrestrial vegetation to rising atmospheric CO2 concentration is modified by soil nutrient availability. How to represent nutrient cycling processes is thus a key consideration for vegetation models. We have previously used model intercomparison to demonstrate that models incorporating different assumptions predict very different responses at Free-Air CO2 Enrichment experiments. Careful examination of model outputs has provided some insight into the reasons for the different model outcomes, but it is difficult to attribute outcomes to specific assumptions. Here we investigate the impact of individual assumptions in a generic plant carbon-nutrient cycling model. The G'DAY (Generic Decomposition And Yield) model is modified to incorporate alternative hypotheses for nutrient cycling. We analyse the impact of these assumptions in the model using a simple analytical approach known as "two-timing". This analysis identifies the quasi-equilibrium behaviour of the model at the time scales of the component pools. The analysis provides a useful mathematical framework for probing model behaviour and identifying the most critical assumptions for experimental study.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
Fluid coupling in a discrete model of cochlear mechanics.
Elliott, Stephen J; Lineton, Ben; Ni, Guangjian
2011-09-01
A discrete model of cochlear mechanics is introduced that includes a full, three-dimensional description of fluid coupling. This formulation allows the fluid coupling and basilar membrane dynamics to be analyzed separately and then coupled together with a simple piece of linear algebra. The fluid coupling is initially analyzed using a wavenumber formulation and is separated into one component due to one-dimensional fluid coupling and one comprising all the other contributions. Using the theory of acoustic waves in a duct, however, these two components of the pressure can also be associated with a far field, due to the plane wave, and a near field, due to the evanescent, higher-order modes. The near field components are then seen as one of a number of sources of additional longitudinal coupling in the cochlea. The effects of non-uniformity and asymmetry in the fluid chamber areas can also be taken into account, to predict both the pressure difference between the chambers and the mean pressure. This allows the calculation, for example, of the effect of a short cochlear implant on the coupled response of the cochlea. © 2011 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Mosher, Richard A.; Thormann, Wolfgang; Graham, Aly; Bier, Milan
1985-01-01
Two methods which utilize simple buffers for the generation of stable pH gradients (useful for preparative isoelectric focusing) are compared and contrasted. The first employs preformed gradients comprised of two simple buffers in density-stabilized free solution. The second method utilizes neutral membranes to isolate electrolyte reservoirs of constant composition from the separation column. It is shown by computer simulation that steady-state gradients can be formed at any pH range with any number of components in such a system.
Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell
2014-05-01
The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help prevent risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome, in terms of model fit and of statistical power estimated with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
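The Monte Carlo approach to statistical power used above can be illustrated with a generic simulation: repeatedly generate data under the assumed effect and count how often the test detects it. This is a bare two-group mean comparison with a normal approximation, illustrative of the method only, not of the structural equation models in the study:

```python
import math
import random

def mc_power(n, effect, sd=1.0, crit=1.96, reps=2000, seed=1):
    """Monte Carlo power estimate: fraction of simulated two-group
    datasets (n per group, true mean difference `effect`) in which
    |z| exceeds the critical value `crit`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sd) for _ in range(n)]
        b = [rng.gauss(effect, sd) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        z = (mb - ma) / math.sqrt(va / n + vb / n)
        hits += abs(z) > crit
    return hits / reps
```

With no true effect the detection rate falls to the nominal false-positive rate (~5%), which is a useful sanity check on any such simulation.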
Diffusion in silicate melts: III. Empirical models for multicomponent diffusion
NASA Astrophysics Data System (ADS)
Yan, Liang; Richter, Frank M.; Chamberlin, Laurinda
1997-12-01
Empirical models for multicomponent diffusion in an isotropic fluid were derived by splitting the component's dispersion velocity into two parts: (a) an intrinsic velocity which is proportional to each component's electrochemical potential gradient and independent of reference frame, and (b) a net interaction velocity which is both model and reference frame dependent. Simple molecules (e.g., M_pO_q) were chosen as endmember components. The interaction velocity is assumed to be either the same for each component (leading to a common relaxation velocity U) or proportional to a common interaction force (F). U or F is constrained by requiring no local buildup in either volume or charge. The most general form of the model-derived diffusion matrix [D] can be written as a product of a model-dependent kinetic matrix [L] and a model-independent thermodynamic matrix [G]: [D] = [L]·[G]. The elements of [G] are functions of derivatives of chemical potential with respect to concentration. The elements of [L] are functions of the concentration C_i^o and partial molar volume V_i^o of the endmember components, and of the self diffusivity D_i and charge number z_i of the individual diffusing species. When component n is taken as the dependent variable, they can be written in the common form L_ij = D_j δ_ij + C_i^o[(V_n^o D_n − V_j^o D_j)A_i + (p_n z_n D_n − p_j z_j D_j)B_i], where the functional forms of the scaling factors A_i and B_i depend on the model considered. The off-diagonal element L_ij (i ≠ j) is directly proportional to the concentration of component i, and is thus negligible when i is a dilute component. The salient feature of kinetic interaction or relaxation is to slow down larger (in volume or charge) and faster diffusing components and to speed up smaller and slower moving species, in order to prevent local volume or charge buildup.
Empirical models for multicomponent diffusion were tested in the ternary system CaO-Al2O3-SiO2 at 1500°C and 1 GPa over a large range of melt compositions. Model-derived diffusion matrices calculated using measured self diffusivities (Ca, Al, Si, and O), partial molar volumes, and activities were compared with experimentally derived diffusion matrices at two melt compositions. Chemical diffusion profiles computed using the model-derived diffusion matrices, accounting for the compositional dependency of self diffusivities and activity coefficients, were also compared with the experimentally measured ones. Good agreement was found between the diffusion profiles derived from the ionic common-force model and the experimentally measured ones. Secondary misfits could result from either inadequacies of the model or inaccuracies in the activity-composition relationship. The results show that both kinetic interactions and thermodynamic nonideality contribute significantly to the observed diffusive coupling in molten CaO-Al2O3-SiO2.
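The common form of L_ij above translates directly into code. A sketch for a system with the last component taken as the dependent variable follows; the model-dependent scaling factors A_i and B_i are supplied as inputs rather than derived, and all numerical values in the test are illustrative:

```python
import numpy as np

def kinetic_matrix(D, C, V, p, z, A, B):
    """Kinetic matrix of the empirical diffusion model:
    L_ij = D_j*delta_ij + C_i*[(V_n*D_n - V_j*D_j)*A_i
                             + (p_n*z_n*D_n - p_j*z_j*D_j)*B_i],
    with the last component (index n) as the dependent variable.
    D: self diffusivities; C, V: concentrations and partial molar
    volumes of the endmember components; p, z: stoichiometry and
    charge of the diffusing species; A, B: model-dependent factors."""
    D, C, V, p, z, A, B = map(np.asarray, (D, C, V, p, z, A, B))
    n = len(D) - 1                       # dependent component index
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = (D[j] if i == j else 0.0) + C[i] * (
                (V[n] * D[n] - V[j] * D[j]) * A[i]
                + (p[n] * z[n] * D[n] - p[j] * z[j] * D[j]) * B[i])
    return L

# Full diffusion matrix: [D] = kinetic_matrix(...) @ G, where G holds
# the chemical-potential derivatives (thermodynamic nonideality).
```

With A = B = 0 the interactions vanish and L reduces to a diagonal of self diffusivities, matching the dilute-component limit noted in the abstract.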
Evans, C.; Davies, T.D.; Murdoch, Peter S.
1999-01-01
Plots of solute concentration against discharge have been used to relate stream hydrochemical variations to processes of flow generation, using data collected at four streams in the Catskill Mountains, New York, during the Episodic Response Project of the US Environmental Protection Agency. Results suggest that a two-component system of shallow and deep saturated subsurface flow, in which the two components respond simultaneously during hydrologic events, may be applicable to the study basins. Using a large natural sea-salt sodium input as a tracer for precipitation, it is argued that an additional distinction can be made between pre-event and event water travelling along the shallow subsurface flow path. Pre-event water is thought to be displaced by infiltrating event water, which becomes dominant on the falling limb of the hydrograph. Where, as appears to be the case for sulfate, a solute equilibrates rapidly within the soil, the pre-event/event water distinction is unimportant. However, for some solutes there are clear and consistent compositional differences between water from the two sources, evident as a hysteresis loop in concentration-discharge plots. Nitrate and acidity, in particular, appear to be elevated in event water following percolation through the organic horizon. Consequently, the most acidic, high-nitrate conditions during an episode generally occur after peak discharge. A simple conceptual model of episode runoff generation is presented on the basis of these results.
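The pre-event/event distinction rests on the standard two-component tracer mixing calculation; a minimal sketch follows, with hypothetical concentration values rather than the Catskill data:

```python
def event_water_fraction(c_stream, c_event, c_pre):
    """Two-component hydrograph separation: fraction of streamflow
    contributed by event (new) water, from a conservative tracer
    concentration in the stream, the event water, and the pre-event
    (old) water end members."""
    if c_event == c_pre:
        raise ValueError("tracer cannot separate identical end members")
    return (c_stream - c_pre) / (c_event - c_pre)

# Hypothetical sodium concentrations (precipitation tracer):
f_event = event_water_fraction(c_stream=60.0, c_event=20.0, c_pre=100.0)
# Event-water discharge is then Q_event = f_event * Q_total.
```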
NASA Technical Reports Server (NTRS)
Franklin, Janet; Simonett, David
1988-01-01
The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
Partially acoustic dark matter, interacting dark radiation, and large scale structure
NASA Astrophysics Data System (ADS)
Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsinz
2016-12-01
The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.
Mapping surrogate gasoline compositions into RON/MON space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Neal; Kraft, Markus; Smallbone, Andrew
2010-06-15
In this paper, new experimentally determined octane numbers (RON and MON) of blends of a tri-component surrogate consisting of toluene, n-heptane and iso-octane (called a toluene reference fuel, TRF), arranged in an augmented simplex design, are used to derive a simple response surface model for the octane number of any arbitrary TRF mixture. The model is second-order in its complexity and is shown to be more accurate than the standard "linear-by-volume" (LbV) model, which is often used when no other information is available. This is due to the existence of both synergistic and antagonistic blending of the octane numbers between the three components. In particular, antagonistic blending of toluene and iso-octane leads to a maximum in sensitivity that lies on the toluene/iso-octane line. The model equations are inverted so as to map from RON/MON space back into composition space, enabling one to use two simple formulae to determine, for a given fuel with known RON and MON, the volume fractions of toluene, n-heptane and iso-octane to be blended in order to emulate that fuel. An HCCI engine running on gasoline with a RON of 98.5 and a MON of 88 was simulated using a TRF fuel blended according to the derived equations to match the RON and MON. The simulations matched the experimentally obtained pressure profiles well, especially when compared to simulations using only PRF fuels, which matched the RON or MON alone. This suggests that the mapping is accurate and that, to emulate a refinery gasoline, it is necessary to match not only the RON but also the MON of the fuel.
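A second-order mixture response surface of the kind described can be sketched as a Scheffé quadratic fitted by least squares. The coefficients and design points below are synthetic, not the paper's measurements (only the pure-component octane values are roughly realistic):

```python
import numpy as np

def scheffe_features(x):
    """Second-order Scheffe mixture terms for a 3-component blend
    x = (toluene, n-heptane, iso-octane) volume fractions."""
    t, h, o = x
    return np.array([t, h, o, t * h, t * o, h * o])

def fit_octane_surface(X, y):
    """Least-squares fit of an octane number (RON or MON) response
    surface to blend compositions X (list of 3-tuples) and measured
    octane numbers y (augmented simplex design or similar)."""
    F = np.array([scheffe_features(x) for x in X])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return coef

def predict_on(coef, x):
    """Predicted octane number of an arbitrary TRF blend."""
    return scheffe_features(x) @ coef
```

The cross terms (t*h, t*o, h*o) are exactly what capture the synergistic/antagonistic blending that the linear-by-volume model misses; inverting two such surfaces (one for RON, one for MON) recovers the composition-from-RON/MON mapping.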
Webber, Whitney M.; Li, Ya-Wei
2016-01-01
Managers of large, complex wildlife conservation programs need information on the conservation status of each of many species to help strategically allocate limited resources. Oversimplifying status data, however, runs the risk of missing information essential to strategic allocation. Conservation status consists of two components, the status of threats a species faces and the species’ demographic status. Neither component alone is sufficient to characterize conservation status. Here we present a simple key for scoring threat and demographic changes for species using detailed information provided in free-form textual descriptions of conservation status. This key is easy to use (simple), captures the two components of conservation status without the cost of more detailed measures (sufficient), and can be applied by different personnel to any taxon (consistent). To evaluate the key’s utility, we performed two analyses. First, we scored the threat and demographic status of 37 species recently recommended for reclassification under the Endangered Species Act (ESA) and 15 control species, then compared our scores to two metrics used for decision-making and reports to Congress. Second, we scored the threat and demographic status of all non-plant ESA-listed species from Florida (54 spp.), and evaluated scoring repeatability for a subset of those. While the metrics reported by the U.S. Fish and Wildlife Service (FWS) are often consistent with our scores in the first analysis, the results highlight two problems with the oversimplified metrics. First, we show that both metrics can mask underlying demographic declines or threat increases; for example, ∼40% of species not recommended for reclassification had changes in threats or demography. Second, we show that neither metric is consistent with either threats or demography alone, but conflates the two. 
The second analysis illustrates how the scoring key can be applied to a substantial set of species to understand overall patterns of ESA implementation. The scoring repeatability analysis shows promise, but indicates thorough training will be needed to ensure consistency. We propose that large conservation programs adopt our simple scoring system for threats and demography. By doing so, program administrators will have better information to monitor program effectiveness and guide their decisions. PMID:27478713
Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A; Arnold, Steven M; Pineda, Evan J
2016-05-04
A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural-scale stress field within a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, the stand-alone GMC is applied to simple material microstructures, such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures, as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as at the local microstructural level, i.e., in each individual grain. Two to three orders of magnitude of savings in computational cost were obtained with GMC, at the expense of some accuracy, especially in the prediction of the components of local tensor field quantities and of quantities near the grain boundaries. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural-scale details of the field quantities.
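The homogenization step that passes subcell fields up to the structural scale is, at its core, a volume average. A generic sketch (not the GMC implementation itself, whose subcell solution is more involved):

```python
import numpy as np

def homogenize(field_subcells, volumes):
    """Volume-average homogenization of a local field over the subcells
    of a repeating unit cell: the averaged quantity a micromechanics
    model reports to the structural (FEA) scale.
    field_subcells: (n_subcells, n_components) array, e.g. stress in
    Voigt notation; volumes: subcell volumes (or volume fractions)."""
    v = np.asarray(volumes, dtype=float)
    s = np.asarray(field_subcells, dtype=float)
    return (s * v[:, None]).sum(axis=0) / v.sum()
```

The averaging is exact for the mean field but, as the abstract notes, local (per-subcell) quantities and near-boundary gradients are where accuracy is traded for the computational savings.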
Simplified aeroelastic modeling of horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Wendell, J. H.
1982-01-01
Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.
Park, Byung-Jung; Lord, Dominique; Wu, Lingtao
2016-10-28
This study aimed to investigate the relative performance of two models (the negative binomial (NB) model and the two-component finite mixture of negative binomial models (FMNB-2)) in terms of developing crash modification factors (CMFs). Crash data on rural multilane divided highways in California and Texas were modeled with the two models, and crash modification functions (CMFunctions) were derived. The CMFunction estimated from the FMNB-2 model showed several good properties over that from the NB model. First, the safety effect of a covariate was better reflected by the CMFunction developed using the FMNB-2 model, since the model takes into account the differential responsiveness of crash frequency to the covariate. Second, the CMFunction derived from the FMNB-2 model is able to capture nonlinear relationships between covariate and safety. Finally, following the same concept as for NB models, the combined CMFs of multiple treatments were estimated using the FMNB-2 model. The results indicated that they are not simply the product of the single-treatment CMFs (i.e., their safety effects are not independent under FMNB-2 models). Adjustment Factors (AFs) were then developed. It is revealed that the current Highway Safety Manual method could over- or under-estimate the combined CMFs under particular combinations of covariates. Safety analysts are encouraged to consider using FMNB-2 models for developing CMFs and AFs. Copyright © 2016 Elsevier Ltd. All rights reserved.
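The NB-based CMF convention and the multiplicative combination rule that the FMNB-2 results call into question can be sketched as follows; the β and x values used are hypothetical:

```python
import math

def cmf_from_nb(beta, x_after, x_before):
    """CMF implied by a safety performance function with a log link
    (as in NB crash models): E[crashes] proportional to exp(beta * x),
    so CMF = exp(beta * (x_after - x_before))."""
    return math.exp(beta * (x_after - x_before))

def combined_cmf(cmfs):
    """Highway Safety Manual convention: combine CMFs of multiple
    treatments multiplicatively, i.e. assuming independent safety
    effects (the assumption the FMNB-2 results challenge)."""
    out = 1.0
    for c in cmfs:
        out *= c
    return out
```

Under the FMNB-2 model the combined effect departs from this simple product, which is why separate Adjustment Factors are needed.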
ERIC Educational Resources Information Center
Selinger, Ben
1979-01-01
Water is a major component in many consumer products. Azeotropic distillation of products such as detergents and foodstuffs to form a two-phase distillate is a simple experimental method to determine the percentage of water in the product. (Author/GA)
Modeling of hybrid vehicle fuel economy and fuel engine efficiency
NASA Astrophysics Data System (ADS)
Wu, Wei
"Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanisms is created, based on simple algebraic descriptions of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.
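A simple analytical engine model of the kind described, splitting fuel use into an indicated-efficiency term and a friction term, can be sketched in Willans-line style. All parameter values here are illustrative, not the fitted values from this work:

```python
def fuel_rate(P_brake, eta_ind=0.38, fmep=150e3, Vd=1.6e-3,
              n_rev=30.0, lhv=43e6):
    """Fuel mass flow [kg/s] of a spark-ignition engine:
    fuel energy rate = (brake power + friction power) / indicated
    efficiency. Friction power comes from a constant friction mean
    effective pressure (fmep [Pa]) acting over the displacement
    Vd [m^3] at n_rev [rev/s] (4-stroke: one intake per 2 revs).
    lhv: lower heating value of the fuel [J/kg]."""
    P_fric = fmep * Vd * n_rev / 2.0          # friction power [W]
    return (P_brake + P_fric) / (eta_ind * lhv)
```

The fixed friction term is what makes part-load operation inefficient, which is exactly the loss the "near-CV" design reduces by keeping the engine away from low-power points.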
Elastic and viscoelastic calculations of stresses in sedimentary basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis, and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and a more complete formulation, such as the one presented in this study, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.
Experimental Study of Cement - Sandstone/Shale - Brine - CO2 Interactions
2011-01-01
Background Reactive-transport simulation is a tool that is being used to estimate long-term trapping of CO2, and wellbore and cap rock integrity for geologic CO2 storage. We reacted end member components of a heterolithic sandstone and shale unit that forms the upper section of the In Salah Gas Project carbon storage reservoir in Krechba, Algeria with supercritical CO2, brine, and with/without cement at reservoir conditions to develop experimentally constrained geochemical models for use in reactive transport simulations. Results We observe marked changes in solution composition when CO2 reacted with cement, sandstone, and shale components at reservoir conditions. The geochemical model for the reaction of sandstone and shale with CO2 and brine is a simple one in which albite, chlorite, illite and carbonate minerals partially dissolve and boehmite, smectite, and amorphous silica precipitate. The geochemical model for the wellbore environment is also fairly simple, in which alkaline cements and rock react with CO2-rich brines to form an Fe containing calcite, amorphous silica, smectite and boehmite or amorphous Al(OH)3. Conclusions Our research shows that relatively simple geochemical models can describe the dominant reactions that are likely to occur when CO2 is stored in deep saline aquifers sealed with overlying shale cap rocks, as well as the dominant reactions for cement carbonation at the wellbore interface. PMID:22078161
Pharmacokinetic Modeling of JP-8 Jet Fuel Components: II. A Conceptual Framework
2003-12-01
example, a single type of (simple) binary interaction between 300 components would require the specification of some 10^5 interaction coefficients. One... individual substances, via binary mechanisms, is enough to predict the interactions present in the mixture. Secondly, complex mixtures can often be... approximated as pseudo-binary systems, consisting of the compound of interest plus a single interacting complex vehicle with well-defined, composite
Second- and third-harmonic generation in metal-based structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scalora, M.; Akozbek, N.; Bloemer, M. J.
We present a theoretical approach to the study of second- and third-harmonic generation from metallic structures and nanocavities filled with a nonlinear material in the ultrashort pulse regime. We model the metal as a two-component medium, using the hydrodynamic model to describe free electrons and Lorentz oscillators to account for core electron contributions to both the linear dielectric constant and harmonic generation. The active nonlinear medium that may fill a metallic nanocavity, or be positioned between metallic layers in a stack, is also modeled using Lorentz oscillators, and surface phenomena due to symmetry breaking are taken into account. We study the effects of incident TE- and TM-polarized fields and show that a simple reexamination of the basic equations reveals additional, exploitable dynamical features of nonlinear frequency conversion in plasmonic nanostructures.
Signell, Richard; Camossi, E.
2016-01-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
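The NcML aggregation step described above can be illustrated by generating a minimal `joinExisting` wrapper that serves a directory of output files as one virtual dataset. The data directory and dimension name below are hypothetical placeholders.

```python
# Sketch: build a minimal NcML wrapper that aggregates a directory of model
# output files along their time dimension, of the kind served by a THREDDS
# Data Server. The scan path is an illustrative placeholder.
import xml.etree.ElementTree as ET

NCML_NS = "http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"

def make_ncml(scan_dir, dim="time", suffix=".nc"):
    ET.register_namespace("", NCML_NS)
    root = ET.Element(f"{{{NCML_NS}}}netcdf")
    agg = ET.SubElement(root, f"{{{NCML_NS}}}aggregation",
                        dimName=dim, type="joinExisting")
    ET.SubElement(agg, f"{{{NCML_NS}}}scan", location=scan_dir, suffix=suffix)
    return ET.tostring(root, encoding="unicode")

ncml = make_ncml("/data/ocean_model/output/")
```

Placed in a THREDDS catalog, a wrapper like this lets existing files stay untouched while clients see a single standardised, time-aggregated dataset.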
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10-component) microscope, packaged within the footprint of an iPhone and capable of cellular resolution. We present the optical design with experimental video images of in vivo human tissues.
Observation Data Model Core Components, its Implementation in the Table Access Protocol Version 1.1
NASA Astrophysics Data System (ADS)
Louys, Mireille; Tody, Doug; Dowler, Patrick; Durand, Daniel; Michel, Laurent; Bonnarel, François; Micol, Alberto; IVOA DataModel Working Group
2017-05-01
This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for astronomical observations of interest. It exposes use cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims to provide a simple model that is easy for data providers who wish to publish their data into the Virtual Observatory to understand and implement. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM). ObsCore shares a large set of common concepts with the DataSet Metadata Data Model (Cresitello-Dittmar et al. 2016), which binds together most of the data model concepts from the above models in a comprehensive and more general framework. This specification, by contrast, provides guidelines for implementing these concepts using the TAP protocol and answering ADQL queries. It is dedicated to global discovery.
Coupled two-dimensional edge plasma and neutral gas modeling of tokamak scrape-off-layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maingi, Rajesh
1992-08-01
The objective of this study is to devise a detailed description of the tokamak scrape-off-layer (SOL), which includes the best available models of both the plasma and neutral species and the strong coupling between the two in many SOL regimes. A good estimate of both particle flux and heat flux profiles at the limiter/divertor target plates is desired. Peak heat flux is one of the limiting factors in determining the survival probability of plasma-facing-components at high power levels. Plate particle flux affects the neutral flux to the pump, which determines the particle exhaust rate. A technique which couples a two-dimensional (2-D) plasma and a 2-D neutral transport code has been developed (coupled code technique), but this procedure requires large amounts of computer time. Relevant physics has been added to an existing two-neutral-species model which takes the SOL plasma/neutral coupling into account in a simple manner (molecular physics model), and this model is compared with the coupled code technique mentioned above. The molecular physics model is benchmarked against experimental data from a divertor tokamak (DIII-D), and a similar model (single-species model) is benchmarked against data from a pump-limiter tokamak (Tore Supra). The models are then used to examine two key issues: free-streaming-limits (ion energy conduction and momentum flux) and the effects of the non-orthogonal geometry of magnetic flux surfaces and target plates on edge plasma parameter profiles.
NASA Technical Reports Server (NTRS)
Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.
2009-01-01
The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
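The decomposition discussed above splits NSE into a linear correlation term, a variability ratio and a normalised bias; a minimal sketch follows, including the Kling-Gupta-style alternative criterion that measures Euclidean distance of the three components from their ideal values.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_decomposition(sim, obs):
    """Components such that NSE = 2*alpha*r - alpha**2 - beta_n**2:
    r = correlation, alpha = ratio of standard deviations,
    beta_n = bias normalised by the observed standard deviation."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta_n = (sim.mean() - obs.mean()) / obs.std()
    return r, alpha, beta_n

def kge(sim, obs):
    """Alternative criterion: distance of (r, alpha, beta) from the ideal (1,1,1)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Toy daily flows (illustrative numbers only)
obs = np.array([0.8, 1.4, 3.2, 2.6, 1.9, 1.1, 0.9])
sim = np.array([1.0, 1.6, 2.9, 2.9, 2.1, 1.3, 0.8])
r, alpha, beta_n = nse_decomposition(sim, obs)
```

The decomposition makes the calibration problem visible: maximising NSE rewards an underestimated variability (alpha < 1) whenever the correlation is imperfect, whereas the distance-based criterion penalises each component's departure from its ideal value independently.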
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
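The Basic Model Interface pattern mentioned above can be sketched with a toy component. The linear-reservoir example below is hypothetical and not TopoFlow code, but it shows the initialize/update/finalize contract that lets a framework couple components without knowing their internals.

```python
# Illustrative sketch of a BMI-style component: a toy linear reservoir with
# the standard control functions plus a variable getter. All parameter names
# and values are hypothetical.

class LinearReservoirBMI:
    """Toy hydrologic component with a BMI-style control interface."""

    def initialize(self, config):
        self.k = config.get("recession_coefficient", 0.1)  # 1/day
        self.dt = config.get("dt", 1.0)                    # day
        self.storage = config.get("initial_storage", 100.0)
        self.time = 0.0

    def update(self):
        outflow = self.k * self.storage
        self.storage -= outflow * self.dt
        self.time += self.dt

    def get_value(self, name):
        return {"storage": self.storage, "time": self.time}[name]

    def finalize(self):
        self.storage = None

model = LinearReservoirBMI()
model.initialize({"recession_coefficient": 0.2, "dt": 1.0, "initial_storage": 50.0})
for _ in range(10):
    model.update()
final_storage = model.get_value("storage")
final_time = model.get_value("time")
model.finalize()
```

Because every component exposes the same small set of calls, a driver can step heterogeneous components in lockstep and exchange variables by name, which is the coupling mechanism the paper relies on.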
A modeling study of marine boundary layer clouds
NASA Technical Reports Server (NTRS)
Wang, Shouping; Fitzjarrald, Daniel E.
1993-01-01
Marine boundary layer (MBL) clouds are important components of the earth's climate system. These clouds drastically reduce the amount of solar radiation absorbed by the earth, but have little effect on the infrared radiation emitted at the top of the atmosphere. In addition, these clouds are intimately involved in regulating boundary layer turbulent fluxes. For these reasons, general circulation models used for climate studies must realistically simulate the global distribution of MBL clouds. While the importance of these cloud systems is well recognized, many physical processes involved in these clouds are poorly understood and their representation in large-scale models remains an unresolved problem. The present research aims at the development and improvement of the parameterization of these cloud systems and an understanding of the physical processes involved. This goal is addressed in two ways. One is to use a regional modeling approach to validate and evaluate two-layer marine boundary layer models using satellite and ground-truth observations; the other is to combine this simple model with a high-order turbulence closure model to study the transition from stratocumulus to shallow cumulus clouds. Progress made in this effort is presented.
A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS
We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...
Rubber friction and tire dynamics.
Persson, B N J
2011-01-12
We propose a simple rubber friction law, which can be used, for example, in models of tire (and vehicle) dynamics. The friction law is tested by comparing numerical results to the full rubber friction theory (Persson 2006 J. Phys.: Condens. Matter 18 7789). Good agreement is found between the two theories. We describe a two-dimensional (2D) tire model which combines the rubber friction model with a simple mass-spring description of the tire body. The tire model is very flexible and can be used to accurately calculate μ-slip curves (and the self-aligning torque) for braking and cornering or combined motion (e.g. braking during cornering). We present numerical results which illustrate the theory. Simulations of anti-blocking system (ABS) braking are performed using two simple control algorithms.
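A minimal μ-slip sketch: the paper's own rubber friction law is derived from rubber friction theory and is not reproduced here, so the generic Pacejka "magic formula" shape is used as a stand-in, with illustrative coefficients.

```python
# Stand-in mu-slip curve of the kind a simple tire friction law produces:
# friction rises steeply with slip, peaks at moderate slip, then decays toward
# the locked-wheel value. Coefficients B, C, D, E are illustrative only.
import math

def mu_slip(s, B=10.0, C=1.9, D=1.0, E=0.97):
    """Longitudinal friction coefficient as a function of slip s in [0, 1]."""
    return D * math.sin(C * math.atan(B * s - E * (B * s - math.atan(B * s))))

# Sample the curve; the peak at moderate slip is what ABS braking exploits
curve = [(s / 100.0, mu_slip(s / 100.0)) for s in range(0, 101)]
s_peak, mu_peak = max(curve, key=lambda p: p[1])
```

A simple ABS control loop, like the ones tested in the paper, tries to hold the wheel slip near `s_peak`, where the available braking force is largest.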
Dynamics and forecast in a simple model of sustainable development for rural populations.
Angulo, David; Angulo, Fabiola; Olivar, Gerard
2015-02-01
Society is becoming more conscious of the need to preserve the environment. Sustainable development schemes have grown rapidly as a tool for managing, predicting and improving the growth path in different regions and economic sectors. We introduce a novel and simple mathematical model of ordinary differential equations (ODEs) in order to obtain a dynamical description of each of the sustainability components (economy, social development and environment conservation), together with their dependence on demographic dynamics. The main part of the modeling task is inspired by the works of Cobb, Douglas, Brander and Taylor, completed with some new insights by the authors. A model application is presented for three specific rural regions in Caldas (Colombia).
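A hedged sketch of the kind of ODE system described: Brander-Taylor-style resource-population dynamics plus a Cobb-Douglas output measure. The equations and parameters below are illustrative, not the paper's.

```python
# Toy two-state sustainability model: an environmental stock S with logistic
# regrowth and harvest, a population P whose growth depends on the harvest,
# and a Cobb-Douglas economic output. All parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.05, 1.0         # environment regrowth rate and carrying capacity
harvest = 0.02           # per-capita harvest effort on the resource
b, phi = -0.005, 0.5     # baseline net population growth; growth gain from harvest
A, share = 1.0, 0.6      # Cobb-Douglas scale and resource share of output

def rhs(t, x):
    S, P = x             # environmental stock, population
    dS = r * S * (1.0 - S / K) - harvest * S * P
    dP = P * (b + phi * harvest * S)
    return [dS, dP]

sol = solve_ivp(rhs, (0.0, 400.0), [0.9, 0.5], max_step=1.0)
S_end, P_end = sol.y[:, -1]
# Economic output as a Cobb-Douglas function of harvested resource and labour
Y_end = A * (harvest * S_end * P_end) ** share * P_end ** (1.0 - share)
```

Systems of this shape produce the damped boom-and-bust oscillations that make forecasting the joint path of economy, population and environment non-trivial.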
An extended source for CN jets in Comet P/Halley
NASA Technical Reports Server (NTRS)
Klavetter, James Jay; A'Hearn, Michael F.
1994-01-01
We examined radial intensity profiles of CN jets in comparison with the diffuse, isotropic component of the CN coma of Comet P/Halley. All images were bias-subtracted, flat-fielded, and continuum-subtracted. We calculated the diffuse profiles by finding the azimuthal mean of the coma least contaminated by jets, yielding profiles similar to those of vectorial and Haser models of simple photodissociation. We found the jet profiles by calculating a mean around a Gaussian-fitted center in r-theta space. There is an unmistakable difference between the profiles of the CN jets and the profiles of the diffuse CN. Spatial derivatives of these profiles, corrected for geometrical expansion, show that the diffuse component is consistent with a simple photodissociation process, but the jet component is not. The peak production of the jet profile occurs 6000 km from the nucleus at a heliocentric distance of 1.4 AU. Modeling of both components of the coma yields results consistent with photochemical production of the diffuse CN, but the CN jets require an additional extended source. We found that about one-half of the CN in the coma of Comet P/Halley originated from the jets, the rest from the diffuse component. These features, along with the approximately constant width of the jets, are consistent with a CHON grain origin for the jets.
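The diffuse-component comparison can be illustrated with a standard Haser daughter-species profile. The production rate and scale lengths below are illustrative, not the paper's fitted values.

```python
# Sketch of a Haser daughter-species coma density: photodissociation of a
# parent (scale length lam_parent) produces the daughter (lam_daughter), and
# 1/r^2 accounts for geometric expansion. All parameter values are illustrative.
import math

def haser_density(r, Q=1e25, v=1.0e3, lam_parent=2.0e4, lam_daughter=2.0e5):
    """Daughter-species number density (m^-3) at nucleocentric distance r (m)."""
    geom = Q / (4.0 * math.pi * v * r ** 2)
    factor = lam_daughter / (lam_parent - lam_daughter)
    return geom * factor * (math.exp(-r / lam_parent) - math.exp(-r / lam_daughter))

# Removing the 1/r^2 geometric expansion (as done for the measured profiles)
# isolates production and decay; the corrected profile peaks at a finite
# distance from the nucleus.
radii = [10.0 ** (3 + 0.01 * i) for i in range(400)]   # 1e3 .. ~1e7 m
corrected = [r ** 2 * haser_density(r) for r in radii]
r_peak = radii[corrected.index(max(corrected))]
```

For a pure photodissociation source the expansion-corrected peak location is set entirely by the two scale lengths; a jet profile peaking elsewhere, as reported above, is the signature of an additional extended source.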
On the Forward Scattering of Microwave Breast Imaging
Lui, Hoi-Shun; Fhager, Andreas; Persson, Mikael
2012-01-01
Microwave imaging for breast cancer detection has been of significant interest for the last two decades. Recent studies focus on solving the imaging problem using an inverse scattering approach. Efforts have mainly been focused on the development of the inverse scattering algorithms, experimental setup, antenna design and clinical trials. However, the success of microwave breast imaging also heavily relies on the quality of the forward data such that the tumor inside the breast volume is well illuminated. In this work, a numerical study of the forward scattering data is conducted. The scattering behavior of simple breast models under different polarization states and aspect angles of illumination are considered. Numerical results have demonstrated that better data contrast could be obtained when the breast volume is illuminated using cross-polarized components in linear polarization basis or the copolarized components in the circular polarization basis. PMID:22611371
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
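The Scheffé model that the proposed class generalises can be sketched as a no-intercept regression on the component fractions and their pairwise products. The data below are synthetic.

```python
# Sketch: fit a Scheffe quadratic mixture model y = sum(b_i x_i) +
# sum(b_ij x_i x_j) by least squares. Mixture rows sum to one, so the model
# has no intercept. Coefficients and data are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(0)

def scheffe_design(X):
    """Columns: x1..xq and all pairwise products xi*xj (no intercept)."""
    q = X.shape[1]
    pairs = [X[:, i] * X[:, j] for i in range(q) for j in range(i + 1, q)]
    return np.column_stack([X] + pairs)

# Random mixtures of three components (rows sum to 1)
X = rng.dirichlet(np.ones(3), size=100)
true_beta = np.array([2.0, 1.0, 3.0, 4.0, -2.0, 1.5])  # b1,b2,b3,b12,b13,b23
y = scheffe_design(X) @ true_beta + rng.normal(0.0, 0.005, size=100)

beta_hat, *_ = np.linalg.lstsq(scheffe_design(X), y, rcond=None)
```

The blending terms `b_ij x_i x_j` capture synergistic or antagonistic pairs; the general models in the paper extend the forms these blending effects may take.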
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
Development of an Open Rotor Cycle Model in NPSS Using a Multi-Design Point Approach
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.
2011-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft (Refs. 1 and 2). The open rotor concept (also referred to as the Unducted Fan or advanced turboprop) may allow the achievement of this objective by reducing engine emissions and fuel consumption. To evaluate its potential impact, an open rotor cycle modeling capability is needed. This paper presents the initial development of an open rotor cycle model in the Numerical Propulsion System Simulation (NPSS) computer program which can then be used to evaluate the potential benefit of this engine. The development of this open rotor model necessitated addressing two modeling needs within NPSS. First, a method for evaluating the performance of counter-rotating propellers was needed. Therefore, a new counter-rotating propeller NPSS component was created. This component uses propeller performance maps developed from historic counter-rotating propeller experiments to determine the thrust delivered and power required. Second, several methods for modeling a counter-rotating power turbine within NPSS were explored. These techniques used several combinations of turbine components within NPSS to provide the necessary power to the propellers. Ultimately, a single turbine component with a conventional turbine map was selected. Using these modeling enhancements, an open rotor cycle model was developed in NPSS using a multi-design point approach. The multi-design point (MDP) approach improves the engine cycle analysis process by making it easier to properly size the engine to meet a variety of thrust targets throughout the flight envelope. A number of design points are considered including an aerodynamic design point, sea-level static, takeoff and top of climb. 
The development of this MDP model was also enabled by the selection of a simple power management scheme which schedules propeller blade angles with the freestream Mach number. Finally, sample open rotor performance results and areas for further model improvements are presented.
Gutreuter, S.; Boogaard, M.A.
2007-01-01
Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods lag behind such cases. A conventional two-stage approach consists of multiple bivariate predictions of, say, median lethal concentration followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2′,5′-dichloro-4′-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions and root-mean-squared errors (RMSEs) of prediction for the exposure required to kill 50 and 99.9% of some population that were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.
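The quantal-response estimation underlying such predictors can be sketched as a logistic fit on log-concentration, from which the median lethal concentration falls out of the fitted coefficients. The data below are simulated, not the sea lamprey measurements.

```python
# Sketch of LC50 estimation from quantal bioassay data: maximum-likelihood
# logistic regression of kill/survive outcomes on log-concentration, with
# LC50 (log scale) = -b0/b1. Data are simulated with a known LC50.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
log_c = np.repeat(np.linspace(-1.0, 1.0, 6), 30)   # 6 concentrations, 30 animals each
true_lc50, slope = 0.2, 3.0                        # on the log-concentration scale
killed = rng.random(log_c.size) < expit(slope * (log_c - true_lc50))

def neg_loglik(theta):
    b0, b1 = theta
    p = np.clip(expit(b0 + b1 * log_c), 1e-12, 1 - 1e-12)
    return -np.sum(killed * np.log(p) + (~killed) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = fit.x
lc50_log = -b0 / b1        # log-concentration giving 50% mortality
```

The two-stage approach criticised in the abstract repeats this fit at each covariate setting and then regresses the LC50 estimates on the covariates; the proposed mixed-effects models instead embed the covariates directly in one likelihood.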
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
T/R components are an important part of large phased-array radar antennas; because of their large numbers and high fault rates, fault prediction for them is of considerable significance. To address the problems of the traditional grey model GM(1,1) in practical operation, a discrete grey model is established in this paper based on the original model; an optimization factor is introduced to optimize the background value, and a linear term is added to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, is simple to solve, and has a wider scope of application.
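The baseline GM(1,1) grey model that the paper improves on can be sketched in its textbook least-squares form. The fault-rate series below is illustrative.

```python
# Textbook GM(1,1): accumulate the series, fit the grey differential equation
# x0(k) + a*z1(k) = b by least squares on trapezoidal background values z1,
# then predict from the whitened exponential response. Data are illustrative.
import numpy as np

def gm11_fit(x0):
    """Fit GM(1,1) to a positive series; return development coefficient a and grey input b."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_predict(x0, k):
    """Predicted series value at index k (0-based), including beyond the data."""
    a, b = gm11_fit(x0)
    x1_hat = lambda t: (x0[0] - b / a) * np.exp(-a * t) + b / a
    return x1_hat(k) - x1_hat(k - 1) if k >= 1 else x0[0]

# An exponentially growing fault-rate series is fit almost exactly by GM(1,1)
series = [2.0, 2.2, 2.42, 2.662, 2.9282]       # 10% growth per step
next_val = gm11_predict(series, 5)
```

The paper's variants change exactly the pieces visible here: the background value `z1` (via an optimization factor), the discrete form of the prediction equation, and an added linear regression term.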
Method of frequency dependent correlations: investigating the variability of total solar irradiance
NASA Astrophysics Data System (ADS)
Pelt, J.; Käpylä, M. J.; Olspert, N.
2017-04-01
Context. This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites. Aims: We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in the frequency domain. Methods: Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves. Results: We find a major discrepancy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of the targets, while the low-frequency behavior of the different proxy series is mutually consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in the FDC spectrum. A significant part of the variability can be explained by a very simple model consisting of two components: the original proxy describing blanketing by sunspots, and the low-pass-filtered curve describing the overall activity level. The models with the full library of the different building blocks can be applied to hindcasting with a high level of confidence, Rc ≈ 0.90. The usefulness of these models is limited by the major discrepancy among the targets. Conclusions: The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
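The band-splitting idea can be sketched with a Gaussian low-pass filter: each series is split into slow and fast components that are correlated separately. The toy signals below share a slow component but are anticorrelated at high frequency, so the frequency-resolved correlations separate cleanly; the filter width and signals are illustrative.

```python
# Sketch of frequency-dependent correlation: a Gaussian transfer function
# splits each series into a low-pass part and a high-pass residual, and the
# two bands are correlated separately. Signals and sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def split_bands(x, sigma=30.0):
    """Return (low-pass, residual high-pass) using a Gaussian transfer function."""
    low = gaussian_filter1d(x, sigma, mode="nearest")
    return low, x - low

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

t = np.arange(3000)
slow = np.sin(2 * np.pi * t / 1000.0)          # shared activity-level component
fast = 0.5 * np.sin(2 * np.pi * t / 25.0)      # rotational-timescale component
x = slow + fast                                # e.g. proxy series
y = slow - fast                                # e.g. target, anticorrelated at high freq.

x_lo, x_hi = split_bands(x)
y_lo, y_hi = split_bands(y)
```

The raw correlation of `x` and `y` is an uninformative blend of the two regimes, while the band-wise correlations recover the strongly correlating and anticorrelating components, which is the structure the hindcasting model exploits.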
The contribution of a central pattern generator in a reflex-based neuromuscular model
Dzeladini, Florin; van den Kieboom, Jesse; Ijspeert, Auke
2014-01-01
Although the concept of central pattern generators (CPGs) controlling locomotion in vertebrates is widely accepted, the presence of specialized CPGs in human locomotion is still a matter of debate. An interesting numerical model developed in the '90s demonstrated the important role CPGs could play in human locomotion, both in terms of stability against perturbations and in terms of speed control. Recently, a reflex-based neuro-musculo-skeletal model has been proposed, showing a level of stability to perturbations similar to the previous model, without any CPG components. Although it exhibits striking similarities with human gaits, the lack of a CPG makes the control of speed/step length in the model difficult. In this paper, we hypothesize that a CPG component will offer a meaningful way of controlling the locomotion speed. After introducing the CPG component into the reflex model, and taking advantage of the resulting properties, a simple model for gait modulation is presented. The results highlight the advantages of a CPG as a feedforward component in terms of gait modulation. PMID:25018712
A simple sensing mechanism for wireless, passive pressure sensors.
Drazan, John F; Wassick, Michael T; Dahle, Reena; Beardslee, Luke A; Cady, Nathaniel C; Ledet, Eric H
2016-08-01
We have developed a simple wireless pressure sensor that consists of only three electrically isolated components. Two conductive spirals are separated by a closed cell foam that deforms when exposed to changing pressures. This deformation changes the capacitance and thus the resonant frequency of the sensors. Prototype sensors were submerged and wirelessly interrogated while being exposed to physiologically relevant pressures from 10 to 130 mmHg. Sensors consistently exhibited a sensitivity of 4.35 kHz/mmHg which is sufficient for resolving physiologically relevant pressure changes in vivo. These simple sensors have the potential for in vivo pressure sensing.
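The sensing principle (foam deformation changes capacitance, which shifts the resonant frequency of the LC tank formed by the conductive spirals) can be sketched as below. The inductance and capacitance values are hypothetical illustrations, not component values from the paper; only the reported sensitivity of 4.35 kHz/mmHg comes from the abstract.

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values (NOT from the paper): 10 uH spiral inductance and
# a 50 pF nominal capacitance between the two spirals.
L = 10e-6
f_nominal = resonant_frequency_hz(L, 50e-12)

# Pressure compresses the foam, bringing the spirals closer together;
# capacitance rises, so the resonant frequency drops.
f_compressed = resonant_frequency_hz(L, 55e-12)
```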
Simple refractometer based on in-line fiber interferometers
NASA Astrophysics Data System (ADS)
Esteban, Ó.; Martínez Manuel, R.; Shlyagin, M. G.
2015-09-01
A very simple but accurate optical fiber refractometer, based on the Fresnel reflection at the fiber tip and two in-line low-reflectivity mirrors for light intensity referencing, is reported. Each mirror was generated by connecting two fiber sections with FC/PC and FC/APC connectors using a standard FC/PC mating sleeve. For sensor interrogation, a standard DFB diode laser pumped with a sawtooth-wave current was used. A resolution of 6 × 10⁻⁴ was experimentally demonstrated using different liquids. The simple sensor construction and the use of low-cost components make the reported system interesting for many applications.
Santos, Andrés; Manzano, Gema
2010-04-14
As is well known, approximate integral equations for liquids, such as the hypernetted chain (HNC) and Percus-Yevick (PY) theories, are in general thermodynamically inconsistent in the sense that the macroscopic properties obtained from the spatial correlation functions depend on the route followed. In particular, the values of the fourth virial coefficient B(4) predicted by the HNC and PY approximations via the virial route differ from those obtained via the compressibility route. Despite this, it is shown in this paper that the value of B(4) obtained from the virial route in the HNC theory is exactly three halves the value obtained from the compressibility route in the PY theory, irrespective of the interaction potential (whether isotropic or not), the number of components, and the dimensionality of the system. This simple relationship is confirmed in one-component systems by analytical results for the one-dimensional penetrable-square-well model and the three-dimensional penetrable-sphere model, as well as by numerical results for the one-dimensional Lennard-Jones model, the one-dimensional Gaussian core model, and the three-dimensional square-well model.
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
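A minimal sketch of the central idea, a capacity-limited working-memory policy blended with a slow RL policy, follows. The weighting rule and all parameter values are illustrative assumptions in the spirit of the model, not the authors' fitted specification.

```python
import math

def softmax(values, beta):
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def rl_wm_policy(q_values, wm_values, set_size, capacity, beta=8.0):
    """Mixture policy: a capacity-limited working-memory (WM) policy is
    weighted by min(1, capacity/set_size) and blended with a slow RL
    policy (a sketch of the RL+WM idea; parameters are illustrative)."""
    w = min(1.0, capacity / set_size)
    p_rl = softmax(q_values, beta)
    p_wm = softmax(wm_values, beta)
    return [w * pw + (1.0 - w) * pr for pw, pr in zip(p_wm, p_rl)]

# Early in learning: RL values are still flat, but WM already holds the
# correct action for a recently seen stimulus.
q = [0.33, 0.33, 0.33]
wm = [1.0, 0.0, 0.0]
p_small = rl_wm_policy(q, wm, set_size=2, capacity=3)  # WM fully trusted
p_large = rl_wm_policy(q, wm, set_size=6, capacity=3)  # WM overloaded
```

With a small set size the WM policy dominates and early accuracy is high; with a large set size the overloaded WM contributes less, so behavior falls back on the incremental RL values, which is the load effect the task was designed to expose.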
Canopy reflectance modeling in a tropical wooded grassland
NASA Technical Reports Server (NTRS)
Simonett, David
1988-01-01
The Li-Strahler canopy reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density in two bioclimatic zones in Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field and in the imagery. Reflectance properties of the trees were measured in the study sites using a pole-mounted radiometer. The measurements showed that the assumptions of the simple Li-Strahler model are reasonable for these woodlands. The field radiometer measurements were used to calculate the normalized difference vegetation index (NDVI), and the integrated NDVI over the canopy was related to crown volume. Predictions of tree size and density from the canopy model were used with allometric equations from the literature to estimate woody biomass and potential foliar biomass for the sites and for the regions. Estimates were compared with independent measurements made in the Sahelian sites, and to typical values from the literature for these regions and for similar woodlands. In order to apply the inversion procedure regionally, an area must first be stratified into woodland cover classes, and dry-season TM data were used to generate a stratum map of the study areas with reasonable accuracy. The method used was unsupervised classification of multi-date principal components images.
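The NDVI used here is the standard band ratio of near-infrared and red reflectances; a minimal sketch (the reflectance values below are illustrative, not field measurements from the study):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red
    reflectances; ranges from -1 to 1 and is higher for green vegetation."""
    return (nir - red) / (nir + red)

# Illustrative reflectances (not values from the study):
dense_canopy = ndvi(nir=0.50, red=0.08)
bare_soil = ndvi(nir=0.30, red=0.25)
```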
Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests
An analysis of a gas-phase partitioning tracer test conducted through fractured media is discussed within this paper. The analysis employed matching eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested: two porous...
NASA Astrophysics Data System (ADS)
Yilmaz, Zeynep
Typically, the vertical component of the ground motion is not considered explicitly in the seismic design of bridges, but in some cases the vertical component can have a significant effect on the structural response. The key question of when the vertical component should be incorporated in design is addressed here by a probabilistic seismic hazard assessment that incorporates probabilistic seismic demand models and ground motion models. Nonlinear simulation models with varying configurations of an existing bridge in California were considered in the analytical study. The simulation models were subjected to a set of selected ground motions in two stages: first, only the horizontal components of the motion were applied; in the second stage, the structures were subjected to both horizontal and vertical components applied simultaneously, and the ground motions that produced the largest adverse effects on the bridge system were identified. The moment demand at the mid-span and at the support of the longitudinal girder and the axial force demand in the column were found to be significantly affected by the vertical excitations. These response parameters can be modeled using simple ground motion parameters, such as horizontal spectral acceleration and vertical spectral acceleration, within a 5% to 30% error margin depending on the type of parameter and the period of the structure. For a complete hazard assessment, both of these ground motion parameters should also be modeled. For the horizontal spectral acceleration, the Abrahamson and Silva (2008) model was selected from among the available standard models. A new NGA vertical ground motion model consistent with the horizontal model was constructed. These models were combined in a vector probabilistic seismic hazard analysis.
A series of hazard curves was developed and presented for different locations in the Bay Area under soil site conditions, providing a roadmap for the prediction of these features for future earthquakes. Findings from this study will contribute to the development of revised guidelines addressing vertical ground motion effects, particularly in near-fault regions, in the seismic design of highway bridges.
Davis Rabosky, Alison R; Cox, Christian L; Rabosky, Daniel L
2016-04-01
Identifying the genetic basis of mimetic signals is critical to understanding both the origin and dynamics of mimicry over time. For species not amenable to large laboratory breeding studies, widespread color polymorphism across natural populations offers a powerful way to assess the relative likelihood of different genetic systems given observed phenotypic frequencies. We classified color phenotype for 2175 ground snakes (Sonora semiannulata) across the continental United States to analyze morph ratios and test among competing hypotheses about the genetic architecture underlying red and black coloration in coral snake mimics. We found strong support for a two-locus model under simple Mendelian inheritance, with red and black pigmentation being controlled by separate loci. We found no evidence of either linkage disequilibrium between loci or sex linkage. In contrast to Batesian mimicry systems such as butterflies in which all color signal components are linked into a single "supergene," our results suggest that the mimetic signal in colubrid snakes can be disrupted through simple recombination and that color evolution is likely to involve discrete gains and losses of each signal component. Both outcomes are likely to contribute to the exponential increase in rates of color evolution seen in snake mimicry systems over insect systems. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Can Neuroscience Help Us Do a Better Job of Teaching Music?
ERIC Educational Resources Information Center
Hodges, Donald A.
2010-01-01
We are just at the beginning stages of applying neuroscientific findings to music teaching. A simple model of the learning cycle based on neuroscience is Sense → Integrate → Act (sometimes modified as Act → Sense → Integrate). Additional components can be added to the model, including such concepts…
2004-09-01
[Table-of-contents fragment; recoverable headings: "Mesh vs. Simple Ad Hoc and MANET"; "Desirable Characteristics of Wireless Mesh Networks"; "Comparison of Mesh vs. Traditional Wireless"; "UML Model of SensorML Components (from SensorML Models paper)"; "Latency Difference Example: OLSR vs. AODV".]
Lee, Sonmin; Hur, Jin
2016-04-01
Heterogeneous adsorption behavior of landfill leachate on granular activated carbon (GAC) was investigated by fluorescence excitation-emission matrix (EEM) spectroscopy combined with parallel factor analysis (PARAFAC). The equilibrium adsorption of two leachates on GAC was well described by simple Langmuir and Freundlich isotherm models. A more nonlinear isotherm and a slower adsorption rate were found for the leachate with higher values of specific UV absorbance and humification index, suggesting that leachate containing more aromatic content and condensed structures may have less access to GAC surface sites and a lower degree of diffusive adsorption. Such differences in adsorption behavior were found even within the bulk leachate, as revealed by the dissimilarity in the isotherm and kinetic model parameters between the two identified PARAFAC components. For both leachates, the terrestrial humic-like (C1) component, which is likely associated with relatively large and condensed aromatic structures, exhibited higher isotherm nonlinearity and a slower kinetic rate for GAC adsorption than the microbial humic-like (C2) component. Our results were consistent with size exclusion effects, a well-known GAC adsorption mechanism. This study demonstrated the promising benefit of using EEM-PARAFAC for GAC adsorption processes of landfill leachate through fast monitoring of the influent and treated leachate, which can provide valuable information for optimizing treatment processes and predicting further environmental impacts of the treated effluent. Copyright © 2016 Elsevier Ltd. All rights reserved.
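As a sketch of the isotherm fitting referred to above, the Langmuir model can be fit via its standard linearized form C/q = 1/(q_max·K) + C/q_max. The data below are synthetic, generated from a known isotherm, not the leachate measurements from the study.

```python
import numpy as np

def fit_langmuir(c_eq, q_ads):
    """Fit the Langmuir isotherm q = q_max*K*C / (1 + K*C) using the
    linearized form C/q = 1/(q_max*K) + C/q_max (textbook route)."""
    slope, intercept = np.polyfit(c_eq, c_eq / q_ads, 1)
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Synthetic equilibrium data from a known isotherm (q_max=2.0, K=0.5):
c = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
q = 2.0 * 0.5 * c / (1.0 + 0.5 * c)
q_max_hat, K_hat = fit_langmuir(c, q)
```

A more nonlinear (flatter at high concentration) isotherm shows up directly in these fitted parameters, which is how the isotherm dissimilarity between PARAFAC components is quantified.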
Determination of the transmission coefficients for quantum structures using FDTD method.
Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan
2011-12-01
The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which would make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
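A minimal illustration of the time-domain approach: evolving the 1D Schrödinger equation with an explicit finite-difference update that leapfrogs the real and imaginary parts of the wavefunction. The units (hbar = m = 1), grid values, and potential barrier are illustrative; this is not the authors' FDTD-S implementation.

```python
import numpy as np

# Grid and time step (illustrative; dt well inside the stability limit).
nx, dx, dt = 400, 0.1, 0.001
x = np.arange(nx) * dx
V = np.where((x > 25) & (x < 26), 2.0, 0.0)  # simple potential barrier

# Normalized Gaussian wave packet moving to the right.
k0, x0, w = 3.0, 10.0, 2.0
psi = np.exp(-((x - x0) ** 2) / (2 * w**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
pr, pi = psi.real.copy(), psi.imag.copy()

def laplacian(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    return out

# Leapfrog update: dR/dt = H I, dI/dt = -H R, with H = -0.5 d2/dx2 + V.
for _ in range(2000):
    pr += dt * (-0.5 * laplacian(pi) + V * pi)
    pi -= dt * (-0.5 * laplacian(pr) + V * pr)

density = pr**2 + pi**2
norm = np.sum(density) * dx            # should stay close to 1
com = np.sum(x * density) * dx / norm  # packet center of mass
```

In an FDTD-S-style transmission calculation, the probability recorded on the far side of the barrier after the packet has passed would give the transmission coefficient.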
The Poisson-Helmholtz-Boltzmann model.
Bohinc, K; Shrestha, A; May, S
2011-10-01
We present a mean-field model of a one-component electrolyte solution where the mobile ions interact not only via Coulomb interactions but also through a repulsive non-electrostatic Yukawa potential. Our choice of the Yukawa potential represents a simple model for solvent-mediated interactions between ions. We employ a local formulation of the mean-field free energy through the use of two auxiliary potentials, an electrostatic and a non-electrostatic potential. Functional minimization of the mean-field free energy leads to two coupled local differential equations, the Poisson-Boltzmann equation and the Helmholtz-Boltzmann equation. Their boundary conditions account for the sources of both the electrostatic and non-electrostatic interactions on the surface of all macroions that reside in the solution. We analyze a specific example, two like-charged planar surfaces with their mobile counterions forming the electrolyte solution. For this system we calculate the pressure between the two surfaces, and we analyze its dependence on the strength of the Yukawa potential and on the non-electrostatic interactions of the mobile ions with the planar macroion surfaces. In addition, we demonstrate that our mean-field model is consistent with the contact theorem, and we outline its generalization to arbitrary interaction potentials through the use of a Laplace transformation. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2011
A Simple Double-Source Model for Interference of Capillaries
ERIC Educational Resources Information Center
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…
NASA Astrophysics Data System (ADS)
Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias
2010-12-01
We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
Approximations to galaxy star formation rate histories: properties and uses of two examples
NASA Astrophysics Data System (ADS)
Cohn, J. D.
2018-05-01
Galaxies evolve via a complex interaction of numerous different physical processes, scales and components. In spite of this, overall trends often appear. Simplified models for galaxy histories can be used to search for and capture such emergent trends, and thus to interpret and compare results of galaxy formation models to each other and to nature. Here, two approximations are applied to galaxy integrated star formation rate histories, drawn from a semi-analytic model grafted onto a dark matter simulation. Both a lognormal functional form and principal component analysis (PCA) approximate the integrated star formation rate histories fairly well. Machine learning, based upon simplified galaxy halo histories, is somewhat successful at recovering both fits. The fits to the histories give fixed time star formation rates which have notable scatter from their true final time rates, especially for quiescent and "green valley" galaxies, and more so for the PCA fit. For classifying galaxies into subfamilies sharing similar integrated histories, both approximations are better than using final stellar mass or specific star formation rate. Several subsamples from the simulation illustrate how these simple parameterizations provide points of contact for comparisons between different galaxy formation samples, or more generally, models. As a side result, the halo masses of simulated galaxies with early peak star formation rate (according to the lognormal fit) are bimodal. The galaxies with a lower halo mass at peak star formation rate appear to stall in their halo growth, even though they are central in their host halos.
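For reference, the lognormal star formation history has a closed-form peak time, which is what the bimodality result above is stated in terms of. A small sketch using the usual functional form, SFR(t) ∝ (1/t)·exp(-(ln t - t0)²/(2τ²)), with illustrative parameter values, not fitted values from the paper:

```python
import math

def lognormal_sfr(t, t0, tau):
    """Lognormal star formation history:
    SFR(t) proportional to (1/t) * exp(-(ln t - t0)^2 / (2 tau^2))."""
    return (1.0 / t) * math.exp(-((math.log(t) - t0) ** 2) / (2.0 * tau**2))

def peak_time(t0, tau):
    """Setting d(SFR)/dt = 0 gives t_peak = exp(t0 - tau^2)."""
    return math.exp(t0 - tau**2)

# Illustrative parameters (not fitted values from the paper):
t0, tau = 1.0, 0.5
tp = peak_time(t0, tau)
```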
Parallel Execution of Functional Mock-up Units in Buildings Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan
2016-06-30
A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
Joint Bayesian Component Separation and CMB Power Spectrum Estimation
NASA Technical Reports Server (NTRS)
Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.
2008-01-01
We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.
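The Gibbs-sampling machinery can be illustrated in miniature: alternating draws from each conditional distribution converge to samples from the joint posterior. A toy two-variable Gaussian version follows (the correlated pair stands in for, say, an amplitude and a spectral parameter; it is not the paper's CMB model).

```python
import math
import random

def gibbs_bivariate(n, rho, seed=1):
    """Gibbs sampler for a standard bivariate Gaussian with correlation
    rho: each conditional is Gaussian, x|y ~ N(rho*y, 1 - rho^2), and
    alternating conditional draws sample the joint distribution."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho**2)
    x = y = 0.0
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)  # draw x | y
        y = rng.gauss(rho * x, sd)  # draw y | x
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = gibbs_bivariate(20000, rho=0.8)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
vx = sum((a - mx) ** 2 for a in xs) / len(xs)
vy = sum((b - my) ** 2 for b in ys) / len(ys)
corr = cov / math.sqrt(vx * vy)  # sample correlation, should approach rho
```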
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. 
O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
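Two ingredients of the study above are easy to make concrete: the Nash-Sutcliffe criterion used for evaluation, and a minimal two-rule Takagi-Sugeno-Kang (TSK) model with Gaussian memberships. The rule parameters below are illustrative, not the calibrated rainfall-runoff models.

```python
import math

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    does no better than predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def tsk_two_rule(x, centers=(0.0, 1.0), widths=(0.3, 0.3),
                 consequents=((1.0, 0.0), (0.5, 1.0))):
    """Two-rule TSK model: y = sum_i w_i(x)*(a_i*x + b_i) / sum_i w_i(x)
    with Gaussian memberships w_i(x) = exp(-((x - c_i)/s_i)^2)."""
    ws = [math.exp(-((x - c) / s) ** 2) for c, s in zip(centers, widths)]
    ys = [a * x + b for a, b in consequents]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

obs = [1.0, 3.0, 2.0, 5.0, 4.0]  # toy daily flows
```

Near each rule center the TSK output follows that rule's local linear model, which is how these models capture regime-dependent (e.g., wet versus dry) rainfall-runoff behavior.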
NASA Astrophysics Data System (ADS)
Hegazy, Maha Abdel Monem; Fayez, Yasmin Mohammed
2015-04-01
Two different methods of manipulating spectrophotometric data have been developed, validated and compared. One is capable of removing the signal of any interfering components at the selected wavelength of the component of interest (univariate). The other includes more variables and extracts maximum information to determine the component of interest in the presence of other components (multivariate). The applied methods are smart, simple, accurate, sensitive, precise and capable of determining the spectrally overlapped antihypertensives hydrochlorothiazide (HCT), irbesartan (IRB) and candesartan (CAN). Mean centering of ratio spectra (MCR) and the concentration residual augmented classical least-squares method (CRACLS) were developed and their efficiency was compared. CRACLS is a simple method that is capable of extracting the pure spectral profiles of each component in a mixture. Correlation between the estimated and pure spectra was calculated and found to be 0.9998, 0.9987 and 0.9992 for HCT, IRB and CAN, respectively. The methods successfully determined the three components in bulk powder, laboratory-prepared mixtures, and combined dosage forms. The results obtained were compared statistically with each other and with those of the official methods.
Martins, Kelly Vasconcelos Chaves; Gil, Daniela
2017-01-01
Introduction The registry of the P1 component of the cortical auditory evoked potential has been widely used to analyze the behavior of auditory pathways in response to cochlear implant stimulation. Objective To determine the influence of aural rehabilitation on the latency and amplitude of the P1 cortical auditory evoked potential component elicited by simple auditory stimuli (tone burst) and complex stimuli (speech) in children with cochlear implants. Method The study included six individuals of both genders, aged 5 to 10 years, who had been cochlear implant users for at least 12 months and who attended auditory rehabilitation with an aural rehabilitation therapy approach. Participants underwent cortical auditory evoked potential testing at the beginning of the study and after 3 months of aural rehabilitation. To elicit the responses, simple stimuli (tone burst) and complex stimuli (speech) were presented in free field at 70 dB HL. The results were statistically analyzed and the two evaluations were compared. Results There was no significant difference between the types of eliciting stimulus of the cortical auditory evoked potential for the latency and amplitude of P1. There was a statistically significant difference in P1 latency between the evaluations for both stimuli, with reduced latency in the second evaluation after 3 months of auditory rehabilitation. There was no statistically significant difference in the amplitude of P1 for either type of stimulus or between the two evaluations. Conclusion A decrease in the latency of the P1 component elicited by both simple and complex stimuli was observed within a three-month interval in children with cochlear implants undergoing aural rehabilitation. PMID:29018498
Polarization of Narrowband VLF Transmitter Signals as an Ionospheric Diagnostic
NASA Astrophysics Data System (ADS)
Gross, N. C.; Cohen, M. B.; Said, R. K.; Gołkowski, M.
2018-01-01
Very low frequency (VLF, 3-30 kHz) transmitter remote sensing has long been used as a simple yet useful diagnostic for the D region ionosphere (60-90 km). All it requires is a VLF radio receiver that records the amplitude and/or phase of a beacon signal as a function of time. During both ambient and disturbed conditions, the received signal can be compared to predictions from a theoretical model to infer ionospheric waveguide properties like electron density. Amplitude and phase have in most cases been analyzed each as individual data streams, often only the amplitude is used. Scattered field formulation combines amplitude and phase effectively, but does not address how to combine two magnetic field components. We present polarization ellipse analysis of VLF transmitter signals using two horizontal components of the magnetic field. The shape of the polarization ellipse is unchanged as the source phase varies, which circumvents a significant problem where VLF transmitters have an unknown source phase. A synchronized two-channel MSK demodulation algorithm is introduced to mitigate 90° ambiguity in the phase difference between the horizontal magnetic field components. Additionally, the synchronized demodulation improves phase measurements during low-SNR conditions. Using the polarization ellipse formulation, we take a new look at diurnal VLF transmitter variations, ambient conditions, and ionospheric disturbances from solar flares, lightning-ionospheric heating, and lightning-induced electron precipitation, and find differing signatures in the polarization ellipse.
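The source-phase invariance that motivates the approach is easy to demonstrate with standard Stokes-parameter algebra on the two horizontal field phasors. This is a generic sketch (sign conventions for the ellipticity vary), not the paper's demodulation pipeline.

```python
import cmath
import math

def ellipse_params(bx, by):
    """Orientation and ellipticity angle of the polarization ellipse
    traced by two complex field phasors (bx, by), via Stokes parameters."""
    s0 = abs(bx) ** 2 + abs(by) ** 2
    s1 = abs(bx) ** 2 - abs(by) ** 2
    s2 = 2.0 * (bx * by.conjugate()).real
    s3 = 2.0 * (bx * by.conjugate()).imag  # sign convention varies
    psi = 0.5 * math.atan2(s2, s1)  # tilt of the major axis
    chi = 0.5 * math.asin(s3 / s0)  # ellipticity angle
    return psi, chi

bx, by = 1.0 + 0.0j, 0.5 * cmath.exp(1j * math.pi / 3)
psi1, chi1 = ellipse_params(bx, by)

# An unknown common source phase rotates both phasors identically but
# leaves the ellipse shape unchanged -- the property the paper exploits.
phase = cmath.exp(1j * 1.234)
psi2, chi2 = ellipse_params(bx * phase, by * phase)
```

Because every Stokes parameter depends only on products of one phasor with the conjugate of the other, the common factor exp(iφ) cancels exactly.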
A coupled human-water system from a systems dynamics perspective
NASA Astrophysics Data System (ADS)
Kuil, Linda; Blöschl, Günter; Carr, Gemma
2013-04-01
Traditionally, models used in hydrological studies have frequently assumed stationarity. Moreover, human-induced water resources management activities are often included as external forcings in water cycle dynamics. However, considering humans' current impact on the water cycle, with a growing population, river basins increasingly being managed, and a considerably changing climate, it has recently been questioned whether this is still correct. Furthermore, research directed at the evolution of water resources and society has shown that the components constituting the human-water system change interdependently. The goal of this study is therefore to approach water cycle dynamics from an integrated perspective in which humans are considered endogenous forces in the system. The method used to model a coupled, urban human-water system is system dynamics. In system dynamics, particular emphasis is placed on feedback loops resulting in dynamic behavior. Time delays and non-linearity can be included relatively easily, making the method appropriate for studying complex systems that change over time. The approach of this study is as follows. First, a conceptual model is created incorporating the key components of the urban human-water system. Subsequently, only those components are selected that are both relevant and show causal loop behavior. Lastly, the causal narratives are translated into mathematical relationships. The outcome will be a simple model that shows only those characteristics with which we are able to explore the two-way coupling between societal behavior and the water system we depend on.
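A toy stock-and-flow sketch in the same spirit: population and water storage as stocks, coupled through demand and supply, integrated with a simple Euler step. All dynamics and parameter values are invented for illustration and are not the study's model.

```python
def simulate(steps=800, dt=0.1):
    """Toy coupled human-water system: population growth depends on the
    fraction of its water demand that is met; storage is recharged at a
    fixed rate and drawn down by supply (illustrative parameters only)."""
    pop, storage = 1.0, 10.0
    history = []
    for _ in range(steps):
        demand = 0.3 * pop
        supply = min(demand, storage / dt)  # cannot use more than is stored
        shortage = 1.0 - supply / demand
        pop += dt * pop * (0.05 * (1.0 - shortage) - 0.02)
        storage += dt * (0.5 - supply)      # fixed recharge minus use
        storage = max(storage, 0.0)
        history.append((pop, storage))
    return history

history = simulate()
final_pop, final_storage = history[-1]
```

Even this caricature exhibits the feedback-loop behavior the abstract emphasizes: the population grows until demand outstrips recharge, the storage stock is drawn down, and the resulting shortage feeds back to curb growth.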
NASA Astrophysics Data System (ADS)
Herring, A. L.; Li, Z.; Middleton, J.; Varslot, T.; McClure, J. E.; Sheppard, A.
2017-12-01
Multicomponent lattice-Boltzmann (LB) modeling is widely applied to study two-phase flow in various porous media. However, the impact on LB modeling of the fundamental trade-off between image resolution and field of view has received relatively little attention. This is important since 3D images of geological samples rarely have both sufficient resolution to capture fine structure and sufficient field of view to capture a full representative elementary volume of the medium. To optimize the simulations, it is important to know the minimum number of grid points that LB methods require to deliver physically meaningful results, and to allow the sources of measurement uncertainty to be appropriately balanced. In this work, we study the behavior of the Shan-Chen (SC) and Rothman-Keller (RK) models when the phase interfacial radius of curvature and the feature size of the medium approach the discrete unit size of the computational grid. Both simple, small-scale test geometries and real porous media are considered. The models' behavior in the extreme discrete limit is classified, ranging from gradual loss of accuracy to catastrophic numerical breakdown. Based on this study, we provide guidance for experimental data collection and for how to apply the LBM to accurately resolve the physics of interest for two-fluid flow in porous media. Resolution effects are particularly relevant to the study of low-porosity systems, including fractured materials, where the typical pore width may be only a few voxels across. Overall, we find that the shortcoming of the SC model predominantly arises from the strongly pressure-dependent miscibility of the fluid components, where small droplets with high interfacial curvature have an exaggerated tendency to dissolve into the surrounding fluid.
For the RK model, the most significant shortcoming is unphysical flow of the non-wetting phase through narrow channels and crevices (2 voxels across or smaller), which we observed both in a simple capillary tube and in a realistic porous medium. This process generates unphysical non-wetting-phase ganglia that are hard to distinguish from ganglia of physical origin (e.g., arising from snap-off). While both methods have advantages and shortcomings, the RK model with modern enhancements seems to exhibit fewer instabilities and is more suitable for systems of low miscibility.
Classification of simple vegetation types using POLSAR image data
NASA Technical Reports Server (NTRS)
Freeman, A.
1993-01-01
Mapping basic vegetation or land cover types is a fairly common problem in remote sensing. Knowledge of the land cover type is a key input to algorithms which estimate geophysical parameters, such as soil moisture, surface roughness, leaf area index or biomass from remotely sensed data. In an earlier paper, an algorithm for fitting a simple three-component scattering model to POLSAR data was presented. The algorithm yielded estimates for surface scatter, double-bounce scatter and volume scatter for each pixel in a POLSAR image data set. In this paper, we show how the relative levels of each of the three components can be used as inputs to a simple classifier for vegetation type. Vegetation classes include no vegetation cover (e.g. bare soil or desert), low vegetation cover (e.g. grassland), moderate vegetation cover (e.g. fully developed crops), forest and urban areas. Implementation of the approach requires estimates for the three components from all three frequencies available using the NASA/JPL AIRSAR, i.e. C-, L- and P-bands. The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.
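To illustrate the idea of classifying from the relative component levels, a decision rule of this kind can be sketched as follows; the fraction thresholds are purely illustrative and are not the values used in the paper.

```python
def classify_vegetation(p_surface, p_double, p_volume):
    """Toy land-cover classifier from the three scattering-model powers.
    The fraction thresholds are illustrative, not those of the paper."""
    total = p_surface + p_double + p_volume
    f_vol = p_volume / total          # share of volume scattering
    f_dbl = p_double / total          # share of double-bounce scattering
    if f_dbl > 0.5:
        return "urban"                # strong double bounce: buildings
    if f_vol < 0.2:
        return "bare"                 # surface scatter dominates
    if f_vol < 0.4:
        return "low vegetation"       # e.g. grassland
    if f_vol < 0.7:
        return "crops"                # moderate vegetation cover
    return "forest"                   # volume scatter dominates
```

In practice the rule would be applied per pixel, possibly combining the C-, L- and P-band component estimates before thresholding.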
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
Spin correlations in quantum wires
NASA Astrophysics Data System (ADS)
Sun, Chen; Pokrovsky, Valery L.
2015-04-01
We theoretically consider spin correlations in a one-dimensional quantum wire with Rashba-Dresselhaus spin-orbit interaction (RDI). The correlations of noninteracting electrons display electron spin resonance at a frequency proportional to the RDI coupling. Interacting electrons, upon varying the direction of the external magnetic field, transition from the Luttinger liquid (LL) state to the spin-density wave (SDW) state. We show that the two-time total-spin correlations of these states are significantly different. In the LL, the projection of total spin onto the direction of the RDI-induced field is conserved and the corresponding correlator is equal to zero. The correlators of the two components perpendicular to the RDI field display a sharp electron-spin resonance driven by the RDI-induced intrinsic field. In contrast, in the SDW state, the longitudinal projection of spin dominates, whereas the transverse components are suppressed. This prediction indicates a simple way for an experimental diagnostic of the SDW in a quantum wire. We point out that the Luttinger model does not respect spin conservation, since it assumes an infinite Fermi sea. We propose a proper cutoff to correct this failure.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating propulsion system components, such as compressor stages, combustors, and turbine stages, along with the inlet. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Towards large scale modelling of wetland water dynamics in northern basins.
NASA Astrophysics Data System (ADS)
Pedinotti, V.; Sapriza, G.; Stone, L.; Davison, B.; Pietroniro, A.; Quinton, W. L.; Spence, C.; Wheater, H. S.
2015-12-01
Understanding the hydrological behaviour of low-topography, wetland-dominated sub-arctic areas is a major issue for the improvement of large scale hydrological models. These wet organic soils cover a large extent of northern North America and have a considerable impact on the rainfall-runoff response of a catchment. Moreover, their strong interactions with the lower atmosphere and the carbon cycle make these areas a noteworthy component of the regional climate system. In the framework of the Changing Cold Regions Network (CCRN), this study aims to provide a model for wetland water dynamics that can be used for large scale applications in cold regions. The modelling system has two main components: (a) the simulation of surface runoff using the Modélisation Environmentale Communautaire - Surface and Hydrology (MESH) land surface model driven with several gridded atmospheric datasets and (b) the routing of surface runoff using the WATROUTE channel scheme. As a preliminary study, we focus on two small representative study basins in northern Canada: Scotty Creek in the lower Liard River valley of the Northwest Territories and Baker Creek, located a few kilometers north of Yellowknife. Both areas present characteristic landscapes dominated by a series of peat plateaus, channel fens, small lakes and bogs. Moreover, they constitute important fieldwork sites with detailed data to support our modelling study. The challenge of our new wetland model is to represent the hydrological functioning of the various landscape units encountered in those watersheds and their interactions using simple numerical formulations that can later be extended to larger basins such as the Mackenzie river basin. Using observed datasets, the performance of the model in simulating the temporal evolution of hydrological variables such as the water table depth, frost table depth and discharge is assessed.
Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment
NASA Astrophysics Data System (ADS)
Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav
2010-05-01
Kopaninský tok catchment is situated in the crystalline area of the Bohemo-Moravian highland hilly region, with cambisol cover and prevailing agricultural land use. It has been the subject of long-term observation since the 1980s. Time series (discharge, precipitation, climatic parameters, etc.) are now available at a 10-minute time step; for water quality, average daily composite samples plus samples taken during events are available. A soil survey yielding reference soil hydraulic properties for individual horizons and a vegetation cover survey, including LAI measurement, have been carried out. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments, using the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, serving as the basis for modelling of the nitrate regime and agricultural management influence in the next step. The subcatchments differ in the proportion of artificially drained area, soil types, land use and slope angle. The models are set up on a regular computational grid with 2 m cell size. The basic time step was set to 2 h, and the total simulated period covers 3 years. Runoff response and moisture regime are compared using spatially distributed simulation results. Sensitivity analysis revealed the most important parameters influencing model response, and the importance of the spatial distribution of initial conditions was underlined. Further on, different runoff components, in terms of their origin, flow paths and travel time, were separated using a combination of two runoff separation techniques (a digital filter and a simple conceptual model, GROUND) in 12 subcatchments of Kopaninský tok catchment. These two methods were chosen after testing a number of approaches. Ordination diagrams produced with the Canoco software were used to evaluate the influence of different catchment parameters on different runoff components.
A canonical ordination method (redundancy analysis, RDA) was used to explain one data set (runoff components, either the volumes of each runoff component or the occurrence of baseflow) with another data set (catchment parameters: proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
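As an illustration of the digital-filter side of such a separation, here is a minimal sketch of the widely used Lyne-Hollick recursive filter (one forward pass). The abstract does not name the specific filter used, so this particular choice is an assumption for illustration.

```python
def baseflow_separation(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter.
    q: list of streamflow values; alpha: filter parameter (typ. 0.9-0.95).
    Returns (baseflow, quickflow) lists."""
    qf = [0.0] * len(q)                      # filtered quickflow
    for t in range(1, len(q)):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        qf[t] = min(max(qf[t], 0.0), q[t])   # constrain to [0, q]
    base = [q[t] - qf[t] for t in range(len(q))]
    return base, qf
```

Operational implementations usually run the filter in several forward/backward passes; a single pass is shown to keep the sketch short.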
Understanding Business Analytics
2015-01-05
Analytics have been used in organizations for a variety of reasons for quite some time, ranging from the simple (generating and understanding business analytics...process. How well these two components are orchestrated will determine the level of success an organization has in
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Littell, Justin D.
2017-01-01
This paper describes the development of input properties for a continuum damage mechanics based material model, Mat 58, within LS-DYNA® to simulate the response of a graphite-Kevlar® hybrid plain weave fabric. A limited set of material characterization tests were performed on the hybrid graphite-Kevlar® fabric. Simple finite element models were executed in LS-DYNA® to simulate the material characterization tests and to verify the Mat 58 material model. Once verified, the Mat 58 model was used in finite element models of two composite energy absorbers: a conical-shaped design, designated the "conusoid," fabricated of four layers of hybrid graphite-Kevlar® fabric; and a sinusoidal-shaped foam sandwich design, designated the "sinusoid," fabricated of the same hybrid fabric face sheets with a foam core. Dynamic crush tests were performed on components of the two energy absorbers, which were designed to limit average vertical accelerations to 25- to 40-g, to minimize peak crush loads, and to generate relatively long crush stroke values under dynamic loading conditions. Finite element models of the two energy absorbers utilized the Mat 58 model that had been verified through material characterization testing. Excellent predictions of the dynamic crushing response were obtained.
NASA Astrophysics Data System (ADS)
Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.
2016-12-01
Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of elements. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision. 
The case-study example demonstrates the value of exploring multiple working hypotheses, in the form of multiple alternative model components.
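The component pattern described above can be sketched in plain Python. This is a schematic of the design (a shared grid with named data arrays, components exposing `run_one_step`, and a short driver loop), not the actual Landlab API; the component names and parameter values are invented for illustration.

```python
# Schematic of the grid-plus-components pattern: all components operate
# on the same data array attached to a common grid object.
class Grid:
    def __init__(self, n):
        self.at_node = {"topographic__elevation": [0.0] * n}

class Uplifter:
    """Raises interior nodes at a constant rate (boundaries held fixed)."""
    def __init__(self, grid, rate):
        self.z, self.rate = grid.at_node["topographic__elevation"], rate
    def run_one_step(self, dt):
        for i in range(1, len(self.z) - 1):
            self.z[i] += self.rate * dt

class LinearDiffuser:
    """Explicit linear hillslope diffusion on a 1-D grid (dx = 1)."""
    def __init__(self, grid, kappa):
        self.z, self.kappa = grid.at_node["topographic__elevation"], kappa
    def run_one_step(self, dt):
        znew = self.z[:]
        for i in range(1, len(self.z) - 1):
            znew[i] += self.kappa * dt * (self.z[i-1] - 2*self.z[i] + self.z[i+1])
        self.z[:] = znew            # update the shared array in place

grid = Grid(11)
components = [Uplifter(grid, rate=0.001), LinearDiffuser(grid, kappa=0.2)]
for _ in range(1000):               # the coupling "script" is just a loop
    for c in components:
        c.run_one_step(dt=1.0)
```

Swapping hypotheses then amounts to swapping which components appear in the list, which is the comparison workflow the abstract describes.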
Martian regolith geochemistry and sampling techniques
NASA Technical Reports Server (NTRS)
Clark, B. C.
1988-01-01
Laboratory study of samples of the intermediate and fine-grained regolith, including duricrust peds, is a fundamental prerequisite for understanding the types of physical and chemical weathering processes on Mars. The extraordinary importance of such samples is their relevance to understanding past changes in climate, availability (and possible physical state) of water, eolian forces, the thermal and chemical influences of volcanic and impact processes, and the inventory and fates of Martian volatiles. Fortunately, this regolith material appears to be ubiquitous over the Martian surface, and should be available at many different landing sites. Viking data has been interpreted to indicate a smectite-rich regolith material, implying extensive weathering involving aqueous activity and geochemical alteration. An all-igneous source of the Martian fines has also been proposed. The X-ray fluorescence measurement data set can now be fully explained in terms of a simple two-component model. The first component is silicate, having strong geochemical similarities with Shergottites, but not other SNC meteorites. The second component is salt. Variations in these components could produce silicate and salt-rich beds, the latter being of high potential importance for microenvironments in which liquid water (brines) could exist. It therefore would be desirable to scan the surface of the regolith for such prospects.
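The two-component explanation of the XRF data amounts to a small least-squares mixing problem. The sketch below uses invented endmember compositions (the actual Viking element abundances are not given in the abstract) and solves the 2x2 normal equations directly.

```python
def mixing_fractions(obs, comp_a, comp_b):
    """Least-squares fractions f_a, f_b such that
    obs ~ f_a*comp_a + f_b*comp_b (element-wise).
    Solves the 2x2 normal equations directly."""
    aa = sum(a * a for a in comp_a)
    bb = sum(b * b for b in comp_b)
    ab = sum(a * b for a, b in zip(comp_a, comp_b))
    ya = sum(y * a for y, a in zip(obs, comp_a))
    yb = sum(y * b for y, b in zip(obs, comp_b))
    det = aa * bb - ab * ab
    return (ya * bb - yb * ab) / det, (yb * aa - ya * ab) / det

# Illustrative (not measured) endmember compositions, wt% of a few elements:
silicate = [20.0, 6.0, 5.0, 0.2]   # e.g. Si, Fe, Mg, S
salt     = [ 1.0, 0.5, 2.0, 8.0]   # sulfate/chloride-rich phase
sample   = [0.85 * s + 0.15 * t for s, t in zip(silicate, salt)]
fa, fb = mixing_fractions(sample, silicate, salt)
```

With real data the fit would be over-determined by many elements, and the residual would indicate how well the two-component model explains the measurements.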
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable to use. Streaking artifacts and cavities around teeth are the main reasons for degradation. In this article, we have proposed a new artifact reduction algorithm which has three parallel components. The first component extracts teeth based on the modeling of the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Results show that the proposed algorithm reduces artifacts of dental CBCT images and produces clean images.
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable to use. Streaking artifacts and cavities around teeth are the main reasons for degradation. Methods: In this article, we have proposed a new artifact reduction algorithm which has three parallel components. The first component extracts teeth based on the modeling of the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: Finally, the results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces artifacts of dental CBCT images and produces clean images. PMID:29535920
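The histogram-modeling step can be illustrated with a two-component 1-D Gaussian mixture fitted by EM, which is one standard way to separate two intensity populations in a histogram. The intensity values below are synthetic stand-ins for a CBCT histogram, not real data.

```python
import math
import random

def em_gmm_1d(x, iters=60):
    """Two-component 1-D Gaussian mixture fitted by EM, sketching the
    histogram-based tooth/background separation step."""
    mu = [min(x), max(x)]                       # spread-out initial means
    var = [((max(x) - min(x)) / 4.0) ** 2] * 2  # broad initial variances
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk + 1e-6
    return mu, var, w

random.seed(0)
intensities = [random.gauss(50, 5) for _ in range(300)] \
            + [random.gauss(200, 10) for _ in range(100)]  # soft tissue vs. "teeth"
mu, var, w = em_gmm_1d(intensities)
```

Thresholding at the crossing point between the two fitted Gaussians would then give the tooth mask used by the rest of the pipeline.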
USDA-ARS?s Scientific Manuscript database
A rapid, simple, and reliable flow-injection mass spectrometric (FIMS) method was developed to discriminate two major Echinacea species (E. purpurea and E. angustifolia) samples. Fifty-eight Echinacea samples collected from the United States were analyzed using FIMS. Principal component analysis (PCA) a...
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
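The key trick, deriving the band order from an already-decoded neighbour so that no ordering side-information is transmitted, can be sketched as a tiny per-pixel codec. This is a simplified illustration, not the authors' exact predictor.

```python
def encode_pixel(cur, prev):
    """Residuals for one pixel's spectrum, using the already-decoded left
    neighbour (prev) to choose a band order; no side-information needed."""
    order = sorted(range(len(cur)), key=lambda b: prev[b])
    res = [cur[order[0]] - prev[order[0]]]   # first band: spatial predictor
    for i in range(1, len(order)):
        # predict from the previous band in neighbour-derived order,
        # corrected by the neighbour's inter-band difference
        pred = cur[order[i - 1]] + (prev[order[i]] - prev[order[i - 1]])
        res.append(cur[order[i]] - pred)
    return res

def decode_pixel(res, prev):
    """Exact inverse: the decoder recomputes the same order from prev."""
    order = sorted(range(len(res)), key=lambda b: prev[b])
    cur = [0] * len(res)
    cur[order[0]] = prev[order[0]] + res[0]
    for i in range(1, len(order)):
        pred = cur[order[i - 1]] + (prev[order[i]] - prev[order[i - 1]])
        cur[order[i]] = pred + res[i]
    return cur
```

Because the ordering depends only on data the decoder already has, the scheme is lossless by construction, which is the point the abstract makes about avoiding coding overhead.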
Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.
Abdelnour, Farras; Dayan, Michael; Devinsky, Orrin; Thesen, Thomas; Raj, Ashish
2018-05-15
How structural connectivity (SC) gives rise to functional connectivity (FC) is not fully understood. Here we mathematically derive a simple relationship between SC measured from diffusion tensor imaging, and FC from resting state fMRI. We establish that SC and FC are related via (structural) Laplacian spectra, whereby FC and SC share eigenvectors and their eigenvalues are exponentially related. This gives, for the first time, a simple and analytical relationship between the graph spectra of structural and functional networks. Laplacian eigenvectors are shown to be good predictors of functional eigenvectors and networks based on independent component analysis of functional time series. A small number of Laplacian eigenmodes are shown to be sufficient to reconstruct FC matrices, serving as basis functions. This approach is fast, and requires no time-consuming simulations. It was tested on two empirical SC/FC datasets, and was found to significantly outperform generative model simulations of coupled neural masses. Copyright © 2018. Published by Elsevier Inc.
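The stated eigenvalue relationship can be sketched in a few lines: build a structural Laplacian, then predict FC by exponentiating its eigenvalues while keeping its eigenvectors. The adjacency matrix and decay parameter below are invented for illustration; in practice the decay parameter is fitted to data.

```python
import numpy as np

# Hypothetical 4-node structural network (symmetric weighted adjacency)
A = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian of SC
lam, U = np.linalg.eigh(L)                # Laplacian eigenvalues/eigenvectors

beta = 0.5                                # decay rate (fitted in practice)
# FC shares the eigenvectors of SC; eigenvalues are mapped exponentially
FC_pred = U @ np.diag(np.exp(-beta * lam)) @ U.T
```

Truncating the sum to the first few eigenmodes reproduces the paper's observation that a small number of Laplacian eigenmodes suffice to reconstruct the FC matrix.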
Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems
NASA Astrophysics Data System (ADS)
Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
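The virtual-component idea can be sketched generically: wrap an assembled data-flow chain in one object that carries its own integration test cases. The components and test data here are hypothetical, and the sketch is not the paper's implementation.

```python
class VirtualComponent:
    """Wraps a chain of data-flow components so that the assembly can be
    integration-tested as a single unit (built-in testing sketch)."""
    def __init__(self, *stages):
        self.stages = stages
    def process(self, data):
        for stage in self.stages:        # push data through the flow
            data = stage(data)
        return data
    def built_in_test(self, cases):
        # each case: (input, expected output) through the whole flow
        return all(self.process(x) == y for x, y in cases)

# Two hypothetical processing components in a data-flow chain
def parse(raw):
    return [int(v) for v in raw.split(",")]

def smooth(xs):
    return [sum(xs[max(0, i - 1):i + 2]) // len(xs[max(0, i - 1):i + 2])
            for i in range(len(xs))]

vc = VirtualComponent(parse, smooth)
ok = vc.built_in_test([("1,1,1", [1, 1, 1])])
```

The built-in test travels with the assembly, so the same cases can be re-run whenever any component in the flow is replaced or upgraded.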
LINE-OF-SIGHT SHELL STRUCTURE OF THE CYGNUS LOOP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, Hiroyuki; Tsunemi, Hiroshi; Katsuda, Satoru
We conducted a comprehensive study of the shell structure of the Cygnus Loop using 41 observations obtained by the Suzaku and XMM-Newton satellites. To investigate the detailed plasma structure of the Cygnus Loop, we divided our fields of view into 1042 box regions. From the spectral analysis, the spectra obtained from the limb of the Loop are well fitted by a single-component non-equilibrium ionization plasma model. On the other hand, the spectra obtained from the inner regions are well fitted by the two-component model. As a result, we confirmed that the low-temperature and high-temperature components originated from the surrounding interstellar matter (ISM) and the ejecta of the Loop, respectively. From the best-fit results, we showed a flux distribution of the ISM component. The distribution clearly shows the limb-brightening structure, and we identified some low-flux regions. Among them, the south blowout region has the lowest flux. We also found other large low-flux regions slightly west and northeast of the center. We estimated the former thin-shell region to be approximately 1.3° in diameter and concluded that there exists a blowout along the line of sight in addition to the south blowout. We also calculated the emission measure distribution of the ISM component and showed that the Cygnus Loop is far from the result expected from a simple Sedov evolution model. From these results, we support the interpretation that the Cygnus Loop originated from a cavity explosion. The emission measure distribution also suggests that the cavity-wall density is higher in the northeast than in the southwest. These results suggest that the thickness of the cavity wall surrounding the Cygnus Loop is not uniform.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences, namely the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
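As a minimal stand-in for the eligibility-trace machinery discussed above, the following sketches TD(λ) value learning on a four-state chain. This is a generic textbook algorithm, not the authors' eligibility adjustment model; the comment marks where a model-based modulator could reweight the traces.

```python
def td_lambda(episodes, alpha=0.1, gamma=0.95, lam=0.8, n_states=4):
    """TD(lambda) state-value learning on a deterministic 4-state chain
    with a terminal reward of 1."""
    V = [0.0] * n_states
    for _ in range(episodes):
        e = [0.0] * n_states              # eligibility traces
        for s in range(n_states):
            terminal = (s == n_states - 1)
            r = 1.0 if terminal else 0.0
            v_next = 0.0 if terminal else V[s + 1]
            delta = r + gamma * v_next - V[s]   # TD error
            e[s] += 1.0                   # accumulating trace
            for i in range(n_states):
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam       # traces decay each step
            # (a model-based modulator could reweight e here, roughly in
            #  the spirit of the eligibility adjustment model)
    return V
```

With enough episodes the values approach the discounted returns gamma**(3-s), so the learned values increase toward the rewarded end of the chain.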
High-Resolution Study of the First Stretching Overtones of H3Si79Br.
Ceausu; Graner; Bürger; Mkadmi; Pracna; Lafferty
1998-11-01
The Fourier transform infrared spectrum of monoisotopic H3Si79Br (resolution 7.7 × 10^-3 cm^-1) was studied from 4200 to 4520 cm^-1, in the region of the first overtones of the Si-H stretching vibration. The investigation of the spectrum revealed the presence of two band systems, the first consisting of one parallel (ν0 = 4340.2002 cm^-1) and one perpendicular (ν0 = 4342.1432 cm^-1) strong component, and the second of one parallel (ν0 = 4405.789 cm^-1) and one perpendicular (ν0 = 4416.233 cm^-1) weak component. The rovibrational analysis shows strong local perturbations for both the strong and the weak systems. Seven hundred eighty-one nonzero-weighted transitions belonging to the strong system [the (200) manifold in the local mode picture] were fitted to a simple model involving a perpendicular component interacting through a weak Coriolis resonance with a parallel component. The most severely perturbed transitions (whose |obs - calc| values exceeded 3 × 10^-3 cm^-1) were given zero weights. The standard deviations of the fit were 1.0 × 10^-3 and 0.69 × 10^-3 cm^-1 for the parallel and the perpendicular components, respectively. The weak band system, severely perturbed by many "dark" perturbers, was fitted to a model involving one parallel and one perpendicular band connected by a Coriolis-type resonance. The K″·ΔK = +10 to +18 subbands of the perpendicular component, which showed very large observed - calculated values (approximately 0.5 cm^-1), were excluded from this calculation. The standard deviations of the fit were 11 × 10^-3 and 13 × 10^-3 cm^-1 for the parallel and the perpendicular components, respectively. Copyright 1998 Academic Press.
Simulation of upwind maneuvering of a sailing yacht
NASA Astrophysics Data System (ADS)
Harris, Daniel Hartrick
A time domain maneuvering simulation of an IACC class yacht suitable for the analysis of unsteady upwind sailing including tacking is presented. The simulation considers motions in six degrees of freedom. The hydrodynamic and aerodynamic loads are calculated primarily with unsteady potential theory supplemented by empirical viscous models. The hydrodynamic model includes the effects of incident waves. Control of the rudder is provided by a simple rate feedback autopilot which is augmented with open loop additions to mimic human steering. The hydrodynamic models are based on the superposition of force components. These components fall into two groups, those which the yacht will experience in calm water, and those due to incident waves. The calm water loads are further divided into zero Froude number, or "double body" maneuvering loads, hydrostatic loads, gravitational loads, free surface radiation loads, and viscous/residual loads. The maneuvering loads are calculated with an unsteady panel code which treats the instantaneous geometry of the yacht below the undisturbed free surface. The free surface radiation loads are calculated via convolution of impulse response functions derived from seakeeping strip theory. The viscous/residual loads are based upon empirical estimates. The aerodynamic model consists primarily of a database of steady state sail coefficients. These coefficients treat the individual contributions to the total sail force of a number of chordwise strips on both the main and jib. Dynamic effects are modeled by using the instantaneous incident wind velocity and direction as the independent variables for the sail load contribution of each strip. The sail coefficient database was calculated numerically with potential methods and simple empirical viscous corrections. Additional aerodynamic load calculations are made to determine the parasitic contributions of the rig and hull. 
Validation studies compare the steady sailing hydrodynamic and aerodynamic loads, seaway-induced motions, added resistance in waves, and tacking performance with trials data and other sources. Reasonable agreement is found in all cases.
Using simple environmental variables to estimate below-ground productivity in grasslands
Gill, R.A.; Kelly, R.H.; Parton, W.J.; Day, K.A.; Jackson, R.B.; Morgan, J.A.; Scurlock, J.M.O.; Tieszen, L.L.; Castle, J.V.; Ojima, D.S.; Zhang, X.S.
2002-01-01
In many temperate and annual grasslands, above-ground net primary productivity (NPP) can be estimated by measuring peak above-ground biomass. Estimates of below-ground net primary productivity and, consequently, total net primary productivity, are more difficult. We addressed one of the three main objectives of the Global Primary Productivity Data Initiative for grassland systems to develop simple models or algorithms to estimate missing components of total system NPP. Any estimate of below-ground NPP (BNPP) requires an accounting of total root biomass, the percentage of living biomass and annual turnover of live roots. We derived a relationship using above-ground peak biomass and mean annual temperature as predictors of below-ground biomass (r2 = 0.54; P = 0.01). The percentage of live material was 0.6, based on published values. We used three different functions to describe root turnover: constant, a direct function of above-ground biomass, or as a positive exponential relationship with mean annual temperature. We tested the various models against a large database of global grassland NPP and the constant turnover and direct function models were approximately equally descriptive (r2 = 0.31 and 0.37), while the exponential function had a stronger correlation with the measured values (r2 = 0.40) and had a better fit than the other two models at the productive end of the BNPP gradient. When applied to extensive data we assembled from two grassland sites with reliable estimates of total NPP, the direct function was most effective, especially at lower productivity sites. We provide some caveats for its use in systems that lie at the extremes of the grassland gradient and stress that there are large uncertainties associated with measured and modelled estimates of BNPP.
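The three-step accounting described in this abstract (total root biomass, live fraction, turnover) can be sketched in a few lines. The regression coefficients below are purely illustrative placeholders, since the abstract reports only the fit quality (r2 = 0.54), not the fitted parameters.

```python
def estimate_bnpp(agb_peak, mat, live_frac=0.6, turnover=0.5,
                  a=0.8, b=20.0):
    """Estimate below-ground NPP (g/m^2/yr).

    agb_peak : peak above-ground biomass (g/m^2)
    mat      : mean annual temperature (deg C)
    a, b     : illustrative regression coefficients (hypothetical;
               the abstract gives only r2 = 0.54 for this relation)
    """
    # Step 1: below-ground biomass from above-ground biomass and MAT
    bgb = a * agb_peak + b * mat
    # Step 2: live root biomass as a fixed fraction of total
    live_roots = live_frac * bgb
    # Step 3: BNPP = live root biomass x annual turnover
    return live_roots * turnover

bnpp = estimate_bnpp(agb_peak=300.0, mat=10.0)
```

Swapping the constant `turnover` for a function of above-ground biomass or an exponential function of temperature reproduces the three model variants the authors compare.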
Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert
2014-05-27
Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negative constraints. The Lawton-Sylvestre method was the first direct method to analyze a two-component system. It was generalized as a Borgen plot for determining the feasible regions in three-component systems. A geometrical view appears necessary for considering curve resolution methods: the complicated, purely algebraic conceptions stalled the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of the existing theory in SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots in three-component systems. These developments are theoretical inventions whose formulations cannot always be given in closed form or in a regularized formalism, especially for the geometric descriptions; this is why several algorithms had to be developed and provided even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose equality constraints in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.
A Selective-Echo Method for Chemical-Shift Imaging of Two-Component Systems
NASA Astrophysics Data System (ADS)
Gerald, Rex E., II; Krasavin, Anatoly O.; Botto, Robert E.
A simple and effective method for selectively imaging either one of two chemical species in a two-component system is presented and demonstrated experimentally. The pulse sequence employed, selective-echo chemical-shift imaging (SECSI), is a hybrid (frequency-selective/T1-contrast) technique that is executed in a short period of time, utilizes the full Boltzmann magnetization of each chemical species to form the corresponding image, and requires only hard pulses of quadrature phase. This approach provides a direct and unambiguous representation of the spatial distribution of the two chemical species. In addition, the performance characteristics and the advantages of the SECSI sequence are compared on a common basis to those of other pulse sequences.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences of results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of models and data for the fourfold table were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. The 55 systematic reviews, containing 58 DTA meta-analyses, were included and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-disc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001, P=0.031 respectively). Greater differences could be found as I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine the sensitivity and specificity by simple pooling. The Meta-disc software can pool the sensitivity and specificity only through a fixed-effects model, but a high proportion of authors believe it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model.
The greater the between-study variance, the larger the deviation of simple pooling is likely to be. Knowledge of the statistical methods and software used in meta-analyses of DTA data needs to be improved.
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
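The IRF substitution at the heart of BernSCM can be illustrated with a discrete convolution. The two-timescale response function and the emission scenario below are hypothetical stand-ins, not the model's calibrated parametrization.

```python
import numpy as np

# Hypothetical two-timescale impulse response function; the fractions
# and decay timescales are illustrative, not BernSCM's fitted values.
def irf(t, a1=0.5, tau1=10.0, a2=0.5, tau2=100.0):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

years = np.arange(200)
emissions = np.where(years < 50, 10.0, 0.0)   # 50-yr pulse scenario, GtC/yr

# Airborne carbon anomaly as the convolution of emissions with the IRF,
# the same substitution BernSCM uses for near-linear parent-model behavior
response = irf(years.astype(float))
airborne = np.convolve(emissions, response)[:len(years)]
```

The same convolution structure, with a different response function, gives the surface temperature response to radiative forcing; nonlinearities are then layered on top as parametrizations.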
Colorimetric Sensor Array for White Wine Tasting.
Chung, Soo; Park, Tu San; Park, Soo Hyun; Kim, Joon Yong; Park, Seongmin; Son, Daesik; Bae, Young Min; Cho, Seong In
2015-07-24
A colorimetric sensor array was developed to characterize and quantify the taste of white wines. A charge-coupled device (CCD) camera captured images of the sensor array from 23 different white wine samples, and the change in the R, G, B color components from the control were analyzed by principal component analysis. Additionally, high performance liquid chromatography (HPLC) was used to analyze the chemical components of each wine sample responsible for its taste. A two-dimensional score plot was created with 23 data points. It revealed clusters created from the same type of grape, and trends of sweetness, sourness, and astringency were mapped. An artificial neural network model was developed to predict the degree of sweetness, sourness, and astringency of the white wines. The coefficients of determination (R2) for the HPLC results and the sweetness, sourness, and astringency were 0.96, 0.95, and 0.83, respectively. This research could provide a simple and low-cost but sensitive taste prediction system, and, by helping consumer selection, will be able to have a positive effect on the wine industry.
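The score-plot construction can be sketched with a plain SVD-based principal component analysis. The 23 x 36 matrix below is random stand-in data (23 samples by a hypothetical 12 spots x 3 RGB deltas), not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical color-change matrix: 23 wine samples, 36 features
# (12 sensor spots x R,G,B deltas from the control)
X = rng.normal(size=(23, 36))

# PCA via SVD of the mean-centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T              # coordinates for the 2-D score plot
explained = (s**2) / np.sum(s**2)   # variance explained per component
```

On real sensor data, clusters in `scores` would correspond to the grape-type groupings and taste trends the authors report.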
Internalizing Trajectories in Young Boys and Girls: The Whole Is Not a Simple Sum of Its Parts
ERIC Educational Resources Information Center
Carter, Alice S.; Godoy, Leandra; Wagmiller, Robert L.; Veliz, Philip; Marakovitz, Susan; Briggs-Gowan, Margaret J.
2010-01-01
There is support for a differentiated model of early internalizing emotions and behaviors, yet researchers have not examined the course of multiple components of an internalizing domain across early childhood. In this paper we present growth models for the Internalizing domain of the Infant-Toddler Social and Emotional Assessment and its component…
An Inexpensive 2-D and 3-D Model of the Sarcomere as a Teaching Aid
ERIC Educational Resources Information Center
Rios, Vitor Passos; Bonfim, Vanessa Maria Gomes
2013-01-01
To address a common problem of teaching the sliding filament theory (that is, students have difficulty in visualizing how the component proteins of the sarcomere differ, how they organize themselves into a single working unit, and how they function in relation to each other), we have devised a simple model, with inexpensive materials, to be built…
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
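The component-level sampling idea can be illustrated with a minimal Monte Carlo sketch for a single repairable component with exponential up- and down-times. The rates and the comparison to the analytic steady-state availability are illustrative; GRASP itself handles general system-failure definitions over many components.

```python
import random

def simulate_availability(fail_rate, repair_rate, horizon, trials, seed=1):
    """Monte Carlo availability of one repairable component.

    Alternates exponential up-times and down-times, a minimal stand-in
    for sampling failure and repair events per component.
    """
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(trials):
        t = 0.0
        while t < horizon:
            dt_up = rng.expovariate(fail_rate)       # time to next failure
            up_total += min(dt_up, horizon - t)
            t += dt_up
            if t >= horizon:
                break
            t += rng.expovariate(repair_rate)        # repair duration
    return up_total / (trials * horizon)

avail = simulate_availability(fail_rate=0.1, repair_rate=1.0,
                              horizon=1000.0, trials=200)
# For this simple case the analytic answer is mu/(lambda+mu) ~ 0.909,
# which the simulation should approach.
```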
ERIC Educational Resources Information Center
Kim, Young-Suk Grace
2017-01-01
Pathways of relations of language, cognitive, and literacy skills (i.e., working memory, vocabulary, grammatical knowledge, inference, comprehension monitoring, word reading, and listening comprehension) to reading comprehension were examined by comparing four variations of direct and indirect effects model of reading. Results from 350…
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
Improved evaluation of optical depth components from Langley plot data
NASA Technical Reports Server (NTRS)
Biggar, S. F.; Gellman, D. I.; Slater, P. N.
1990-01-01
A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
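A non-iterative version of the component separation can be sketched as follows: fit the Langley line to recover the total extinction optical depth, then subtract the molecular terms. The Rayleigh and ozone values here are illustrative, and the paper's iterative Junge-law refinement is omitted.

```python
import numpy as np

# Synthetic Langley data: ln(V) = ln(V0) - m * tau_total
m = np.linspace(1.0, 5.0, 9)    # air mass values over a morning
tau_total = 0.35                # assumed total extinction optical depth
lnV0_true = 1.2                 # assumed top-of-atmosphere signal
lnV = lnV0_true - m * tau_total

# Least-squares Langley fit: slope gives -tau, intercept gives ln(V0)
slope, intercept = np.polyfit(m, lnV, 1)
tau_fit = -slope

# Separate the components (illustrative molecular optical depths)
tau_rayleigh, tau_ozone = 0.10, 0.03
tau_aerosol = tau_fit - tau_rayleigh - tau_ozone
```

The paper's improvement is to iterate on `tau_aerosol` across wavelengths to refine the Junge particle-size-distribution exponent, rather than relying on two extinction points.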
Simple Thermal Environment Model (STEM) User's Guide
NASA Technical Reports Server (NTRS)
Justus, C.G.; Batts, G. W.; Anderson, B. J.; James, B. F.
2001-01-01
This report presents a Simple Thermal Environment Model (STEM) for determining appropriate engineering design values to specify the thermal environment of Earth-orbiting satellites. The thermal environment of a satellite consists of three components: (1) direct solar radiation, (2) Earth-atmosphere reflected shortwave radiation, as characterized by Earth's albedo, and (3) Earth-atmosphere-emitted outgoing longwave radiation (OLR). This report, together with a companion "guidelines" report, provides methodology and guidelines for selecting "design points" for thermal environment parameters for satellites and spacecraft systems. The methods and models reported here are outgrowths of Earth Radiation Budget Experiment (ERBE) satellite data analysis and thermal environment specifications discussed by Anderson and Smith (1994). In large part, this report is intended to update (and supersede) those results.
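The three-component environment lends itself to a back-of-the-envelope absorbed-flux estimate. The irradiance, albedo, OLR, and surface-property values below are typical magnitudes chosen for illustration, not values from STEM's design tables.

```python
# Illustrative thermal environment for an Earth-orbiting satellite
S   = 1367.0   # (1) direct solar irradiance, W/m^2
alb = 0.30     # (2) Earth albedo fraction (reflected shortwave)
olr = 240.0    # (3) outgoing longwave radiation, W/m^2

absorptivity = 0.25   # solar absorptance of the surface (assumed)
emissivity   = 0.85   # IR emittance of the surface (assumed)
area = 1.0            # m^2; flat plate seeing all three sources (simplified)

# Absorbed power from each environment component
q_solar  = absorptivity * S * area
q_albedo = absorptivity * alb * S * area
q_olr    = emissivity * olr * area
q_total  = q_solar + q_albedo + q_olr   # W, drives the thermal design point
```

A design-point analysis in the STEM sense would replace these single values with engineering percentile values of albedo and OLR derived from the ERBE statistics.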
Doherty, Orla; Conway, Thomas; Conway, Richard; Murray, Gerard; Casey, Vincent
2017-01-01
Noseband tightness is difficult to assess in horses participating in equestrian sports such as dressage, show jumping and three-day-eventing. There is growing concern that nosebands are commonly tightened to such an extent as to restrict normal equine behaviour and possibly cause injury. In the absence of a clear agreed definition of noseband tightness, a simple model of the equine nose-noseband interface environment was developed in order to guide further studies in this area. The normal force component of the noseband tensile force was identified as the key contributor to sub-noseband tissue compression. The model was used to inform the design of a digital tightness gauge which could reliably measure the normal force component of the noseband tensile force. A digital tightness gauge was developed to measure this parameter under nosebands fitted to bridled horses. Results are presented for field tests using two prototype designs. Prototype version 3 was used in field trial 1 (n = 15, frontal nasal plane sub-noseband site). Results of this trial were used to develop an ergonomically designed prototype, version 4, which was tested in a second field trial (n = 12, frontal nasal plane and lateral sub-noseband site). Nosebands were set to three tightness settings in each trial as judged by a single rater using an International Society for Equitation Science (ISES) taper gauge. Normal forces in the range 7-95 N were recorded at the frontal nasal plane while a lower range of 1-28 N was found at the lateral site for the taper gauge range used in the trials. The digital tightness gauge was found to be simple to use, reliable, and safe, and its use did not agitate the animals in any discernible way. A simple six-point tightness scale is suggested to aid regulation implementation and the control of noseband tightness using normal force measurement as the objective tightness discriminant.
A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow
NASA Technical Reports Server (NTRS)
Oseguera, Rosa M.; Bowles, Roland L.
1988-01-01
A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. Equations for horizontal and vertical winds were derived and found to be infinitely differentiable, with no singular points in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, the depth of outflow, and the altitude of maximum outflow. In use, a microburst can be modeled by specifying four characteristic parameters; velocity components in the x, y and z directions and the corresponding nine partial derivatives are then obtained easily from the velocity equations.
Linear models for calculating digestible energy for sheep diets.
Fonnesbeck, P V; Christiansen, M L; Harris, L E
1981-05-01
Equations for estimating the digestible energy (DE) content of sheep diets were generated from the chemical contents and a factorial description of diets fed to lambs in digestion trials. The diet factors were two forages (alfalfa and grass hay), harvested at three stages of maturity (late vegetative, early bloom and full bloom), fed in two ingredient combinations (all hay or a 50:50 hay and corn grain mixture) and prepared by two forage texture processes (coarsely chopped or finely chopped and pelleted). The 2 x 3 x 2 x 2 factorial arrangement produced 24 diet treatments. These were replicated twice, for a total of 48 lamb digestion trials. In model 1 regression equations, DE was calculated directly from chemical composition of the diet. In model 2, regression equations predicted the percentage of digested nutrient from the chemical contents of the diet and then DE of the diet was calculated as the sum of the gross energy of the digested organic components. Expanded forms of model 1 and model 2 were also developed that included diet factors as qualitative indicator variables to adjust the regression constant and regression coefficients for the diet description. The expanded forms of the equations accounted for significantly more variation in DE than did the simple models and more accurately estimated DE of the diet. Information provided by the diet description proved as useful as chemical analyses for the prediction of digestibility of nutrients. The statistics indicate that, with model 1, neutral detergent fiber and plant cell wall analyses provided as much information for the estimation of DE as did model 2 with the combined information from crude protein, available carbohydrate, total lipid, cellulose and hemicellulose. Regression equations are presented for estimating DE with the most currently analyzed organic components, including linear and curvilinear variables and diet factors that significantly reduce the standard error of the estimate. 
To estimate DE of a diet, the user selects the equation that makes the most effective use of the available chemical analysis information and diet description.
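The model-1 idea, regressing DE directly on chemical composition, can be sketched with ordinary least squares on synthetic data. The two predictors, coefficient values, and noise level below are hypothetical, standing in for the paper's 48 digestion trials and its full analyte set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical diet data for 48 trials: columns stand in for
# crude protein and neutral detergent fiber (% of dry matter)
X = rng.uniform([10.0, 30.0], [20.0, 60.0], size=(48, 2))
true_coef = np.array([0.05, -0.03])          # assumed "true" effects
DE = 3.0 + X @ true_coef + rng.normal(scale=0.05, size=48)  # Mcal/kg

# Model-1 style regression: DE directly from chemical composition
A = np.column_stack([np.ones(48), X])        # intercept + predictors
coef, *_ = np.linalg.lstsq(A, DE, rcond=None)
DE_pred = A @ coef
```

The paper's expanded forms add qualitative indicator variables for forage, maturity, ingredient combination, and texture, which shift the intercept and slopes per diet group; model 2 instead regresses each digested nutrient and sums their gross energies.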
What is Neptune's D/H ratio really telling us about its water abundance?
NASA Astrophysics Data System (ADS)
Ali-Dib, Mohamad; Lakhlani, Gunjan
2018-05-01
We investigate the deep-water abundance of Neptune using a simple two-component (core + envelope) toy model. The free parameters of the model are the total mass of heavy elements in the planet (Z), the mass fraction of Z in the envelope (fenv), and the D/H ratio of the accreted building blocks (D/Hbuild). We systematically search the allowed parameter space on a grid and constrain it using Neptune's bulk carbon abundance, D/H ratio, and interior structure models. Assuming a solar C/O ratio and cometary D/H for the building blocks forming the planet, we can fit all of the constraints if less than ~15 per cent of Z is in the envelope (median fenv ~ 7 per cent), and the rest is locked in a solid core. This model predicts a maximum bulk oxygen abundance in Neptune of 65× the solar value. If we assume a C/O of 0.17, corresponding to clathrate-hydrate building blocks, we predict a maximum oxygen abundance of 200× the solar value with a median value of ~140. Thus, both cases lead to oxygen abundances significantly lower than the preferred value of Cavalié et al. (~540× solar), inferred from model-dependent deep CO observations. Such high water abundances are excluded by our simple but robust model. We attribute this discrepancy to our imperfect understanding of either the interior structure of Neptune or the chemistry of the primordial protosolar nebula.
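The grid-search-with-constraints strategy can be sketched in a few lines. The parameter bounds, the enrichment function, and the "observational" window below are all hypothetical illustrations, not the paper's actual physics or numbers.

```python
import numpy as np

# Toy grid over two of the model's free parameters (bounds illustrative)
Z_grid    = np.linspace(10.0, 20.0, 21)   # heavy-element mass, Earth masses
fenv_grid = np.linspace(0.0, 1.0, 51)     # fraction of Z in the envelope

def carbon_enrichment(Z, fenv, scale=4.0):
    # Hypothetical stand-in: envelope carbon enrichment (x solar)
    # grows with the heavy-element mass mixed into the envelope
    return scale * Z * fenv

# Keep only grid points consistent with an assumed carbon constraint
# of 20-60x solar (an illustrative window, not the measured one)
feasible = [(Z, f) for Z in Z_grid for f in fenv_grid
            if 20.0 <= carbon_enrichment(Z, f) <= 60.0]

# Summarize the surviving parameter space, as the paper does with
# its median envelope fraction
fenv_vals = np.array([f for _, f in feasible])
fenv_median = np.median(fenv_vals)
```

The real analysis layers the D/H mixing balance and interior-structure constraints on top of the carbon constraint before taking such medians.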
NASA Astrophysics Data System (ADS)
Donker, N. H. W.
2001-01-01
A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km2 Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating per unit surface runoff resulting from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the relationship between the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily based rainfall runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
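The storage-excess tank idea can be sketched as a single daily update step. Parameter values and the linear ET reduction below are illustrative simplifications; the actual model uses texture-dependent soil-moisture curves and routes the two runoff components separately.

```python
def ywb_step(storage, rain, pet, capacity=100.0, bfi=0.4):
    """One daily step of a storage-excess tank model (minimal sketch).

    storage  : soil water store (mm)
    rain     : daily rainfall (mm)
    pet      : daily potential evapotranspiration (mm)
    capacity : tank capacity (mm); bfi: baseflow index (assumed values)
    """
    # Actual ET limited by available water; a crude linear stand-in
    # for the texture-class soil-moisture curves in the model
    aet = min(pet * storage / capacity, storage)
    storage = storage - aet + rain
    # Runoff is storage excess, not infiltration excess
    runoff = max(0.0, storage - capacity)
    storage -= runoff
    # Split the runoff by the baseflow index
    return storage, bfi * runoff, (1.0 - bfi) * runoff

storage = 90.0
storage, base, surf = ywb_step(storage, rain=30.0, pet=5.0)
```

Iterating this step over a daily rainfall/PET series and accumulating the components per year yields the water balance table the abstract describes.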
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2016-06-14
A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
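The inflation effect can be reproduced with a short simulation: generate smooth Gaussian trajectories, apply a pointwise two-sample t test, and count the experiments in which any node crosses the 0D critical threshold. The smoothness value is an arbitrary choice, and this sketch omits the random field theory predictions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_trajectories(n, nodes=101, fwhm=20.0):
    """n smooth, unit-variance 1D Gaussian trajectories: white noise
    convolved with a Gaussian kernel of the given FWHM (illustrative)."""
    sd = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    t = np.arange(-30, 31)
    kernel = np.exp(-t**2 / (2.0 * sd**2))
    kernel /= np.sqrt(np.sum(kernel**2))   # keeps output variance at 1
    noise = rng.standard_normal((n, nodes + t.size - 1))
    return np.array([np.convolve(row, kernel, mode='valid')
                     for row in noise])

t_crit = 2.1009   # two-tailed 0D critical t for alpha=0.05, df=18
n_sims, hits = 200, 0
for _ in range(n_sims):
    a, b = smooth_trajectories(10), smooth_trajectories(10)
    # Pointwise two-sample t statistic, N=10 per group
    se = np.sqrt((a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / 10.0)
    tstat = (a.mean(axis=0) - b.mean(axis=0)) / se
    hits += np.max(np.abs(tstat)) > t_crit   # any node "significant"?
fpr = hits / n_sims   # far exceeds the nominal alpha = 0.05
```

Because neighboring nodes are correlated but numerous, the family-wise false positive rate greatly exceeds 0.05, which is the paper's central point; RFT supplies the corrected 1D threshold.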
Gesture Based Control and EMG Decomposition
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.
2005-01-01
This paper presents two probabilistic developments for use with electromyograms (EMG). First described is a neuroelectric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data was obtained using a custom linear electrode array designed for this study.
Papadelis, Christos; Eickhoff, Simon B; Zilles, Karl; Ioannides, Andreas A
2011-01-01
This study combines source analysis imaging data for early somatosensory processing and the probabilistic cytoarchitectonic maps (PCMs). Human somatosensory evoked fields (SEFs) were recorded by stimulating left and right median nerves. Filtering the recorded responses in different frequency ranges identified the most responsive frequency band. The short-latency averaged SEFs were analyzed using a single equivalent current dipole (ECD) model and magnetic field tomography (MFT). The identified foci of activity were superimposed with PCMs. Two major components of opposite polarity were prominent around 21 and 31 ms. A weak component around 25 ms was also identified. For the most responsive frequency band (50-150 Hz), ECD and MFT revealed one focal source at the contralateral Brodmann area 3b (BA3b) at the peak of N20. The component at ~25 ms was localised in Brodmann area 1 (BA1) in the 50-150 Hz band. Using ECD, focal generators around 28-30 ms were located initially in BA3b and, 2 ms later, in BA1. MFT also revealed two focal sources for these latencies, one in BA3b and one in BA1. Our results provide direct evidence that the earliest cortical response after median nerve stimulation is generated within the contralateral BA3b. BA1 activation a few milliseconds later indicates a serial mode of somatosensory processing within cytoarchitectonic SI subdivisions. Analysis of non-invasive magnetoencephalography (MEG) data and the use of PCMs allow unambiguous and quantitative (probabilistic) interpretation of the cytoarchitectonic identity of activated areas following median nerve stimulation, even with the simple ECD model, but only when the model fits the data extremely well. Copyright © 2010 Elsevier Inc. All rights reserved.
Self-assembly of Archimedean tilings with enthalpically and entropically patchy polygons.
Millan, Jaime A; Ortiz, Daniel; van Anders, Greg; Glotzer, Sharon C
2014-03-25
Considerable progress in the synthesis of anisotropic patchy nanoplates (nanoplatelets) promises a rich variety of highly ordered two-dimensional superlattices. Recent experiments of superlattices assembled from nanoplates confirm the accessibility of exotic phases and motivate the need for a better understanding of the underlying self-assembly mechanisms. Here, we present experimentally accessible, rational design rules for the self-assembly of the Archimedean tilings from polygonal nanoplates. The Archimedean tilings represent a model set of target patterns that (i) contain both simple and complex patterns, (ii) are comprised of simple regular shapes, and (iii) contain patterns with potentially interesting materials properties. Via Monte Carlo simulations, we propose a set of design rules with general applicability to one- and two-component systems of polygons. These design rules, specified by increasing levels of patchiness, correspond to a reduced set of anisotropy dimensions for robust self-assembly of the Archimedean tilings. We show for which tilings entropic patches alone are sufficient for assembly and when short-range enthalpic interactions are required. For the latter, we show how patchy these interactions should be for optimal yield. This study provides a minimal set of guidelines for the design of anisotropic patchy particles that can self-assemble all 11 Archimedean tilings.
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
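The Monte Carlo side of the analysis can be illustrated with a minimal attrition simulation. The attrition rate and trial count below are arbitrary illustrative choices, and the replaceable-component and 20-launch-retirement cases are omitted.

```python
import random

def boosters_lost(attrition_rate, launches=440, trials=2000, seed=7):
    """Monte Carlo estimate of booster expenditure for a mission of
    `launches` launches: each launch loses the booster with probability
    `attrition_rate`. Returns the mean number of boosters consumed."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(launches)
                     if rng.random() < attrition_rate)
    return total / trials

mean_lost = boosters_lost(attrition_rate=0.02)
# Binomial expectation: 440 * 0.02 = 8.8 boosters lost on average
```

The state probability method would instead enumerate the distribution over outcomes exactly, which is why the paper finds it more accurate and faster, at the cost of a harder formulation.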
Two-lattice models of trace element behavior: A response
NASA Astrophysics Data System (ADS)
Ellison, Adam J. G.; Hess, Paul C.
1990-08-01
Two-lattice melt components of Bottinga and Weill (1972), Nielsen and Drake (1979), and Nielsen (1985) are applied to major and trace element partitioning between coexisting immiscible liquids studied by Ryerson and Hess (1978) and Watson (1976). The results show that (1) the set of components most successful in one system is not necessarily portable to another system; (2) solution non-ideality within a sublattice severely limits applicability of two-lattice models; (3) rigorous application of two-lattice melt components may yield effective partition coefficients for major element components with no physical interpretation; and (4) the distinction between network-forming and network-modifying components in the sense of the two-lattice models is not clear cut. The algebraic description of two-lattice models is such that they will most successfully limit the compositional dependence of major and trace element solution behavior when the effective partition coefficient of the component of interest is essentially the same as the bulk partition coefficient of all other components within its sublattice.
NASA Astrophysics Data System (ADS)
Gerhard, Christoph; Adams, Geoff
2015-10-01
Geometric optics is at the heart of optics teaching. Some of us may remember using pins and string to test the simple lens equation at school. Matters get more complex at undergraduate/postgraduate levels as we are introduced to paraxial rays, real rays, wavefronts, aberration theory and much more. Software is essential for the later stages, and the right software can profitably be used even at school. We present two free PC programs, which have been widely used in optics teaching and have been further developed in close cooperation with lecturers/professors in order to address the current content of the curricula for optics, photonics and lasers in higher education. PreDesigner is a single thin lens modeller. It illustrates the simple lens law with construction rays and then allows the user to include field size and aperture. Sliders can be used to adjust key values with instant graphical feedback. This tool thus represents a helpful teaching medium for the visualization of basic interrelations in optics. WinLens3DBasic can model multiple thin or thick lenses with real glasses. It shows the system foci, principal planes and nodal points, gives paraxial ray trace values, details the Seidel aberrations, and offers real ray tracing and many forms of analysis. It is simple to reverse lenses and model tilts and decenters. This tool therefore provides a good base for learning lens design fundamentals. Much work has been put into offering these features in ways that are easy to use and that offer opportunities to enhance the student's background understanding.
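The simple lens law that PreDesigner illustrates can be stated in a few lines of code. This sketch uses the Gaussian thin-lens form with the real-is-positive sign convention; the function names are mine, not PreDesigner's.

```python
def image_distance(f, d_obj):
    """Thin-lens (simple lens) law, 1/f = 1/d_obj + 1/d_img, with the
    real-is-positive convention. f and d_obj in the same length units."""
    if d_obj == f:
        raise ValueError("object at the focal point: image at infinity")
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def magnification(f, d_obj):
    """Transverse magnification m = -d_img / d_obj (negative: inverted)."""
    return -image_distance(f, d_obj) / d_obj
```

For example, an object 200 mm from a 100 mm lens images 200 mm behind the lens at unit (inverted) magnification, the classic 2f-2f configuration.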
Sketching the Invisible to Predict the Visible: From Drawing to Modeling in Chemistry.
Cooper, Melanie M; Stieff, Mike; DeSutter, Dane
2017-10-01
Sketching as a scientific practice goes beyond the simple act of inscribing diagrams onto paper. Scientists produce a wide range of representations through sketching, as it is tightly coupled to model-based reasoning. Chemists in particular make extensive use of sketches to reason about chemical phenomena and to communicate their ideas. However, the chemical sciences have a unique problem in that chemists deal with the unseen world of the atomic-molecular level. Using sketches, chemists strive to develop causal mechanisms that emerge from the structure and behavior of molecular-level entities, to explain observations of the macroscopic visible world. Interpreting these representations and constructing sketches of molecular-level processes is a crucial component of student learning in the modern chemistry classroom. Sketches also serve as an important component of assessment in the chemistry classroom as student sketches give insight into developing mental models, which allows instructors to observe how students are thinking about a process. In this paper we discuss how sketching can be used to promote such model-based reasoning in chemistry and discuss two case studies of curricular projects, CLUE and The Connected Chemistry Curriculum, that have demonstrated a benefit of this approach. We show how sketching activities can be centrally integrated into classroom norms to promote model-based reasoning both with and without component visualizations. Importantly, each of these projects deploys sketching in support of other types of inquiry activities, such as making predictions or depicting models to support a claim; sketching is not an isolated activity but is used as a tool to support model-based reasoning in the discipline. Copyright © 2017 Cognitive Science Society, Inc.
Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav
2005-09-01
A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.
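A point-charge electrostatic energy of the kind used above as the fitting target can be sketched for a finite cluster as below. This is a generic pairwise Coulomb sum only; a true lattice energy over a periodic crystal requires Ewald summation, and the paper's actual method replaces such sums with an empirical two-parameter correlation against Mulliken charges.

```python
import math

# e^2 / (4*pi*eps0) expressed in eV * Angstrom
COULOMB_EV_ANG = 14.39964

def point_charge_energy(charges, positions):
    """Pairwise point-charge electrostatic energy (eV) for a finite
    cluster: E = k * sum_{i<j} q_i * q_j / r_ij, with charges in units
    of e and positions in Angstroms. Finite-cluster sketch only; not a
    converged periodic lattice sum."""
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            energy += COULOMB_EV_ANG * charges[i] * charges[j] / r
    return energy
```

A +1/-1 ion pair 1 Å apart gives -14.4 eV, the textbook check on the constant.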
Tuning the critical solution temperature of polymers by copolymerization
NASA Astrophysics Data System (ADS)
Schulz, Bernhard; Chudoba, Richard; Heyda, Jan; Dzubiella, Joachim
2015-12-01
We study statistical copolymerization effects on the upper critical solution temperature (CST) of generic homopolymers by means of coarse-grained Langevin dynamics computer simulations and mean-field theory. Our systematic investigation reveals that the CST can change monotonically or non-monotonically with copolymerization, as observed in experimental studies, depending on the degree of non-additivity of the monomer (A-B) cross-interactions. The simulation findings are confirmed and qualitatively explained by a combination of a two-component Flory-de Gennes model for polymer collapse and a simple thermodynamic expansion approach. Our findings provide some rationale behind the effects of copolymerization and may be helpful for tuning CST behavior of polymers in soft material design.
Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.
Santure, Anna W; Spencer, Hamish G
2006-08-01
The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components.
Nonlinear seismic analysis of a reactor structure impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
A simple and low-cost permanent magnet system for NMR
NASA Astrophysics Data System (ADS)
Chonlathep, K.; Sakamoto, T.; Sugahara, K.; Kondo, Y.
2017-02-01
We have developed a simple, easy-to-build, and low-cost magnet system for NMR, whose homogeneity is about 4 × 10⁻⁴ at 57 mT, using a pair of commercially available ferrite magnets. This homogeneity corresponds to about 90 Hz spectral resolution at the 2.45 MHz hydrogen Larmor frequency. The material cost of this NMR magnet system is little more than $100. The structural components can be printed with a 3D printer.
Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.
He, Lifeng; Chao, Yuyan; Suzuki, Kenji
2011-08-01
Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning each foreground voxel, we assign each run a provisional label. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
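The label-equivalence idea underlying both algorithms can be sketched as a plain two-pass, union-find labeler for a 3-D binary volume with 6-connectivity. This shows only the general scheme; the paper's contributions (the optimized mask-scan order for the voxel-based version and the run-length representation for the run-based version) are layered on top of it and are not reproduced here.

```python
def label_3d(volume):
    """Two-pass label-equivalence connected-component labeling of a 3-D
    binary volume (nested lists, 6-connectivity). Returns (labels, count)
    with final labels numbered 1..count and background left as 0."""
    parent = {}                          # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    next_label = 1
    # First scan: provisional labels from already-visited (-z, -y, -x) neighbors.
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z][y][x]:
                    continue
                neigh = []
                if z and labels[z - 1][y][x]:
                    neigh.append(labels[z - 1][y][x])
                if y and labels[z][y - 1][x]:
                    neigh.append(labels[z][y - 1][x])
                if x and labels[z][y][x - 1]:
                    neigh.append(labels[z][y][x - 1])
                if neigh:
                    lab = min(neigh)
                    labels[z][y][x] = lab
                    for other in neigh:
                        union(lab, other)      # record label equivalences
                else:
                    parent[next_label] = next_label
                    labels[z][y][x] = next_label
                    next_label += 1
    # Second scan: replace provisional labels by consecutive final labels.
    remap = {}
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if labels[z][y][x]:
                    root = find(labels[z][y][x])
                    remap.setdefault(root, len(remap) + 1)
                    labels[z][y][x] = remap[root]
    return labels, len(remap)
```

The run-based variant replaces the innermost x-loop with runs of consecutive foreground voxels, assigning one provisional label per run instead of per voxel.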
ModABa Model: Annual Flow Duration Curves Assessment in Ephemeral Basins
NASA Astrophysics Data System (ADS)
Pumo, Dario; Viola, Francesco; Noto, Leonardo V.
2013-04-01
A representation of the streamflow regime for a river basin is required for a variety of hydrological analyses and engineering applications, from water resource allocation and utilization to environmental flow management. The flow duration curve (FDC) is a comprehensive signature of temporal runoff variability, often used to synthesize catchment rainfall-runoff responses. Several models for the theoretical reconstruction of the FDC have recently been developed under different approaches, and substantial scientific knowledge specific to this topic has already been acquired. In this work, a new model for the probabilistic characterization of daily streamflows in perennial and ephemeral catchments is introduced. The ModABa model (MODel for Annual flow duration curves assessment in intermittent BAsins) can be thought of as a wide mosaic whose tesserae are frameworks, models and conceptual schemes developed separately in recent studies, here interconnected toward a single final aim: reproducing the FDC of daily streamflows in a river basin. Two separate periods within the year are first identified: a non-zero period, typically characterized by significant streamflows, and a dry period which, in ephemeral basins, is typically characterized by the absence of streamflow. The proportion of time the river is dry, which provides an estimate of the probability of zero flow, is determined empirically. An analysis of the non-zero period is then performed, with the streamflow disaggregated into a slow subsuperficial component and a fast superficial component. A recent analytical model is adopted to derive the non-zero FDC of the subsuperficial component, which is considered to be generated by the soil water excess over field capacity in the permeable portion of the basin. The non-zero FDC of the fast streamflow component is derived directly from the precipitation duration curve through a simple filter model. The fast component of streamflow is considered to be formed by two contributions: the entire amount of rainfall falling on the impervious portion of the basin, and the excess of rainfall over a fixed threshold (defining heavy rain events) falling on the permeable portion. The two FDCs are then overlapped, providing a single non-zero FDC for the total streamflow. Finally, once the probability that the river is dry and the non-zero FDC are known, the annual FDC of the daily total streamflow is derived by applying the theory of total probability. The model is calibrated on a small catchment with ephemeral streamflows using a long record of daily precipitation, temperature and streamflow measurements, and is then validated in the same basin over two different time periods. The high model performance in both validation periods demonstrates that the model, once calibrated, accurately reproduces the empirical FDC from easily derivable parameters arising from basic ecohydrological knowledge of the basin and commonly available climatic data such as daily precipitation and temperature. In this sense, the model is a valid tool for streamflow prediction in ungauged basins.
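The final total-probability step, combining the dry-period probability with the non-zero FDC, can be sketched empirically as below. This is a generic empirical construction with a Weibull-type plotting position, not the ModABa model's analytical derivation.

```python
def annual_fdc(daily_flows):
    """Empirical annual flow-duration curve via total probability.

    The exceedance probability of a flow level q is
        P(Q > q) = (1 - p0) * P(Q > q | Q > 0),
    where p0 is the fraction of zero-flow (dry) days. Returns (p0, fdc),
    with fdc a list of (exceedance_probability, flow) pairs sorted from
    the highest flow to the lowest; a Weibull plotting position is used
    for the conditional non-zero curve."""
    n = len(daily_flows)
    nonzero = sorted((q for q in daily_flows if q > 0), reverse=True)
    p0 = 1.0 - len(nonzero) / n
    fdc = [((1 - p0) * (rank + 1) / (len(nonzero) + 1), q)
           for rank, q in enumerate(nonzero)]
    return p0, fdc
```

For an ephemeral record with 3 dry days out of 8, p0 = 0.375 and every exceedance probability on the curve is scaled down by the wet-day fraction 0.625.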
Unbiased constraints on ultralight axion mass from dwarf spheroidal galaxies
NASA Astrophysics Data System (ADS)
González-Morales, Alma X.; Marsh, David J. E.; Peñarrubia, Jorge; Ureña-López, Luis A.
2017-12-01
It has been suggested that the internal dynamics of dwarf spheroidal galaxies (dSphs) can be used to test whether or not ultralight axions with m_a ∼ 10⁻²² eV are a preferred dark matter candidate. However, comparisons to theoretical predictions tend to be inconclusive for the simple reason that while most cosmological models consider only dark matter, one observes only baryons. Here, we use realistic kinematic mock data catalogues of Milky Way (MW) dSphs to show that the 'mass-anisotropy degeneracy' in the Jeans equations leads to biased bounds on the axion mass in galaxies with unknown dark matter halo profiles. In galaxies with multiple chemodynamical components, this bias can be partly removed by modelling the mass enclosed within each subpopulation. However, analysis of the mock data reveals that the least-biased constraints on the axion mass result from fitting the luminosity-averaged velocity dispersion of the individual chemodynamical components directly. Applying our analysis to two dSphs with reported stellar subcomponents, Fornax and Sculptor, and assuming that the halo profile has not been acted on by baryons, yields core radii r_c > 1.5 and 1.2 kpc, respectively, and m_a < 0.4 × 10⁻²² eV at 97.5 per cent confidence. These bounds are in tension with the number of observed satellites derived from simple (but conservative) estimates of the subhalo mass function in MW-like galaxies. We discuss how baryonic feedback might affect our results, and the impact of such a small axion mass on the growth of structures in the Universe.
Simple Fall Criteria for MEMS Sensors: Data Analysis and Sensor Concept
Ibrahim, Alwathiqbellah; Younis, Mohammad I.
2014-01-01
This paper presents a new and simple fall detection concept based on detailed experimental data of human falling and the activities of daily living (ADLs). Establishing appropriate fall algorithms compatible with MEMS sensors requires detailed data on falls and ADLs that indicate clearly the variations of the kinematics at the possible sensor node locations on the human body, such as the hip, head, and chest. Currently, there is a lack of data on the exact direction and magnitude of each acceleration component associated with these node locations. This is crucial for MEMS structures, which have inertia elements very close to the substrate and are capacitively biased, and hence are very sensitive to the direction of motion, whether toward or away from the substrate. This work presents detailed data of the acceleration components at various locations on the human body during various kinds of falls and ADLs. A two-degree-of-freedom model is used to help interpret the experimental data. An algorithm for fall detection based on MEMS switches is then established. A new sensing concept based on the algorithm is proposed. The concept is based on employing several inertia sensors, which are triggered simultaneously, as electrical switches connected in series, upon receiving a true fall signal. In the case of everyday life activities, some or no switches will be triggered, resulting in an open-circuit configuration and thereby preventing false positives. A lumped-parameter model is presented for the device, and preliminary simulation results are presented illustrating the new device concept. PMID:25006997
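The series-switch logic can be sketched in software as below. Switches in series implement a logical AND: a fall is declared only when every switch closes at the same sample, so ADLs that trigger only some switches leave the circuit open. The threshold values in the example are illustrative placeholders, not the paper's measured levels.

```python
def series_switch_fall_detector(accel_samples, thresholds):
    """Software sketch of the series-switch fall-detection concept.

    accel_samples: sequence of tuples, one acceleration value per switch
    per sample instant. thresholds: the closing threshold of each switch.
    Returns True only if all switches close simultaneously at some
    sample (series circuit completed), which suppresses false positives
    from activities of daily living."""
    for sample in accel_samples:
        closed = [abs(a) >= t for a, t in zip(sample, thresholds)]
        if all(closed):
            return True        # circuit closed: true fall signal
    return False               # circuit stayed open: no fall declared
```

A sample that exceeds only one of three thresholds leaves the detector silent, whereas a fall-like sample exceeding all three trips it.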
Polarization-dependent optical reflection ultrasonic detection
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyi; Huang, Zhiyu; Wang, Guohe; Li, Wenzhao; Li, Changhui
2017-03-01
Although ultrasound transducers based on commercial piezoelectric materials are widely used, they generally have limited bandwidth centered at the resonant frequency. Several pure-optical ultrasonic detection methods have recently gained interest due to their wide bandwidth and high sensitivity. However, most of them require customized components (such as micro-rings, SPR structures, or Fabry-Perot films), which limits their broad implementation. In this study, we present a simple pure-optical ultrasound detection method called Polarization-dependent Reflection Ultrasonic Detection (PRUD). It detects the intensity difference between two polarization components of a probe beam that is modulated by ultrasound waves. PRUD measures the two components with a balanced detector, which effectively suppresses much of the unwanted noise. We achieved a sensitivity (noise-equivalent pressure) of 1.7 kPa, and this can be further improved. In addition, like many other pure-optical ultrasonic detection methods, PRUD has a flat and broad bandwidth from almost zero to over 100 MHz. Besides theoretical analysis, we performed a phantom study, imaging a tungsten filament to demonstrate the performance of PRUD. We believe this simple and economical method will attract both researchers and engineers in the optics and ultrasound fields.
Quantitative characterization of the viscosity of a microemulsion
NASA Technical Reports Server (NTRS)
Berg, Robert F.; Moldover, Michael R.; Huang, John S.
1987-01-01
The viscosity of the three-component microemulsion water/decane/AOT has been measured as a function of temperature and droplet volume fraction. At temperatures well below the phase-separation temperature the viscosity is described by treating the droplets as hard spheres suspended in decane. Upon approaching the two-phase region from low temperature, there is a large (as much as a factor of four) smooth increase of the viscosity which may be related to the percolation-like transition observed in the electrical conductivity. This increase in viscosity is not completely consistent with either a naive electroviscous model or a simple clustering model. The divergence of the viscosity near the critical point (39 C) is superimposed upon the smooth increase. The magnitude and temperature dependence of the critical divergence are similar to that seen near the critical points of binary liquid mixtures.
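The hard-sphere description used at low temperature can be made concrete with the Einstein dilute-suspension relation, which is the leading-order hard-sphere result. This is a generic textbook formula, valid only at small volume fraction; it does not capture the large percolation-related rise or the critical divergence described above.

```python
def einstein_viscosity(eta_solvent, phi):
    """Einstein relation for a dilute suspension of hard spheres:
        eta = eta0 * (1 + 2.5 * phi),
    where eta0 is the continuous-phase (here decane) viscosity and phi
    is the droplet volume fraction. Valid only for small phi; higher
    volume fractions need higher-order or empirical corrections."""
    return eta_solvent * (1.0 + 2.5 * phi)
```

At a 10% droplet volume fraction the suspension viscosity is predicted to be 25% above that of pure decane, far smaller than the factor-of-four increase seen near the two-phase region.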
AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodríguez-Ramírez, J. C.; Raga, A. C.; Lora, V.
2016-12-20
We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations: one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.
A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
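The delta-function quadrature closure of the mean chemical source term can be sketched in a few lines. With the subgrid PDF written as a weighted sum of Dirac deltas, the cell-mean source term reduces to a weighted sum of the rate law evaluated at the quadrature states. The rate law used in the test is a generic placeholder, not FDS's combustion chemistry.

```python
def mean_source(weights, states, rate):
    """Mean chemical source term closed with a delta-function PDF:
        <S> = sum_i w_i * S(Y_i),
    where w_i are quadrature weights (summing to 1, satisfying the
    cell's integral constraints) and Y_i are the subgrid composition
    states. `rate` is the chemical rate law S(Y)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * rate(y) for w, y in zip(weights, states))
```

Collapsing the PDF to two states, one mixed and one unmixed, with an infinitely fast rate in the mixed state is the limit in which such closures reduce to eddy-dissipation-type models.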
Improving cerebellar segmentation with statistical fusion
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.
Karani, Hamid; Huber, Christian
2015-02-01
In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent of the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems including steady-state convection-diffusion within two fluid layers with interfaces parallel and normal to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with the analytical solutions. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure.
The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.
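The interface conditions that the source term enforces can be checked against the classic conjugate benchmark: steady 1-D conduction through two slabs with different conductivities. Continuity of heat flux at the interface gives the interface temperature in closed form; this is the analytical reference, not the LBM scheme itself.

```python
def interface_temperature(T_left, T_right, k1, k2, L1, L2):
    """Steady 1-D conduction through two slabs in series.

    Flux continuity at the interface,
        q = k1 * (T_left - Ti) / L1 = k2 * (Ti - T_right) / L2,
    solved for the interface temperature Ti. k: conductivities,
    L: slab thicknesses, T: boundary temperatures."""
    g1, g2 = k1 / L1, k2 / L2          # conductances per unit area
    return (g1 * T_left + g2 * T_right) / (g1 + g2)
```

With equal slabs the interface sits at the mean of the boundary temperatures; a more conductive left slab pulls it toward the left boundary value, the behavior a conjugate LBM solver must reproduce across the interface.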
Time-dependent behavior of passive skeletal muscle
NASA Astrophysics Data System (ADS)
Ahamed, T.; Rubin, M. B.; Trimmer, B. A.; Dorfmann, L.
2016-03-01
An isotropic three-dimensional nonlinear viscoelastic model is developed to simulate the time-dependent behavior of passive skeletal muscle. The development of the model is stimulated by experimental data that characterize the response during simple uniaxial stress cyclic loading and unloading. Of particular interest is the rate-dependent response, the recovery of muscle properties from the preconditioned to the unconditioned state and stress relaxation at constant stretch during loading and unloading. The model considers the material to be a composite of a nonlinear hyperelastic component in parallel with a nonlinear dissipative component. The strain energy and the corresponding stress measures are separated additively into hyperelastic and dissipative parts. In contrast to standard nonlinear inelastic models, here the dissipative component is modeled using an evolution equation that combines rate-independent and rate-dependent responses smoothly with no finite elastic range. Large deformation evolution equations for the distortional deformations in the elastic and in the dissipative component are presented. A robust, strongly objective numerical integration algorithm is used to model rate-dependent and rate-independent inelastic responses. The constitutive formulation is specialized to simulate the experimental data. The nonlinear viscoelastic model accurately represents the time-dependent passive response of skeletal muscle.
X-34 Main Propulsion System-Selected Subsystem Analyses
NASA Technical Reports Server (NTRS)
Brown, T. M.; McDonald, J. P.; Knight, K. C.; Champion, R. H., Jr.
1998-01-01
The X-34 hypersonic flight vehicle is currently under development by Orbital Sciences Corporation (Orbital). The Main Propulsion System (MPS) has been designed around the liquid propellant Fastrac rocket engine currently under development at NASA Marshall Space Flight Center. This paper presents selected analyses of MPS subsystems and components. Topics include the integration of component and system level modeling of the LOX dump subsystem and a simple terminal bubble velocity analysis conducted to guide propellant feed line design.
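A "simple terminal bubble velocity analysis" of the kind mentioned above can be sketched with the Stokes-regime result for a small bubble rising in liquid. This is a generic textbook formula offered as a plausible reading of the abstract; the paper's actual analysis and correlations are not given here, and the property values in the test are illustrative.

```python
def stokes_terminal_velocity(d, rho_liquid, rho_gas, mu, g=9.81):
    """Stokes-regime terminal rise velocity of a small spherical bubble:
        v = g * d^2 * (rho_liquid - rho_gas) / (18 * mu),
    with diameter d (m), densities (kg/m^3), and liquid dynamic
    viscosity mu (Pa s). Valid only at low bubble Reynolds number."""
    return g * d ** 2 * (rho_liquid - rho_gas) / (18.0 * mu)
```

Such an estimate bounds how quickly vapor bubbles clear a feed line, which is the kind of question a propellant feed line design must answer.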
A Sensing System for Simultaneous Detection of Urine and its Components Using Plastic Optical Fibers
NASA Astrophysics Data System (ADS)
Ejaz, Tahseen; Takemae, Tadashi; Egami, Chikara; Tsuboi, Naoyuki
A sensing system using plastic optical fibers and reagent papers was developed for the simultaneous detection of urine and abnormal levels of its components. Among the several components of urine, the detection of two main components, namely protein and glucose, was confirmed experimentally. Three states of the papers, namely dry, and wet with and without a change in color, were taken into consideration. These three states were separated by setting the lower and upper threshold voltages at 2.2 V and 5.5 V, respectively. This system is considered to be simple in construction, easy to operate and cost-efficient.
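The two-threshold classification can be sketched as a three-band decision rule. The 2.2 V and 5.5 V thresholds come from the abstract; the ordering of which voltage band corresponds to which paper state is my assumption for illustration, as the abstract does not specify it.

```python
def paper_state(voltage, low=2.2, high=5.5):
    """Classify a reagent-paper sensor reading into one of three bands
    using the two threshold voltages quoted in the abstract.
    Returns 0 (below the lower threshold), 1 (between thresholds), or
    2 (above the upper threshold); the mapping of bands to the paper
    states (dry / wet without color change / wet with color change) is
    an assumed ordering, not stated in the abstract."""
    if voltage < low:
        return 0
    if voltage <= high:
        return 1
    return 2
```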
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
A method for estimating the critical molar volume of materials is faster and simpler than previous procedures. The formula sums no more than 18 different contributions from components of the chemical structure of the material, and is as accurate (within 3 percent) as older, more complicated models. The method should expedite many thermodynamic design calculations.
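The additive group-contribution scheme can be sketched as a plain weighted sum over structural groups. The increment values used in the test are hypothetical placeholders, not Fedors' published contributions.

```python
def critical_volume(group_counts, increments):
    """Group-contribution estimate of the critical molar volume:
        V_c = sum_i n_i * v_i,
    where n_i is the count of structural group i in the molecule and
    v_i its volume increment (e.g. cm^3/mol). The increment table must
    be supplied; values here are NOT Fedors' published contributions."""
    return sum(n * increments[group] for group, n in group_counts.items())
```

For a molecule described as 2 CH3 and 3 CH2 groups with (hypothetical) increments of 55 and 44 cm³/mol, the estimate is simply 2·55 + 3·44.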
Thompson, Clarissa A; Ratcliff, Roger; McKoon, Gail
2016-10-01
How do speed and accuracy trade off, and what components of information processing develop as children and adults make simple numeric comparisons? Data from symbolic and non-symbolic number tasks were collected from 19 first graders (mean age 7.12 years), 26 second/third graders (mean age 8.20 years), 27 fourth/fifth graders (mean age 10.46 years), and 19 seventh/eighth graders (mean age 13.22 years). The non-symbolic task asked children to decide whether an array of asterisks had a larger or smaller number than 50, and the symbolic task asked whether a two-digit number was greater than or less than 50. We used a diffusion model analysis to estimate components of processing in the tasks from accuracy, correct and error response times, and response time (RT) distributions. Participants who were accurate on one task were accurate on the other task, and participants who made fast decisions on one task made fast decisions on the other task. Older participants extracted a higher quality of information from the stimulus arrays, were more willing to make a decision, and were faster at encoding, transforming the stimulus representation, and executing their responses. Individual participants' accuracy and RTs were uncorrelated. Drift rate and boundary settings were significantly related across tasks, but they were unrelated to each other. Accuracy was mainly determined by drift rate, and RT was mainly determined by boundary separation. We concluded that RT and accuracy operate largely independently. Copyright © 2016 Elsevier Inc. All rights reserved.
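The diffusion model's core mechanism can be sketched as a noisy random walk to two boundaries. This is a generic simulation of the model class (drift rate, boundary separation, non-decision time), not the fitted parameters from the study; higher drift yields faster, more accurate decisions, while wider boundaries trade speed for accuracy.

```python
import random

def diffusion_trial(drift, boundary, dt=0.001, sigma=1.0, t0=0.3, seed=None):
    """Simulate one trial of a simple drift-diffusion decision process.

    Evidence x starts at 0 and accumulates in steps of drift*dt plus
    Gaussian noise of standard deviation sigma*sqrt(dt), until it hits
    +boundary (correct response) or -boundary (error). t0 is the
    non-decision time (encoding plus response execution). Returns
    (correct, response_time_seconds). Parameters are generic, not the
    study's fitted values."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= boundary), t0 + t
```

Running many trials and tabulating accuracy and RT distributions reproduces the qualitative pattern reported above: accuracy tracks drift rate, RT tracks boundary separation.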
A Demonstration of Sample Segregation
ERIC Educational Resources Information Center
Fritz, Mark D.; Brumbach, Stephen B.; Hartman, JudithAnn R.
2005-01-01
This demonstration of sample segregation is simple and visually compelling, and illustrates the importance of sample handling for students studying analytical chemistry and environmental chemistry. The mixture used in the demonstration has two components with large particle sizes and different colors, which makes the segregation vivid.
NASA Astrophysics Data System (ADS)
Yarce, Andrés; Sebastián Rodríguez, Juan; Galvez, Julián; Gómez, Alejandro; García, Manuel J.
2017-06-01
This paper presents the development stage of a communication module for a solid propellant mid-power rocket model. The communication module was named Simple-1, and this work considers its design, construction and testing. A rocket model Estes Ventris Series Pro II® was modified to introduce, on top of the payload, several sensors in a CanSat form factor. The Printed Circuit Board (PCB) was designed and fabricated from Commercial Off The Shelf (COTS) components and assembled in a cylindrical rack structure similar to this small-format satellite concept. The sensor data were processed using an Arduino Mini and transmitted using a radio module to a Software Defined Radio (SDR) HackRF-based platform at the ground station. The Simple-1 was tested using a drone in successive releases, reaching altitudes from 200 to 300 meters. Different kinds of data, in terms of altitude, position, atmospheric pressure and vehicle temperature, were successfully measured, making possible the progress to the next stage of launching and analysis.
Gottingen Wind Tunnel for Testing Aircraft Models
NASA Technical Reports Server (NTRS)
Prandtl, L
1920-01-01
Given here is a brief description of the Gottingen Wind Tunnel for the testing of aircraft models, preceded by a history of its development. Included are a number of diagrams illustrating, among other things, a sectional elevation of the wind tunnel, the pressure regulator, the entrance cone and method of supporting a model for simple drag tests, a three-component balance, and a propeller testing device, all of which are discussed in the text.
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
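The combination step at the heart of this scheme can be sketched in a few lines (a schematic of the idea only, with invented state labels, not the authors' clustering of MD trajectories): once each independent component's 1D projection has been segmented into states, the candidate states of the full subspace are the Cartesian product of the per-component states.

```python
from itertools import product

# Hypothetical per-component states, e.g. obtained by clustering the 1D
# histogram of each independent component separately:
ic1_states = ["extended", "compact"]        # 2 states on IC 1
ic2_states = ["helical", "turn", "coil"]    # 3 states on IC 2

# All candidate conformational states in the 2D subspace are the
# combinations of the per-component states:
combined_states = list(product(ic1_states, ic2_states))
```

Two independent components with 2 and 3 states each yield 2 × 3 = 6 candidate states; combinations that are empty in the actual trajectory can then be discarded.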
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
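The weight-adjustment idea can be sketched with a toy one-variable ratio calibration (illustrative numbers; this is not the R survey package's general calibration routine): IPW weights are rescaled so that the weighted auxiliary total reproduces its known cohort value.

```python
def ht_total(values, weights):
    """Horvitz-Thompson (IPW) estimate of a cohort total from a phase-two
    sample: each sampled value is weighted by the inverse of its
    sampling fraction."""
    return sum(v * w for v, w in zip(values, weights))

def calibrate(weights, aux, aux_cohort_total):
    """Rescale the weights so the weighted auxiliary total matches the
    known cohort total (one-variable ratio calibration)."""
    scale = aux_cohort_total / ht_total(aux, weights)
    return [w * scale for w in weights]

# Toy phase-two sample: sampling fraction 0.25 -> base weight 4.
y = [2.0, 3.0, 5.0, 4.0]   # influence-function contributions
x = [1.0, 1.5, 2.5, 2.0]   # auxiliary variable with known cohort total
w = [4.0] * 4
w_cal = calibrate(w, x, aux_cohort_total=30.0)
```

After calibration the weighted auxiliary total matches the cohort total exactly; when the auxiliary variable is correlated with the influence-function contributions, this reduces the design-based variance component described above.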
Barlow, Paul M.
1997-01-01
Steady-state, two- and three-dimensional, ground-water-flow models coupled with particle tracking were evaluated to determine their effectiveness in delineating contributing areas of wells pumping from stratified-drift aquifers of Cape Cod, Massachusetts. Several contributing areas delineated by use of the three-dimensional models do not conform to simple ellipsoidal shapes that are typically delineated by use of two-dimensional analytical and numerical modeling techniques and included discontinuous areas of the water table.
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to give personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, along with the force level necessary for chewing, as a kind of mandible balance, preventing dislocation and the loading of nonarticular tissues. This work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), combined in one object package, at minimal cost and easy to operate.
Io: IUE observations of its atmosphere and the plasma torus
NASA Technical Reports Server (NTRS)
Ballester, G. E.; Moos, H. W.; Feldman, P. D.; Strobel, D. F.; Skinner, T. E.; Bertaux, J.-L.; Festou, M. C.
1988-01-01
Two of the main components of the atmosphere of Io, neutral oxygen and sulfur, were detected with the IUE. Four observations yield brightnesses that are similar regardless of whether the upstream or the downstream side of the torus plasma flow around Io is observed. A simple model requires the emissions to be produced by the interaction of O and S columns in the exospheric range with 2 eV electrons. Cooling of the 5 eV torus electrons is required prior to their interaction with the atmosphere of Io. Inconsistencies in the characteristics of the spectra that cannot be accounted for in this model require further analysis with improved atomic data. The Io plasma torus was monitored with the IUE. The long-term stability of the warm torus is established. The observed brightnesses were analyzed using a model of the torus, and variations of less than 30 percent in the composition are observed, the quantitative results being model dependent.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
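The posterior probability of putative mastitis for a single observation follows directly from Bayes' rule (a schematic with illustrative parameters, ignoring the random effects and Gibbs-sampling machinery of the full model):

```python
import math

def normal_pdf(y, mu, sd):
    """Density of a normal distribution at y."""
    return math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_diseased(y, mu_h, mu_d, sd_h, sd_d, pm):
    """Posterior probability that observation y belongs to the 'diseased'
    mixture component, given the two component densities and the prior
    probability of membership pm."""
    healthy = (1 - pm) * normal_pdf(y, mu_h, sd_h)
    diseased = pm * normal_pdf(y, mu_d, sd_d)
    return diseased / (healthy + diseased)
```

With well-separated components (e.g. means 2.0 and 5.0, unit variances, pm = 0.2), a somatic cell score near the diseased mean yields a posterior close to 1, and one near the healthy mean a posterior close to 0; overlap between the components is what drives the misclassification probabilities the abstract evaluates.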
NASA Astrophysics Data System (ADS)
Hellaby, Charles
2012-01-01
A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
The Influence of AN Interacting Vacuum Energy on the Gravitational Collapse of a Star Fluid
NASA Astrophysics Data System (ADS)
Campos, M.
2014-02-01
To explain the accelerated expansion of the universe, models with interacting dark components have been considered in the literature. Generally, the dark energy component is physically interpreted as the vacuum energy. However, on the other side of the same coin, the influence of the vacuum energy on gravitational collapse is a topic of scientific interest. Based on a simple assumption about the collapse rate of the matter fluid density, which is altered by the inclusion of a vacuum energy component that interacts with the matter fluid, we study the final fate of the collapse process.
A 100-3000 GHz model of thermal dust emission observed by Planck, DIRBE and IRAS
NASA Astrophysics Data System (ADS)
Meisner, Aaron M.; Finkbeiner, Douglas P.
2015-01-01
We apply the Finkbeiner et al. (1999) two-component thermal dust emission model to the Planck HFI maps. This parametrization of the far-infrared dust spectrum as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody (MBB) dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. (1999) based on FIRAS and DIRBE. We also derive full-sky 6.1' resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1' FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration (2013) single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales. We have recently released maps and associated software utilities for obtaining thermal dust emission and reddening predictions using our Planck-based two-component model.
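The two-component parametrization itself is compact: each component is a Planck spectrum multiplied by an emissivity power law. A minimal sketch follows (temperatures, emissivity indices and normalizations are illustrative placeholders, not the fitted Finkbeiner et al. 1999 or Planck values):

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
K_B = 1.380649e-23   # Boltzmann constant (J/K)
C = 2.99792458e8     # speed of light (m/s)

def planck(nu, temp):
    """Blackbody specific intensity B_nu(T) in SI units."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (K_B * temp))

def two_mbb(nu, tau, f1, beta1, t1, f2, beta2, t2, nu0=3.0e12):
    """Two-component model: the sum of two modified blackbodies, each a
    Planck spectrum scaled by an emissivity power law (nu/nu0)**beta."""
    return tau * (f1 * (nu / nu0) ** beta1 * planck(nu, t1)
                  + f2 * (nu / nu0) ** beta2 * planck(nu, t2))

# Evaluate at two Planck HFI frequencies (placeholder parameters:
# a cold component near 9.5 K and a warm component near 16 K):
i_100 = two_mbb(100e9, 1e-4, 0.04, 1.7, 9.5, 0.96, 2.7, 16.0)
i_857 = two_mbb(857e9, 1e-4, 0.04, 1.7, 9.5, 0.96, 2.7, 16.0)
```

For dust at these temperatures the modified-blackbody spectrum is still rising through 100-857 GHz, which is why the millimeter-wavelength flattening matters for extrapolations to 100 GHz.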
Interpolation on the manifold of K component GMMs.
Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas
2015-12-01
Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features. (1) GMMs are arguably more interpretable than, say, square root parameterizations (2) the model complexity can be explicitly controlled by the number of components and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
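The closure requirement can be illustrated with a deliberately naive 1D sketch (component-wise blending with components matched by index; this is not the paper's geometry-respecting algorithm, which the authors develop precisely because such an index matching is not generally given): interpolating weights linearly with renormalization, means linearly, and variances log-linearly keeps the result a K component GMM.

```python
import math

def interpolate_gmm(gmm_a, gmm_b, t):
    """Naive component-wise interpolation between two K-component 1D GMMs,
    each given as a list of (weight, mean, variance) triples with
    components already matched by index. Returns a K-component GMM."""
    out = []
    for (wa, ma, va), (wb, mb, vb) in zip(gmm_a, gmm_b):
        w = (1 - t) * wa + t * wb                                 # weights: linear
        m = (1 - t) * ma + t * mb                                 # means: linear
        v = math.exp((1 - t) * math.log(va) + t * math.log(vb))   # variances: log-linear
        out.append((w, m, v))
    total = sum(w for w, _, _ in out)
    return [(w / total, m, v) for w, m, v in out]                 # renormalize

a = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
b = [(0.3, 1.0, 2.0), (0.7, 5.0, 0.5)]
mid = interpolate_gmm(a, b, 0.5)
```

The interpolant has exactly K = 2 components and weights summing to one, which is the "first order expectation" the paper states; the hard part the paper addresses is doing this while respecting the underlying geometry of the PDF space.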
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Schaap, Pauline; Barrantes, Israel; Minx, Pat; Sasaki, Narie; Anderson, Roger W.; Bénard, Marianne; Biggar, Kyle K.; Buchler, Nicolas E.; Bundschuh, Ralf; Chen, Xiao; Fronick, Catrina; Fulton, Lucinda; Golderer, Georg; Jahn, Niels; Knoop, Volker; Landweber, Laura F.; Maric, Chrystelle; Miller, Dennis; Noegel, Angelika A.; Peace, Rob; Pierron, Gérard; Sasaki, Taeko; Schallenberg-Rüdinger, Mareike; Schleicher, Michael; Singh, Reema; Spaller, Thomas; Storey, Kenneth B.; Suzuki, Takamasa; Tomlinson, Chad; Tyson, John J.; Warren, Wesley C.; Werner, Ernst R.; Werner-Felmayer, Gabriele; Wilson, Richard K.; Winckler, Thomas; Gott, Jonatha M.; Glöckner, Gernot; Marwan, Wolfgang
2016-01-01
Physarum polycephalum is a well-studied microbial eukaryote with unique experimental attributes relative to other experimental model organisms. It has a sophisticated life cycle with several distinct stages including amoebal, flagellated, and plasmodial cells. It is unusual in switching between open and closed mitosis according to specific life-cycle stages. Here we present the analysis of the genome of this enigmatic and important model organism and compare it with closely related species. The genome is littered with simple and complex repeats and the coding regions are frequently interrupted by introns with a mean size of 100 bases. Complemented with extensive transcriptome data, we define approximately 31,000 gene loci, providing unexpected insights into early eukaryote evolution. We describe extensive use of histidine kinase-based two-component systems and tyrosine kinase signaling, the presence of bacterial and plant type photoreceptors (phytochromes, cryptochrome, and phototropin) and of plant-type pentatricopeptide repeat proteins, as well as metabolic pathways, and a cell cycle control system typically found in more complex eukaryotes. Our analysis characterizes P. polycephalum as a prototypical eukaryote with features attributed to the last common ancestor of Amorphea, that is, the Amoebozoa and Opisthokonts. Specifically, the presence of tyrosine kinases in Acanthamoeba and Physarum as representatives of two distantly related subdivisions of Amoebozoa argues against the later emergence of tyrosine kinase signaling in the opisthokont lineage and also against the acquisition by horizontal gene transfer. PMID:26615215
Determination of the robot location in a workcell of a flexible production line
NASA Astrophysics Data System (ADS)
Banas, W.; Sekala, A.; Gwiazda, A.; Foit, K.; Hryniewicz, P.; Kost, G.
2015-11-01
Locating the components of a manufacturing cell appears to be an easy task, but even when constructing a manufacturing cell in which the production of a single, simple component is planned it is necessary, among other things, to check access to all required points. The robot in a manufacturing cell must handle both the machine tools located in the cell and the parts stores (input and output). It also handles transport equipment and auxiliary stands. Sometimes, during the design phase, changes of robot location are necessary due to limited access to its required working positions. Often successive changes of the manufacturing cell configuration are realized; they occur at the stages of visualization and simulation of robot program functioning. In special cases it is even necessary to replace the planned robot with a robot of greater range or of a different configuration type. This article presents and describes the parameters and components which should be taken into consideration when designing robotised manufacturing cells. The main idea is based on the application of advanced engineering programs to aid the design process. Using this approach it is possible to present the design process of an exemplary flexible manufacturing cell intended to manufacture two similar components. The proposed model of such a manufacturing cell can easily be extended to a manufacturing cell model in which it is possible to produce components belonging to one technological group of a chosen similarity level. In particular, during the design process one should take into consideration components which limit the possibilities for robot placement. It is also important to show the method of determining the best location for the robot base. The presented design method can also support the design of other robotised manufacturing cells.
NASA Technical Reports Server (NTRS)
Soman, Vishwas V.; Crosson, William L.; Laymon, Charles; Tsegaye, Teferi
1998-01-01
Soil moisture is an important component of analysis in many Earth science disciplines. Soil moisture information can be obtained either by using microwave remote sensing or by using a hydrologic model. In this study, we combined these two approaches to increase the accuracy of profile soil moisture estimation. A hydrologic model was used to analyze the errors in the estimation of soil moisture using the data collected during the Huntsville '96 microwave remote sensing experiment in Huntsville, Alabama. Root mean square errors (RMSE) in soil moisture estimation increase by 22% when the model input interval increases from 6 hr to 12 hr for the grass-covered plot. RMSEs were reduced for a given model time step by 20-50% when model soil moisture estimates were updated using remotely sensed data. This methodology has the potential to be employed in soil moisture estimation using rainfall data collected by a space-borne sensor, such as the Tropical Rainfall Measuring Mission (TRMM) satellite, if remotely sensed data are available to update the model estimates.
Roshan, Abdul-Rahman A; Gad, Haidy A; El-Ahmady, Sherweit H; Khanbash, Mohamed S; Abou-Shoer, Mohamed I; Al-Azizi, Mohamed M
2013-08-14
This work describes a simple model developed for the authentication of monofloral Yemeni Sidr honey using UV spectroscopy together with chemometric techniques of hierarchical cluster analysis (HCA), principal component analysis (PCA), and soft independent modeling of class analogy (SIMCA). The model was constructed using 13 genuine Sidr honey samples and challenged with 25 honey samples of different botanical origins. HCA and PCA were successfully able to present a preliminary clustering pattern to segregate the genuine Sidr samples from the lower priced local polyfloral and non-Sidr samples. The SIMCA model presented a clear demarcation of the samples and was used to identify genuine Sidr honey samples as well as detect admixture with lower priced polyfloral honey by detection limits >10%. The constructed model presents a simple and efficient method of analysis and may serve as a basis for the authentication of other honey types worldwide.
A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs
NASA Astrophysics Data System (ADS)
Bouneb, I.; Kerrour, F.
2016-03-01
Semiconductor components have become the privileged support of information and communication, particularly thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is no longer enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunications systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are High Electron Mobility Transistors (HEMTs) on III-V substrates. This work contributes to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach to charge control in the quantum well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the Fermi-level variation with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression of ΔEc.
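The basic linear charge-control law underlying such ns-Vg models can be sketched as follows (all numeric parameter values are illustrative placeholders, not the values derived in the paper):

```python
Q = 1.602e-19      # elementary charge (C)
EPS0 = 8.854e-12   # vacuum permittivity (F/m)

def sheet_density(vg, voff=-0.8, d=30e-9, delta_d=8e-9, eps_r=12.2):
    """Basic HEMT charge-control law: above threshold the 2-DEG sheet
    density grows linearly with gate voltage,
        ns = eps * (Vg - Voff) / (q * (d + delta_d)),
    where d is the barrier thickness and delta_d the effective average
    distance of the 2-DEG from the interface (the quantity the paper
    allows to vary). Returns ns in m^-2."""
    if vg <= voff:
        return 0.0
    return eps_r * EPS0 * (vg - voff) / (Q * (d + delta_d))
```

With these placeholder values, ns at Vg = 0 V is on the order of 10^16 m^-2 (10^12 cm^-2), a typical 2-DEG density; refining how delta_d and the Fermi level vary with ns is what distinguishes the model proposed in the paper from this constant-capacitance sketch.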
Partitioning autotrophic and heterotrophic respiration at Howland Forest
NASA Astrophysics Data System (ADS)
Carbone, Mariah; Hollinger, Dave; Davidson, Eric; Savage, Kathleen; Hughes, Holly
2015-04-01
Terrestrial ecosystem respiration is the combined flux of CO2 to the atmosphere from above- and below-ground, plant (autotrophic) and microbial (heterotrophic) sources. Flux measurements alone (e.g., from eddy covariance towers or soil chambers) cannot distinguish the contributions from these sources, which may change seasonally and respond differently to temperature and moisture. The development of improved process-based models that can predict how plants and microbes respond to changing environmental conditions (on seasonal, interannual, or decadal timescales) requires data from field observations and experiments to distinguish among these respiration sources. We tested the viability of partitioning soil and ecosystem respiration into autotrophic and heterotrophic components with different approaches at the Howland Forest in central Maine, USA. These include an experimental manipulation using the classic root trenching approach and targeted ∆14CO2 measurements. For the isotopic measurements, we used a two-end-member mass balance approach to determine the fraction of soil respiration from autotrophic and heterotrophic sources. When summed over the course of the growing season, the trenched chamber flux (heterotrophic) accounted for 53 ± 2% of the total control chamber flux. Over the four different 14C sampling periods, the heterotrophic component ranged from 35-55% and the autotrophic component from 45-65% of the total flux. Next steps will include assessing the value of the flux partitioning for constraining a simple ecosystem model using a model-data fusion approach to reduce uncertainties in estimates of NPP and simulation of future soil C stocks and fluxes.
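The two-end-member mass balance reduces to one line of algebra (the ∆14C end-member values below are invented for illustration, not the Howland measurements):

```python
def autotrophic_fraction(delta_total, delta_auto, delta_hetero):
    """Two-end-member mass balance: solve
        delta_total = f * delta_auto + (1 - f) * delta_hetero
    for f, the autotrophic fraction of total soil respiration."""
    return (delta_total - delta_hetero) / (delta_auto - delta_hetero)
```

For example, with hypothetical end members of 40‰ (autotrophic, recent photosynthate) and 90‰ (heterotrophic, older soil carbon) and an observed soil-respiration value of 65‰, the autotrophic fraction is 0.5. The approach requires the two end members to be well separated relative to measurement uncertainty.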
On the Longitudinal Component of Paraxial Fields
ERIC Educational Resources Information Center
Carnicer, Artur; Juvells, Ignasi; Maluenda, David; Martinez-Herrero, Rosario; Mejias, Pedro M.
2012-01-01
The analysis of paraxial Gaussian beams features in most undergraduate courses in laser physics, advanced optics and photonics. These beams provide a simple model of the field generated in the resonant cavities of lasers, thus constituting a basic element for understanding laser theory. Usually, uniformly polarized beams are considered in the…
Neuromorphic Computing: A Post-Moore's Law Complementary Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuman, Catherine D; Birdwell, John Douglas; Dean, Mark
2016-01-01
We describe our approach to post-Moore's law computing with three neuromorphic computing models that share a RISC philosophy, featuring simple components combined with a flexible and programmable structure. We envision these to be leveraged as co-processors, or as data filters to provide in situ data analysis in supercomputing environments.
Validation analysis of probabilistic models of dietary exposure to food additives.
Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J
2003-10-01
The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2(3)) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
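The additive-intake model structure can be sketched as a small Monte Carlo (distributions and numbers are invented for illustration; the paper's models use lognormal inputs and brand-level reference data):

```python
import random

def simulate_intake(foods, rng):
    """One simulated intake: sum over food groups of
    food intake * (additive present?) * additive concentration,
    with concentrations capped at the maximum permitted level (MPL)."""
    total = 0.0
    for mean_intake, p_present, mean_conc, mpl in foods:
        if rng.random() < p_present:  # probability additive is present
            conc = min(rng.expovariate(1 / mean_conc), mpl)
            total += mean_intake * conc
    return total

# (mean food intake kg/day, P(present), mean concentration mg/kg, MPL mg/kg)
FOODS = [(0.2, 0.4, 30.0, 100.0),
         (0.5, 0.1, 10.0, 50.0)]
rng = random.Random(42)
intakes = [simulate_intake(FOODS, rng) for _ in range(10000)]

# Traditional conservative point estimate: additive at MPL in all foods.
conservative = sum(m * mpl for m, _, _, mpl in FOODS)
```

Every simulated intake is bounded above by the conservative MPL-based point estimate, which is the upper edge of the "valid region" the authors describe; whether the distribution also stays above the true intakes is what their validation tests.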
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadoura, Ahmad; Sun, Shuyu; Siripatana, Adil
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
PKS 2155-304: relativistically beamed synchrotron radiation from a BL Lac object
NASA Technical Reports Server (NTRS)
Urry, C. M.; Mushotzky, R. F.
1981-01-01
The newly discovered BL Lacertae object, PKS 2155-304, was observed with the medium and high energy detectors of the HEAO-1 A2 experiment. The variability by a factor of two in less than a day reported by Snyder et al. (1979) is confirmed. Two spectra, obtained a year apart while the satellite was in scanning mode, are well fit by simple power laws with energy spectral index α₁ ≈ 1.4. A third spectrum, of higher statistical quality, obtained while the satellite was pointed at the source, has two components. An acceptable fit was obtained using a two power law model, with indices α₁ = 2.0 (+1.2, -0.6) and α₂ = -1.5 (+1.5, -2.3). An interpretation of the overall spectrum from radio through X-rays in terms of a synchrotron self-Compton model gives a good description of the data if allowance is made for relativistic beaming. Thus, from a consideration of the spectrum combined with an estimate of the size of the source, the presence of jets is inferred without their direct observation.
Additivity of nonlinear biomass equations
Bernard R. Parresol
2001-01-01
Two procedures that guarantee the property of additivity among the components of tree biomass and total tree biomass utilizing nonlinear functions are developed. Procedure 1 is a simple combination approach, and procedure 2 is based on nonlinear joint-generalized regression (nonlinear seemingly unrelated regressions) with parameter restrictions. Statistical theory is...
A model for intergalactic filaments and galaxy formation during the first gigayear
NASA Astrophysics Data System (ADS)
Harford, A. Gayler; Hamilton, Andrew J. S.
2017-11-01
We propose a physically based, analytic model for intergalactic filaments during the first gigayear of the universe. The structure of a filament is based upon a gravitationally bound, isothermal cylinder of gas. The model successfully predicts for a cosmological simulation the total mass per unit length of a filament (dark matter plus gas) based solely upon the sound speed of the gas component, contrary to the expectation for collisionless dark matter aggregation. In the model, the gas, through its hydrodynamic properties, plays a key role in filament structure rather than being a passive passenger in a preformed dark matter potential. The dark matter of a galaxy follows the classic equation of collapse of a spherically symmetric overdensity in an expanding universe. In contrast, the gas usually collapses more slowly. The relative rates of collapse of these two components for individual galaxies can explain the varying baryon deficits of the galaxies under the assumption that matter moves along a single filament passing through the galaxy centre, rather than by spherical accretion. The difference in behaviour of the dark matter and gas can be simply and plausibly related to the model. The range of galaxies studied includes that of the so-called too big to fail galaxies, which are thought to be problematic for the standard Λ cold dark matter model of the universe. The isothermal-cylinder model suggests a simple explanation for why these galaxies are, unaccountably, missing from the night sky.
Gebauer, Petr; Malá, Zdena; Boček, Petr
2014-03-01
This contribution is the third part of a project on strategies used in the selection and tuning of electrolyte systems for anionic isotachophoresis (ITP) with ESI-MS detection. The strategy presented here is based on the creation of self-maintained ITP subsystems in moving-boundary systems and describes two new principal approaches offering physical separation of analyte zones from their common ITP stack and/or simultaneous selective stacking of two different analyte groups. Both strategic directions are based on extending the number of components forming the electrolyte system by adding a third suitable anion. The first method is the application of the spacer technique to moving-boundary anionic ITP systems; the second is a technique utilizing a moving-boundary ITP system in which two ITP subsystems exist and move with mutually different velocities. It is essential for ESI detection that both methods can be based on electrolyte systems containing only a few simple chemicals, such as simple volatile organic acids (formic and acetic) and their ammonium salts. The properties of both techniques are defined theoretically and discussed from the viewpoint of their applicability to trace analysis by ITP-ESI-MS. Examples of system design for selected model separations of preservatives and pharmaceuticals illustrate the validity of the theoretical model and the application potential of the proposed techniques by both computer simulations and experiments. Both new methods enhance the application range of ITP-MS and may be beneficial particularly for complex multicomponent samples or for analytes with identical molecular mass.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Foreground Emission
NASA Technical Reports Server (NTRS)
Gold, B.; Bennett, C.L.; Larson, D.; Hill, R.S.; Odegard, N.; Weiland, J.L.; Hinshaw, G.; Kogut, A.; Wollack, E.; Page, L.;
2008-01-01
We present a new estimate of foreground emission in the WMAP data, using a Markov chain Monte Carlo (MCMC) method. The new technique delivers maps of each foreground component for a variety of foreground models, uncertainty estimates for each foreground component, and an overall goodness-of-fit measurement. The resulting foreground maps are in broad agreement with those from previous techniques used both within the collaboration and by other authors. We find that for WMAP data, a simple model with power-law synchrotron, free-free, and thermal dust components fits 90% of the sky with a reduced χ²_ν of 1.14. However, the model does not work well inside the Galactic plane. The addition of either synchrotron steepening or a modified spinning dust model improves the fit. This component may account for up to 14% of the total flux at Ka-band (33 GHz). We find no evidence for foreground contamination of the CMB temperature map in the 85% of the sky used for cosmological analysis.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of the genetic (heritable) component versus the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left-truncated and right-censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated with survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teitelbaum, Lawrence Paul
1992-04-01
We have measured the transverse momentum spectra 1/p_T dN/dp_T and rapidity distributions dN/dy of negatively charged hadrons and protons for central ³²S + ³²S interactions at 200 GeV/nucleon incident energy. The negative hadron dN/dy distribution is too broad to be accounted for by thermal models which demand isotropic particle emission. It is compatible with models which emphasize longitudinal dynamics, either by a particle production mechanism, as in the Lund fragmentation model, or by introducing one-dimensional hydrodynamic expansion, as in the Landau model. The proton dN/dy distribution, although showing no evidence for a peak in the target fragmentation region, exhibits limited nuclear stopping power. We estimate the mean rapidity shift of participant target protons to be Δy ~ 1.5, greater than observed for pp collisions, less than measured in central pA collisions, and much less than would be observed for a single equilibrated fireball at midrapidity. Both the negative hadron and proton dN/dy distributions can be fit by a symmetric Landau two-fireball model. Although the spectrum possesses a two-component structure, a comparison to pp data at comparable center-of-mass energy shows no evidence for enhanced production at low p_T. The two-component structure can be explained by a thermal and chemical equilibrium model which takes into account the kinematics of resonance decay. Using an expression motivated by longitudinal expansion we find the same temperature for both the protons and negative hadrons at freezeout, T_f ~ 170 MeV. We conclude that the charged particle spectra of negative hadrons and protons can be accommodated in a simple collision picture of limited nuclear stopping, evolution through a state of thermal equilibrium, followed by longitudinal hydrodynamic expansion until freezeout.
Visual-Vestibular Responses During Space Flight
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Kozlovskaya, I. B.; Paloski, W. H.
1999-01-01
Given the documented disruptions that occur in spatial orientation during space flight and the putative sensory-motor information underlying eye and head spatial coding, the primary purpose of this paper is to examine components of the target acquisition system in subjects free to make head and eye movements in three dimensional space both during and following adaptation to long duration space flight. It is also our intention to suggest a simple model of adaptation that has components in common with cerebellar disorders whose neurobiological substrate has been identified.
Structured functional additive regression in reproducing kernel Hilbert spaces.
Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen
2014-06-01
Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of reproducing kernel Hilbert spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates the implementation and the theoretical analysis. Selection and estimation are achieved by penalized least squares using a penalty which encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.
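Sparsity-encouraging penalties of the kind described above are typically operationalized by a soft-thresholding step inside the penalized least-squares solver. A minimal, generic illustration (this is the standard L1 proximal operator, not the paper's exact estimator):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: shrink z toward zero by lam, clip at zero.
    This is the proximal map of the L1 penalty used in sparse regression."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical coefficients on functional principal component scores:
# components whose coefficients shrink exactly to zero are deselected.
coeffs = [2.0, -0.25, 0.1, -1.75]
shrunk = [soft_threshold(c, 0.5) for c in coeffs]
selected = [c for c in shrunk if c != 0.0]
```

In the FAM setting the same shrinkage would act on groups of basis coefficients per additive component rather than on single scalars.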
Configuration and Assessment of the GISS ModelE2 Contributions to the CMIP5 Archive
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Kelley, Max; Nazarenko, Larissa; Ruedy, Reto; Russell, Gary L.; Aleinov, Igor; Bauer, Mike; Bauer, Susanne E.; Bhat, Maharaj K.; Bleck, Rainer;
2014-01-01
We present a description of the ModelE2 version of the Goddard Institute for Space Studies (GISS) General Circulation Model (GCM) and the configurations used in the simulations performed for the Coupled Model Intercomparison Project Phase 5 (CMIP5). We use six variations related to the treatment of atmospheric composition, the calculation of aerosol indirect effects, and the ocean model component. Specifically, we test the difference between atmospheric models that have noninteractive composition, where radiatively important aerosols and ozone are prescribed from precomputed decadal averages, and interactive versions where atmospheric chemistry and aerosols are calculated given decadally varying emissions. The impact of the first aerosol indirect effect on clouds is either specified using a simple tuning, or parameterized using a cloud microphysics scheme. We also use two dynamic ocean components, the Russell model and the HYbrid Coordinate Ocean Model (HYCOM), which differ significantly in their basic formulations and grids. Results are presented for the climatological means over the satellite era (1980-2004), taken from transient simulations starting from the preindustrial (1850) state and driven by estimates of the appropriate forcings over the 20th century. Differences in base climate and variability related to the choice of ocean model are large, indicating an important structural uncertainty. The impact of interactive atmospheric composition on the climatology is relatively small except in regions such as the lower stratosphere, where ozone plays an important role, and the tropics, where aerosol changes affect the hydrological cycle and cloud cover. While key improvements over previous versions of the model are evident, these are not uniform across all metrics.
NASA Astrophysics Data System (ADS)
Peschmann, K. R.; Parker, D. L.; Smith, V.
1982-11-01
A large number of different CT scanner models have been developed in the past ten years, meeting ever higher standards of performance. From the beginning they have remained comparatively expensive pieces of equipment, due not only to their technical complexity but also to the difficulty of assessing "true" specifications (avoiding "overdesign"). Our aim has been to provide, for Radiation Therapy Treatment Planning, a low cost CT scanner system featuring large freedom in patient positioning. We have taken advantage of the concurrent, tremendously increased amount of knowledge and experience in the technical area of CT. By way of extensive computer simulations we gained confidence that an inexpensive C-arm simulator gantry and a simple one phase-two pulse generator in connection with a standard x-ray tube could be used, without sacrificing image quality. These components have been complemented by a commercial high precision shaft encoder, a simple and effective fan beam collimator, a high precision, high efficiency, luminescence crystal-silicon photodiode detector with 256 channels, low noise electronic preamplifier and sampling filter stages, a simplified data acquisition system furnished by Toshiba/Analogic, and an LSI 11/23 microcomputer plus data storage disk, as well as various smaller interfaces linking the electrical components. The quality of CT scan pictures of phantoms, performed by the end of last year, confirmed that this simple approach is working well. As a next step we intend to upgrade this system with an array processor in order to shorten reconstruction time to one minute per slice. We estimate that the system including this processor could be manufactured for a selling price of $210,000.
Bumblebees minimize control challenges by combining active and passive modes in unsteady winds
NASA Astrophysics Data System (ADS)
Ravi, Sridhar; Kolomenskiy, Dmitry; Engels, Thomas; Schneider, Kai; Wang, Chun; Sesterhenn, Jörn; Liu, Hao
2016-10-01
The natural wind environment that volant insects encounter is unsteady and highly complex, posing significant flight-control and stability challenges. It is critical to understand the strategies insects employ to safely navigate in natural environments. We combined experiments on free flying bumblebees with high-fidelity numerical simulations and lower-order modeling to identify the mechanics that mediate insect flight in unsteady winds. We trained bumblebees to fly upwind towards an artificial flower in a wind tunnel under steady wind and in a von Kármán street formed in the wake of a cylinder. Analysis revealed that at lower frequencies in both steady and unsteady winds the bees mediated lateral movement with body roll - typical casting motion. Numerical simulations of a bumblebee in similar conditions permitted the separation of the passive and active components of the flight trajectories. Consequently, we derived simple mathematical models that describe these two motion components. Comparison between the free-flying live and modeled bees revealed a novel mechanism that enables bees to passively ride out high-frequency perturbations while performing active maneuvers at lower frequencies. The capacity of maintaining stability by combining passive and active modes at different timescales provides a viable means for animals and machines to tackle the challenges posed by complex airflows.
NASA Astrophysics Data System (ADS)
Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.
2017-12-01
Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition; the melting process itself is then controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and allows different parametrizations of reactions and phase transitions to be integrated: they may be implemented as simple analytical expressions, as look-up tables, or computed by thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we elaborate on the spatial and temporal resolution required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, versus decoupling them and allowing different time step sizes. Beyond that, we expand on the functionality required, from the geodynamics side, for an interface between computational thermodynamics and fluid dynamics models. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the time scales of fluid flow and chemical reactions relative to each other.
Our software provides a framework to integrate thermodynamic models in high resolution, 3d simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.
Trasi, Niraj S; Taylor, Lynne S
2015-08-01
There is increasing interest in formulating combination products that contain two or more drugs. Furthermore, it is also common for different drug products to be taken simultaneously. This raises the possibility of interactions between different drugs that may impact formulation performance. For poorly water-soluble compounds, the supersaturation behavior may be a critical factor in determining the extent of oral absorption. The goal of the current study was to evaluate the maximum achievable supersaturation for several poorly water-soluble compounds alone and in combination. Model compounds included ritonavir, lopinavir, paclitaxel, felodipine, and diclofenac. The "amorphous solubility" of the pure drugs was determined using different techniques, and the change in this solubility was then measured in the presence of differing amounts of a second drug. The results showed that the "amorphous solubility" of each component in aqueous solution is substantially decreased by the second component, as long as the two drugs are miscible in the amorphous state. A simple thermodynamic model could be used to predict the changes in solubility as a function of composition. This information is of great value when developing co-amorphous or other supersaturating formulations and should contribute to a broader understanding of drug-drug physicochemical interactions in in vitro assays as well as in the gastrointestinal tract.
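The composition dependence described above can be sketched with a simple mixing model. This is an assumption-laden illustration (ideal mixing with an optional Flory-Huggins-style interaction parameter chi), not necessarily the paper's exact thermodynamic model:

```python
import math

def mixture_amorphous_solubility(s_pure, x, chi=0.0):
    """Sketch of an amorphous-solubility estimate for a drug in a miscible
    amorphous drug-drug mixture.

    Ideal mixing (chi = 0) gives S = x * S_pure: the maximum achievable
    solubility scales with the drug's mole fraction x in the mixture.
    A nonzero chi adds a hypothetical non-ideality correction.
    """
    return x * math.exp(chi * (1.0 - x) ** 2) * s_pure
```

Under ideal mixing, a 50:50 miscible mixture roughly halves each component's amorphous solubility, consistent with the qualitative trend reported above.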
A dynamically minimalist cognitive explanation of musical preference: is familiarity everything?
Schubert, Emery; Hargreaves, David J; North, Adrian C
2014-01-01
This paper examines the idea that attraction to music is generated at a cognitive level through the formation and activation of networks of interlinked "nodes." Although the networks involved are vast, the basic mechanism for activating the links is relatively simple. Two comprehensive cognitive-behavioral models of musical engagement are examined with the aim of identifying the underlying cognitive mechanisms and processes involved in musical experience. A "dynamical minimalism" approach (after Nowak, 2004) is applied to re-interpret musical engagement (listening, performing, composing, or imagining any of these) and to revise the latest version of the reciprocal-feedback model (RFM) of music processing. Specifically, a single cognitive mechanism of "spreading activation" through previously associated networks is proposed as a pleasurable outcome of musical engagement. This mechanism underlies the dynamic interaction of the various components of the RFM, and can thereby explain the generation of positive affects in the listener's musical experience. This includes determinants of that experience stemming from the characteristics of the individual engaging in the musical activity (whether listener, composer, improviser, or performer), the situation and contexts (e.g., social factors), and the music (e.g., genre, structural features). The theory calls for new directions for future research, two being (1) further investigation of the components of the RFM to better understand musical experience and (2) more rigorous scrutiny of common findings about the salience of familiarity in musical experience and preference.
Transport of Solar Wind Fluctuations: A Two-Component Model
NASA Technical Reports Server (NTRS)
Oughton, S.; Matthaeus, W. H.; Smith, C. W.; Breech, B.; Isenberg, P. A.
2011-01-01
We present a new model for the transport of solar wind fluctuations which treats them as two interacting incompressible components: quasi-two-dimensional turbulence and a wave-like piece. Quantities solved for include the energy, cross helicity, and characteristic transverse length scale of each component, plus the proton temperature. The development of the model is outlined and numerical solutions are compared with spacecraft observations. Compared to previous single-component models, this new model incorporates a more physically realistic treatment of fluctuations induced by pickup ions and yields improved agreement with observed values of the correlation length, while maintaining good observational accord with the energy, cross helicity, and temperature.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the most simple linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
Applicability of ASHRAE clear-sky model based on solar-radiation measurements in Saudi Arabia
NASA Astrophysics Data System (ADS)
Abouhashish, Mohamed
2017-06-01
The constants of the ASHRAE clear-sky model predict high values of the hourly beam radiation and very low values of the hourly diffuse radiation when used for locations in Saudi Arabia. Eight measurement stations in different locations are used to obtain new clearness factors for the model. The procedure depends on the comparison of monthly direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) between the measured and the calculated values. Two factors, CN_b and CN_d, are obtained for every month to adjust the calculated clear-sky radiation in order to account for the effects of local weather conditions. A simple and practical simulation model for solar geometry is designed using the Microsoft Visual Basic platform; the model simulates the solar angles and radiation components according to the ASHRAE model. The comparison of the calculated data with the first year of measurements indicates that the attenuation of site clearness varies across the locations and from month to month, showing the clearest skies in the north and northwestern parts of the Kingdom, especially during the summer months.
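The underlying ASHRAE clear-sky relations, with monthly adjustment factors of the kind described above, can be sketched as follows. The constants in the usage line are illustrative ASHRAE-style values, not the Saudi-site factors derived in the paper:

```python
import math

def ashrae_clear_sky(beta_deg, A, B, C, cn_b=1.0, cn_d=1.0):
    """ASHRAE clear-sky irradiance sketch with local adjustment factors.

    beta_deg   : solar altitude angle (degrees)
    A, B, C    : ASHRAE monthly constants (A in W/m^2)
    cn_b, cn_d : clearness factors for beam and diffuse components
                 (the monthly CN_b, CN_d of the abstract; defaults 1.0)
    Returns (direct normal, diffuse horizontal) irradiance in W/m^2.
    """
    sin_beta = math.sin(math.radians(beta_deg))
    if sin_beta <= 0:          # sun below the horizon
        return 0.0, 0.0
    base = A * math.exp(-B / sin_beta)   # ASHRAE beam irradiance at normal incidence
    return cn_b * base, cn_d * C * base  # adjusted beam and diffuse

dni, dhi = ashrae_clear_sky(90.0, A=1160.0, B=0.18, C=0.14)
```

The monthly CN_b, CN_d values would be fitted so that these calculated components match the station measurements.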
Gas network model allows full reservoir coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Methnani, M.M.
The gas-network flow model (Gasnet), developed for and added to an existing Qatar General Petroleum Corp. (QGPC) in-house reservoir simulator, allows improved modeling of the interaction among the reservoir, wells, and pipeline networks. Gasnet is a three-phase model that is modified to handle gas-condensate systems. The numerical solution is based on a control volume scheme that uses the concept of cells and junctions, whereby pressure and phase densities are defined in cells, while phase flows are defined at junction links. The model features common numerical equations for the reservoir, well, and pipeline components and an efficient state-variable solution method in which all primary variables, including phase flows, are solved directly. Both steady-state and transient flow events can be simulated with the same tool. Three test cases show how the model runs. One simulates flow redistribution in a simple two-branch gas network. The second simulates a horizontal gas well in a waterflooded gas reservoir. The third involves an export gas pipeline coupled to a producing reservoir.
Comparison between two photovoltaic module models based on transistors
NASA Astrophysics Data System (ADS)
Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel
2018-05-01
The main objective of this paper is to verify whether the behavior of an un-shaded photovoltaic (PV) module can be simulated by a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: a Darlington structure in the first model and a voltage regulator with a programmable Zener diode in the second. Specifications are extracted from a real I-V characteristic of a panel and the principal electrical variables are deduced. The two models are expected to match the open circuit voltage, the maximum power point (MPP), and the short circuit current, with realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are carried out to identify the influence of some parameters. In the first model, a parameter that adjusts the current slope on the left side of the MPP proves to also be important for the calculation of the open circuit voltage. Moreover, this model does not allow a complete adjustment of the I-V characteristic, and the MPP moves significantly away from the real value as irradiance increases. The second model, by contrast, seems to have only qualities: the open circuit voltage is easy to calculate, the current slopes are realistic, and good robustness is observed when irradiance variations are simulated by adjusting the short circuit current of the PV module. We have shown that these two simplified models should make simulations of complex PV architectures easier and more reliable, integrating many different devices such as PV modules, other renewable energy sources, and storage capacities coupled in parallel.
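For reference, the I-V behavior that both circuit models aim to reproduce (matching Isc, Voc, and the MPP region) can be captured by the textbook ideal-diode PV approximation. This sketch is an assumption for illustration, not either of the paper's transistor-based circuits, and the parameter values in the test are invented:

```python
import math

def pv_current(v, isc, voc, n_vt):
    """Simplified single-diode PV model:
    I(V) = Isc - I0 * (exp(V / nVt) - 1),
    with the saturation current I0 chosen so that I(Voc) = 0 exactly.

    v    : terminal voltage (V)
    isc  : short circuit current (A)
    voc  : open circuit voltage (V)
    n_vt : ideality factor times thermal voltage, scaled for the module (V)
    """
    i0 = isc / (math.exp(voc / n_vt) - 1.0)
    return isc - i0 * (math.exp(v / n_vt) - 1.0)
```

A circuit model is judged by how closely its simulated I-V curve tracks such a characteristic on both sides of the MPP as irradiance (i.e., Isc) varies.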
Improvement of a 2D numerical model of lava flows
NASA Astrophysics Data System (ADS)
Ishimine, Y.
2013-12-01
I propose an improved procedure that removes an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, J.H., ed., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow-water model combined with a constitutive equation for a Bingham fluid. It is simple but useful because it properly reproduces the distributions of actual lava flows. It is thus regarded as one of the pioneering numerical simulations of lava flows and is still widely used in practical hazard-prediction maps for civil defense officials in Japan. However, the model has an improper dependence of lava flow directions on the orientation of the DEM, because it applies the condition for the lava flow to stop due to yield stress separately along each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure produces a diamond-shaped distribution, as shown in Fig. 1, when a lava flow supplied from a point source on a virtual flat plane is calculated, although the distribution should be circular. To remove this drawback, I propose a modified procedure that uses the absolute value of the yield stress derived from both components of the slope steepness in the two orthogonal directions to assign the stopping condition. This gives a better result, as shown in Fig. 2. Fig. 1. (a) Contour plots calculated with the original model of Ishihara et al. (1990). (b) Contour plots calculated with the proposed model.
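The difference between the two stopping criteria can be made concrete. The parameter values below are illustrative, and the driving-stress expression rho*g*h*slope is a standard thin-layer approximation rather than the paper's exact discretization:

```python
import math

def stops_per_axis(sx, sy, h, tau_y, rho=2500.0, g=9.8):
    """Per-axis criterion (Ishihara et al. 1990 style): flow along each grid
    axis stops independently when the driving stress along that axis falls
    below the yield stress tau_y. This couples the result to grid orientation
    and yields diamond-shaped spreading on a flat plane."""
    stop_x = rho * g * h * abs(sx) <= tau_y
    stop_y = rho * g * h * abs(sy) <= tau_y
    return stop_x, stop_y

def stops_isotropic(sx, sy, h, tau_y, rho=2500.0, g=9.8):
    """Proposed criterion: compare the magnitude of the slope vector with the
    yield stress, making the stopping condition independent of DEM orientation."""
    return rho * g * h * math.hypot(sx, sy) <= tau_y
```

For a slope directed diagonally to the grid, the per-axis test can declare the flow stopped in both directions even though the total driving stress still exceeds the yield stress, which is exactly the orientation artifact the modified procedure removes.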
A two-component rain model for the prediction of attenuation statistics
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
Multiphase flow in geometrically simple fracture intersections
Basagaoglu, H.; Meakin, P.; Green, C.T.; Mathew, M.; ,
2006-01-01
A two-dimensional lattice Boltzmann (LB) model with fluid-fluid and solid-fluid interaction potentials was used to study gravity-driven flow in geometrically simple fracture intersections. Simulated scenarios included fluid dripping from a fracture aperture, two-phase flow through intersecting fractures and thin-film flow on smooth and undulating solid surfaces. Qualitative comparisons with recently published experimental findings indicate that for these scenarios the LB model captured the underlying physics reasonably well.
Interactive, process-oriented climate modeling with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2016-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields.
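The mix-and-match process idea can be illustrated with a minimal sketch in plain Python. This is not CLIMLAB's actual API; all class names, parameter names and values below are invented for illustration only.

```python
# Each process computes a tendency for a shared state dictionary, and a
# parent model steps the coupled system forward by summing tendencies.

class Process:
    def tendency(self, state):
        raise NotImplementedError

class ShortwaveAbsorption(Process):
    def __init__(self, forcing_k=2.0):
        self.forcing_k = forcing_k          # hypothetical heating rate [K/yr]
    def tendency(self, state):
        return self.forcing_k

class LongwaveRelaxation(Process):
    def __init__(self, t_eq=288.0, tau=5.0):
        self.t_eq, self.tau = t_eq, tau     # equilibrium temp [K], timescale [yr]
    def tendency(self, state):
        return -(state["T"] - self.t_eq) / self.tau

class CoupledModel:
    """Mix-and-match container: add or remove processes freely."""
    def __init__(self, state, processes):
        self.state, self.processes = state, processes
    def step(self, dt=0.1):
        total = sum(p.tendency(self.state) for p in self.processes)
        self.state["T"] += dt * total

model = CoupledModel({"T": 280.0}, [ShortwaveAbsorption(), LongwaveRelaxation()])
for _ in range(500):
    model.step()
print(round(model.state["T"], 1))   # relaxes toward t_eq + forcing_k * tau
```

Swapping in a different radiation or relaxation process only requires adding another `Process` subclass to the list, which is the pedagogical point of process-oriented design.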
Diffusion models for corona formation in metagabbros from the Western Grenville Province, Canada
NASA Astrophysics Data System (ADS)
Grant, Shona M.
1988-01-01
Metagabbro bodies in the SW Grenville Province display a variety of disequilibrium corona textures between spinel-clouded plagioclase and primary olivine or opaque oxide. Textural evidence favours a single-stage, subsolidus origin for the olivine coronas, and diffusive mass transfer is believed to have been the rate-controlling process. Irreversible thermodynamics has been used to model two different garnet-symplectite-bearing corona sequences in terms of steady-state diffusion. In the models the flux of each component is related to the chemical potential gradients of all diffusing species by the Onsager or L-coefficients for diffusion. These coefficients are analogous to experimentally determined diffusion coefficients (D), but relate the flux of components to chemical potential rather than concentration gradients. The major constraint on the relative values of the Onsager coefficients comes from the observed mole fraction, X, of garnet in the symplectites; in (amph-gt) symplectites X_Gt(Sym) ≈ 0.80, compared with ≈ 0.75 in (cpx-gt) symplectites. Several models using simple oxide components, and two different modifications of the reactant plagioclase composition, give the following qualitative results: the very low mobility of aluminium appears to control the rate of corona formation. Mg and Fe have similar mobility, and Mg can be up to 6-8 times more mobile than sodium. Determination of calcium mobility is problematical because of a proposed interaction with cross-coefficient terms reflecting “uphill” Ca-diffusion, i.e., calcium diffusing up its own chemical potential gradient. If these terms are not introduced, it is difficult to generate the required proportions of garnet in the symplectite. However, at moderate values of the cross-coefficient ratios, Mg can be up to 4-6 times more mobile than calcium (L_MgMg/L_CaCa < 4-6) and calcium must be 3-4 times more mobile than aluminium (L_CaCa/L_AlAl > 3-4).
Modeling Cross-Situational Word–Referent Learning: Prior Questions
Yu, Chen; Smith, Linda B.
2013-01-01
Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
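The "dumb" associative mechanism contrasted above can be sketched as a simple co-occurrence counter. This is a toy instance of the model class discussed, not the article's exact implementation; the lexicon size, trial structure and trial count are invented.

```python
import numpy as np

# Cross-situational word-referent learning by association: accumulate
# word x referent co-occurrence counts over ambiguous trials, then pick
# the highest-count referent for each word.

rng = np.random.default_rng(0)
n_words = 6                           # true lexicon: word i maps to referent i
counts = np.zeros((n_words, n_words))

for _ in range(300):
    words = rng.choice(n_words, size=3, replace=False)  # 3 words per trial
    referents = words                 # each trial shows its words' referents
    for w in words:
        counts[w, referents] += 1.0   # associate w with EVERY referent present

learned = counts.argmax(axis=1)
print(learned)   # correct mapping recovered despite within-trial ambiguity
```

Within any single trial the mapping is ambiguous (three words, three referents), but the diagonal counts grow on every appearance of a word while spurious pairings grow only on chance co-occurrences, so aggregation across trials resolves the ambiguity.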
2013-01-01
Background: Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods: OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape indices (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results: Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40). FPC2 scores, but neither FPC1 scores nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions: FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294
Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B
2013-01-17
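The functional principal component decomposition can be sketched with an SVD of sampled curves. The synthetic "glucose curves" below use invented modes and weights; this is not the study's data or software, only an illustration of how FPC scores and explained variance arise.

```python
import numpy as np

# Toy functional PCA: decompose sampled curves into a mean curve plus
# principal modes via SVD of the centered data matrix.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 5)                 # 5 time points over 2 h
n = 200
level = rng.normal(0.0, 1.0, n)              # mode 1: overall level
shift = rng.normal(0.0, 0.5, n)              # mode 2: crude time-to-peak proxy
base = 5.0 + 2.0 * np.exp(-((t - 0.5) ** 2) / 0.2)
curves = (base
          + level[:, None] * 1.0             # vertical offset
          + shift[:, None] * (t - 1.0))      # tilt standing in for peak shift

centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * s                               # FPC scores, one column per mode
explained = s**2 / (s**2).sum()
print(explained[:2].sum() > 0.99)            # two modes capture ~all variance
```

Because the synthetic data were built from exactly two modes, the first two singular values carry essentially all the variance, and the first FPC score tracks the generating "level" variable, mirroring the FPC1-vs-AUC correspondence reported above.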
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, T.
The exact equivalence between a bad-cavity laser with modulated inversion and a nonlinear oscillator in a Toda potential driven by an external modulation is presented. The dynamical properties of the laser system are investigated in detail by analyzing a Toda oscillator system. The temporal characteristics of the bad-cavity laser under strong modulation are analyzed extensively by numerically investigating the simpler Toda system as a function of two control parameters: the dc component of the population inversion and the modulation amplitude. The system exhibits two kinds of optical chaos: One is the quasiperiodic chaos in the region of the intermediate modulation amplitude and the other is the intermittent kicked chaos in the region of strong modulation and large dc component of the pumping. The former is well described by a one-dimensional discrete map with a singular invariant probability measure. There are two types of onset of the chaos: quasiperiodic instability (continuous path to chaos) and catastrophic crisis (discontinuous path). The period-doubling cascade of bifurcation is also observed. The simple discrete model of the Toda system is presented to obtain analytically the one-dimensional map function and to understand the effect of the asymmetric potential curvature on yielding chaos.
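A driven, damped Toda oscillator of the kind analyzed above can be integrated in a few lines. The equation form x'' + γx' + (e^x − 1) = A·cos(ωt) is a standard Toda-oscillator sketch, and all parameter values are illustrative guesses, not those of the paper.

```python
import numpy as np

# Fourth-order Runge-Kutta integration of a driven, damped Toda oscillator.
# The potential V(x) = exp(x) - x - 1 has a steep exponential wall on the
# right and a soft, nearly linear side on the left (the asymmetry discussed
# in the abstract).

GAMMA, A, OMEGA = 0.1, 1.5, 1.0   # assumed damping, drive amplitude, drive frequency

def deriv(t, y):
    x, v = y
    return np.array([v, -GAMMA * v - (np.exp(x) - 1.0) + A * np.cos(OMEGA * t)])

def rk4(y, t, dt):
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y, dt = np.array([0.0, 0.0]), 0.01
xs = []
for i in range(20000):                # 200 time units
    y = rk4(y, i * dt, dt)
    xs.append(y[0])
xs = np.asarray(xs)
print(np.isfinite(xs).all())          # damped, driven motion stays bounded
```

Sweeping `A` and the dc offset of the drive is how one would map out the parameter regions (quasiperiodic vs. intermittent chaos) described in the abstract.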
NASA Technical Reports Server (NTRS)
Netzer, Hagai; Kaspi, Shai; Behar, Ehud; Brandt, W. N.; Chelouche, Doron; George, Ian M.; Crenshaw, D. Michael; Gabel, Jack R.; Hamann, Frederick W.; George, Steven B.
2003-01-01
We present a detailed analysis of the 900 ks spectrum of NGC 3783 obtained by Chandra in 2000-2001 (Kaspi et al. 2002). We split the data in various ways to look for time-dependent and luminosity-dependent spectral variations. This analysis, the measured equivalent widths of a large number of X-ray lines, and our photoionization calculations lead us to the following conclusions: 1) NGC 3783 fluctuated in luminosity, by a factor of ~1.5, during individual 170 ks observations. The fluctuations were not associated with significant spectral variations. 2) On a longer time scale, of 20-120 days, we discovered two very different spectral shapes, which we denote the high-state and low-state spectra. The observed changes between the two can be described as the appearance and disappearance of a soft continuum component. The spectral variations are not related, in a simple way, to the brightening or fading of the short-wavelength continuum, as observed in other objects. NGC 3783 seems to be the first AGN to show this unusual behavior. 3) The appearance of the soft continuum component is consistent with being the only spectral variation, and there is no need to invoke changes in the absorber's opacity. In particular, all absorption lines with reliable measurements show the same equivalent width, within the observational uncertainties, during high and low states. 4) Photoionization model calculations show that a combination of three ionization components, each split into two kinematic components, explains very well the intensities of almost all absorption lines and the bound-free absorption. The components span a large range of ionization and a total column of about 3 × 10^22 cm^-2. Moreover, all components are thermally stable and are situated on the vertical branch of the stability curve. This means that they are in pressure equilibrium and perhaps occupy the same volume of space. 
This is the first detection of such multi-component equilibrium gas in an AGN. 5) The only real discrepancy between the model and the observations is the wavelength of the iron M-shell UTA feature. This is most likely due to an underestimation of the relevant dielectronic recombination rates; we discuss its possible origin. 6) The lower limit on the distance of the absorbing gas in NGC 3783 is between 0.2 and 3.2 pc, depending on the specific ionization component. The constant-pressure assumption imposes an upper limit of about 25 pc on the distance of the least ionized gas from the central source.
Handbook of Research on Student Engagement
ERIC Educational Resources Information Center
Christenson, Sandra L., Ed.; Reschly, Amy L., Ed.; Wylie, Cathy, Ed.
2012-01-01
For more than two decades, the concept of student engagement has grown from simple attention in class to a construct comprised of cognitive, emotional, and behavioral components that embody and further develop motivation for learning. Similarly, the goals of student engagement have evolved from dropout prevention to improved outcomes for lifelong…
Fourier Analysis and the Rhythm of Conversation.
ERIC Educational Resources Information Center
Dabbs, James M., Jr.
Fourier analysis, a common technique in engineering, breaks down a complex wave form into its simple sine wave components. Communication researchers have recently suggested that this technique may provide an index of the rhythm of conversation, since vocalizing and pausing produce a complex wave form pattern of alternation between two speakers. To…
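The proposed rhythm index can be sketched directly: represent vocalizing and pausing as an on-off waveform and read the dominant alternation rate off its Fourier spectrum. The sampling rate and the 0.1 Hz turn-taking rate below are invented example values.

```python
import numpy as np

# A square-ish talk/pause waveform (1 = talking, 0 = pausing) has a
# spectral peak at the fundamental frequency of speaker alternation.

fs = 10.0                                   # samples per second
t = np.arange(0, 600, 1 / fs)               # 10 minutes of "conversation"
turn_rate = 0.1                             # one full exchange every 10 s
talk = (np.sin(2 * np.pi * turn_rate * t) > 0).astype(float)

spec = np.abs(np.fft.rfft(talk - talk.mean()))   # drop the DC component
freqs = np.fft.rfftfreq(talk.size, d=1 / fs)
dominant = freqs[spec.argmax()]
print(dominant)   # peak sits at the turn-taking rate
```

Real conversational data would show a broader, noisier spectrum, but the location and sharpness of the low-frequency peak are exactly the kind of rhythm index the abstract describes.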
TRACING THE EVOLUTION OF HIGH-REDSHIFT GALAXIES USING STELLAR ABUNDANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crosby, Brian D.; O’Shea, Brian W.; Beers, Timothy C.
2016-03-20
This paper presents the first results from a model for chemical evolution that can be applied to N-body cosmological simulations and quantitatively compared to measured stellar abundances from large astronomical surveys. This model convolves the chemical yield sets from a range of stellar nucleosynthesis calculations (including asymptotic giant branch stars, Type Ia and II supernovae, and stellar wind models) with a user-specified stellar initial mass function (IMF) and metallicity to calculate the time-dependent chemical evolution model for a “simple stellar population” (SSP) of uniform metallicity and formation time. These SSP models are combined with a semianalytic model for galaxy formation and evolution that uses merger trees from N-body cosmological simulations to track several α- and iron-peak elements for the stellar and multiphase interstellar medium components of several thousand galaxies in the early (z ≥ 6) universe. The simulated galaxy population is then quantitatively compared to two complementary data sets of abundances in the Milky Way stellar halo and is capable of reproducing many of the observed abundance trends. The observed abundance ratio distributions are best reproduced with a Chabrier IMF, a chemically enriched star formation efficiency of 0.2, and a redshift of reionization of 7. Many abundances are qualitatively well matched by our model, but our model consistently overpredicts the carbon-enhanced fraction of stars at low metallicities, likely owing to incomplete coverage of Population III stellar yields and supernova models and the lack of dust as a component of our model.
Two simple models of classical heat pumps.
Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek
2007-03-01
Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.
2011-01-01
refinement of the vehicle body structure through quantitative assessment of stiffness and modal parameter changes resulting from modifications to the beam... differential placed on the axle, adjustment of the torque output to the opposite wheel may be required to obtain the correct solution. Thus... represented by simple inertial components with appropriate model connectivity instead to determine the free modal response of powertrain type
Chern-Simons gauge theory on orbifolds: Open strings from three dimensions
NASA Astrophysics Data System (ADS)
Hořava, Petr
1996-12-01
Chern-Simons gauge theory is formulated on three-dimensional Z2 orbifolds. The locus of singular points on a given orbifold is equivalent to a link of Wilson lines. This allows one to reduce any correlation function on orbifolds to a sum of correlation functions in the simpler theory on manifolds. Chern-Simons theory on manifolds is known to be related to two-dimensional (2D) conformal field theory (CFT) on closed-string surfaces; here it is shown that the theory on orbifolds is related to 2D CFT of unoriented closed- and open-string models, i.e. to worldsheet orbifold models. In particular, the boundary components of the worldsheet correspond to the components of the singular locus in the 3D orbifold. This correspondence leads to a simple identification of the open-string spectra, including their Chan-Paton degeneracies, in terms of fusing Wilson lines in the corresponding Chern-Simons theory. The correspondence is studied in detail, and some exactly solvable examples are presented. Some of these examples indicate that it is natural to think of the orbifold group Z2 as a part of the gauge group of the Chern-Simons theory, thus generalizing the standard definition of gauge theories.
Simplified method to solve sound transmission through structures lined with elastic porous material.
Lee, J H; Kim, J
2001-11-01
An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both the solid phase and fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extents, which has the same cross sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.
Balkányi, László
2002-01-01
To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model that avoids the techno-jargon of informatics might be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods lead to a simple but comprehensive mapping in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organizational functionality, (b) organizational structures and (c) information technology. Each of the cube's sides is described according to its nature. This approach makes it possible to define any IS component as a certain point/layer/domain of the cube, and also enables management to label all IS components independently of any supplier(s) and/or any specific platform. The model handles changes in organizational structure, business functionality and the serving info-system independently of each other. Practical applications extend to (a) planning complex new ISs, (b) guiding the development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phases by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.
A simple and low-cost permanent magnet system for NMR.
Chonlathep, K; Sakamoto, T; Sugahara, K; Kondo, Y
2017-02-01
We have developed a simple, easy-to-build, and low-cost magnet system for NMR, whose homogeneity is about 4×10⁻⁴ at 57 mT, using a pair of commercially available ferrite magnets. This homogeneity corresponds to a spectral resolution of about 90 Hz at the hydrogen Larmor frequency of 2.45 MHz. The material cost of this NMR magnet system is little more than $100, and the components can be printed with a 3D printer. Copyright © 2016 Elsevier Inc. All rights reserved.
The declarative/procedural model of lexicon and grammar.
Ullman, M T
2001-01-01
Our use of language depends upon two capacities: a mental lexicon of memorized words and a mental grammar of rules that underlie the sequential and hierarchical composition of lexical forms into predictably structured larger words, phrases, and sentences. The declarative/procedural model posits that the lexicon/grammar distinction in language is tied to the distinction between two well-studied brain memory systems. On this view, the memorization and use of at least simple words (those with noncompositional, that is, arbitrary form-meaning pairings) depends upon an associative memory of distributed representations that is subserved by temporal-lobe circuits previously implicated in the learning and use of fact and event knowledge. This "declarative memory" system appears to be specialized for learning arbitrarily related information (i.e., for associative binding). In contrast, the acquisition and use of grammatical rules that underlie symbol manipulation is subserved by frontal/basal-ganglia circuits previously implicated in the implicit (nonconscious) learning and expression of motor and cognitive "skills" and "habits" (e.g., from simple motor acts to skilled game playing). This "procedural" system may be specialized for computing sequences. This novel view of lexicon and grammar offers an alternative to the two main competing theoretical frameworks. It shares the perspective of traditional dual-mechanism theories in positing that the mental lexicon and a symbol-manipulating mental grammar are subserved by distinct computational components that may be linked to distinct brain structures. However, it diverges from these theories where they assume components dedicated to each of the two language capacities (that is, domain-specific) and in their common assumption that lexical memory is a rote list of items. 
Conversely, while it shares with single-mechanism theories the perspective that the two capacities are subserved by domain-independent computational mechanisms, it diverges from them where they link both capacities to a single associative memory system with broad anatomic distribution. The declarative/procedural model, but neither traditional dual- nor single-mechanism models, predicts double dissociations between lexicon and grammar, with associations among associative memory properties, memorized words and facts, and temporal-lobe structures, and among symbol-manipulation properties, grammatical rule products, motor skills, and frontal/basal-ganglia structures. In order to contrast lexicon and grammar while holding other factors constant, we have focused our investigations of the declarative/procedural model on morphologically complex word forms. Morphological transformations that are (largely) unproductive (e.g., in go-went, solemn-solemnity) are hypothesized to depend upon declarative memory. These have been contrasted with morphological transformations that are fully productive (e.g., in walk-walked, happy-happiness), whose computation is posited to be solely dependent upon grammatical rules subserved by the procedural system. Here evidence is presented from studies that use a range of psycholinguistic and neurolinguistic approaches with children and adults. It is argued that converging evidence from these studies supports the declarative/procedural model of lexicon and grammar.
Major Fault Patterns in Zanjan State of Iran Based on the GECO Global Geoid Model
NASA Astrophysics Data System (ADS)
Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan
2016-04-01
A new Earth gravitational model (GECO), complete to degree 2190, has been developed by incorporating EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data are more sensitive to the long- and medium-wavelength components of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable and higher-degree/order spherical harmonic expansions of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We present the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and used to determine the gravity and gradient components of the gravity field from the global geopotential models (GGMs), followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Strong correlations were also revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical. Based on this information we clearly distinguished two large faults north and south of Zanjan city, and several minor faults were also detected in the study area. Therefore, the same geophysical interpretation can be stated for the gravity gradient components as well. Our mathematical derivations support some of these correlations.
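The kind of correlation reported between gravity anomalies and diagonal gradient-tensor elements can be illustrated with a toy buried point-mass model. The geometry, units and dropped GM constant are arbitrary; this is not the GECO/GGM computation, only a sketch of why the two fields co-vary.

```python
import numpy as np

# Vertical gravity g_z and vertical gravity gradient T_zz of a point mass
# at depth d, evaluated on a surface grid (constants GM dropped throughout).

d = 1.0                                    # source depth
x = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2 + d**2

gz = d / r2**1.5                           # vertical attraction
tzz = (2 * d**2 - X**2 - Y**2) / r2**2.5   # vertical gradient of gz

corr = np.corrcoef(gz.ravel(), tzz.ravel())[0, 1]
print(corr > 0.7)   # the two fields are strongly correlated over the grid
```

Both fields peak over the source and decay away from it, which is the geometric reason diagonal gradient components can stand in for gravity anomalies when tracing density contrasts such as faults.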
NASA Astrophysics Data System (ADS)
Schneider, Sandra; Prijs, Vera F.; Schoonhoven, Ruurd
2003-06-01
Lower sideband distortion product otoacoustic emissions (DPOAEs), measured in the ear canal upon stimulation with two continuous pure tones, are the result of interfering contributions from two different mechanisms, the nonlinear distortion component and the linear reflection component. The two contributors have been shown to have a different amplitude and, in particular, a different phase behavior as a function of the stimulus frequencies. The dominance of either component was investigated in an extensive (f1,f2) area study of DPOAE amplitude and phase in the guinea pig, which allows for both qualitative and quantitative analysis of isophase contours. Making a minimum of additional assumptions, simple relations between the direction of constant phase in the (f1,f2) plane and the group delays in f1-sweep, f2-sweep, and fixed f2/f1 paradigms can be derived, both for distortion (wave-fixed) and reflection (place-fixed) components. The experimental data indicate the presence of both components in the lower sideband DPOAEs, with the reflection component as the dominant contributor for low f2/f1 ratios and the distortion component for intermediate ratios. At high ratios the behavior cannot be explained by dominance of either component.
Method for producing hard-surfaced tools and machine components
McHargue, Carl J.
1985-01-01
In one aspect, the invention comprises a method for producing tools and machine components having superhard crystalline-ceramic work surfaces. Broadly, the method comprises two steps: A tool or machine component having a ceramic near-surface region is mounted in ion-implantation apparatus. The region then is implanted with metal ions to form, in the region, a metastable alloy of the ions and said ceramic. The region containing the alloy is characterized by a significant increase in hardness properties, such as microhardness, fracture-toughness, and/or scratch-resistance. The resulting improved article has good thermal stability at temperatures characteristic of typical tool and machine-component uses. The method is relatively simple and reproducible.
The ASCA PV phase observation of FO Aquarii
NASA Technical Reports Server (NTRS)
Mukai, Koji; Ishida, Manabu; Osborne, Julian P.
1994-01-01
We report on an approximately 1-day Advanced Satellite for Cosmology and Astrophysics (ASCA) observation of the intermediate polar FO Aquarii. We find two distinctive spectral components, one unabsorbed and the other strongly absorbed; the observed 2-10 keV flux severely underestimates the total system luminosity, due to this strong absorption intrinsic to the binary. The absorbed component is dominant in terms of luminosity, and its light curve is simple. The unabsorbed component accounts for approximately 2% of the luminosity, and shows a much more complicated light curve. As the dominant component predominantly shows a sinusoidal modulation at the white dwarf spin period, it provides strong evidence for a partial accretion disk in the system.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
On the Mechanisms for Martensite Formation in YAG Laser Welded Austenitic NiTi
NASA Astrophysics Data System (ADS)
Oliveira, J. P.; Braz Fernandes, F. M.; Miranda, R. M.; Schell, N.
2016-03-01
Extensive work has been reported on the microstructure of laser-welded NiTi alloys either superelastic or with shape memory effect, motivated by the fact that the microstructure affects the functional properties. However, some effects of laser beam/material interaction with these alloys have not yet been discussed. This paper aims to discuss the mechanisms for the occurrence of martensite in the heat-affected zone and in the fusion zone at room temperature, while the base material is fully austenitic. For this purpose, synchrotron radiation was used together with a simple thermal analytic mathematical model. Two distinct mechanisms are proposed for the presence of martensite in different zones of a weld, which affects the mechanical and functional behavior of a welded component.
Self-regulating galaxy formation. Part 1: HII disk and Lyman alpha pressure
NASA Technical Reports Server (NTRS)
Cox, D. P.
1983-01-01
Assuming a simple but physically based prototype for behavior of interstellar material during formation of a disk galaxy, coupled with the lowest order description of infall, a scenario is developed for self-regulated disk galaxy formation. Radiation pressure, particularly that of Lyman alpha (from fluorescence conversion of the Lyman continuum), is an essential component, maintaining an inflated disk and stopping infall when only a small fraction of the overall perturbation has joined the disk. The resulting galaxies consist of a two-dimensional family whose typical scales and surface density are expressible in terms of fundamental constants. The model leads naturally to galaxies with a rich circumgalactic environment and flat rotation curves (but is weak in its analysis of the subsequent evolution of halo material).
Bioturbation, advection, and diffusion of a conserved tracer in a laboratory flume
NASA Astrophysics Data System (ADS)
Work, P. A.; Moore, P. R.; Reible, D. D.
2002-06-01
Laboratory experiments indicating the relative influences of advection, diffusion, and bioturbation on transport of NaCl tracer between a stream and streambed are described. Data were collected in a recirculating flume housing a box filled with test sediments. Peclet numbers ranged from 0 to 1.5. Sediment components included a medium sand (d50 = 0.31 mm), kaolinite, and topsoil. Lumbriculus variegatus were introduced as bioturbators. Conductivity probes were employed to document the flux of the tracer solution out of the bed. Measurements are compared to one-dimensional effective diffusion models assuming one or two horizontal sediment layers. These simple models provide a good indication of tracer half-life in the bed if a suitable effective diffusion coefficient is chosen but underpredict initial flux and overpredict flux at long times. Organism activity was limited to the upper reaches of the sediment test box but eventually exerted a secondary influence on flux from deeper regions.
Measurements and predictions of a liquid spray from an air-assist nozzle
NASA Technical Reports Server (NTRS)
Bulzan, Daniel L.; Levy, Yeshayahou; Aggarwal, Suresh K.; Chitre, Susheel
1991-01-01
Droplet size and gas velocity were measured in a water spray using a two-component Phase/Doppler Particle Analyzer. A complete set of measurements was obtained at axial locations from 5 to 50 cm downstream of the nozzle. The nozzle used was a simple axisymmetric air-assist nozzle. The sprays produced, using the atomizer, were extremely fine. Sauter mean diameters were less than 20 microns at all locations. Measurements were obtained for droplets ranging from 1 to 50 microns. The gas phase was seeded with micron sized droplets, and droplets having diameters of 1.4 microns and less were used to represent gas-phase properties. Measurements were compared with predictions from a multi-phase computer model. Initial conditions for the model were taken from measurements at 5 cm downstream. Predictions for both the gas phase and the droplets showed relatively good agreement with the measurements.
Modes and emergent time scales of embayed beach dynamics
NASA Astrophysics Data System (ADS)
Ratliff, Katherine M.; Murray, A. Brad
2014-10-01
In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
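The mode decomposition described above can be sketched in a few lines of linear algebra. The example below runs PCA (via SVD) on a synthetic shoreline matrix built from an assumed linear "rotation" pattern plus a quadratic "breathing" pattern; all data, sizes, and amplitudes are invented for illustration and are not Coastline Evolution Model output.

```python
import numpy as np

# Synthetic shoreline snapshots: n_t times x n_x alongshore positions.
# The two generating patterns (linear tilt, quadratic curvature) stand in
# for the rotation and breathing modes; amplitudes and noise are invented.
rng = np.random.default_rng(0)
n_t, n_x = 500, 64
x = np.linspace(-1.0, 1.0, n_x)              # alongshore coordinate
tilt = x                                     # "rotation" pattern
parab = x**2 - np.mean(x**2)                 # "breathing" pattern (zero-mean)
shore = (rng.standard_normal(n_t)[:, None] * tilt
         + 0.5 * rng.standard_normal(n_t)[:, None] * parab
         + 0.05 * rng.standard_normal((n_t, n_x)))

# PCA via SVD of the demeaned snapshot matrix: rows of Vt are the spatial
# modes (EOFs), U[:, k] * S[k] their amplitude time series.
anom = shore - shore.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = S**2 / np.sum(S**2)
print(np.round(var_frac[:2], 3))
```

Because the tilt and curvature patterns are orthogonal, the two leading EOFs recover them almost exactly; the wavelet analysis described in the abstract would then be applied to the amplitude time series `U[:, :2] * S[:2]`.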
NASA Astrophysics Data System (ADS)
Siripatana, Chairat; Thongpan, Hathaikarn; Promraksa, Arwut
2017-03-01
This article explores a volumetric approach to formulating differential equations for a class of engineering flow problems involving component transfer within or between two phases. In contrast to the conventional formulation, which is based on linear velocities, this work proposes a slightly different approach based on volumetric flow-rate, which is essentially constant in many industrial processes. In effect, many multi-dimensional flow problems found industrially can be simplified into multi-component or multi-phase, but one-dimensional, flow problems. The formulation is largely generic, covering counter-current, concurrent or batch, fixed-bed and fluidized-bed arrangements. It is also intended for use in start-up, shut-down, control and steady-state simulation. Since many realistic industrial operations are dynamic, with velocity and porosity varying with position, analytical solutions are rare and limited to only very simple cases. We therefore also provide a numerical solution using the Crank-Nicolson finite difference scheme. This solution is inherently stable, as tested against a few cases published in the literature. It is anticipated, however, that for unconfined flow or non-constant flow-rate the traditional formulation should be applied.
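As a toy illustration of the Crank-Nicolson scheme mentioned above, the sketch below solves plain one-dimensional diffusion with zero boundary values and checks the result against the exact decaying-sine solution. The grid sizes, diffusivity `D`, and domain are arbitrary assumptions, not the authors' formulation.

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on [0, 1] with u(0) = u(1) = 0.
# Averaging the explicit and implicit Euler updates makes the scheme
# unconditionally stable and second-order accurate in both dt and dx.
D, T = 1e-3, 50.0
nx, nt = 51, 500
dx, dt = 1.0/(nx - 1), T/nt
r = D*dt/(2*dx**2)

# Tridiagonal system A u^{n+1} = B u^n over the interior nodes.
A = (np.diag(np.full(nx - 2, 1 + 2*r))
     + np.diag(np.full(nx - 3, -r), 1) + np.diag(np.full(nx - 3, -r), -1))
B = (np.diag(np.full(nx - 2, 1 - 2*r))
     + np.diag(np.full(nx - 3, r), 1) + np.diag(np.full(nx - 3, r), -1))

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi*x)                       # initial condition
for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

# Exact solution of the continuous problem for comparison.
exact = np.exp(-D*np.pi**2*T)*np.sin(np.pi*x)
err = float(np.max(np.abs(u - exact)))
print(err)
```

A production solver would use a dedicated tridiagonal (Thomas) solve instead of `np.linalg.solve`, but the structure of the update is the same.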
Macroscopic and microscopic components of exchange-correlation interactions
NASA Astrophysics Data System (ADS)
Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.
2003-11-01
We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green’s functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel fxc that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different fxc can be derived approximatively from the same Bethe-Salpeter equation. These kernels can however yield very similar spectra. (iii) A static fxc, both with or without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.
Liu, Yingyi; Sun, Huilin; Lin, Dan; Li, Hong; Yeung, Susanna Siu-Sze; Wong, Terry Tin-Yau
2018-01-15
Word reading and linguistic comprehension skills are two crucial components in reading comprehension, according to the Simple View of Reading (SVR). Some researchers have posited that a third component should be involved in reading and understanding texts, namely executive function (EF) skills. This study was novel in two ways. Not only did we test EF skills as a predictor of reading comprehension in a non-alphabetic language (i.e., Chinese) to extend the theoretical model of SVR, but we also examined reading comprehension further in kindergarten children (age 5) in Hong Kong, in an attempt to reveal possible early precursors of reading comprehension. A group of 170 K3 kindergarteners was recruited in Hong Kong. Children's word reading was assessed. Their linguistic comprehension was assessed with phonological awareness, verbal short-term memory, and vocabulary knowledge. Using a structured observation task, Head-Toes-Knees-Shoulders (HTKS), we measured their composite scores for EF skills. HTKS performance predicted unique variance in children's Chinese reading comprehension concurrently, beyond word reading and a set of linguistic comprehension skills. The results highlight the important role of EF skills in beginning readers' reading comprehension. © 2018 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Lynn; Perkins, Curtis; Smith, Aaron
The next wave of LED lighting technology is likely to be tunable white lighting (TWL) devices which can adjust the colour of the emitted light between warm white (~ 2700 K) and cool white (~ 6500 K). This type of lighting system uses LED assemblies of two or more colours each controlled by separate driver channels that independently adjust the current levels to achieve the desired lighting colour. Drivers used in TWL devices are inherently more complex than those found in simple SSL devices, due to the number of electrical components in the driver required to achieve this level of control. The reliability of such lighting systems can only be studied using accelerated stress tests (AST) that accelerate the aging process to time frames that can be accommodated in laboratory testing. This paper describes AST methods and findings developed from AST data that provide insights into the lifetime of the main components of one-channel and multi-channel LED devices. The use of AST protocols to confirm product reliability is necessary to ensure that the technology can meet the performance and lifetime requirements of the intended application.
Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick
2012-05-01
When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.
Integrated two-cylinder liquid piston Stirling engine
NASA Astrophysics Data System (ADS)
Yang, Ning; Rickard, Robert; Pluckter, Kevin; Sulchek, Todd
2014-10-01
Heat engines utilizing the Stirling cycle may run on low temperature differentials with the capacity to function at high efficiency due to their near-reversible operation. However, current approaches to building Stirling engines are laborious and costly. Typically the components are assembled by hand and additional components require a corresponding increase in manufacturing complexity, akin to electronics before the integrated circuit. We present a simple and integrated approach to fabricating Stirling engines with precisely designed cylinders. We utilize computer aided design and one-step, planar machining to form all components of the engine. The engine utilizes liquid pistons and displacers to harness useful work from heat absorption and rejection. As a proof of principle of the integrated design, a two-cylinder engine is produced and characterized and liquid pumping is demonstrated.
A commentary on the Atlantic meridional overturning circulation stability in climate models
NASA Astrophysics Data System (ADS)
Gent, Peter R.
2018-02-01
The stability of the Atlantic meridional overturning circulation (AMOC) in ocean models depends quite strongly on the model formulation, especially the vertical mixing, and whether it is coupled to an atmosphere model. A hysteresis loop in AMOC strength with respect to freshwater forcing has been found in several intermediate complexity climate models and in one fully coupled climate model that has very coarse resolution. Over 40% of modern climate models are in a bistable AMOC state according to the very frequently used simple stability criterion which is based solely on the sign of the AMOC freshwater transport across 33° S. In a recent freshwater hosing experiment in a climate model with an eddy-permitting ocean component, the change in the gyre freshwater transport across 33° S is larger than the AMOC freshwater transport change. This casts very strong doubt on the usefulness of this simple AMOC stability criterion. If a climate model uses large surface flux adjustments, then these adjustments can interfere with the atmosphere-ocean feedbacks, and strongly change the AMOC stability properties. AMOC can be shut off for many hundreds of years in modern fully coupled climate models if the hosing or carbon dioxide forcing is strong enough. However, in one climate model the AMOC recovers after between 1000 and 1400 years. Recent 1% increasing carbon dioxide runs and RCP8.5 future scenario runs have shown that the AMOC reduction is smaller using an eddy-resolving ocean component than in the comparable standard 1° ocean climate models.
A Synthetical Two-Component Model with Peakon Solutions: One More Bi-Hamiltonian Case
NASA Astrophysics Data System (ADS)
Mengxia, Zhang; Xiaomin, Yang
2018-05-01
Compatible pairs of Hamiltonian operators for the synthetical two-component model of Xia, Qiao, and Zhou are derived systematically by means of the spectral gradient method. A new two-component system, which is bi-Hamiltonian, is presented. For this new system, the construction of its peakon solutions is considered.
Comparative analysis of model behaviour for flood prediction purposes using Self-Organizing Maps
NASA Astrophysics Data System (ADS)
Herbst, M.; Casper, M. C.; Grundmann, J.; Buchholz, O.
2009-03-01
Distributed watershed models constitute a key component in flood forecasting systems. It is widely recognized that models, because of their structural differences, have varying capabilities of capturing different aspects of the system behaviour equally well. Of course, this also applies to the reproduction of peak discharges by a simulation model, which is of particular interest regarding the flood forecasting problem. In our study we use a Self-Organizing Map (SOM) in combination with index measures which are derived from the flow duration curve in order to examine the conditions under which three different distributed watershed models are capable of reproducing flood events present in the calibration data. These indices are specifically conceptualized to extract data on the peak discharge characteristics of model output time series which are obtained from Monte-Carlo simulations with the distributed watershed models NASIM, LARSIM and WaSIM-ETH. The SOM helps to analyze these data by producing a discretized mapping of their distribution in the index space onto a two-dimensional plane, such that their pattern, and consequently the patterns of model behaviour, can be conveyed in a comprehensive manner. It is demonstrated how the SOM provides useful information about details of model behaviour and also helps identify the model parameters that are relevant for the reproduction of peak discharges and thus for flood prediction problems. It is further shown how the SOM can be used to identify those parameter sets from among the Monte-Carlo data that most closely approximate the peak discharges of a measured time series. The results represent the characteristics of the observed time series with accuracy partly superior to that of the reference simulation obtained by implementing a simple calibration strategy using the global optimization algorithm SCE-UA. The most prominent advantage of using SOM in the context of model analysis is that it allows data from two or more models to be evaluated comparatively. Our results highlight the individuality of the model realizations in terms of the index measures and shed a critical light on the use and implementation of simple and yet too rigorous calibration strategies.
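A minimal SOM of the kind used above can be written in a few lines. The sketch below trains a 5 x 5 map on random three-component "index" vectors; the data, map size, and learning schedule are invented stand-ins for the paper's flow-duration-curve indices.

```python
import numpy as np

# Toy Self-Organizing Map: project 3-component index vectors onto a 5x5 grid.
rng = np.random.default_rng(0)
data = rng.random((200, 3))            # hypothetical per-run index vectors
grid = rng.random((5, 5, 3))           # map of weight (codebook) vectors

n_iter = 2000
for t in range(n_iter):
    v = data[rng.integers(len(data))]
    dist = np.linalg.norm(grid - v, axis=2)
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)  # best-matching unit
    lr = 0.5 * (1.0 - t/n_iter)                             # decaying learning rate
    for i in range(5):
        for j in range(5):
            h = np.exp(-((i - bi)**2 + (j - bj)**2)/2.0)    # neighbourhood kernel
            grid[i, j] += lr * h * (v - grid[i, j])

# Quantization error: mean distance from each vector to its best-matching unit.
qe = float(np.mean([np.min(np.linalg.norm(grid - v, axis=2)) for v in data]))
print(round(qe, 3))
```

Assigning each data vector to its best-matching unit then yields the discretized two-dimensional mapping of the index space that the study uses to compare model behaviour.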
Self-organized huddles of rat pups modeled by simple rules of individual behavior.
Schank, J C; Alberts, J R
1997-11-07
Starting at infancy and continuing throughout adult life, huddling is a major component of the behavioral repertoire of Norway rats (Rattus norvegicus). Huddling behavior maintains the cohesion of litters throughout early life, and in adulthood it remains a consistent feature of social behavior of R. norvegicus. During infancy, rats have severely limited sensorimotor capabilities, and yet they are capable of aggregating and display a form of group regulatory behavior that conserves metabolic effort and augments body temperature regulation. The functions of huddling are generally understood as group adaptations, which are beyond the capabilities of the individual infant rat. We show, however, that huddling as aggregative or cohesive behavior can emerge as a self-organizing process from autonomous individuals following simple sensorimotor rules. In our model, two sets of sensorimotor parameters characterize the topotaxic responses and the dynamics of contact in 7-day-old rats. The first set of parameters comprises conditional probabilities of activity and inactivity given prior activity or inactivity, and the second set comprises preferences for objects in the infant rat's environment. We found that the behavior of the model and of actual rat pups compare very favorably, demonstrating that the aggregative feature of huddling can emerge from the local sensorimotor interactions of individuals, and that complex group regulatory behaviors in infant rats may also emerge from self-organizing processes. We discuss the model and the underlying approach as a paradigm for investigating the dynamics of social interactions, group behavior, and developmental change.
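The first parameter set, conditional probabilities of activity given prior activity or inactivity, amounts to a two-state Markov chain per pup. The sketch below simulates one such chain and compares the time spent active with the chain's stationary value; the probability values are invented for illustration, not taken from the paper.

```python
import random

# Two-state (active/inactive) Markov chain for a single pup.
# p_aa = P(active at t+1 | active at t), p_ia = P(active at t+1 | inactive at t).
def fraction_active(p_aa, p_ia, steps, seed=1):
    rng = random.Random(seed)
    active, n_active = True, 0
    for _ in range(steps):
        active = rng.random() < (p_aa if active else p_ia)
        n_active += active
    return n_active / steps

# Stationary fraction active is p_ia / (1 - p_aa + p_ia) = 0.1/0.3 = 1/3.
frac = fraction_active(p_aa=0.8, p_ia=0.1, steps=100_000)
print(round(frac, 3))
```

A full huddling model would couple many such chains through the second parameter set, the contact preferences that bias where a moving pup comes to rest.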
Towards a voxel-based geographic automata for the simulation of geospatial processes
NASA Astrophysics Data System (ADS)
Jjumba, Anthony; Dragićević, Suzana
2016-07-01
Many geographic processes evolve in a three dimensional space and time continuum. However, when they are represented with the aid of geographic information systems (GIS) or geosimulation models they are modelled in a framework of two-dimensional space with an added temporal component. The objective of this study is to propose the design and implementation of voxel-based automata as a methodological approach for representing spatial processes evolving in the four-dimensional (4D) space-time domain. Similar to geographic automata models which are developed to capture and forecast geospatial processes that change in a two-dimensional spatial framework using cells (raster geospatial data), voxel automata rely on the automata theory and use three-dimensional volumetric units (voxels). Transition rules have been developed to represent various spatial processes which range from the movement of an object in 3D to the diffusion of airborne particles and landslide simulation. In addition, the proposed 4D models demonstrate that complex processes can be readily reproduced from simple transition functions without complex methodological approaches. The voxel-based automata approach provides a unique basis to model geospatial processes in 4D for the purpose of improving representation, analysis and understanding their spatiotemporal dynamics. This study contributes to the advancement of the concepts and framework of 4D GIS.
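A minimal voxel automaton for one of the processes mentioned above, diffusion of airborne particles, can be sketched as a local transition rule on a 3-D array. The grid size, mixing rate, and periodic boundaries are illustrative assumptions, not the paper's transition rules.

```python
import numpy as np

# Each step, a voxel keeps (1 - 6*alpha) of its value and receives alpha
# from each of its six face neighbours: a mass-conserving diffusion rule.
# np.roll gives periodic boundaries, chosen purely for brevity.
def step(grid, alpha=0.1):
    neigh = sum(np.roll(grid, s, axis) for axis in range(3) for s in (1, -1))
    return (1 - 6*alpha)*grid + alpha*neigh

grid = np.zeros((16, 16, 16))
grid[8, 8, 8] = 1.0                     # point release at the centre
for _ in range(50):
    grid = step(grid)

print(round(float(grid.sum()), 6))      # total mass is conserved
```

More elaborate rules (gravity for landslides, wind advection for particles) replace the isotropic neighbour weights while keeping the same voxel-update structure.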
An Economic Assessment of Electronic Text. Report Number Six of the Electronic Text Report Series.
ERIC Educational Resources Information Center
Carey, John
This report outlines economic components that will eventually contribute to large models of electronic text services in institutions of higher education, and provides a simple and practical assessment of economic issues associated with electronic text for college administrators, faculty, and planners. This assessment constitutes a layman's guide…
USDA-ARS?s Scientific Manuscript database
Net Primary Production (NPP), the difference between CO2 fixed by photosynthesis and CO2 lost to autotrophic respiration, is one of the most important components of the carbon cycle. Our goal was to develop a simple regression model to estimate global NPP using climate and land cover data. Approxima...
ERIC Educational Resources Information Center
Leu, Donald J.
The author provides an overview and conceptualization of the total educational and educational facility planning process. The presentation attempts to provide a simple practical outline for local planners, so they may actively engage in relevant educational facility planning, and a common conceptual base, so the various components of Project…
Conceptual Frameworks in Undergraduate Nursing Curricula: Report of a National Survey.
ERIC Educational Resources Information Center
McEwen, Melanie; Brown, Sandra C.
2002-01-01
Responses from 300 accredited nursing schools indicated that they used eclectic conceptual frameworks for curriculum; the most common component was the nursing process. Associate degree programs were more likely to use simple-to-complex organization. Diploma programs were more likely to use the medical model than baccalaureate programs. Frameworks…
Regional evaluation of evapotranspiration in the Everglades
German, Edward R.
1996-01-01
Understanding the water budget of the Everglades system is crucial to the success of restoration and management strategies. Although the water budget is simple in concept, it is difficult to assess quantitatively. Models used to simulate changes in water levels and vegetation resulting from management strategies need to accurately simulate all components of the water budget.
Data Acquisition Programming (LabVIEW): An Aid to Teaching Instrumental Analytical Chemistry.
ERIC Educational Resources Information Center
Gostowski, Rudy
A course was developed at Austin Peay State University (Tennessee) which offered an opportunity for hands-on experience with the essential components of modern analytical instruments. The course aimed to provide college students with the skills necessary to construct a simple model instrument, including the design and fabrication of electronic…
Modeling the Compact Disc Read System in Lab
ERIC Educational Resources Information Center
Hinaus, Brad; Veum, Mick
2009-01-01
One of the great, engaging aspects of physics is its application to everyday technology. The compact disc player is an example of one such technology that applies fundamental principles from optics in order to efficiently store and quickly retrieve information. We have created a lab in which students use simple optical components to assemble a…
Operational models of infrastructure resilience.
Alderson, David L; Brown, Gerald G; Carlyle, W Matthew
2015-04-01
We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of "operational resilience" that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience. © 2014 Society for Risk Analysis.
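The notion of operational resilience above, an operator re-optimizing system function after losing components, can be illustrated with a toy capacitated network: compute the optimal (maximum) flow before and after a component loss. The network and capacities below are invented; the routine is a standard Edmonds-Karp sketch, not the authors' model.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity matrix (the operator's re-optimization)."""
    n = len(cap)
    flow = [[0]*n for _ in range(n)]
    total = 0
    while True:
        parent = [-1]*n
        parent[s] = s
        q = deque([s])
        while q:                                   # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                        # no augmenting path left
            return total
        aug, v = float('inf'), t                   # bottleneck capacity along the path
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                              # push flow (and residual back-flow)
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug

# Invented 4-node system: source 0, sink 3, intermediate components 1 and 2.
cap = [[0, 10, 10, 0],
       [0,  0,  2, 8],
       [0,  0,  0, 9],
       [0,  0,  0, 0]]
base = max_flow(cap, 0, 3)
lost = [row[:] for row in cap]
lost[0][1] = 0                                     # lose the link 0 -> 1
degraded = max_flow(lost, 0, 3)
print(base, degraded)
```

Repeating the degraded-flow computation over every single-component loss identifies the critical vulnerabilities; the gap between `base` and the worst `degraded` value is one quantitative reading of operational resilience.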
Simple graph models of information spread in finite populations
Voorhees, Burton; Ryder, Bergerud
2015-01-01
We consider several classes of simple graphs as potential models for information diffusion in a structured population. These include biased cycles, dual circular flows, partial bipartite graphs and what we call ‘single-link’ graphs. In addition to fixation probabilities, we study structure parameters for these graphs, including eigenvalues of the Laplacian, conductances, communicability and expected hitting times. In several cases, values of these parameters are related, most strongly so for partial bipartite graphs. A measure of directional bias in cycles and circular flows arises from the non-zero eigenvalues of the antisymmetric part of the Laplacian and another measure is found for cycles as the value of the transition probability for which hitting times going in either direction of the cycle are equal. A generalization of circular flow graphs is used to illustrate the possibility of tuning edge weights to match pre-specified values for graph parameters; in particular, we show that generalizations of circular flows can be tuned to have fixation probabilities equal to the Moran probability for a complete graph by tuning vertex temperature profiles. Finally, single-link graphs are introduced as an example of a graph involving a bottleneck in the connection between two components and these are compared to the partial bipartite graphs. PMID:26064661
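As a small worked example of the structure parameters discussed above, the graph Laplacian of an unweighted n-cycle has the closed-form spectrum 2 - 2 cos(2*pi*k/n); the sketch below checks a numerical eigendecomposition against it (the choice n = 8 is arbitrary).

```python
import numpy as np

# Adjacency matrix and combinatorial Laplacian L = D - A of an n-cycle.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigs = np.sort(np.linalg.eigvalsh(L))
closed_form = np.sort([2 - 2*np.cos(2*np.pi*k/n) for k in range(n)])
print(np.allclose(eigs, closed_form))
```

Directional bias in a cycle would enter through asymmetric edge weights, making L non-symmetric; its antisymmetric part then carries the bias measure described in the abstract.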
Two-trait-locus linkage analysis: A powerful strategy for mapping complex genetic traits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schork, N.J.; Boehnke, M.; Terwilliger, J.D.
1993-11-01
Nearly all diseases mapped to date follow clear Mendelian, single-locus segregation patterns. In contrast, many common diseases such as diabetes, psoriasis, several forms of cancer, and schizophrenia are familial and appear to have a genetic component but do not exhibit simple Mendelian transmission. More complex models are required to explain the genetics of these important diseases. In this paper, the authors explore two-trait-locus, two-marker-locus linkage analysis in which two trait loci are mapped simultaneously to separate genetic markers. The authors compare the utility of this approach to standard one-trait-locus, one-marker-locus linkage analysis with and without allowance for heterogeneity. The authors also compare the utility of the two-trait-locus, two-marker-locus analysis to two-trait-locus, one-marker-locus linkage analysis. For common diseases, pedigrees are often bilineal, with disease genes entering via two or more unrelated pedigree members. Since such pedigrees often are avoided in linkage studies, the authors also investigate the relative information content of unilineal and bilineal pedigrees. For the dominant-or-recessive and threshold models that the authors consider, the authors find that two-trait-locus, two-marker-locus linkage analysis can provide substantially more linkage information, as measured by expected maximum lod score, than standard one-trait-locus, one-marker-locus methods, even allowing for heterogeneity, while, for a dominant-or-dominant generating model, one-locus models that allow for heterogeneity extract essentially as much information as the two-trait-locus methods. For these three models, the authors also find that bilineal pedigrees provide sufficient linkage information to warrant their inclusion in such studies. The authors discuss strategies for assessing the significance of the two linkages assumed in two-trait-locus, two-marker-locus models. 37 refs., 1 fig., 4 tabs.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, finally, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models used as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examining evolved models and picking out the best-performing programs for further analysis.
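The moving-average pre-processing step is the simplest ingredient of the approach and can be sketched in a few lines; the window length and flow values below are hypothetical, for illustration only:

```python
import numpy as np

def moving_average(x, window=3):
    """Trailing simple moving average used as a low-pass pre-filter
    to diminish the lagged-prediction effect of data-driven models."""
    kernel = np.ones(window) / window
    # mode="valid" keeps only the fully overlapping window positions
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

flow = [10.0, 12.0, 11.0, 15.0, 14.0]   # hypothetical daily streamflow
ma = moving_average(flow, 3)            # window means: 11.0, 12.67, 13.33 (approx.)
print(ma)
```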
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large errors but does reduce the computational resources required.
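For readers unfamiliar with the dispersion models compared above, the simple Drude permittivity (without the two-critical-points correction the authors recommend) can be sketched as follows; the gold-like parameter values are illustrative, not fitted values from the paper:

```python
import numpy as np

def drude_permittivity(omega, eps_inf, omega_p, gamma):
    """Relative permittivity of the simple Drude model:
    eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Illustrative gold-like parameters (all in rad/s), evaluated near 785 nm
eps = drude_permittivity(omega=2.4e15, eps_inf=9.0, omega_p=1.37e16, gamma=1.0e14)
# Metallic response at optical frequencies: Re(eps) < 0, absorptive Im(eps) > 0
print(eps.real < 0 and eps.imag > 0)  # True
```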
A Cost-Utility Model of Care for Peristomal Skin Complications
Inglese, Gary; Manson, Andrea; Townshend, Arden
2016-01-01
PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166
Laser-induced breakdown spectroscopy is a reliable method for urinary stone analysis
Mutlu, Nazım; Çiftçi, Seyfettin; Gülecen, Turgay; Öztoprak, Belgin Genç; Demir, Arif
2016-01-01
Objective We compared laser-induced breakdown spectroscopy (LIBS) with the traditionally used and recommended X-ray diffraction technique (XRD) for urinary stone analysis. Material and methods In total, 65 patients with urinary calculi were enrolled in this prospective study. Stones were obtained after surgical or extracorporeal shockwave lithotripsy procedures. All stones were divided into two equal pieces. One sample was analyzed by XRD and the other by LIBS. The results were compared by the kappa (κ) and Spearman’s correlation coefficient (rho) tests. Results Using LIBS, 95 components were identified from 65 stones, while XRD identified 88 components. LIBS identified 40 stones with a single pure component, 20 stones with two different components, and 5 stones with three components. XRD demonstrated 42 stones with a single component, 22 stones with two different components, and only 1 stone with three different components. There was a strong relationship between LIBS and XRD in the detection of stone components (Spearman rho, 0.866; p<0.001). There was excellent agreement between the two techniques among 38 patients with pure stones (κ index, 0.910; Spearman rho, 0.916; p<0.001). Conclusion Our study indicates that LIBS is a valid and reliable technique for determining urinary stone composition. Moreover, it is a simple, low-cost, and nondestructive technique. LIBS can be safely used in routine daily practice if our results are supported by studies with larger numbers of patients. PMID:27011877
Experimental and computational prediction of glass transition temperature of drugs.
Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S
2014-12-22
Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse drug-like compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, the support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.
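The Tm-based model is an ordinary least-squares fit; a synthetic sketch (hypothetical data, not the 71-compound set) of how such a regression and its RMSE are computed:

```python
import numpy as np

rng = np.random.default_rng(0)
Tm = rng.uniform(350.0, 550.0, size=40)                    # synthetic melting temperatures (K)
Tg_obs = (2.0 / 3.0) * Tm + rng.normal(0.0, 5.0, size=40)  # "2/3 rule of thumb" plus noise

a, b = np.polyfit(Tm, Tg_obs, 1)                 # linear model Tg = a*Tm + b
rmse = np.sqrt(np.mean((a * Tm + b - Tg_obs) ** 2))
print(round(a, 2), rmse < 10.0)                  # recovered slope near 2/3 on this synthetic data
```

The often-quoted Tg ≈ (2/3)·Tm relation for organic glasses is used here only to generate plausible synthetic data; the paper's fitted coefficients are not reproduced.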
NASA Technical Reports Server (NTRS)
Butner, Harold M.
1999-01-01
Our understanding of the inter-relationship between the collapsing cloud envelope and the disk has been greatly altered. While the dominant star formation models invoke free-fall collapse and an r^(-1.5) density profile, other star formation models are possible. These models invoke either different cloud starting conditions or the mediating effects of magnetic fields to alter the cloud geometry during collapse. To test these models, it is necessary to understand the envelope's physical structure. However, the discovery of disks around young stellar objects, based on millimeter observations, complicates a simple interpretation of the emission. Depending on the wavelength, the disk or the envelope could dominate emission from a star. In addition, the discovery of planets around other stars has made understanding the disks in their own right quite important. Many star formation models predict that disks should form naturally as the star is forming. In many cases, the information we derive about disk properties depends implicitly on the assumed envelope properties. How to understand the two components and their interaction with each other is a key problem of current star formation research.
NASA Astrophysics Data System (ADS)
Choi, Yong Seok; Kang, Dal Mo
2014-12-01
Thermal management has been one of the major issues in developing a lithium-ion (Li-ion) hybrid electric vehicle (HEV) battery system since the Li-ion battery is vulnerable to excessive heat load under abnormal or severe operational conditions. In this work, in order to design a suitable thermal management system, a simple modeling methodology describing the thermal behavior of an air-cooled Li-ion battery system was proposed from a vehicle component designer's point of view. The proposed mathematical model was constructed based on the battery's electrical and mechanical properties. Also, validation test results for the Li-ion battery system were presented. A pulse current duty and an adjusted US06 current cycle for a two-mode HEV system were used to validate the accuracy of the model prediction. Results showed that the present model can give good estimations for simulating convective heat transfer cooling during battery operation. The developed thermal model is useful in structuring the flow system and determining the appropriate cooling capacity for a specified design prerequisite of the battery system.
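A lumped-capacitance energy balance is the simplest possible version of such a thermal model; all parameter values below are hypothetical, not those of the HEV pack in the study:

```python
def simulate_cell_temperature(I, R, h, A, m, cp, T_air, t_end, dt=1.0):
    """Lumped-capacitance cell model: m*cp*dT/dt = I^2*R - h*A*(T - T_air).
    Ohmic heating vs. convective air cooling; forward-Euler integration.
    Returns the final temperature (degC)."""
    T = T_air
    for _ in range(int(t_end / dt)):
        T += dt * (I**2 * R - h * A * (T - T_air)) / (m * cp)
    return T

# Hypothetical cell: 50 A, 2 mOhm, 25 W/m^2K film coefficient, 1 kg, 900 J/kgK
T_final = simulate_cell_temperature(I=50.0, R=0.002, h=25.0, A=0.02,
                                    m=1.0, cp=900.0, T_air=25.0, t_end=3600.0)
print(T_final > 25.0)  # ohmic heating raises the cell above ambient
```

The steady-state rise for these numbers is I²R/(hA) = 10 °C; after one hour the cell sits partway up that asymptote.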
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
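At its simplest, the radiator sizing calculation such a model performs reduces to a radiative heat-rejection balance; the numbers below are hypothetical, not Space Station Freedom values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def panel_heat_rejection(eps, area, T_panel, T_sink):
    """Net radiative rejection of a panel to an effective sink temperature:
    q = eps * sigma * A * (T_panel^4 - T_sink^4)."""
    return eps * SIGMA * area * (T_panel**4 - T_sink**4)

# Hypothetical panel: emissivity 0.85, 300 m^2, 350 K, against a 250 K sink
q = panel_heat_rejection(eps=0.85, area=300.0, T_panel=350.0, T_sink=250.0)
print(round(q / 1e3, 1))  # kW rejected
```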
Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.
Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A
2015-07-01
Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
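BKMR is fully Bayesian, with hierarchical variable selection; as a simplified non-Bayesian stand-in, a kernel ridge regression with a Gaussian kernel illustrates the core idea of regressing the outcome on a flexible kernel-specified function of the mixture (synthetic data, hypothetical parameters):

```python
import numpy as np

def gaussian_kernel(X1, X2, length_scale=1.0):
    """Gaussian (RBF) kernel defining the flexible function class h(mixture)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale**2))

def kernel_ridge_fit_predict(X, y, Xnew, lam=0.1):
    """Ridge regression in the kernel's function space (no variable selection)."""
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return gaussian_kernel(Xnew, X) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(80, 2))             # two synthetic "pollutants"
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 80)
fit_rmse = np.sqrt(np.mean((kernel_ridge_fit_predict(X, y, X) - y) ** 2))
print(round(fit_rmse, 2))  # small: the kernel captures the nonlinear surface
```

The hierarchical variable-selection layer that identifies important mixture components is the part this sketch omits.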
X-Ray Observations of Magnetar SGR 0501+4516 from Outburst to Quiescence
NASA Astrophysics Data System (ADS)
Mong, Y.-L.; Ng, C.-Y.
2018-01-01
Magnetars are neutron stars having extreme magnetic field strengths. Study of their emission properties in the quiescent state can help us understand the effects of a strong magnetic field on neutron stars. SGR 0501+4516 is a magnetar that was discovered in 2008 during an outburst, and it has recently returned to quiescence. We report its spectral and timing properties measured with new and archival observations from the Chandra X-ray Observatory, XMM-Newton, and Suzaku. We found that the quiescent spectrum is best fit by a power-law plus two-blackbody model, with temperatures of kT low ∼ 0.26 keV and kT high ∼ 0.62 keV. We interpret these two blackbody components as emission from a hotspot and from the entire surface. The hotspot radius shrank from 1.4 km to 0.49 km since the outburst, and there was a significant correlation between its area and the X-ray luminosity, which agrees well with the prediction of the twisted magnetosphere model. We applied the two-temperature spectral model to all magnetars in quiescence and found that it could be a common feature among the population. Moreover, the temperature of the cooler blackbody shows a general trend with the magnetar field strength, which supports the simple scenario of heating by magnetic field decay.
O'Reilly, Andrew M.
2004-01-01
A relatively simple method is needed that provides estimates of transient ground-water recharge in deep water-table settings and that can be incorporated into other hydrologic models. Deep water-table settings are areas where the water table is below the reach of plant roots and virtually all water that is not lost to surface runoff, evaporation at land surface, or evapotranspiration in the root zone eventually becomes ground-water recharge. Areas in central Florida with a deep water table generally are high recharge areas; consequently, simulation of recharge in these areas is of particular interest to water-resource managers. Yet the complexities of meteorological variations and unsaturated flow processes make it difficult to estimate short-term recharge rates, thereby confounding calibration and predictive use of transient hydrologic models. A simple water-balance/transfer-function (WBTF) model was developed for simulating transient ground-water recharge in deep water-table settings. The WBTF model represents a one-dimensional column from the top of the vegetative canopy to the water table and consists of two components: (1) a water-balance module that simulates the water storage capacity of the vegetative canopy and root zone; and (2) a transfer-function module that simulates the traveltime of water as it percolates from the bottom of the root zone to the water table. Data requirements include two time series for the period of interest, precipitation (or precipitation minus surface runoff, if surface runoff is not negligible) and evapotranspiration, and values for five parameters that represent water storage capacity or soil-drainage characteristics. A limiting assumption of the WBTF model is that the percolation of water below the root zone is a linear process. That is, percolating water is assumed to have the same traveltime characteristics, experiencing the same delay and attenuation, as it moves through the unsaturated zone.
This assumption is more accurate if the moisture content, and consequently the unsaturated hydraulic conductivity, below the root zone does not vary substantially with time. Results of the WBTF model were compared to those of the U.S. Geological Survey variably saturated flow model, VS2DT, and to field-based estimates of recharge to demonstrate the applicability of the WBTF model for a range of conditions relevant to deep water-table settings in central Florida. The WBTF model reproduced independently obtained estimates of recharge reasonably well for different soil types and water-table depths.
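Under the linearity assumption described above, the transfer-function module reduces to a discrete convolution; a minimal sketch with a hypothetical travel-time kernel (not the calibrated WBTF parameters):

```python
import numpy as np

def wbtf_recharge(percolation, kernel):
    """Transfer-function step of a WBTF-style model: recharge at the water
    table is the percolation flux convolved with a travel-time kernel.
    Linearity means the same delay/attenuation applies to every pulse."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()          # normalize to conserve water mass
    return np.convolve(percolation, kernel)[: len(percolation)]

perc = np.array([10.0, 0.0, 0.0, 0.0, 0.0])   # one daily percolation pulse (mm)
kern = np.array([0.1, 0.4, 0.3, 0.2])         # hypothetical vadose-zone travel-time weights
print(wbtf_recharge(perc, kern))  # [1. 4. 3. 2. 0.]
```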
CEREF: A hybrid data-driven model for forecasting annual streamflow from a socio-hydrological system
NASA Astrophysics Data System (ADS)
Zhang, Hongbo; Singh, Vijay P.; Wang, Bin; Yu, Yinghao
2016-09-01
Hydrological forecasting is complicated by flow regime alterations in a coupled socio-hydrologic system, encountering increasingly non-stationary, nonlinear and irregular changes, which make decision support difficult for future water resources management. Currently, many hybrid data-driven models, based on the decomposition-prediction-reconstruction principle, have been developed to improve the ability to make predictions of annual streamflow. However, problems remain that require further investigation, chief among which is that the direction of trend components decomposed from an annual streamflow series is always difficult to ascertain. In this paper, a hybrid data-driven model was proposed to address this issue, which combined empirical mode decomposition (EMD), radial basis function neural networks (RBFNN), and an external forces (EF) variable, also called the CEREF model. The hybrid model employed EMD for decomposition and RBFNN for intrinsic mode function (IMF) forecasting, and determined future trend component directions by regression with EF as basin water demand representing the social component in the socio-hydrologic system. The Wuding River basin was considered for the case study, and two standard statistical measures, root mean squared error (RMSE) and mean absolute error (MAE), were used to evaluate the performance of the CEREF model and compare it with other models: the autoregressive (AR), RBFNN and EMD-RBFNN. Results indicated that the CEREF model had lower RMSE and MAE statistics (by 42.8% and 7.6%, respectively) than the other models, and provided a superior alternative for forecasting annual runoff in the Wuding River basin. Moreover, the CEREF model can enlarge the effective intervals of streamflow forecasting compared to the EMD-RBFNN model by introducing the water demand planned by the government department to improve long-term prediction accuracy.
In addition, we considered the high-frequency component, a frequent subject of concern in EMD-based forecasting, and results showed that removing the high-frequency component is an effective measure to improve forecasting precision; its removal is suggested for use with the CEREF model for better performance. Finally, the study concluded that the CEREF model can be used to forecast non-stationary annual streamflow change, as a co-evolution of hydrologic and social systems, with better accuracy. It should be noted that the CEREF model is beneficial for data-driven hydrologic forecasting in complex socio-hydrologic systems and, as a simple data-driven socio-hydrologic forecasting model, deserves more attention.
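The two evaluation statistics used throughout the comparison are standard; a minimal implementation with hypothetical observed/simulated values:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

def mae(obs, sim):
    """Mean absolute error between observed and simulated series."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))

obs = [120.0, 95.0, 130.0, 110.0]   # hypothetical annual flows
sim = [115.0, 100.0, 128.0, 104.0]
print(round(rmse(obs, sim), 3), mae(obs, sim))  # 4.743 4.5
```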
ERIC Educational Resources Information Center
Elk, Seymour B.
1997-01-01
Suggests that the cross product of two vectors can be more easily and accurately explained by starting from the perspective of dyadics because then the concept of vector multiplication has a simple geometrical picture that encompasses both the dot and cross products in any number of dimensions in terms of orthogonal unit vector components. (AIM)
Towards improved capability and confidence in coupled atmospheric and wildland fire modeling
NASA Astrophysics Data System (ADS)
Sauer, Jeremy A.
This dissertation work is aimed at improving the capability and confidence in a modernized and improved version of Los Alamos National Laboratory's coupled atmospheric and wildland fire dynamics model, Higrad-Firetec. Higrad is the hydrodynamics component of this large eddy simulation model; it solves the three-dimensional, fully compressible Navier-Stokes equations, incorporating a dynamic eddy viscosity formulation through a two-scale turbulence closure scheme. Firetec is the vegetation, drag forcing, and combustion physics portion that is integrated with Higrad. The modern version of Higrad-Firetec incorporates multiple numerical methodologies and high-performance computing aspects, which combine to yield a unique tool capable of augmenting theoretical and observational investigations in order to better understand the multi-scale, multi-phase, and multi-physics phenomena involved in coupled atmospheric and environmental dynamics. More specifically, the current work includes extended functionality and validation efforts targeting component processes in coupled atmospheric and wildland fire scenarios. Since observational data of sufficient quality and resolution to validate the fully coupled atmosphere-wildfire scenario simply do not exist, we instead seek to validate components of the full, prohibitively convoluted process. This manuscript provides, first, an introduction and background to the application space of Higrad-Firetec. Second, we document the model formulation, solution procedure, and a simple scalar transport verification exercise. Third, we validate model results against observational data for time-averaged flow field metrics in and above four idealized forest canopies. Fourth, we carry out a validation effort for the non-buoyant jet in a crossflow scenario (to which an analogy can be made for atmosphere-wildfire interactions), comparing model results to laboratory data of both steady-in-time and unsteady-in-time metrics.
Finally, an extension of the model's multi-phase physics is implemented, allowing for the representation of multiple collocated fuels as separately evolving constituents, leading to differences in the resulting rate of spread and total burned area. In combination, these efforts demonstrate improved capability, increased validation of component functionality, and the unique applicability of the Higrad-Firetec modeling framework. As a result, this work provides a substantially more robust foundation for future, more widely applicable investigations into the complexities of coupled atmospheric and wildland fire behavior.
Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A.; Arnold, Steven M.; Pineda, Evan J.
2016-01-01
A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural scale stress field in a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, the stand-alone GMC is applied to simple material microstructures such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that the GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures, as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as at the local microstructural level, i.e., in each individual grain. Two to three orders of magnitude of savings in computational cost were obtained with GMC, at the expense of some accuracy in prediction, especially for the components of local tensor field quantities and the quantities near grain boundaries. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural scale details of the field quantities. PMID:28773458
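The von Mises stress whose prediction is assessed above is computed from the deviatoric part of the stress tensor; a minimal sketch (illustrative uniaxial stress state, unrelated to the RUC results):

```python
import numpy as np

def von_mises(sigma):
    """von Mises equivalent stress from a 3x3 Cauchy stress tensor:
    sqrt(3/2 * s:s), where s is the deviatoric part of sigma."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

# Uniaxial tension: the equivalent stress equals the axial stress
sigma = np.diag([200.0, 0.0, 0.0])
print(round(von_mises(sigma), 6))  # 200.0
```

A purely hydrostatic state gives zero, which is why the deviatoric projection matters for plasticity.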
SimpleBox 4.0: Improving the model while keeping it simple….
Hollander, Anne; Schoorl, Marian; van de Meent, Dik
2016-04-01
Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments; addition of lake compartments and deep ocean compartments (including the thermohaline circulation); implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations; and adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which added undesirably high model complexity, were removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
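Multimedia fate models of the SimpleBox type are, at steady state, linear systems in the compartment masses; a two-box sketch with hypothetical rate constants (not SimpleBox 4.0 parameters):

```python
import numpy as np

# Minimal two-box (air, water) fate-model sketch in the SimpleBox spirit:
# steady state of dm/dt = E + T m, with first-order rates in 1/day.
# All rate constants below are hypothetical, chosen only for illustration.
k_deg_air, k_deg_wat = 0.05, 0.01      # degradation in each compartment
k_aw, k_wa = 0.02, 0.001               # inter-compartment transfer (air->water, water->air)
T = np.array([[-(k_deg_air + k_aw), k_wa],
              [k_aw, -(k_deg_wat + k_wa)]])
E = np.array([100.0, 0.0])             # emission to air only (kg/day)
m_ss = np.linalg.solve(-T, E)          # steady-state masses (kg)
print(np.all(m_ss > 0))  # True: positive steady-state inventory in both boxes
```

A useful check on any such model: at steady state the total degradation flux equals the total emission, since the transfer terms only move mass between compartments.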
Mechanism synthesis and 2-D control designs of an active three cable crane
NASA Technical Reports Server (NTRS)
Yang, Li-Farn; Mikulas, Martin M., Jr.
1992-01-01
A Lunar Crane with a suspension system based on a three cable mechanism is investigated to provide a stable end-effector for hoisting, positioning, and assembling large components during construction and servicing of a Lunar Base. The three cable suspension mechanism consists of a structural framework of three cables pointing to a common point that closely coincides with the suspended payload's center of gravity. The vibrational characteristics of this three cable suspension system are investigated by comparing a simple 2-D symmetric suspension model and a swinging pendulum in terms of their analytical natural frequency equations. A study is also made of actively controlling the dynamics of the crane using two different actuator concepts. Lyapunov-based control algorithms are developed to determine two regulator-type control laws that suppress system vibration for both sets of system dynamics. Simulations including initial-value dynamic responses as well as control performance for the two different system dynamics are also presented.
NASA Astrophysics Data System (ADS)
Tian, M.; Katz, R. F.; Rees Jones, D. W.; May, D.
2017-12-01
Compared with other plate-tectonic boundaries, subduction zones (SZs) host the most drastic mechanical, thermal, and chemical changes. The transport of carbon through this complex environment is crucial to the mantle carbon budget but remains the subject of active debate. Synthesis of field studies suggests that carbon subducted with the incoming slab is almost completely returned to the surface environment [Kelemen and Manning, 2015], whereas thermodynamic modeling indicates that a significant portion of carbon is retained in the slab and descends into the deep mantle [Gorman et al., 2006]. To address this controversy and quantify the carbon fluxes within SZs, it is necessary to treat the chemistry of fluid/volatile-rock interaction and the mechanics of porous fluid/volatile migration in a consistent modeling framework. This requirement is met by coupling a thermodynamic parameterization of de/re-volatilization with a two-phase flow model of subduction zones. The two-phase system is assumed to comprise three chemical components: rock containing only non-volatile oxides, H2O, and CO2; the fluid phase includes only the latter two. Perple_X is used to map out the binary subsystems rock+H2O and rock+CO2; the results are parameterized in terms of volatile partition coefficients as a function of pressure and temperature. In synthesizing the binary subsystems to describe phase equilibria that incorporate all three components, a Margules coefficient is introduced to account for non-ideal mixing of CO2/H2O in the fluid, such that the partition coefficients depend further on bulk composition. This procedure is applied to representative compositions of sediment, MORB, and gabbro for the slab, and peridotite for the mantle. The derived parameterization of each rock type serves as a lightweight thermodynamic module interfaceable with two-phase flow models of SZs.
We demonstrate the application of this thermodynamic module through a simple model of carbon flux with a prescribed flow direction through (and out of) the slab. This model allows us to evaluate the effects of flow path and lithology on carbon storage within the slab.
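The non-ideal CO2/H2O mixing term can be illustrated with a one-parameter (regular-solution) Margules model, ln γ1 = (W/RT)·x2², ln γ2 = (W/RT)·x1². The interaction parameter W below is an assumed placeholder, not the calibrated coefficient from the study:

```python
import math

# One-parameter Margules activity coefficients for a binary CO2/H2O fluid.
# W is an assumed interaction energy (J/mol), purely illustrative.
R = 8.314  # J/(mol K)

def margules_activity(x_co2, W=8000.0, T=900.0):
    """Return (gamma_CO2, gamma_H2O) for mole fraction x_co2 at temperature T."""
    x_h2o = 1.0 - x_co2
    g_co2 = math.exp(W / (R * T) * x_h2o ** 2)
    g_h2o = math.exp(W / (R * T) * x_co2 ** 2)
    return g_co2, g_h2o

g_co2, g_h2o = margules_activity(0.2)
print(f"gamma_CO2 = {g_co2:.3f}, gamma_H2O = {g_h2o:.3f}")
```

With W > 0 the dilute species is penalized (γ > 1), which is how the non-ideality feeds back into the partition coefficients' dependence on bulk composition.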
Structured functional additive regression in reproducing kernel Hilbert spaces
Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen
2013-01-01
Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of reproducing kernel Hilbert spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares with a penalty that encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362
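To make the selection idea concrete, here is a hypothetical sketch (not the authors' estimator): group-sparse penalized least squares over polynomial components built from surrogate FPC scores, where a group-lasso penalty can zero out whole additive components at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate setup: n curves summarized by p FPC scores; the response
# depends nonlinearly on scores 0 and 1 only. All of this is illustrative.
n, p, deg = 200, 5, 3
Xi = rng.standard_normal((n, p))                     # surrogate FPC scores
y = np.sin(Xi[:, 0]) + 0.5 * Xi[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

# One centered polynomial basis group per score (degrees 1..deg)
groups = [np.column_stack([Xi[:, j] ** d for d in range(1, deg + 1)])
          for j in range(p)]
X = np.hstack([g - g.mean(axis=0) for g in groups])

def group_lasso(X, y, group_size, lam, n_iter=500):
    """Proximal gradient for 0.5/n ||y - Xb||^2 + lam * sum_g ||b_g||_2."""
    n = len(y)
    step = n / np.linalg.norm(X, 2) ** 2             # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = b - step * X.T @ (X @ b - y) / n         # gradient step
        for g in range(0, len(b), group_size):       # group soft-threshold
            nrm = np.linalg.norm(b[g:g + group_size])
            scale = max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
            b[g:g + group_size] *= scale
    return b

beta = group_lasso(X, y, deg, lam=0.05)
active = [j for j in range(p)
          if np.linalg.norm(beta[j * deg:(j + 1) * deg]) > 1e-8]
print("selected additive components:", active)
```

The group penalty is what turns estimation into structure selection: a component enters or leaves the model as a block.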
Atmospheric simulations of extreme surface heating episodes on simple hills
W.E. Heilman
1992-01-01
A two-dimensional nonhydrostatic atmospheric model was used to simulate the circulation patterns (wind and vorticity) and turbulence energy fields associated with lines of extreme surface heating on simple two-dimensional hills. Heating-line locations and ambient crossflow conditions were varied to qualitatively determine the impact of terrain geometry on the...
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
A natural-language interface to a mobile robot
NASA Technical Reports Server (NTRS)
Michalowski, S.; Crangle, C.; Liang, L.
1987-01-01
The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.
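A 2-D polygonal world model of this kind rests on primitives such as a point-in-polygon query. A standard ray-casting sketch, for illustration only (not the Robotic Aid's actual code):

```python
# Ray-casting point-in-polygon test: the kind of primitive a 2-D polygonal
# map needs for queries such as "is this location inside the table region?".

def point_in_polygon(pt, poly):
    """True if pt = (x, y) lies inside the polygon given as a vertex list."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from pt to +infinity cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

table = [(0, 0), (4, 0), (4, 2), (0, 2)]     # hypothetical table footprint, m
print(point_in_polygon((1.0, 1.0), table))   # inside the footprint
print(point_in_polygon((5.0, 1.0), table))   # outside the footprint
```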
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum modeling uniquely. For these two studies, we implement the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed in recent years for the analysis of atomic spectra. As indicated by its name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
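The model-comparison machinery can be miniaturized as follows. Grid integration stands in for nested sampling so the evidence integral stays explicit, and every prior, width, and amplitude is invented for illustration (none of this is Nested_fit's actual setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the first example: is a weak satellite line present in a
# spectrum? Compare the evidences of M0 (flat background) and M1
# (background + Gaussian line) by brute-force integration over the priors.
x = np.linspace(0.0, 10.0, 60)
truth = 1.0 + 0.8 * np.exp(-0.5 * ((x - 4.0) / 0.4) ** 2)
y = truth + 0.1 * rng.standard_normal(x.size)      # noisy spectrum
sigma = 0.1

def loglike(model):
    return -0.5 * np.sum(((y - model) / sigma) ** 2)

# M0: flat background b with prior b ~ U(0, 3)
b_grid = np.linspace(0.0, 3.0, 200)
db = b_grid[1] - b_grid[0]
L0 = np.array([loglike(np.full_like(x, b)) for b in b_grid])
logZ0 = L0.max() + np.log(np.exp(L0 - L0.max()).sum() * db / 3.0)

# M1: adds a line of amplitude a ~ U(0, 2) at known position and width
# (a simplification; both could be free parameters as well)
a_grid = np.linspace(0.0, 2.0, 200)
da = a_grid[1] - a_grid[0]
line = np.exp(-0.5 * ((x - 4.0) / 0.4) ** 2)
L1 = np.array([[loglike(b + a * line) for a in a_grid] for b in b_grid])
m = L1.max()
logZ1 = m + np.log(np.exp(L1 - m).sum() * db * da / (3.0 * 2.0))

print(f"log Bayes factor in favor of the line: {logZ1 - logZ0:.1f}")
```

The evidence automatically penalizes M1 for its extra parameter; the line is favored only when the data demand it.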
NASA Astrophysics Data System (ADS)
Demany, Laurent; Montandon, Gaspard; Semal, Catherine
2003-04-01
A listener's ability to compare two sounds separated by a silent time interval T is limited by a sum of "sensory noise" and "memory noise." The present work was intended to test a model according to which these two components of internal noise are independent and, for a given sensory continuum, the memory noise depends only on T. In three experiments using brief sounds (<80 ms), pitch discrimination performance was measured in terms of d' as a function of T (0.1-4 s) and of a physical parameter affecting the amount of sensory noise (pitch salience). As T increased, d' first increased rapidly and then declined more slowly. According to the tested model, the relative decline of d' beyond the optimal value of T should have been slower when pitch salience was low (large amount of sensory noise) than when it was high (small amount of sensory noise). However, this prediction was disproved in each of the three experiments. It was also found, when a "roving" procedure was used, that the optimal value of T was markedly shorter for very brief tone bursts (6 sine cycles) than for longer tone bursts (30 sine cycles).
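The performance measure d' is computed from hit and false-alarm rates via inverse-normal transforms. A small self-contained helper; the trial counts and the correction rule are illustrative, not the paper's data or exact convention:

```python
from statistics import NormalDist

# d' = z(hit rate) - z(false-alarm rate), with a +0.5 count correction to
# keep rates away from 0 and 1 (one common convention among several).

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical session: 50 signal trials, 50 noise trials
print(f"d' = {d_prime(45, 5, 10, 40):.2f}")
```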
NASA Astrophysics Data System (ADS)
Siahaan, N. M.; Harahap, A. S.; Nababan, E.; Siahaan, E.
2018-02-01
This study aims to initiate a sustainable simple housing system based on low CO2 emissions at the Griya Martubung I housing estate in Medan, Indonesia. Since the estate was built in 1995, approximately 89 percent of the houses underwent some form of renewal between 2007 and 2016, such as restoration, renovation, or reconstruction. Qualitative research was conducted to obtain insights into the complex relationships among the various components of the residential life-support environment that relate to CO2 emissions. Each component was studied through in-depth interviews with, and observation of, 128 residents. The study used a Likert scale to measure residents' perceptions of the components. The study concludes with a synthesis describing principles for a sustainable simple housing standard that recognizes the whole character of the components. This study offers a means of initiating the practice of sustainable simple housing development and of managing growth and preserving the environment without compromising social, economic, and ecological concerns.
A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake
NASA Astrophysics Data System (ADS)
Ruppert, Stanley D.; Yomogida, Kiyoshi
1992-09-01
Evidence supporting a smooth, crack-like rupture process is obtained from a major earthquake, the 1985 Michoacan earthquake, for the first time. Digital strong motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously seen only in theoretical models. The recording of low-frequency (0 to 1 Hz) near-field waves, together with the apparently smooth rupture, favors a crack-like model over a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and a systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the south-eastern portion of the fault.
Peridigm summary report : lessons learned in development with agile components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salinger, Andrew Gerhard; Mitchell, John Anthony; Littlewood, David John
2011-09-01
This report details efforts to deploy Agile Components for rapid development of a peridynamics code, Peridigm. The goal of Agile Components is to enable the efficient development of production-quality software by providing a well-defined, unifying interface to a powerful set of component-based software. Specifically, Agile Components facilitate interoperability among packages within the Trilinos Project, including data management, time integration, uncertainty quantification, and optimization. Development of the Peridigm code served as a testbed for Agile Components and resulted in a number of recommendations for future development. Agile Components successfully enabled rapid integration of Trilinos packages into Peridigm. A cost of this approach, however, was a set of restrictions on Peridigm's architecture which impacted the ability to track history-dependent material data, dynamically modify the model discretization, and interject user-defined routines into the time integration algorithm. These restrictions resulted in modifications to the Agile Components approach, as implemented in Peridigm, and in a set of recommendations for future Agile Components development. Specific recommendations include improved handling of material states, a more flexible flow control model, and improved documentation. A demonstration mini-application, SimpleODE, was developed at the onset of this project and is offered as a potential supplement to Agile Components documentation.
NASA Astrophysics Data System (ADS)
LaManna, Joseph C.; Sun, Xiaoyan; Ivy, Andre D.; Ward, Nicole L.
We have used a relatively simple model of hypoxia that triggers adaptive structural changes in the cerebral microvasculature to study the process of physiological angiogenesis. This model can be used to obtain mechanistic data for the processes that probably underlie the dynamic structural changes that occur in learning and in the control of oxygen availability to the neurovascular unit. These mechanisms are broadly involved in a wide variety of pathophysiological processes. This is the vascular component of CNS functional plasticity, supporting learning and adaptation. The angiogenic process may wane with age, contributing to the decreasing ability to survive metabolic stress and the diminution of neuronal plasticity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Michael; Cap, Jerome S.; Starr, Michael J.
One of the more severe environments for a store on an aircraft is the ejection of the store. During this environment it is not possible to instrument all component responses, and it is also likely that some instruments may fail during environment testing. This work provides a method for developing these responses for failed gages and uninstrumented locations. First, the forces observed by the store during the environment are reconstructed. A simple sampling method is used to reconstruct these forces given various parameters. Then, these forces are applied to a model to generate the component responses. Validation is performed on this methodology.
Hybrid local-order mechanism for inversion symmetry breaking
NASA Astrophysics Data System (ADS)
Wolpert, Emma H.; Overy, Alistair R.; Thygesen, Peter M. M.; Simonov, Arkadiy; Senn, Mark S.; Goodwin, Andrew L.
2018-04-01
Using classical Monte Carlo simulations, we study a simple statistical mechanical model of relevance to the emergence of polarization from local displacements on the square and cubic lattices. Our model contains two key ingredients: a Kitaev-like orientation-dependent interaction between nearest neighbors and a steric term that acts between next-nearest neighbors. Taken by themselves, each of these two ingredients is incapable of driving long-range symmetry breaking, despite the presence of a broad feature in the corresponding heat-capacity functions. Instead, each component results in a "hidden" transition on cooling to a manifold of degenerate states; the two manifolds are different in the sense that they reflect distinct types of local order. Remarkably, their intersection, i.e., the ground state when both interaction terms are included in the Hamiltonian, supports a spontaneous polarization. In this way, our study demonstrates how local-order mechanisms might be combined to break global inversion symmetry in a manner conceptually similar to that operating in the "hybrid" improper ferroelectrics. We discuss the relevance of our analysis to the emergence of spontaneous polarization in well-studied ferroelectrics such as BaTiO3 and KNbO3.
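The simulation machinery can be sketched with a generic Metropolis skeleton: Ising variables on a square lattice with a nearest-neighbor coupling J1 and a next-nearest-neighbor (diagonal) coupling J2 standing in for the orientation-dependent and steric terms. This is a structural sketch only, not the paper's Kitaev-like Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(3)

# Metropolis Monte Carlo for E = -J1 * sum_nn s_i s_j - J2 * sum_nnn s_i s_j
# on a periodic square lattice. Couplings and temperature are illustrative.
L, J1, J2, T = 16, 1.0, -0.3, 2.0
s = rng.choice([-1, 1], size=(L, L))

def local_field(s, i, j):
    """Sum of coupled neighbors acting on site (i, j)."""
    nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    nnn = (s[(i+1) % L, (j+1) % L] + s[(i+1) % L, (j-1) % L]
           + s[(i-1) % L, (j+1) % L] + s[(i-1) % L, (j-1) % L])
    return J1 * nn + J2 * nnn

for sweep in range(100):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2.0 * s[i, j] * local_field(s, i, j)    # energy cost of a flip
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]

m = abs(s.mean())
print(f"|magnetization| after equilibration: {m:.2f}")
```

Heat-capacity features and hidden transitions of the kind described above are then read off from averages of E and E² over many such sweeps.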
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways, and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.
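The virtual 2-D programming model can be caricatured in a few lines: with one processing element per image column, a 3x3 neighborhood operation decomposes into row shifts (data moving along the chain) plus per-element arithmetic. Below, numpy rolls stand in for the chain-neighbor communication; this illustrates the idea only and is not AIS-5000 code:

```python
import numpy as np

# Binary 3x3 erosion expressed as shifts + elementwise AND, the way a
# column-parallel SIMD chain would execute it: axis-0 rolls model the
# sequential row scan, axis-1 rolls model neighbor communication.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 1                        # a 4x4 square of ones

def erode3x3(a):
    out = a.copy()
    for dy in (-1, 0, 1):
        row_shifted = np.roll(a, dy, axis=0)          # row scan
        for dx in (-1, 0, 1):
            out &= np.roll(row_shifted, dx, axis=1)   # chain-neighbor shift
    return out

print("pixels surviving erosion:", erode3x3(img).sum())
```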
NASA Astrophysics Data System (ADS)
Zhu, Ying; Tan, Tuck Lee
2016-04-01
An effective and simple analytical method using Fourier transform infrared (FTIR) spectroscopy to distinguish wild-grown, high-quality Ganoderma lucidum (G. lucidum) from cultivated G. lucidum is of essential importance for its quality assurance and medicinal value estimation. Commonly used chemical and analytical methods using the full spectrum are not so effective for detection and interpretation due to the complexity of the herbal medicine. In this study, two penalized discriminant analysis models, penalized linear discriminant analysis (PLDA) and elastic net (Elnet), using FTIR spectroscopy have been explored for the purpose of discrimination and interpretation. The classification performances of the two penalized models have been compared with two widely used multivariate methods, principal component discriminant analysis (PCDA) and partial least squares discriminant analysis (PLSDA). The Elnet model, involving a combination of L1 and L2 norm penalties, enabled an automatic selection of a small number of informative spectral absorption bands and gave an excellent classification accuracy of 99% for discrimination between spectra of wild-grown and cultivated G. lucidum. Its classification performance was superior to that of the PLDA model in a pure L1 setting and outperformed the PCDA and PLSDA models using the full spectral range. The well-performed selection of informative spectral features leads to a substantial reduction in model complexity and an improvement in classification accuracy, and it is particularly helpful for the quantitative interpretation of the major chemical constituents of G. lucidum regarding its anti-cancer effects.
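A rough sketch of the elastic-net selection idea on synthetic "spectra" (the channels, class shifts, and penalty weights are all invented; this is not the paper's FTIR data or exact model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two classes of synthetic spectra with 300 channels; only a few bands
# differ between classes. Fit a linear score to labels y in {-1,+1} by
# proximal gradient on
#   0.5/n ||y - Xw||^2 + lam * (alpha*||w||_1 + 0.5*(1-alpha)*||w||_2^2),
# so the L1 part selects a sparse set of informative bands.
n, p = 120, 300
X = rng.standard_normal((n, p))
informative = [40, 41, 120, 200]          # hypothetical absorption bands
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)
for j in informative:
    X[:, j] += 0.9 * y                    # class-dependent band shift

def elastic_net(X, y, lam=0.05, alpha=0.7, n_iter=800):
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n + lam * (1 - alpha)  # Lipschitz
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + lam * (1 - alpha) * w
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam * alpha / L, 0.0)
    return w

w = elastic_net(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)
acc = np.mean(np.sign(X @ w) == y)
print(f"{selected.size} channels selected, training accuracy {acc:.2f}")
```

The L2 part stabilizes correlated neighboring bands (like 40 and 41 here) that a pure L1 penalty would arbitrarily split, which mirrors the Elnet-vs-PLDA comparison in the abstract.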
Using the Wiimote in Introductory Physics Experiments
ERIC Educational Resources Information Center
Ochoa, Romulo; Rooney, Frank G.; Somers, William J.
2011-01-01
The Wii is a very popular gaming console. An important component of its appeal is the ease of use of its remote controller, popularly known as a Wiimote. This simple-looking but powerful device has a three-axis accelerometer and communicates with the console via Bluetooth protocol. We present two experiments that demonstrate the feasibility of…
Psychological Distance to Reward: Effects of S+ Duration and the Delay Reduction It Signals
ERIC Educational Resources Information Center
Alessandri, Jerome; Stolarz-Fantino, Stephanie; Fantino, Edmund
2011-01-01
A concurrent-chains procedure was used to examine choice between segmented (two-component chained schedules) and unsegmented schedules (simple schedules) in terminal links with equal inter-reinforcement intervals. Previous studies using this kind of experimental procedure showed preference for unsegmented schedules for both pigeons and humans. In…
A Simple 2-Transistor Touch or Lick Detector Circuit
ERIC Educational Resources Information Center
Slotnick, Burton
2009-01-01
Contact or touch detectors in which a subject acts as a switch between two metal surfaces have proven more popular and arguably more useful for recording responses than capacitance switches, photocell detectors, and force detectors. Components for touch detectors circuits are inexpensive and, except for some special purpose designs, can be easily…
Kobayashi, Seiji
2002-05-10
A point-spread function (PSF) is commonly used as a model of an optical disk readout channel. However, the model given by the PSF does not contain the quadratic distortion generated by the photo-detection process. We introduce a model for calculating an approximation of the quadratic component of a signal. We show that this model can be further simplified when a read-only-memory (ROM) disk is assumed. We introduce an edge-spread function by which a simple nonlinear model of an optical ROM disk readout channel is created.
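A minimal 1-D sketch of the modeling idea: the linear part of the readout is the pit pattern convolved with the PSF, the edge-spread function is the running integral of the PSF, and the quadratic distortion of photo-detection is mimicked by squaring the linear signal. All shapes and widths are illustrative, not the paper's:

```python
import numpy as np

# 1-D toy readout channel: Gaussian PSF, binary ROM pit pattern.
x = np.arange(-50, 51)
psf = np.exp(-0.5 * (x / 6.0) ** 2)
psf /= psf.sum()                     # normalize to unit area

esf = np.cumsum(psf)                 # edge-spread function = integral of PSF

pits = np.zeros(400)
pits[100:140] = 1.0                  # two recorded marks on the track
pits[220:260] = 1.0

linear = np.convolve(pits, psf, mode="same")   # ideal linear channel
detected = linear ** 2                         # crude quadratic detection
quadratic = detected - linear                  # the nonlinear component

print(f"ESF saturates at {esf[-1]:.3f}; peak linear signal {linear.max():.3f}")
```

For a binary (ROM) pattern the edge positions fully determine the signal, which is why an edge-spread description can replace the full PSF model.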
Improved Cook-off Modeling of Multi-component Cast Explosives
NASA Astrophysics Data System (ADS)
Nichols, Albert
2017-06-01
In order to understand the hazards associated with energetic materials, it is important to understand their behavior in adverse thermal environments. These processes are relatively well understood for solid explosives; however, the same cannot be said for multi-component melt-cast explosives. Here we describe the continued development of ALE3D, a coupled thermal/chemical/mechanical code, to improve its description of fluid explosives. The improved physics models include: 1) a chemical-potential-driven species segregation model, which allows us to capture the complex flow fields associated with melting and decomposing Comp-B, where the denser RDX tends to settle and the decomposition gases rise; 2) an automatically scaled stream-wise diffusion model for thermal, species, and momentum diffusion, which adds sufficient numerical diffusion in the direction of flow to maintain numerical stability when the system is under-resolved, as occurs for large systems; and 3) a slurry viscosity model, required to properly define the flow characteristics of the multi-component fluidized system. These models will be demonstrated on a simple Comp-B system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Composite fermion basis for two-component Bose gases
NASA Astrophysics Data System (ADS)
Meyer, Marius; Liabotro, Ola
The composite fermion (CF) construction is known to produce wave functions that are not necessarily orthogonal, or even linearly independent, after projection. While usually not a practical issue in the quantum Hall regime, we have previously shown that it presents a technical challenge for rotating Bose gases with low angular momentum. These are systems where the CF approach yields surprisingly good approximations to the exact eigenstates of weak short-range interactions, and so solving the problem of linearly dependent wave functions is of interest. It can also be useful for studying CF excitations for fermions. Here we present several ways of constructing a basis for the space of "simple CF states" for two-component rotating Bose gases in the lowest Landau level, and prove that they all give a basis. Using the basis, we study the structure of the lowest-lying state using so-called restricted wave functions. We also examine the scaling of the overlap between the exact and CF wave functions at the maximal possible angular momentum for simple states. This work was financially supported by the Research Council of Norway.
The ALMA-PILS survey: 3D modeling of the envelope, disks and dust filament of IRAS 16293-2422
NASA Astrophysics Data System (ADS)
Jacobsen, S. K.; Jørgensen, J. K.; van der Wiel, M. H. D.; Calcutt, H.; Bourke, T. L.; Brinch, C.; Coutens, A.; Drozdovskaya, M. N.; Kristensen, L. E.; Müller, H. S. P.; Wampfler, S. F.
2018-04-01
Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due to both the rich chemistry toward the two main components of the binary and its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims: The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary, and the chemistry of simple species. Methods: We present 13CO, C17O and C18O J = 3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. Results: Radiative transfer modeling in our sampled parameter space suggests that, while the disk around source A could not be constrained, the disk around source B has to be vertically extended. This puffed-up structure can be obtained with both a protoplanetary disk model with an unexpectedly high scale-height and with the density solution from an infalling, rotating collapse. Combined constraints on our 3D model, from observed dust continuum and CO isotopologue emission between the sources, corroborate that source A should be at least six times more luminous than source B.
We also demonstrate that the volume of high-temperature regions where complex organic molecules arise is sensitive to whether the total luminosity is in a single radiation source or distributed into two sources, affecting the interpretation of earlier chemical modeling efforts of the IRAS 16293-2422 hot corino which used a single-source approximation. Conclusions: Radiative transfer modeling of sources A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around sources A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale-height in order to reach the dust continuum peak emission value, while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B; LA ≈ 18 L⊙ and LB ≈ 3 L⊙.
NASA Astrophysics Data System (ADS)
Menon, Vikram; Fu, Qingxi; Janardhanan, Vinod M.; Deutschmann, Olaf
2015-01-01
High temperature co-electrolysis of H2O and CO2 offers a promising route for syngas (H2, CO) production via efficient use of heat and electricity. The performance of a SOEC during co-electrolysis is investigated by focusing on the interactions between transport processes and electrochemical parameters. Electrochemistry at the three-phase boundary is modeled by a modified Butler-Volmer approach that considers H2O electrolysis and CO2 electrolysis, individually, as electrochemically active charge transfer pathways. The model is independent of the geometrical structure. A 42-step elementary heterogeneous reaction mechanism for the thermo-catalytic chemistry in the fuel electrode, the dusty gas model (DGM) to account for multi-component diffusion through porous media, and a plug flow model for flow through the channels are used in the model. Two sets of experimental data are reproduced by the simulations, in order to deduce parameters of the electrochemical model. The influence of micro-structural properties, inlet cathode gas velocity, and temperature are discussed. Reaction flow analysis is performed, at OCV, to study methane production characteristics and kinetics during co-electrolysis. Simulations are carried out for configurations ranging from simple one-dimensional electrochemical button cells to quasi-two-dimensional co-flow planar cells, to demonstrate the effectiveness of the computational tool for performance and design optimization.
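The pathway-resolved Butler-Volmer idea can be sketched as follows: each charge-transfer pathway (H2O and CO2 electrolysis) carries its own exchange current density, and the pathway currents add at the three-phase boundary. All parameter values below are placeholders rather than the fitted values deduced in the paper:

```python
import numpy as np

# Standard Butler-Volmer current density for one charge-transfer pathway;
# two pathways are evaluated independently and summed.
F = 96485.0        # C/mol, Faraday constant
R = 8.314          # J/(mol K), gas constant

def butler_volmer(eta, i0, alpha_a=0.5, alpha_c=0.5, T=1073.0):
    """Current density (A/m^2) at activation overpotential eta (V)."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

eta = 0.15                                   # V, assumed overpotential
i_h2o = butler_volmer(eta, i0=2000.0)        # H2O pathway (placeholder i0)
i_co2 = butler_volmer(eta, i0=800.0)         # CO2 pathway (placeholder i0)
i_total = i_h2o + i_co2
print(f"total current density: {i_total:.0f} A/m^2")
```

In the full model these i0 values would themselves depend on local gas composition and microstructure, which is how the transport-electrochemistry coupling enters.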