Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A
2010-11-01
US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects aged 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk.
Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
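The rounding mechanism behind such misclassification can be sketched in a few lines. All coefficients, the point-to-risk lookup, and the example subject below are invented for illustration; they are not the actual Framingham or ATP-III parameters. The point is only that rounding continuous contributions into integer points can move a subject across a risk band:

```python
# Illustrative sketch: a continuous risk score vs. a point-based version of
# the same score. Every number here is hypothetical, not the real model.
import math

def classify(risk_pct):
    """Map a 10-year risk percentage onto the ATP-III bands."""
    if risk_pct < 10:
        return "moderate"          # <10%
    if risk_pct <= 20:
        return "moderately high"   # 10-20%
    return "high"                  # >20%

def continuous_risk(age, chol):
    # Hypothetical logistic-style score with continuous coefficients.
    score = 0.04 * age + 0.005 * chol - 5.7
    return 100.0 / (1.0 + math.exp(-score))   # percent

def point_based_risk(age, chol):
    # Each contribution is rounded to whole points, then the total is
    # mapped back to a risk percentage through a coarse lookup table.
    pts = round(0.04 * age) + round(0.005 * chol)
    lookup = {2: 5.0, 3: 9.0, 4: 14.0, 5: 22.0}
    return lookup.get(pts, 25.0)

age, chol = 62, 240
print(classify(continuous_risk(age, chol)), "vs", classify(point_based_risk(age, chol)))
```

For this made-up subject the continuous score lands just inside the 10-20% band while the rounded points fall back to the <10% band, which is exactly the kind of boundary-crossing disagreement the study quantifies at population scale.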
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user-specified threshold value, then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and those of the original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
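The mark-and-collapse idea can be sketched on a toy 2D edge mesh. This is our illustration, not the authors' code: the "size field" here is just edge length, and the collapse merges two vertices at the edge midpoint when the field value falls below the threshold.

```python
# Minimal sketch of mark-and-collapse defeaturing: edges whose size-field
# value is below a threshold are treated as irrelevant features and removed
# with an edge-collapse operator (union-find tracks merged vertices).

def size_field(v0, v1):
    # Stand-in for a geometry-based size field: plain edge length.
    return ((v0[0] - v1[0]) ** 2 + (v0[1] - v1[1]) ** 2) ** 0.5

def defeature(vertices, edges, threshold):
    verts = {i: p for i, p in enumerate(vertices)}
    remap = {i: i for i in verts}          # union-find parent pointers
    def root(i):
        while remap[i] != i:
            i = remap[i]
        return i
    for a, b in edges:
        a, b = root(a), root(b)
        if a != b and size_field(verts[a], verts[b]) < threshold:
            # Collapse: merge vertex b into a at the edge midpoint.
            verts[a] = ((verts[a][0] + verts[b][0]) / 2,
                        (verts[a][1] + verts[b][1]) / 2)
            remap[b] = a
    kept = {root(i) for i in remap}
    surviving = {(root(a), root(b)) for a, b in edges if root(a) != root(b)}
    return kept, surviving

# A tiny "sliver" feature: vertex 2 sits very close to vertex 1.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.01, 0.0), (1.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 3)]
kept, surviving = defeature(vertices, edges, threshold=0.1)
print(len(kept), len(surviving))
```

The 0.01-long sliver edge is collapsed, leaving three vertices and two edges; a production implementation would additionally check geometric and topological validity after each collapse, as the abstract emphasizes.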
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
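The sparse-coding step can be illustrated with a generic iterative soft-thresholding (ISTA) solver. The dictionary, data, and parameters below are synthetic stand-ins, not the SP-SSM implementation: the idea is only that an observed shape vector is represented by a few atoms (prior shapes) of a dictionary.

```python
# Sketch of sparse coding via ISTA: solve min ||D x - y||^2 + lam * ||x||_1
# so that shape y is expressed by a few dictionary atoms. Synthetic example.
import numpy as np

def ista(D, y, lam=0.1, iters=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(40, 10))              # dictionary of 10 prior shapes
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]               # the "true" sparse combination
y = D @ x_true                             # observed shape (mark-point vector)
x = ista(D, y, lam=0.01, iters=2000)
active = np.nonzero(np.abs(x) > 0.1)[0]    # indices of active atoms
print(active)
```

With a small penalty and a noiseless observation, ISTA recovers the two active atoms, mirroring how a sparse coefficient vector selects a handful of prior shapes to drive the deformable model.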
Non-Deterministic Modelling of Food-Web Dynamics
Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam
2014-01-01
A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245
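The chance-and-necessity principle can be sketched with a two-compartment toy (our simplification, not the Mullon et al. or replicated model): fluxes are drawn at random each step (chance) and kept only if they satisfy structural constraints such as non-negative biomass and a cap on how much of a prey stock can be consumed (necessity).

```python
# Toy non-deterministic food-web step: random fluxes + constraint rejection.
import random

random.seed(1)
biomass = {"plankton": 100.0, "fish": 20.0}
history = []
for year in range(50):
    while True:
        flux = random.uniform(0.0, 30.0)      # chance: random predation flux
        growth = random.uniform(0.0, 25.0)    # chance: random primary growth
        new_plankton = biomass["plankton"] + growth - flux
        new_fish = biomass["fish"] + 0.2 * flux - 0.1 * biomass["fish"]
        # necessity: structural constraints reject impossible states
        if flux <= 0.5 * biomass["plankton"] and new_plankton > 0 and new_fish > 0:
            break
    biomass = {"plankton": new_plankton, "fish": new_fish}
    history.append(biomass["fish"])
print(min(history) > 0, max(history) < 1000)
</n```

Even with purely random fluxes, the constraint set alone keeps the trajectories bounded and positive, which is the flavor of the claim that observed variability patterns can emerge from structural constraints rather than deterministic dynamics.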
ERIC Educational Resources Information Center
Raven, Bertram H.
The history and background of the analysis of the bases of power are examined, beginning with their origins in the works of Kurt Lewin and his followers at the Research Center for Group Dynamics. The original French and Raven (1959) bases of power model posited six bases of power: reward, coercion, legitimate, expert, referent, and informational (or…
Assessment of Energy Efficient and Model Based Control
2017-06-15
ARL-TR-8042 ● JUNE 2017. US Army Research Laboratory. Assessment of Energy-Efficient and Model-Based Control, by Craig Lennon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emond, Claude, E-mail: claude.emond@biosmc.com
Chlorinated dibenzo-p-dioxins (CDDs) are a series of mono- to octa-chlorinated homologous chemicals commonly referred to as polychlorinated dioxins. One of the most potent, well-known, and persistent members of this family is 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). As part of translational research to make computerized models accessible to health risk assessors, we present a Berkeley Madonna recoded version of the human physiologically based pharmacokinetic (PBPK) model used by the U.S. Environmental Protection Agency (EPA) in the recent dioxin assessment. This model incorporates CYP1A2 induction, which is an important metabolic vector that drives dioxin distribution in the human body, and it uses a variable elimination half-life that is body-burden dependent. To evaluate the model accuracy, the recoded model predictions were compared with those of the original published model. The simulations performed with the recoded model matched well with those of the original model. The recoded model was then applied to available data sets of real-life exposure studies. The recoded model can describe acute and chronic exposures and can be useful for interpreting human biomonitoring data as part of an overall dioxin and/or dioxin-like compounds risk assessment. - Highlights: • The best available dioxin PBPK model for interpreting human biomonitoring data is presented. • The original PBPK model was recoded from acslX to the Berkeley Madonna (BM) platform. • Comparisons were made of the accuracy of the recoded model with the original model. • The model is a useful addition to the ATSDR's BM-based PBPK toolkit that supports risk assessors. • The application of the model to real-life exposure data sets is illustrated.
Dynamic Emulation Modelling (DEMo) of large physically-based environmental models
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2012-12-01
In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactorily solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between the input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships with the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. 
optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. Numerical results from the application of the approach to the reduction of 3D coupled hydrodynamic-ecological models in several real world case studies, including Marina Reservoir (Singapore) and Googong Reservoir (Australia), are illustrated.
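The core idea of a dynamic emulator can be sketched in a few lines (our illustration, not the paper's algorithm): fit a cheap low-order state-space model x_{t+1} = a*x_t + b*u_t to input-output data generated by an expensive "physically-based" simulator, then use the emulator in its place.

```python
# Sketch of a data-driven dynamic emulator: least-squares fit of a linear
# state-space surrogate to trajectories from a (here, toy) nonlinear model.
import numpy as np

def original_model(x, u):
    # Stand-in for an expensive physically-based simulator.
    return 0.8 * x + 0.3 * u + 0.01 * x ** 2

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)                 # forcing inputs
x = np.zeros(201)
for t in range(200):
    x[t + 1] = original_model(x[t], u[t])   # "expensive" simulation run

# Least-squares fit of the emulator parameters (a, b).
A = np.column_stack([x[:-1], u])
a, b = np.linalg.lstsq(A, x[1:], rcond=None)[0]

# One-step predictions from the cheap emulator vs. the original model.
pred = a * x[:-1] + b * u
rmse = np.sqrt(np.mean((pred - x[1:]) ** 2))
print(rmse < 0.05)
```

Because the emulator keeps an explicit state, it can be rolled forward in optimization or data-assimilation loops, and its fitted coefficients remain physically interpretable, which is the advantage of the state-space (rather than static input-output) formulation the authors argue for.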
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models.
Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
Implementing Set Based Design into Department of Defense Acquisition
2016-12-01
challenges for the DOD. This report identifies the original SBD principles and characteristics based on Toyota Motor Corporation's Set-Based Concurrent Engineering Model. Additionally, the team reviewed DOD case studies that implemented SBD. The SBD principles, along with the common themes from the...
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels was presented in this paper. These systems can be regarded as combinations of a fast model and a slow model whose response speeds lie on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix by plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. Then a decentralized model predictive controller was designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
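The time-scale separation that justifies the singular-perturbation split can be seen with two first-order modes (an illustrative numeric sketch, not the paper's controller design): after a few fast time constants, the fast mode has essentially settled while the slow mode has barely moved.

```python
# Two-time-scale illustration: step responses of a fast and a slow
# first-order transfer function 1/(tau*s + 1) evaluated at the same time.
import math

def step_response(tau, t):
    # Unit step response of 1/(tau*s + 1).
    return 1.0 - math.exp(-t / tau)

tau_fast, tau_slow = 0.5, 50.0
t = 2.0                                  # four fast time constants
y_fast = step_response(tau_fast, t)
y_slow = step_response(tau_slow, t)
# The gap between the two responses is what lets each sub-model get its own
# predictive controller at an appropriate sampling rate.
print(y_fast > 0.95, y_slow < 0.05)
```

At t = 2 s the fast channel has completed over 95% of its transient while the slow channel has completed under 5%, so treating the fast dynamics as quasi-instantaneous in the slow controller (and the slow state as frozen in the fast one) introduces little error.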
NASA Astrophysics Data System (ADS)
Tessera, Marc
2017-03-01
The search for the origin of 'life' is made even more complicated by differing definitions of the subject matter, although a general consensus is that an appropriate definition should center on Darwinian evolution (Cleland and Chyba 2002). Within a physical approach which has been defined as a level-4 evolution (Tessera and Hoelzer 2013), one mechanism can be described showing that only three conditions are required to allow natural selection to apply to populations of different system lineages. This approach leads to a vesicle-based model with the necessary properties. Of course, such a model has to be tested. Thus, after a brief presentation of the model, an experimental program is proposed that implements the different steps able to show whether this new direction of research in the field is valid and workable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Yunyun; Li Zhenhua; Song Yang
2009-05-01
An extended model of the original Gladstone-Dale (G-D) equation is proposed for optical computerized tomography (OCT) diagnosis of flame flow fields. To verify the newly established model, propane combustion is used as a practical example for experiment, and moire deflection tomography is introduced with a probe wavelength of 808 nm. The results indicate that the temperature based on the extended model is more accurate than that based on the original G-D equation. In short, the extended model is suitable for flame flow fields of any composition, temperature, and degree of ionization.
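The classic Gladstone-Dale relation that the paper extends links refractive index to gas density, n - 1 = K * rho; combining it with the ideal gas law rho = P*M/(R*T) gives T = K*P*M/(R*(n - 1)). A minimal sketch for air at atmospheric pressure (the paper's extended model adds composition and ionization corrections not reproduced here):

```python
# Temperature from refractive index via the classic Gladstone-Dale relation
# and the ideal gas law:  T = K * P * M / (R * (n - 1)).
R = 8.314       # J/(mol K), universal gas constant
K = 2.26e-4     # m^3/kg, approximate Gladstone-Dale constant of air
P = 101325.0    # Pa, atmospheric pressure
M = 0.029       # kg/mol, approximate molar mass of air

def temperature_from_n(n):
    return K * P * M / (R * (n - 1.0))

T = temperature_from_n(1.000270)   # refractive index typical of ambient air
print(round(T))
```

An ambient-air refractive index of about 1.00027 maps to roughly room temperature, which is the sanity check; in a flame, (n - 1) shrinks as the gas heats and rarefies, which is what the deflection tomography measures.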
Introductory Biology Students’ Conceptual Models and Explanations of the Origin of Variation
Shaw, Neil; Momsen, Jennifer; Reinagel, Adam; Le, Paul; Taqieddin, Ranya; Long, Tammy
2014-01-01
Mutation is the key molecular mechanism generating phenotypic variation, which is the basis for evolution. In an introductory biology course, we used a model-based pedagogy that enabled students to integrate their understanding of genetics and evolution within multiple case studies. We used student-generated conceptual models to assess understanding of the origin of variation. By midterm, only a small percentage of students articulated complete and accurate representations of the origin of variation in their models. Targeted feedback was offered through activities requiring students to critically evaluate peers’ models. At semester's end, a substantial proportion of students significantly improved their representation of how variation arises (though one-third still did not include mutation in their models). Students’ written explanations of the origin of variation were mostly consistent with their models, although less effective than models in conveying mechanistic reasoning. This study contributes evidence that articulating the genetic origin of variation is particularly challenging for learners and may require multiple cycles of instruction, assessment, and feedback. To support meaningful learning of the origin of variation, we advocate instruction that explicitly integrates multiple scales of biological organization, assessment that promotes and reveals mechanistic and causal reasoning, and practice with explanatory models with formative feedback. PMID:25185235
AveBoost2: Boosting for Noisy Data
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.
2004-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads us to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the same data, both as originally supplied and with added label noise: a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than the original data.
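The contrast between the two distribution updates can be sketched numerically (an illustration of the averaging idea, not Oza's exact algorithm): AdaBoost reweights examples using only the latest model's mistakes, while AveBoost averages the new distribution with the running one, damping the weight growth on persistently misclassified (often noisy) examples.

```python
# AdaBoost-style reweighting vs. an averaged (AveBoost-style) distribution.
import numpy as np

def adaboost_update(d, mistakes, eps):
    # Standard update: scale correctly classified examples by eps/(1-eps),
    # so weight concentrates on the mistakes; then renormalize.
    beta = eps / (1 - eps)
    d = d * np.where(mistakes, 1.0, beta)
    return d / d.sum()

def aveboost_update(d_prev, d_new, t):
    # Running average of distributions after t models have been built.
    d = (t * d_prev + d_new) / (t + 1)
    return d / d.sum()

d = np.full(4, 0.25)                                # uniform start, 4 examples
mistakes = np.array([True, False, False, False])    # one (possibly noisy) miss
ada = adaboost_update(d, mistakes, eps=0.25)
ave = aveboost_update(d, ada, t=1)
print(ada[0], ave[0])
```

After one round the pure AdaBoost update doubles the mistake's weight to 0.5, while the averaged distribution moves it only to 0.375; the regularizing effect the abstract describes is this slower concentration of mass on hard (or mislabeled) points.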
Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering
2014-06-01
LIST OF ACRONYMS AND ABBREVIATIONS: BOM, Base Object Model; BPMN, Business Process Model & Notation; DOD...SysML. There are many variants such as the Unified Profile for DODAF/MODAF (UPDM) and Business Process Model & Notation (BPMN) that have origins in
'For the benefit of the people': the Dutch translation of the Fasciculus medicinae, Antwerp 1512.
Coppens, Christian
2009-01-01
The article deals with the Dutch translation of the Fasciculus medicinae based on the Latin edition, Venice 1495, with the famous woodcuts created in 1494 for the Italian translation of the original Latin edition of 1491. The woodcuts are compared with the Venetian model. New features in the Antwerp edition include the Skeleton and the Zodiac Man, both originally based on German models. The text also deals with other woodcuts in the Low Countries based on these Venetian illustrations. The Appendices provide a short-title catalog of all the editions and translations based on the Venetian edition and a stemma.
Study on Fluid-solid Coupling Mathematical Models and Numerical Simulation of Coal Containing Gas
NASA Astrophysics Data System (ADS)
Xu, Gang; Hao, Meng; Jin, Hongwei
2018-02-01
Based on coal seam gas migration theory under multi-physics field coupling effects, a fluid-solid coupling model of coal seam gas was built using elastic mechanics, fluid mechanics in porous media, and the effective stress principle. Gas seepage behavior under different original gas pressures was simulated. Results indicated that residual gas pressure, gas pressure gradient, and gas flow were all larger when the original gas pressure was higher. The coal permeability distribution decreased exponentially when the original gas pressure was lower than the critical pressure. Coal permeability first decreased rapidly and then increased slowly when the original pressure was higher than the critical pressure.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows one to determine when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single-sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
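The normative comparison can be sketched with one plausible integration model, independent-cue probability summation (the framework considers several models; this one and the 2% tolerance band are our illustrative assumptions): predict fused-display detection from the two single-sensor rates, then grade observed fused performance against that prediction.

```python
# Grade fused-display performance against an independent-cue prediction.

def predicted_fused(p1, p2):
    # Probability that at least one of two independent cues supports detection.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def grade(p_obs, p1, p2):
    p_model = predicted_fused(p1, p2)
    if p_obs < max(p1, p2):
        return "interference"      # worse than the best single sensor
    if abs(p_obs - p_model) <= 0.02:
        return "optimal"           # matches the model prediction
    if p_obs < p_model:
        return "sub-optimal"       # better than either sensor, below model
    return "super-optimal"         # above model: emergent fused features

print(grade(0.93, 0.70, 0.80))     # prediction is 1 - 0.3*0.2 = 0.94
```

An observed fused rate of 0.93 against single-sensor rates of 0.70 and 0.80 sits within the tolerance of the 0.94 prediction and is graded optimal; rates below 0.80 or well above 0.94 would land in the interference and super-optimal categories respectively.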
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
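The geometric side of such an adjustment can be sketched with the spherical-cap relation (an illustration of the geometry analysis idea, not the authors' scheme): for a droplet that is a circular cap, the contact angle follows from the base half-width r and apex height h as theta = 2*atan(h/r), so a target angle can be stated purely geometrically, independent of surface tension.

```python
# Contact angle of a circular-cap droplet from its base half-width and height:
#   h / r = tan(theta / 2)  =>  theta = 2 * atan(h / r)
import math

def contact_angle_deg(r, h):
    return math.degrees(2.0 * math.atan2(h, r))

print(round(contact_angle_deg(1.0, 1.0)))   # hemispherical cap
```

A cap whose height equals its base half-width is a hemisphere, giving 90 degrees; in a lattice Boltzmann simulation the same relation can be inverted to check whether the simulated droplet actually sits at the prescribed angle.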
"Shape function + memory mechanism"-based hysteresis modeling of magnetorheological fluid actuators
NASA Astrophysics Data System (ADS)
Qian, Li-Jun; Chen, Peng; Cai, Fei-Long; Bai, Xian-Xu
2018-03-01
A hysteresis model based on "shape function + memory mechanism" is presented and its feasibility is verified through modeling the hysteresis behavior of a magnetorheological (MR) damper. A hysteresis phenomenon in a resistor-capacitor (RC) circuit is first presented and analyzed. In the hysteresis model, the "memory mechanism" originating from the charging and discharging processes of the RC circuit is constructed by adopting a virtual displacement variable and updating laws for the reference points. The "shape function" is obtained and generalized from analytical solutions of the simple semi-linear Duhem model. With this approach, the memory mechanism reveals the essence of the specific Duhem model, and the general shape function provides a direct and clear means to fit the hysteresis loop. Within the structure of a "restructured phenomenological model", the original hysteresis operator, i.e., the Bouc-Wen operator, is replaced with the new hysteresis operator. Comparison with the Bouc-Wen operator-based model demonstrates the new hysteresis operator-based model's superior computational efficiency and comparable accuracy.
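The RC hysteresis the model starts from is easy to reproduce numerically (our illustration): drive a first-order RC stage with a triangular input; because the capacitor voltage lags the input, the (input, output) curve traces a loop whose up-sweep and down-sweep branches differ, i.e. the output "remembers" the direction of the sweep.

```python
# First-order RC stage under a triangular input: the lag between input and
# capacitor voltage produces a hysteresis loop (different up/down branches).
tau, dt = 0.5, 0.01
v = 0.0
up, down = {}, {}
for k in range(400):
    t = k * dt
    u = t if t < 2.0 else 4.0 - t          # triangular input: 0 -> 2 -> 0
    v += dt * (u - v) / tau                # explicit Euler step of RC dynamics
    branch = up if t < 2.0 else down
    branch[round(u, 2)] = v                # record output vs. input level
# At the same input level, the two branches give different outputs.
gap = down[1.0] - up[1.0]
print(gap > 0.5)
```

At input level 1.0 the charging branch lags well below the discharging branch; this branch-dependent response is the "memory" that the virtual displacement variable and reference-point updating laws formalize.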
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao; Wang, Yuanzhong
2018-01-15
Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. Boletus edulis mushroom is a well-known food resource in the world. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared (FT-MIR) spectroscopy) were applied for the origin traceability of 192 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated by each single-sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensory and comprehensive origin traceability of B. edulis mushrooms.
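The two-step validation workflow can be sketched with synthetic data (our illustration: a nearest-centroid classifier stands in for PLS-DA/GS-SVM, and random vectors stand in for the fused ICP-AES/UV-Vis/FT-MIR features):

```python
# Two-step evaluation: internal cross-validation, then external prediction
# on held-out "unknown" samples, for a simple origin classifier.
import numpy as np

rng = np.random.default_rng(0)
# Two geographic origins, 40 samples each, 20 fused spectral features.
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)), rng.normal(1.5, 1.0, (40, 20))])
y = np.array([0] * 40 + [1] * 40)

def fit(X, y):
    # Nearest-centroid "model": one mean spectrum per origin class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    cs = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in cs])
    return np.array(cs)[d.argmin(axis=0)]

# Step 1: internal 5-fold cross-validation.
idx = rng.permutation(len(y))
accs = []
for fold in np.array_split(idx, 5):
    train = np.setdiff1d(idx, fold)
    model = fit(X[train], y[train])
    accs.append((predict(model, X[fold]) == y[fold]).mean())

# Step 2: external prediction on fresh "unknown" samples.
X_new = np.vstack([rng.normal(0.0, 1.0, (5, 20)), rng.normal(1.5, 1.0, (5, 20))])
y_new = np.array([0] * 5 + [1] * 5)
ext = (predict(fit(X, y), X_new) == y_new).mean()
print(np.mean(accs) > 0.9, ext > 0.9)
```

With well-separated class means the classifier clears 90% accuracy in both steps; the real study's point is that fusing three instruments produces exactly this kind of separability between geographic origins.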
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao
2018-01-01
Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. Boletus edulis mushroom is a well-known food resource in the world. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared (FT-MIR) spectroscopy) were applied for the origin traceability of 184 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated by each single-sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensory and comprehensive origin traceability of B. edulis mushrooms. PMID:29342969
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations representing primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated neural and fMRI activity equivalent to that of the original task-based models. 
Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors. PMID:27536235
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
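The core of such a Monte Carlo uncertainty analysis — sample uncertain parameters, push each sample through the flow calculation, summarize the spread in travel times — can be sketched in a few lines. The lognormal conductivity distribution, gradient, porosity, and path length below are illustrative placeholders, not values from the regional model described above.

```python
import random
import statistics

def darcy_travel_time(K, gradient=0.01, length=1000.0, porosity=0.3):
    """Advective travel time (days) along a flow path.
    Darcy flux q = K * gradient; seepage velocity v = q / porosity."""
    v = K * gradient / porosity        # m/day
    return length / v                  # days

def monte_carlo_travel_times(n=1000, seed=42):
    """Sample hydraulic conductivity K (m/day) from a lognormal
    distribution and propagate it through the travel-time calculation."""
    rng = random.Random(seed)
    return [darcy_travel_time(rng.lognormvariate(0.0, 1.0)) for _ in range(n)]

times = monte_carlo_travel_times()
spread = (min(times), statistics.median(times), max(times))
```

In the study itself the forward model is a full steady-state groundwater simulation with particle tracking, not a single Darcy expression; the sketch only shows how parameter variability maps to travel-time variability.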
Original analytic solution of a half-bridge modelled as a statically indeterminate system
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela
2016-12-01
The paper presents an original computer-based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated either by discretizing the bending moment diagram into trapezoids or by using the relations specific to polygons. A second algorithm, based on the method of initial parameters, is also presented. Analyzing the new solution, we came to the conclusion that most of the computer code developed for the other model may be reused. The results are useful for evaluating the behavior of the structure and for comparison with the results of finite element models.
Beyond the Central Dogma: Model-Based Learning of How Genes Determine Phenotypes
ERIC Educational Resources Information Center
Reinagel, Adam; Speth, Elena Bray
2016-01-01
In an introductory biology course, we implemented a learner-centered, model-based pedagogy that frequently engaged students in building conceptual models to explain how genes determine phenotypes. Model-building tasks were incorporated within case studies and aimed at eliciting students' understanding of 1) the origin of variation in a population…
Stock price forecasting based on time series analysis
NASA Astrophysics Data System (ADS)
Chi, Wan Le
2018-05-01
Using historical stock price data to set up a time series model that explains the intrinsic relationships in the data, the future stock price can be forecasted. The models used are the autoregressive (AR) model, the moving-average (MA) model, and the autoregressive moving-average (ARMA) model. A unit root test was applied to the original data sequence to judge whether it was stationary. A non-stationary original sequence required further processing by first-order differencing, after which the stationarity of the differenced sequence was re-inspected; if it was still non-stationary, second-order differencing was carried out. Autocorrelation and partial autocorrelation diagrams were used to identify the parameters of the ARMA model, including the model coefficients and the model order. Finally, the model was used to forecast the Shanghai Composite Index daily closing price. Results showed that the non-stationary original data series was stationary after second-order differencing. The forecast values of the Shanghai Composite Index daily closing price were close to the actual values, indicating that the ARMA model in the paper achieved a certain accuracy.
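The differencing-then-fit pipeline described above can be sketched without any statistics library; a minimal illustration with plain-Python differencing and a least-squares AR(1) fit follows (a real analysis would use a full ARMA estimator, e.g. one from statsmodels, plus a formal unit root test).

```python
def difference(series, order=1):
    """Apply first-order differencing `order` times to a sequence."""
    for _ in range(order):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def fit_ar1(series):
    """Least-squares fit of an AR(1) model x_t = c + phi * x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    return my - phi * mx, phi

def forecast(series, c, phi, steps=5):
    """Iterate the fitted AR(1) recursion forward from the last value."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

On the geometric series [1, 0.5, 0.25, ...] the fit recovers phi = 0.5 and c = 0 exactly, since each value is half its predecessor.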
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
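The eligibility-trace machinery underlying these models can be sketched as a standard SARSA(λ) update. In the eligibility adjustment model, the trace decay `lam` would itself be weighted by the environmental model; that modulation is not implemented here, so this is a generic sketch rather than the authors' fitted model.

```python
from collections import defaultdict

def td_lambda_step(q, traces, s, a, reward, s_next, a_next,
                   alpha=0.1, gamma=0.95, lam=0.9):
    """One SARSA(lambda) step. The eligibility trace spreads the TD error
    back to recently visited state-action pairs; setting lam = 0 recovers
    plain one-step model-free TD learning."""
    delta = reward + gamma * q[(s_next, a_next)] - q[(s, a)]
    traces[(s, a)] += 1.0
    for key in list(traces):
        q[key] += alpha * delta * traces[key]
        traces[key] *= gamma * lam
    return delta

# Two transitions of a hypothetical two-stage trial: the stage-2 reward
# is credited back to the stage-1 choice through the decayed trace.
q, traces = defaultdict(float), defaultdict(float)
td_lambda_step(q, traces, "stage1", "left", 0.0, "stage2", "right")
td_lambda_step(q, traces, "stage2", "right", 1.0, "stage1", "left")
```

After the rewarded second step, the stage-1 action's value rises by alpha * delta * gamma * lam, illustrating how the trace parameter controls how far back credit propagates.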
NASA Astrophysics Data System (ADS)
Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.
2009-10-01
In this paper we compare and contrast the crack growth rate of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio and stress-intensity range but did not compensate for the fatigue threshold of the material. As additional test data on both plate and tube product became available the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data that were above 1 × 10 -11 m/s. However, the test data at the lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, this model predicted no operating stress below which the material would effectively undergo fatigue crack growth. Because of an over-prediction of the growth rate below 1 × 10 -11 m/s, due to a combination of low stress, small crack size and long rise-time, the model in general leads to an under-prediction of the total available life of the components.
The origin and evolution of model organisms
NASA Technical Reports Server (NTRS)
Hedges, S. Blair
2002-01-01
The phylogeny and timescale of life are becoming better understood as the analysis of genomic data from model organisms continues to grow. As a result, discoveries are being made about the early history of life and the origin and development of complex multicellular life. This emerging comparative framework and the emphasis on historical patterns is helping to bridge barriers among organism-based research communities.
Midtvedt, Daniel; Croy, Alexander
2016-06-10
We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations we find that the simplified model yields Young's moduli that are smaller compared to the original model and are almost a factor of two smaller than ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.
Responder analysis without dichotomization.
Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li
2016-01-01
In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
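For a single normally distributed clinical measurement, the model-based responder proportion has a closed form: P(Y > cutoff) evaluated at estimates of the mean and standard deviation. The sketch below shows only this simplest case (one continuous variable, one cutoff), not the longitudinal model used in the paper; the numeric values are illustrative.

```python
import math

def responder_proportion(mu, sigma, cutoff):
    """P(Y > cutoff) for Y ~ N(mu, sigma^2): the responder proportion
    implied by the model for the original continuous variable, evaluated
    at (e.g. maximum likelihood) estimates of mu and sigma."""
    z = (cutoff - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Estimated treatment difference on the responder scale, with no
# subject-level dichotomization (illustrative parameter values):
diff = responder_proportion(2.0, 1.0, 1.5) - responder_proportion(1.0, 1.0, 1.5)
```

Because the proportion is a smooth function of the model parameters, it inherits the efficiency of the maximum likelihood estimates instead of discarding information at the cutoff.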
A new RISE-based adaptive control of PKMs: design, stability analysis and experiments
NASA Astrophysics Data System (ADS)
Bennehar, M.; Chemori, A.; Bouri, M.; Jenni, L. F.; Pierrot, F.
2018-03-01
This paper deals with the development of a new adaptive control scheme for parallel kinematic manipulators (PKMs) based on robust integral of the sign of the error (RISE) control theory. The original RISE control law is based only on state feedback and does not take advantage of the modelled dynamics of the manipulator. Consequently, the overall performance of the resulting closed-loop system may be poor compared to modern advanced model-based control strategies. We propose in this work to extend RISE by including the nonlinear dynamics of the PKM in the control loop to improve its overall performance. More precisely, we augment the original RISE control scheme with a model-based adaptive control term to account for the inherent nonlinearities in the closed-loop system. To demonstrate the relevance of the proposed controller, real-time experiments are conducted on the Delta robot, a three-degree-of-freedom (3-DOF) PKM.
Three hybridization models based on local search scheme for job shop scheduling problem
NASA Astrophysics Data System (ADS)
Balbi Fraga, Tatiana
2015-05-01
This work presents three different hybridization models based on the general schema of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may already have been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Taboo Search and Particle Swarm Optimization. In addition, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to find better results than those found by the single Taboo Search.
Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E
2014-09-23
Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.
A Bayesian spawning habitat suitability model for American shad in southeastern United States rivers
Hightower, Joseph E.; Harris, Julianne E.; Raabe, Joshua K.; Brownell, Prescott; Drew, C. Ashton
2012-01-01
Habitat suitability index models for American shad Alosa sapidissima were developed by Stier and Crance in 1985. These models, which were based on a combination of published information and expert opinion, are often used to make decisions about hydropower dam operations and fish passage. The purpose of this study was to develop updated habitat suitability index models for spawning American shad in the southeastern United States, building on the many field and laboratory studies completed since 1985. We surveyed biologists who had knowledge about American shad spawning grounds, assembled a panel of experts to discuss important habitat variables, and used raw data from published and unpublished studies to develop new habitat suitability curves. The updated curves are based on resource selection functions, which can model habitat selectivity based on use and availability of particular habitats. Using field data collected in eight rivers from Virginia to Florida (Mattaponi, Pamunkey, Roanoke, Tar, Neuse, Cape Fear, Pee Dee, St. Johns), we obtained new curves for temperature, current velocity, and depth that were generally similar to the original models. Our new suitability function for substrate was also similar to the original pattern, except that sand (optimal in the original model) has a very low estimated suitability. The Bayesian approach that we used to develop habitat suitability curves provides an objective framework for updating the model as new studies are completed and for testing the model's applicability in other parts of the species' range.
[Case finding in early prevention networks - a heuristic for ambulatory care settings].
Barth, Michael; Belzer, Florian
2016-06-01
One goal of early prevention is the support of families with small children up to three years of age who are exposed to psychosocial risks. The identification of these cases is often complex and not well-directed, especially in the ambulatory care setting. The aim was the development of a feasible, empirically based strategy for case finding in ambulatory care. Based on the risk factors of postpartal depression, lack of maternal responsiveness, parental stress with regulation disorders, and poverty, a lexicographic, non-compensatory heuristic model with simple decision rules was constructed and empirically tested. For this purpose, the original data set from an evaluation of the pediatric documentation form on psychosocial issues of families with small children in well-child visits was reanalyzed. The first diagnostic step in the non-compensatory, hierarchical classification process is the assessment of postpartal depression, followed by maternal responsiveness, parental stress, and poverty. The classification model identifies 89.0% of the cases from the original study. Compared to the original study, the decision process becomes clearer and more concise. The evidence-based and data-driven model exemplifies a strategy for the assessment of psychosocial risk factors in ambulatory care settings. It is based on four evidence-based risk factors and offers a quick and reliable classification. A further advantage of this model is that once a risk factor is identified the diagnostic procedure stops and the counselling process can commence. For further validation of the model, studies in well-suited early prevention networks are needed.
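The stop-at-first-hit logic of such a lexicographic, non-compensatory heuristic can be sketched in a few lines. The dictionary keys and boolean screening results below are illustrative stand-ins, not the instrument items used in the study.

```python
def classify_family(screen):
    """Lexicographic, non-compensatory case finding: risk factors are
    checked in a fixed order, and the process stops at the first factor
    present, at which point counselling can commence."""
    order = ["postpartal_depression", "low_maternal_responsiveness",
             "parental_stress", "poverty"]
    for factor in order:
        if screen.get(factor, False):
            return ("case", factor)
    return ("no_case", None)
```

Because later factors can never override an earlier hit, the ordering of the checks encodes the clinical priority of the risk factors.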
Bearing capacity analysis and design of highway base materials reinforced with geofabrics.
DOT National Transportation Integrated Search
2005-06-01
The primary objective of this study was to develop and implement mathematical bearing capacity models originally proposed by Hopkins (1988, 1991) and Slepak and Hopkins (1993; 1995). These advanced models, which are based on limit equilibrium and are...
Adjacent-Categories Mokken Models for Rater-Mediated Assessments
Wind, Stefanie A.
2016-01-01
Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
NASA Astrophysics Data System (ADS)
Lai, J.-S.; Tsai, F.; Chiang, S.-H.
2016-06-01
This study implements a data mining-based algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence, and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study: aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads, and soil data. The landslide inventory and vector-based causative factors were transformed into pixel-based format in order to overlay them with the other raster data for constructing the random forests based model. This study also uses original and edited topographic data in the analysis to understand their impacts on the susceptibility modelling. Experimental results demonstrate that after identifying the run-out signatures, the overall accuracy and Kappa coefficient reach more than 85 % and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
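The transformation from co-registered raster layers into the per-pixel feature vectors consumed by a pixel-based classifier can be sketched as follows. The layer contents are made-up placeholders; in practice the resulting X and y would be passed to a random forests implementation such as scikit-learn's RandomForestClassifier.

```python
def rasters_to_samples(layers, labels):
    """Stack co-registered raster layers (2-D grids of causative-factor
    values) into per-pixel feature vectors, paired with per-pixel
    landslide labels, for a pixel-based classifier."""
    rows, cols = len(labels), len(labels[0])
    X, y = [], []
    for r in range(rows):
        for c in range(cols):
            X.append([layer[r][c] for layer in layers])
            y.append(labels[r][c])
    return X, y

# Two hypothetical 2x2 causative-factor layers (e.g. slope and NDVI):
X, y = rasters_to_samples([[[1, 2], [3, 4]], [[5, 6], [7, 8]]],
                          [[0, 1], [1, 0]])
```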
Lee-Carter state space modeling: Application to the Malaysia mortality data
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-06-01
This article presents an approach that formalizes the Lee-Carter (LC) model as a state space model. Maximum likelihood via the Expectation-Maximization (EM) algorithm was used to estimate the model. The methodology is applied to Malaysia's total population mortality data, modeled using age-specific death rates (ASDR) from 1971-2009. The fitted ASDR are compared to the actual observed values. The comparison shows that the fitted values from the LC-SS model and the original LC model are quite close, and there is little difference between the two models in root mean squared error (RMSE) and Akaike information criterion (AIC). The LC-SS model estimated in this study can be extended for forecasting ASDR in Malaysia; the accuracy of the LC-SS model relative to the original LC model can then be further examined by verifying the forecasting power in an out-of-sample comparison.
NASA Astrophysics Data System (ADS)
Munahefi, D. N.; Waluya, S. B.; Rochmad
2018-03-01
The purpose of this research was to identify the effectiveness of Problem Based Learning (PBL) models based on Self Regulated Learning (SRL) on the ability of mathematical creative thinking, and to analyze the mathematical creative thinking ability of high school students in solving mathematical problems. The population of this study was students of grade X SMA N 3 Klaten. The research method used was sequential explanatory. In the quantitative stage, with a simple random sampling technique, two classes were selected randomly: the experimental class was taught with the PBL model based on SRL, and the control class was taught with an expository model. The sample selection in the qualitative stage used a non-probability sampling technique in which 3 students were selected from each of the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students at the low academic level achieved the aspects of fluency and flexibility. Students at the medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well structured. Students at the high academic level could reach the aspect of originality.
Potential formulation of sleep dynamics
NASA Astrophysics Data System (ADS)
Phillips, A. J. K.; Robinson, P. A.
2009-02-01
A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
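The reduced model's core behavior can be illustrated with a toy quartic two-well potential. The coefficients below are arbitrary, chosen only to show the state relaxing into one of two wells (the analogue of the wake and sleep states) and a tilt annihilating one well, not the fitted parameters of the paper.

```python
def quartic_force(x, a=1.0, tilt=0.0):
    """-dV/dx for V(x) = x**4/4 - a*x**2/2 + tilt*x.
    For tilt = 0 the two minima sit at x = +/- sqrt(a)."""
    return -(x ** 3 - a * x + tilt)

def relax(x0, tilt, steps=2000, dt=0.01):
    """Overdamped Euler integration: the state slides into a well."""
    x = x0
    for _ in range(steps):
        x += dt * quartic_force(x, tilt=tilt)
    return x

# From near the unstable hilltop, a small bias picks the well; a large
# enough tilt removes one well entirely (a saddle-node bifurcation).
wake_like = relax(0.1, 0.0)    # settles near +1
sleep_like = relax(-0.1, 0.0)  # settles near -1
```

In the paper the tilt plays the role of the slowly varying circadian and homeostatic drives, which is what carries the state back and forth between the two wells.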
CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge
ERIC Educational Resources Information Center
Andjelic, Svetlana; Cekerevac, Zoran
2014-01-01
This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
NASA Astrophysics Data System (ADS)
Li, Ning; Wang, Yan; Xu, Kexin
2006-08-01
Combining Fourier transform infrared (FTIR) spectroscopy with three kinds of pattern recognition techniques, 53 traditional Chinese medicine danshen samples were rapidly discriminated according to geographical origin. The results showed that discrimination using FTIR spectroscopy was feasible, as ascertained by principal component analysis (PCA). An effective model was built by employing Soft Independent Modeling of Class Analogy (SIMCA) together with PCA, and 82% of the samples were discriminated correctly. Using an artificial neural network (ANN) with back-propagation (BP), the origins of danshen were completely classified.
Development of the AFRL Aircrew Perfomance and Protection Data Bank
2007-12-01
DCS risk prediction based on the combined Bubble Growth model and statistical model of hypobaric chamber simulations was integrated into the Data Bank. It offers a quick and readily accessible online DCS risk assessment tool. ADRAC, which is used for DCS prediction instead of the original model, is based on more than 20 years of hypobaric chamber studies using human subjects.
Cells of origin of ovarian cancer: ovarian surface epithelium or fallopian tube?
Klotz, Daniel Martin; Wimberger, Pauline
2017-12-01
Ovarian cancer is the fifth most common cancer in women and one of the leading causes of death from gynecological malignancies. Despite its clinical importance, ovarian tumorigenesis is poorly understood and prognosis remains poor. This is particularly true for the most common type of ovarian cancer, high-grade serous ovarian cancer. Two models are considered: that it arises from the ovarian surface epithelium, or that it arises from the fallopian tube. The first model is based on (1) the pro-inflammatory environment caused by ovulation events, (2) the expression pattern of ovarian inclusion cysts, and (3) biomarkers that are shared by the ovarian surface epithelium and malignant growth. The model suggesting a non-ovarian origin is based on (1) tubal precursor lesions, (2) genetic evidence from BRCA1/2 mutation carriers, and (3) recent animal studies. Neither model has clearly demonstrated superiority over the other. Therefore, one can speculate that high-grade serous ovarian cancer may arise from two different sites that undergo similar changes. Both tissues are derived from the same embryologic origin, which may explain how progenitor cells from different sites can respond similarly to stimuli within the ovaries. However, distinct molecular drivers, such as BRCA deficiency, may still preferentially arise from one site of origin, as precancerous mutations are frequently seen in the fallopian tube. Confirming the origin of ovarian cancer has important clinical implications when deciding on cancer risk-reducing prophylactic surgery. It will be important to identify key biomarkers to uncover the sequence of ovarian tumorigenesis.
Revising a conceptual model of partnership and sustainability in global health.
Upvall, Michele J; Leffers, Jeanne M
2018-05-01
Models to guide global health partnerships are rare in the nursing literature. The Conceptual Model for Partnership and Sustainability in Global Health, while significant, was based on Western perspectives. The purpose of this study was to revise the model to include the voices of nurses from low- and middle-resource countries. Grounded theory was used to maintain fidelity with the design of the original model. A purposive sample of 15 participants, from a variety of countries in Africa, the Caribbean, and Southeast Asia and having extensive experience in global health partnerships, was interviewed. Skype recordings and in-person interviews were audiotaped using the same questions as the original study. Theoretical coding and a comparison of results with the original study were completed independently by the researchers. The process of global health partnerships was expanded from the original model to include engagement processes and processes for ongoing partnership development. The new concepts of Transparency, Expanded World View, and Accompaniment were included, as well as three broad themes: Geopolitical Influence, Power differential/Inequities, and Collegial Friendships. The revised conceptual model embodies a more comprehensive model of global health partnerships with representation of nurses from low- and middle-resource countries. © 2018 Wiley Periodicals, Inc.
Frähmcke, Jan S; Wanko, Marius; Elstner, Marcus
2012-03-15
Understanding the mechanism of color tuning of the retinal chromophore by its host protein has become one of the key issues in research on rhodopsins. While early mutation studies addressed its genetic origin, recent studies have advanced to investigate its structural origin, based on X-ray crystallographic structures. For the human cone pigments, no crystal structures have been produced, and homology models were employed to elucidate the origin of their blue-shifted absorption. In this theoretical study, we take a different route to establish a structural model for human blue. Starting from the well-resolved structure of bovine rhodopsin, we derive multiple mutant models by stepwise mutation and equilibration using molecular dynamics simulations in a hybrid quantum mechanics/molecular mechanics framework. Our 30-fold mutant reproduces the experimental UV-vis absorption shift of 0.45 eV and provides new insights about both structural and genetic factors that affect the excitation energy. Electrostatic effects of individual amino acids and collaborative structural effects are analyzed using semiempirical (OM2/MRCI) and ab initio (SORCI) multireference approaches. © 2012 American Chemical Society
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced-order models (ROMs) are built with rational Krylov model reduction so as to preserve the power flow information of the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the message passing interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the size of a good-quality ROM. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flows are computed for the original systems and the ROMs. Good agreement is observed in all cases.
Solar Corona/Wind Composition and Origins of the Solar Wind
NASA Astrophysics Data System (ADS)
Lepri, S. T.; Gilbert, J. A.; Landi, E.; Shearer, P.; von Steiger, R.; Zurbuchen, T.
2014-12-01
Measurements from ACE and Ulysses have revealed a multifaceted solar wind, with distinctly different kinetic and compositional properties depending on the source region of the wind. One of the major outstanding issues in heliophysics concerns the origin and predictability of the quasi-stationary slow solar wind. While the fast solar wind is now known to originate within large polar coronal holes, the source of the slow solar wind remains particularly elusive and has been the subject of long debate, leading to both quasi-steady and reconnection-based models, such as interchange or so-called S-web models. Our talk will focus on observational constraints on solar wind sources and their evolution during the solar cycle. In particular, we will point out long-term variations in wind composition and dynamic properties, focusing on the abundance of elements with low First Ionization Potential (FIP), which have been routinely measured on both the ACE and Ulysses spacecraft. We will use these in situ observations, and remote sensing data where available, to provide constraints on solar wind origins during the solar cycle, and on their correspondence to predictions from models of the solar wind.
Application of LogitBoost Classifier for Traceability Using SNP Chip Data
Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok
2015-01-01
Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability. PMID:26436917
Application of LogitBoost Classifier for Traceability Using SNP Chip Data.
Kim, Kwondo; Seo, Minseok; Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok
2015-01-01
Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability.
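For readers unfamiliar with the classifier family named in the abstract, a minimal LogitBoost sketch (binary case with regression stumps, following Friedman, Hastie and Tibshirani's formulation) is given below on a synthetic one-dimensional problem. It is not the study's pipeline, and the data are invented, not SNP genotypes.

```python
import numpy as np

# Minimal binary LogitBoost with weighted least-squares regression stumps.
# Working response z and weights w come from the current class probability p.

def fit_stump(x, z, w):
    """Weighted least-squares regression stump: piecewise constant in x."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = x <= t, x > t
        cl = np.average(z[left], weights=w[left])
        cr = np.average(z[right], weights=w[right])
        err = np.sum(w[left] * (z[left] - cl) ** 2) + np.sum(w[right] * (z[right] - cr) ** 2)
        if best is None or err < best[0]:
            best = (err, t, cl, cr)
    _, t, cl, cr = best
    return lambda q: np.where(q <= t, cl, cr)

def logitboost(x, y, rounds=10):
    F = np.zeros_like(x, dtype=float)
    stumps = []
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-2.0 * F))           # current probability
        w = np.clip(p * (1.0 - p), 1e-6, None)       # working weights
        z = np.clip((y - p) / w, -4.0, 4.0)          # working response, clipped
        stump = fit_stump(x, z, w)
        stumps.append(stump)
        F = F + 0.5 * stump(x)
    return lambda q: sum(0.5 * s(q) for s in stumps)

x = np.linspace(-1.0, 1.0, 40)
y = (x > 0.3).astype(float)                          # separable toy labels
F = logitboost(x, y)
pred = (F(x) > 0.0).astype(float)                    # classify by sign of F
print((pred == y).mean())
```

In the study's setting, x would be a vector of SNP genotypes and the stump would split on one marker at a time; the boosting loop itself is unchanged.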
NASA Astrophysics Data System (ADS)
Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li
2017-01-01
Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that it consistently overestimates evapotranspiration in arid regions, likely because water limitation and energy partitioning are misrepresented in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that improve model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of the parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve model performance.
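The site-by-site scoring and parameter search can be illustrated with the Nash-Sutcliffe efficiency and a one-parameter grid search. The "model" below is a hypothetical water-limitation scaling, not the PT-JPL formulation, and all data are synthetic.

```python
import numpy as np

# Sketch of the calibration idea: score simulated ET against observations with
# the Nash-Sutcliffe efficiency (NSE) and choose the parameter maximizing it.
# toy_et is a made-up stand-in for the actual model.

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_et(potential_et, beta):
    """Hypothetical soil-water constraint: actual ET = beta * potential ET."""
    return beta * potential_et

rng = np.random.default_rng(1)
pet = rng.uniform(2.0, 6.0, 100)                 # potential ET (mm/day), synthetic
obs = 0.4 * pet + rng.normal(0.0, 0.1, 100)      # "observed" ET, true beta = 0.4

betas = np.linspace(0.0, 1.0, 101)
scores = [nse(obs, toy_et(pet, b)) for b in betas]
best = betas[int(np.argmax(scores))]
print(best)   # grid search recovers the water-limitation parameter, ~0.4
```

A real calibration would optimize several parameters jointly (β, m1, and Topt in the paper's notation) with a multi-dimensional search, but the scoring logic is the same.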
Löb, D; Lengert, N; Chagin, V O; Reinhart, M; Casas-Delucchi, C S; Cardoso, M C; Drossel, B
2016-04-07
DNA replication dynamics in cells from higher eukaryotes follows very complex but highly efficient mechanisms. However, the principles behind initiation of potential replication origins and emergence of typical patterns of nuclear replication sites remain unclear. Here, we propose a comprehensive model of DNA replication in human cells that is based on stochastic, proximity-induced replication initiation. Critical model features are: spontaneous stochastic firing of individual origins in euchromatin and facultative heterochromatin, inhibition of firing at distances below the size of chromatin loops and a domino-like effect by which replication forks induce firing of nearby origins. The model reproduces the empirical temporal and chromatin-related properties of DNA replication in human cells. We advance the one-dimensional DNA replication model to a spatial model by taking into account chromatin folding in the nucleus, and we are able to reproduce the spatial and temporal characteristics of the replication foci distribution throughout S-phase.
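The model's ingredients translate naturally into a toy one-dimensional simulation: potential origins fire stochastically at a base rate, replication forks move outward, and an unfired origin near a passing fork fires at an elevated rate (the domino effect). All rates and sizes below are illustrative only, not the paper's fitted parameters.

```python
import random

# Toy 1-D sketch of stochastic, proximity-induced replication initiation.
# Origins fire at p_base per step; origins within induce_range of an active
# fork fire at the higher rate p_induced (domino effect).

random.seed(4)
L = 1000                                   # genome positions
origins = sorted(random.sample(range(L), 20))
replicated = [False] * L
forks = []                                 # list of (position, direction)
fired = set()
p_base, p_induced, induce_range = 0.001, 0.05, 30

t = 0
while not all(replicated):
    t += 1
    # Fork movement: each fork replicates one new position per step, dying
    # when it runs off the end or meets already-replicated DNA (fork merging).
    new_forks = []
    for pos, d in forks:
        nxt = pos + d
        if 0 <= nxt < L and not replicated[nxt]:
            replicated[nxt] = True
            new_forks.append((nxt, d))
    forks = new_forks
    # Origin firing: passively replicated origins can no longer fire.
    for o in origins:
        if o in fired or replicated[o]:
            continue
        near = any(abs(o - pos) <= induce_range for pos, _ in forks)
        if random.random() < (p_induced if near else p_base):
            fired.add(o)
            replicated[o] = True
            forks.append((o, -1))
            forks.append((o, +1))

print(t, len(fired))   # steps to finish "S-phase", number of origins fired
```

The paper's model adds what this sketch omits: chromatin-state-dependent rates, inhibition of firing within a chromatin-loop-sized distance, and a 3-D extension via chromatin folding.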
Melkikh, Alexey V; Khrennikov, Andrei
2017-11-01
The mechanisms of speciation are reviewed, taking into account feedback from the state of the environment and mechanisms for the emergence of complexity. It is shown that these mechanisms, at the molecular level, cannot work reliably in terms of classical mechanics. Quantum mechanisms of changes in the genome, based on a long-range interaction potential between biologically important molecules, are proposed as one possible explanation. Different variants of organism-environment interaction based on molecular recognition and leading to the origin of new species are considered, and experiments to verify the model are proposed. This biophysical study is complemented by a general operational model based on quantum information theory, which is applied to epigenetic evolution. We briefly present the basics of the quantum-like approach to modeling bio-informational processes and illustrate it with a quantum-like model of epigenetic evolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
A new conceptual model for whole mantle convection and the origin of hotspot plumes
NASA Astrophysics Data System (ADS)
Yoshida, Masaki
2014-08-01
A new conceptual model of mantle convection is constructed to consider the origin of hotspot plumes, using recent evidence from seismology, high-pressure experiments, geodynamic modeling, geoid inversion studies, and post-glacial rebound analyses. This conceptual model makes several key points. First, some of the small-scale mantle upwellings observed as hotspots on the Earth's surface originate at the base of the mantle transition zone (MTZ), where Archean granitic continental crust material (TTG: tonalite-trondhjemite-granodiorite), rich in radiogenic elements, has accumulated. Second, the TTG crust and the subducted oceanic crust accumulated at the base of the MTZ could act as thermal or mechanical insulators, leading to the formation of a hot, less viscous layer just beneath the MTZ, which may enhance the instability of plume generation there. Third, the origin of some hotspot plumes is isolated from the large low shear-wave velocity provinces (LLSVPs) under Africa and the South Pacific. Because a planetary-scale trench system surrounding a "Pangean cell" has been spatially stable throughout the Phanerozoic, a large amount of oceanic crust is likely to be trapped in the MTZ under the Pangean cell. Therefore, under Africa, almost all of the hotspot plumes originate from the base of the MTZ, where large amounts of TTG and/or oceanic crust have accumulated. This conceptual model may explain the fact that almost all the hotspots around Africa are located on margins above the African LLSVP. Some of the hotspot plumes under the South Pacific are thought to thread through the TTG/oceanic crust accumulated around the bottom of the MTZ; some have their roots in the South Pacific LLSVP, while others originate from the MTZ.
Numerical simulations of mantle convection also suggest that the Earth's mantle convection is not thermally double-layered at the ringwoodite to perovskite + magnesiowüstite (Rw → Pv + Mw) phase boundary, because of its gentle negative Clapeyron slope. This is in contrast to some traditional images of mantle convection with independent convection cells in the upper and lower mantle. These numerical studies suggest that the generation of stagnant slabs at the base of the MTZ (as seismically observed globally) may not be due to the negative Clapeyron slope, and may instead be related to a viscosity increase (i.e., a viscosity jump) at the Rw → Pv + Mw phase boundary, or to a chemically stratified boundary between the upper and lower mantle, as suggested by a recent high-pressure experiment.
Modeling and experimental study of resistive switching in vertically aligned carbon nanotubes
NASA Astrophysics Data System (ADS)
Ageev, O. A.; Blinov, Yu F.; Ilina, M. V.; Ilin, O. I.; Smirnov, V. A.
2016-08-01
A model of resistive switching in vertically aligned carbon nanotubes (VA CNTs), taking into account the processes of deformation, polarization, and piezoelectric charge accumulation, has been developed, and the origin of hysteresis in the VA CNT-based structure is described. Based on the modeling results, a VA CNT-based structure has been fabricated; its ratio of high-resistance to low-resistance states is 48. The correlation of the modeling results with experimental studies is shown. The results can be used in the development of nanoelectronic devices based on VA CNTs, including nonvolatile resistive random-access memory.
Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)
NASA Astrophysics Data System (ADS)
Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2017-07-01
The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. 
The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the original training on satellite data. Hence, the new version allows for an implementation into climate models in combination with an existing stratospheric transport scheme. Finally, the model is now formulated on several vertical levels encompassing the vertical range in which polar ozone depletion is observed. The results of the Polar SWIFT model are validated with independent Microwave Limb Sounder (MLS) satellite observations and output from the original detailed chemistry model of ATLAS.
NASA Technical Reports Server (NTRS)
Reil, Robin L.
2014-01-01
Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the "traditional" document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and propose benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g. time and cost saved, increased product quality). This paper presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to demonstrate the completeness and consistency of the requirements to the mission concept. Anecdotal information and process-duration metrics are presented for both the MBSE and original DBSE design efforts of SporeSat.
Improved Traceability of Mission Concept to Requirements Using Model Based Systems Engineering
NASA Technical Reports Server (NTRS)
Reil, Robin
2014-01-01
Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the traditional document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g., time and cost saved, increased product quality). This thesis presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to analyze the completeness and consistency of the requirements to the mission concept. Overall experience and methodology are presented for both the MBSE and original DBSE design efforts of SporeSat.
Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.
ERIC Educational Resources Information Center
Buchal, Ralph O.
2001-01-01
Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)
Zhang, Kun; Zheng, Jun
2013-05-01
The advantages of problem-based learning (PBL) in the teaching of Zhenjiuxue (Science of acupuncture and moxibustion) are analyzed in light of the curriculum's comprehensiveness and practical orientation and the characteristics of the teaching team. Shortcomings of incomplete communication among thinking patterns, cognitive content, and organizational structure are presented as well. It is held that concrete things can be taken as a common point, or cognitive origin, of West and East. A "bridge model of origin" is therefore designed which, building on the advantages of PBL, enables a more profound expression and cognition of knowledge in an ordered and dynamic organizational form, centered on the cognitive origin and driven by the differences between domestic and international sciences, technologies, and cultures of ancient and modern societies. In this way, the level of teaching can be continuously enhanced.
Incorporating approximation error in surrogate based Bayesian inversion
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.; Li, W.; Wu, L.
2015-12-01
There is increasing interest in applying surrogates in Bayesian inverse modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimates when the surrogate cannot emulate a highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost remains high, since a relatively large number of original model simulations is still required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that once the surrogate approximation error is properly incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
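The central idea, folding the surrogate's predictive variance into the likelihood's error budget, can be sketched with a tiny GP surrogate of a synthetic forward model. The forward model, kernel, length scale, and data below are all assumptions for illustration, not the study's setup.

```python
import numpy as np

# Sketch: when a GP surrogate stands in for the original model inside a
# Bayesian likelihood, its predictive variance is added to the observation
# error variance so that approximation error is not ignored.

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

f = lambda theta: theta ** 2                 # "expensive" original forward model

X = np.linspace(0.0, 1.0, 6)                 # a few original model runs
y = f(X)
Kinv = np.linalg.inv(rbf(X, X) + 1e-8 * np.eye(len(X)))

def surrogate(theta):
    k = rbf(np.atleast_1d(theta), X)
    mean = (k @ Kinv @ y)[0]
    var = 1.0 - float(np.sum((k @ Kinv) * k))    # GP predictive variance
    return mean, max(var, 0.0)

sigma_obs = 0.05
d = f(0.42) + 0.03                           # one noisy observation

def log_lik(theta):
    mu, var = surrogate(theta)
    s2 = sigma_obs ** 2 + var                # <-- approximation error folded in
    return -0.5 * np.log(2.0 * np.pi * s2) - 0.5 * (d - mu) ** 2 / s2

thetas = np.linspace(0.0, 1.0, 201)
best = thetas[int(np.argmax([log_lik(t) for t in thetas]))]
print(best)
```

In an MCMC setting the same inflated variance s2 would appear in the acceptance ratio, which automatically down-weights regions where the surrogate is unreliable.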
Bi-national cross-validation of an evidence-based conduct problem prevention model.
Porta, Carolyn M; Bloomquist, Michael L; Garcia-Huidobro, Diego; Gutiérrez, Rafael; Vega, Leticia; Balch, Rosita; Yu, Xiaohui; Cooper, Daniel K
2018-04-01
To (a) explore the preferences of Mexican parents and Spanish-speaking professionals working with migrant Latino families in Minnesota regarding the Mexican-adapted brief model versus the original conduct problems intervention and (b) identify the potential challenges, and preferred solutions, to implementation of a conduct problems preventive intervention. The core practice elements of a conduct problems prevention program originating in the United States were adapted for prevention efforts in Mexico. Three focus groups were conducted in the United States, with Latino parents (n = 24; 2 focus groups) and professionals serving Latino families (n = 9; 1 focus group), to compare and discuss the Mexican-adapted model and the original conduct problems prevention program. Thematic analysis was conducted on the verbatim focus group transcripts in the original language spoken. Participants preferred the Mexican-adapted model. The following key areas were identified for cultural adaptation when delivering a conduct problems prevention program with Latino families: recruitment/enrollment strategies, program delivery format, and program content (i.e., child skills training, parent skills training, child-parent activities, and child-parent support). For both models, strengths, concerns, barriers, and strategies for overcoming concerns and barriers were identified. We summarize recommendations offered by participants to strengthen the effective implementation of a conduct problems prevention model with Latino families in the United States. This project demonstrates the strength in binational collaboration to critically examine cultural adaptations of evidence-based prevention programs that could be useful to diverse communities, families, and youth in other settings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Tests of a habitat suitability model for black-capped chickadees
Schroeder, Richard L.
1990-01-01
The black-capped chickadee (Parus atricapillus) Habitat Suitability Index (HSI) model provides a quantitative rating of the capability of a habitat to support breeding, based on measures related to food and nest site availability. The model assumption that tree canopy volume can be predicted from measures of tree height and canopy closure was tested using data from foliage volume studies conducted in the riparian cottonwood habitat along the South Platte River in Colorado. Least absolute deviations (LAD) regression showed that canopy cover and overstory tree height yielded volume predictions significantly lower than volume estimated by more direct methods. Revisions to these model relations resulted in improved predictions of foliage volume. The relation between the HSI and estimates of black-capped chickadee population densities was examined using LAD regression for both the original model and the model with the foliage volume revisions. Residuals from these models were compared to residuals from both a zero-slope model and an ideal model. The fit model for the original HSI differed significantly from the ideal model, whereas the fit model for the revised HSI did not. However, neither the fit model for the original HSI nor that for the revised HSI differed significantly from a zero-slope model. Although further testing of the revised model is needed, its use is recommended for more realistic estimates of tree canopy volume and habitat suitability.
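Least absolute deviations regression, the fitting method used in these tests, can be sketched via iteratively reweighted least squares. Unlike ordinary least squares, the L1 fit resists gross outliers, which is one reason for its use in model validation. The data below are synthetic, not the chickadee measurements.

```python
import numpy as np

# Sketch of least absolute deviations (LAD) regression via iteratively
# reweighted least squares (IRLS): reweight points by 1/|residual| so that
# large residuals are penalized linearly rather than quadratically.

def lad_fit(X, y, iters=50, delta=1e-4):
    """Approximately minimize sum |y - X b| by IRLS."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting point
    for _ in range(iters):
        r = np.abs(y - X @ b)
        w = np.sqrt(1.0 / np.maximum(r, delta))    # sqrt of IRLS weights
        b = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
    return b

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 60)
y = 1.5 * x + 2.0 + rng.normal(0.0, 0.2, 60)       # true intercept 2, slope 1.5
y[:5] += 15.0                                      # five gross outliers
X = np.column_stack([np.ones_like(x), x])

b_lad = lad_fit(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(b_lad, b_ols)   # LAD stays near (2.0, 1.5) despite the outliers
```

The same robustness motivates comparing the fitted HSI-density relation against zero-slope and ideal reference lines on the residual scale rather than with least-squares statistics.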
1989-07-21
formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses...experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons...of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog. The
Crown-rise and crown-length dynamics: applications to loblolly pine
Harry T. Valentine; Ralph L. Amateis; Jeffrey H. Gove; Annikki Makela
2013-01-01
The original crown-rise model estimates the average height of a crown-base in an even-aged mono-species stand of trees. We have elaborated this model to reduce bias and prediction error, and to also provide crown-base estimates for individual trees. Results for the latter agree with a theory of branch death based on resource availability and allocation. We use the...
Anatomic modeling using 3D printing: quality assurance and optimization.
Leng, Shuai; McGee, Kiaran; Morris, Jonathan; Alexander, Amy; Kuhlmann, Joel; Vrieze, Thomas; McCollough, Cynthia H; Matsumoto, Jane
2017-01-01
The purpose of this study is to provide a framework for the development of a quality assurance (QA) program for use in medical 3D printing applications. An interdisciplinary QA team was built with expertise from all aspects of 3D printing. A systematic QA approach was established to assess the accuracy and precision of each step during the 3D printing process, including: image data acquisition, segmentation and processing, and 3D printing and cleaning. Validation of printed models was performed by qualitative inspection and quantitative measurement. The latter was achieved by scanning the printed model with a high resolution CT scanner to obtain images of the printed model, which were registered to the original patient images and the distance between them was calculated on a point-by-point basis. A phantom-based QA process, with two QA phantoms, was also developed. The phantoms went through the same 3D printing process as that of the patient models to generate printed QA models. Physical measurement, fit tests, and image based measurements were performed to compare the printed 3D model to the original QA phantom, with its known size and shape, providing an end-to-end assessment of errors involved in the complete 3D printing process. Measured differences between the printed model and the original QA phantom ranged from -0.32 mm to 0.13 mm for the line pair pattern. For a radial-ulna patient model, the mean distance between the original data set and the scanned printed model was -0.12 mm (ranging from -0.57 to 0.34 mm), with a standard deviation of 0.17 mm. A comprehensive QA process from image acquisition to completed model has been developed. Such a program is essential to ensure the required accuracy of 3D printed models for medical applications.
On the structural properties of small-world networks with range-limited shortcut links
NASA Astrophysics Data System (ADS)
Jia, Tao; Kulkarni, Rahul V.
2013-12-01
We explore a new variant of Small-World Networks (SWNs), in which an additional parameter (r) sets the length scale over which shortcuts are uniformly distributed. When r=0 we have an ordered network, whereas r=1 corresponds to the original Watts-Strogatz SWN model. These limited range SWNs have a similar degree distribution and scaling properties as the original SWN model. We observe the small-world phenomenon for r≪1, indicating that global shortcuts are not necessary for the small-world effect. For limited range SWNs, the average path length changes nonmonotonically with system size, whereas for the original SWN model it increases monotonically. We propose an expression for the average path length for limited range SWNs based on numerical simulations and analytical approximations.
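A minimal sketch of such a network, assuming one concrete construction: a ring lattice with k neighbors per side plus shortcuts whose ring-span is capped at a fraction r of the maximum distance n/2, with the average path length computed by breadth-first search. Even range-limited shortcuts shorten paths markedly relative to the ordered ring.

```python
import random
from collections import deque

# Sketch of a range-limited SWN: r = 1 allows (nearly) unrestricted shortcuts,
# as in the original Watts-Strogatz-style model; small r keeps shortcuts local.
# Assumes r * n / 2 > k, so a shortcut is always longer than a lattice edge.

def limited_range_swn(n, k, n_short, r, seed=0):
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                        # ring lattice: k neighbors per side
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    max_span = int(r * n / 2)
    added = 0
    while added < n_short:                    # shortcuts with ring-span <= max_span
        i = rng.randrange(n)
        j = (i + rng.randrange(k + 1, max_span + 1)) % n
        if j != i and j not in adj[i]:
            adj[i].add(j); adj[j].add(i)
            added += 1
    return adj

def avg_path_length(adj):
    n = len(adj)
    total = 0
    for s in adj:                             # BFS from every node
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

ring = limited_range_swn(200, 2, 0, 1.0)      # ordered ring, no shortcuts
swn = limited_range_swn(200, 2, 40, 0.25)     # shortcuts limited to span <= 25
print(avg_path_length(ring), avg_path_length(swn))
```

Sweeping r between 0 and 1 and plotting the average path length against system size reproduces the qualitative comparison the abstract describes, including the onset of small-world scaling for r well below 1.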
Sutter, Richard C; Verano, John W
2007-02-01
The purpose of this study is to test two competing models regarding the origins of Early Intermediate Period (AD 200-750) sacrificial victims from the Huacas de Moche site using the matrix correlation method. The first model posits the sacrificial victims represent local elites who lost competitions in ritual battles with one another, while the other model suggests the victims were nonlocal warriors captured during warfare with nearby polities. We estimate biodistances for sacrificial victims from Huaca de la Luna Plaza 3C (AD 300-550) with eight previously reported samples from the north coast of Peru using both the mean measure of divergence (MMD) and Mahalanobis' distance (d2). Hypothetical matrices are developed based upon the assumptions of each of the two competing models regarding the origins of Moche sacrificial victims. When the MMD matrix is compared to the two hypothetical matrices using a partial-Mantel test (Smouse et al.: Syst Zool 35 (1986) 627-632), the ritual combat model (i.e. local origins) has a low and nonsignificant correlation (r = 0.134, P = 0.163), while the nonlocal origins model is highly correlated and significant (r = 0.688, P = 0.001). Comparisons of the d2 results and the two hypothetical matrices also produced low and nonsignificant correlation for the ritual combat model (r = 0.210, P = 0.212), while producing a higher and statistically significant result with the nonlocal origins model (r = 0.676, P = 0.002). We suggest that the Moche sacrificial victims represent nonlocal warriors captured in territorial combat with nearby competing polities. Copyright 2006 Wiley-Liss, Inc.
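The matrix correlation method can be illustrated with a simple (non-partial) Mantel test: correlate the off-diagonal entries of two distance matrices and assess significance by permuting the rows and columns of one of them. The partial-Mantel test the authors use additionally controls for a third matrix; the data below are synthetic, not biodistances.

```python
import numpy as np

# Simple Mantel test: observed correlation of upper-triangle distances, with a
# permutation null built by relabeling the objects of one matrix.

def mantel(A, B, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(A.shape[0])
        r = np.corrcoef(A[perm][:, perm][iu], B[iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy distance matrices: B is a noisy copy of A, so they should correlate.
rng = np.random.default_rng(3)
pts = rng.normal(size=(9, 2))
A = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)   # distance matrix
B = A + rng.normal(0.0, 0.1, A.shape)                     # noisy copy
B = (B + B.T) / 2.0
np.fill_diagonal(B, 0.0)

r, p = mantel(A, B)
print(r, p)   # strong correlation with a small permutation p-value
```

In the study's design, A would be the MMD or Mahalanobis biodistance matrix and B a hypothesis matrix encoding local or nonlocal origins, with high r and small p supporting that hypothesis.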
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
We developed a numerical model to predict chemical concentrations in indoor environments resulting from soil vapor intrusion and volatilization from groundwater. The model, which integrates new and existing algorithms for chemical fate and transport, was originally...
Periodic mass extinctions and the Planet X model reconsidered
NASA Astrophysics Data System (ADS)
Whitmire, Daniel P.
2016-01-01
The 27 Myr period in the fossil extinction record has been confirmed in modern data bases dating back 500 Myr, which is twice the time interval of the original analysis from 30 years ago. The surprising regularity of this period has been used to reject the Nemesis model. A second model based on the Sun's vertical Galactic oscillations has been challenged on the basis of an inconsistency in period and phasing. The third astronomical model originally proposed to explain the periodicity is the Planet X model in which the period is associated with the perihelion precession of the inclined orbit of a trans-Neptunian planet. Recently, and unrelated to mass extinctions, a trans-Neptunian super-Earth planet has been proposed to explain the observation that the inner Oort cloud objects Sedna and 2012VP113 have perihelia that lie near the ecliptic plane. In this Letter, we reconsider the Planet X model in light of the confluence of the modern palaeontological and outer Solar system dynamical evidence.
Proposed evaluation framework for assessing operator performance with multisensor displays
NASA Technical Reports Server (NTRS)
Foyle, David C.
1992-01-01
Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed framework is a normative approach: the operator's performance with the sensor fusion display is compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows one to determine when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal level compared to the model predictions; 3) optimal performance (matching model predictions); or 4) super-optimal performance, which may occur if the operator can exploit highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a predetermined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to a high-dimensional flight parameter space and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
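The greedy sampling loop described above can be sketched in one dimension, with a plain linear interpolant standing in for the Kriging surrogate: start from a few samples, repeatedly add the grid point with the worst relative error against the benchmark database, and stop at a tolerance. Everything here (the benchmark function, grid, and tolerance) is illustrative, not the paper's ASE database.

```python
import numpy as np

# Stand-in "model database": a scalar response at each flight condition
grid = np.linspace(0.0, 1.0, 41)            # normalized flight parameter
bench = np.sin(3 * grid) + 0.5 * grid**2    # stand-in for ASE model output

# Greedy sampling: start from the endpoints, repeatedly add the grid point
# where the interpolated surrogate disagrees most with the benchmark,
# until the worst relative error falls below 2%.
sampled = [0, len(grid) - 1]
tol = 0.02
while True:
    surrogate = np.interp(grid, grid[sorted(sampled)], bench[sorted(sampled)])
    err = np.abs(surrogate - bench) / (np.abs(bench) + 1e-9)
    worst = int(np.argmax(err))
    if err[worst] < tol:
        break
    sampled.append(worst)               # add the worst-error point

reduction = 1 - len(sampled) / len(grid)   # fraction of models not stored
```

The sampled points end up denser where the response curves sharply, which mirrors the paper's observation that the selected grid points distribute non-uniformly over the parameter space.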
Bottom, William P
2009-01-01
Conventional history of the predominant, research-based model of business education (RBM) traces its origins to programs initiated by the Ford Foundation after World War II. This paper maps the elite network responsible for developing behavioral science and the Ford Foundation agenda. Archival records of the actions taken by central nodes in the network permit identification of the original vision statement for the model. Analysis also permits tracking progress toward realizing that vision over several decades. Behavioral science was married to business education from the earliest stages of development. The RBM was a fundamental promise made by advocates for social science funding. Appraisals of the model and recommendations for reform must address its full history, not the partial, distorted view that is the conventional account. Implications of this more complete history for business education and for behavioral theory are considered.
New approaches in agent-based modeling of complex financial systems
NASA Astrophysics Data System (ADS)
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2017-12-01
Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet it may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We fitted simple linear regressions with intercept and simple linear regressions through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Squares Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas.
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
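The model comparison described above, regression through the origin versus regression with an intercept, can be sketched with least squares and AIC. The track-count numbers below are invented for illustration (chosen to lie near the reported 3.26 slope); they are not the study's data.

```python
import numpy as np

# Hypothetical survey data: carnivore density (x) vs. observed track density (y)
x = np.array([0.3, 0.8, 1.5, 2.2, 3.0, 4.1])
y = np.array([1.1, 2.5, 4.7, 7.4, 9.6, 13.5])

# Model through the origin: y = a*x ; least-squares slope a = sum(xy)/sum(x^2)
a_origin = (x @ y) / (x @ x)

# Model with intercept: y = a*x + b
a_int, b_int = np.polyfit(x, y, 1)

rss_origin = np.sum((y - a_origin * x) ** 2)
rss_int = np.sum((y - (a_int * x + b_int)) ** 2)

# AIC = n*log(RSS/n) + 2k, with k the number of fitted parameters;
# the origin model pays for 1 parameter, the intercept model for 2.
n = len(x)
aic_origin = n * np.log(rss_origin / n) + 2 * 1
aic_int = n * np.log(rss_int / n) + 2 * 2
```

The intercept model always achieves an RSS at least as small, but when the fitted intercept is near zero the extra parameter is not worth its AIC penalty, which is the pattern the study reports.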
NASA Astrophysics Data System (ADS)
Tengattini, Alessandro; Das, Arghya; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai
2014-10-01
This is the first of two papers introducing a novel thermomechanical continuum constitutive model for cemented granular materials. Here, we establish the theoretical foundations of the model, and highlight its novelties. At the limit of no cement, the model is fully consistent with the original Breakage Mechanics model. An essential ingredient of the model is the use of measurable and micro-mechanics based internal variables, describing the evolution of the dominant inelastic processes. This imposes a link between the macroscopic mechanical behavior and the statistically averaged evolution of the microstructure. As a consequence this model requires only a few physically identifiable parameters, including those of the original breakage model and new ones describing the cement: its volume fraction, its critical damage energy and bulk stiffness, and the cohesion.
KINEROS2 – AGWA Suite of Modeling Tools
KINEROS2 (K2) originated in the 1960s as a distributed event-based rainfall-runoff erosion model abstracting the watershed as a cascade of overland flow elements contributing to channel model elements. Development and improvement of K2 has continued for a variety of projects and ...
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
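The discrimination statistic reported above, the area under the ROC curve, equals the Mann-Whitney probability that a randomly chosen patient who developed ICU-AW received a higher predicted risk than a randomly chosen patient who did not. A minimal sketch with hypothetical risks and outcomes (not the study's cohort data):

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive case is scored above a random negative case."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # compare every positive score with every negative score; ties count 0.5
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical predicted risks and observed ICU-AW outcomes (1 = ICU-AW)
y = [0, 0, 1, 0, 1, 1, 0, 1]
risk = [0.2, 0.4, 0.35, 0.3, 0.6, 0.7, 0.5, 0.45]
auc = auc_roc(y, risk)
```

An AUC of 0.5 corresponds to no discrimination, which is why the validation-cohort value of 0.60 reported above counts as poor.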
Statistical Compression for Climate Model Output
NASA Astrophysics Data System (ADS)
Hammerling, D.; Guinness, J.; Soh, Y. J.
2017-12-01
Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured while allowing for fast decompression and conditional emulation on modest computers.
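The compress/decompress cycle can be sketched with a deliberately crude summary statistic, block means plus a single residual standard deviation, standing in for the paper's statistical model. The conditional expectation (repeated block means) is oversmooth; conditional simulation restores small-scale noise at the stored residual scale. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for one year of daily mean temperatures at a single location
t = np.arange(365)
data = 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, 365)

# Compress: store 30-day block means and one residual standard deviation
block = 30
means = np.array([data[i:i + block].mean() for i in range(0, 365, block)])
resid_sd = np.std(data - np.repeat(means, block)[:365])

# Decompress 1 (conditional expectation): repeat block means -> oversmooth
smooth = np.repeat(means, block)[:365]
# Decompress 2 (conditional simulation): add noise at the stored scale
simulated = smooth + rng.normal(0, resid_sd, 365)
```

Here 365 values are represented by 14 stored numbers; the simulated field has variability closer to the original than the oversmoothed conditional mean does.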
The origin of the vertebrate skeleton
NASA Astrophysics Data System (ADS)
Pivar, Stuart
2011-01-01
The anatomy of the human and other vertebrates has been well described since the days of Leonardo da Vinci and Vesalius. The causative origin of the configuration of the bones and of their shapes and forms has been addressed over the ensuing centuries by such outstanding investigators as Goethe, Von Baer, Gegenbauer, Wilhelm His and D'Arcy Thompson, who sought to apply mechanical principles to morphogenesis. However, no coherent causative model of morphogenesis has ever been presented. This paper presents a causative model for the origin of the vertebrate skeleton, based on the premise that the body is a mosaic enlargement of self-organized patterns engrained in the membrane of the egg cell. Drawings illustrate the proposed hypothetical origin of membrane patterning and the changes in the hydrostatic equilibrium of the cytoplasm that cause topographical deformations resulting in the vertebrate body form.
Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey
NASA Astrophysics Data System (ADS)
Caditz, David M.
2017-12-01
Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68
The Atomic Origin of the Reflection Law
ERIC Educational Resources Information Center
Prytz, Kjell
2016-01-01
It will be demonstrated how the reflection law may be derived on an atomic basis using the plane wave approximation together with Huygens' principle. The model utilized is based on the electric dipole character of matter originating from its molecular constituents. This approach is not new but has, since it was first introduced by Ewald and Oseen…
ERIC Educational Resources Information Center
Schaal, David W.
2012-01-01
This article presents an introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation," by Edward Taub and his colleagues (Taub, 2012). Based on extensive experimentation with animal models of peripheral nerve injury, Taub and colleagues have created an approach to overcoming…
Löb, D.; Lengert, N.; Chagin, V. O.; Reinhart, M.; Casas-Delucchi, C. S.; Cardoso, M. C.; Drossel, B.
2016-01-01
DNA replication dynamics in cells from higher eukaryotes follows very complex but highly efficient mechanisms. However, the principles behind initiation of potential replication origins and emergence of typical patterns of nuclear replication sites remain unclear. Here, we propose a comprehensive model of DNA replication in human cells that is based on stochastic, proximity-induced replication initiation. Critical model features are: spontaneous stochastic firing of individual origins in euchromatin and facultative heterochromatin, inhibition of firing at distances below the size of chromatin loops and a domino-like effect by which replication forks induce firing of nearby origins. The model reproduces the empirical temporal and chromatin-related properties of DNA replication in human cells. We advance the one-dimensional DNA replication model to a spatial model by taking into account chromatin folding in the nucleus, and we are able to reproduce the spatial and temporal characteristics of the replication foci distribution throughout S-phase. PMID:27052359
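The core mechanism above, spontaneous stochastic firing plus proximity-induced ("domino") firing, can be sketched on a toy 1D lattice of potential origins. The probabilities, lattice size, and neighborhood reach below are arbitrary illustrative values, not the paper's fitted parameters, and replication forks are not modeled explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 500                                  # potential origins on a 1D lattice
replicated = np.zeros(L, bool)
p_spont, p_induced, reach = 0.001, 0.05, 5
fired_at = np.full(L, -1)                # time step at which each site fired

step = 0
while not replicated.all():
    for i in np.where(~replicated)[0]:
        # domino effect: firing probability is raised near a region
        # that has already replicated
        near = replicated[max(0, i - reach):i + reach + 1].any()
        if rng.random() < (p_induced if near else p_spont):
            replicated[i] = True
            fired_at[i] = step
    step += 1
```

Rare spontaneous events seed replication, after which induced firing sweeps outward in waves, qualitatively reproducing the clustered firing-time patterns the model is built around.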
ERIC Educational Resources Information Center
Stratford, Steven J.; Krajeik, Joseph; Soloway, Elliot
This paper presents the results of a study of the cognitive strategies in which ninth-grade science students engaged as they used a learner-centered dynamic modeling tool (called Model-It) to make original models based upon stream ecosystem scenarios. The research questions were: (1) In what Cognitive Strategies for Modeling (analyzing, reasoning,…
A genetic-algorithm-based remnant grey prediction model for energy demand forecasting.
Hu, Yi-Chung
2017-01-01
Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants.
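The baseline GM(1,1) fit that GARGM(1,1) builds on can be sketched as follows: accumulate the series, fit the grey differential equation by least squares on the background values, and recover fitted values and forecasts by differencing. The demand figures are invented for illustration, and the GA-optimized remnant stage is not shown.

```python
import numpy as np

def gm11(x0, horizon=0):
    """Basic GM(1,1) grey model: fit on series x0, return fitted values
    plus `horizon` out-of-sample forecasts."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse accumulation

# hypothetical annual energy-demand series (near-exponential growth)
demand = [21.0, 22.9, 25.1, 27.4, 30.0]
fit = gm11(demand, horizon=2)
```

On a near-exponential series like this, the plain GM(1,1) fit is already close; the remnant model in the paper corrects the residuals that remain.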
Pragmatics fragmented: the factor structure of the Dutch children's communication checklist (CCC).
Geurts, Hilde M; Hartman, Catharina; Verté, Sylvie; Oosterlaan, Jaap; Roeyers, Herbert; Sergeant, Joseph A
2009-01-01
A number of disorders are associated with pragmatic difficulties. Instruments that can make subdivisions within the larger construct of pragmatics could be important tools for disentangling profiles of pragmatic difficulty in different disorders. The deficits underlying the observed pragmatic difficulties may be different for different disorders. To study the construct validity of a pragmatic language questionnaire, the construct of pragmatics is studied by applying exploratory factor analysis (EFA) and confirmatory factor analysis to the parent version of the Dutch Children's Communication Checklist (CCC; Bishop 1998). Parent ratings of 1589 typically developing children and 481 children with a clinical diagnosis were collected. Four different factor models derived from the original CCC scales and five different factor models based on EFA were compared with each other. The models were cross-validated. The EFA-derived models were substantively different from the originally proposed CCC factor structure. EFA models gave a slightly better fit than the models based on the original CCC scales, though neither provided a good fit to the parent data. Coherence seemed to be part of language form and not of pragmatics, which is in line with the adaptation of the CCC, the CCC-2 (Bishop 2003). Most pragmatic items clustered together in one factor, and these pragmatic items also clustered with items related to social relationships and specific interests. The nine scales of the original CCC do not reflect the underlying factor structure. Scale composition may therefore be improved on, and scores at the subscale level need to be interpreted cautiously. In interpreting CCC profiles, the overall measure might be more informative than the postulated subscales, as more information is needed to determine which constructs the suggested subscales are actually measuring.
The KINEROS2 – AGWA Suite of modeling tools
USDA-ARS's Scientific Manuscript database
KINEROS2 (K2) originated in the 1960s as a distributed event-based rainfall-runoff erosion model abstracting the watershed as a cascade of overland flow elements contributing to channel model elements. Development and improvement of K2 has continued for a variety of projects and purposes resulting i...
Population-based human exposure models predict the distribution of personal exposures to pollutants of outdoor origin using a variety of inputs, including: air pollution concentrations; human activity patterns, such as the amount of time spent outdoors vs. indoors, commuting, wal...
The AGWA - KINEROS2 Suite of Modeling Tools in the Context of Watershed Services Valuation
KINEROS originated in the 1970s as a distributed event-based rainfall-runoff erosion model. A unique feature at that time was its interactive coupling of a finite difference approximation of the kinematic overland flow equations to the Smith-Parlange infiltration model. Developm...
NASA Astrophysics Data System (ADS)
Yu, Bing; Shu, Wenjun; Cao, Can
2018-05-01
A novel modeling method for aircraft engines using nonlinear autoregressive exogenous (NARX) models based on wavelet neural networks is proposed. The identification principle and process based on wavelet neural networks are studied, and the modeling scheme based on NARX is proposed. Then, time series data sets from three types of aircraft engines are utilized to build the corresponding NARX models, and these NARX models are validated by simulation. The results show that all the best NARX models can capture the original aircraft engine's dynamic characteristics well with high accuracy. For every type of engine, the relative identification errors between its best NARX model and the component-level model are no more than 3.5%, and most of them are within 1%.
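The NARX structure, current output regressed on lagged outputs and lagged exogenous inputs, can be sketched with a linear ARX model fitted by least squares, standing in for the paper's wavelet neural network. The toy system, lag orders, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400
u = rng.normal(size=N)                   # exogenous input signal
y = np.zeros(N)
for k in range(2, N):                    # simulated "engine" dynamics
    y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + 0.8 * u[k-1] + 0.01 * rng.normal()

# Regressor matrix with 2 output lags and 1 input lag:
# y[k] ~ theta0*y[k-1] + theta1*y[k-2] + theta2*u[k-1]
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
pred = X @ theta
rel_err = np.max(np.abs(pred - y[2:])) / np.max(np.abs(y))
```

A wavelet neural network replaces the linear map from the lagged regressors to the output with a nonlinear one, but the identification data layout is the same.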
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
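The perturbation step can be sketched with the standard Laplace mechanism applied to a degree histogram. This is a simplified, hypothetical example: a 1K-level summary rather than the paper's dK-2 degree correlations, and calibrated with global rather than smooth sensitivity. Adding or removing one edge changes two node degrees, so the histogram's L1 sensitivity is at most 4.

```python
import numpy as np

def private_degree_histogram(degrees, epsilon, seed=None):
    """Release a degree histogram under edge differential privacy via
    the Laplace mechanism (global sensitivity 4: one edge moves two
    nodes between bins, changing four bin counts by 1 each)."""
    rng = np.random.default_rng(seed)
    hist = np.bincount(degrees)
    sensitivity = 4.0
    noisy = hist + rng.laplace(0.0, sensitivity / epsilon, hist.size)
    # post-process: clip to non-negative integers (post-processing is free)
    return np.clip(np.round(noisy), 0, None).astype(int)

# degree sequence of a toy graph
degrees = np.array([1, 2, 2, 3, 3, 3, 4])
release = private_degree_histogram(degrees, epsilon=1.0, seed=7)
```

A graph generator would then be driven by the released statistics rather than the raw graph, which is the release pattern the paper follows with its dK parameters.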
Bovine origin Staphylococcus aureus: A new zoonotic agent?
Rao, Relangi Tulasi; Jayakumar, Kannan; Kumar, Pavitra
2017-10-01
The study aimed to assess the nature of animal-origin Staphylococcus aureus strains. The study has zoonotic importance and compared virulence between strains from two different hosts, bovine and ovine. Conventional polymerase chain reaction-based methods were used for the characterization of S. aureus strains, and a chick embryo model was employed for the assessment of the virulence capacity of strains. All statistical tests were carried out in the R program, version 3.0.4. After initial screening and molecular characterization, the prevalence of S. aureus was found to be 42.62% in bovine origin samples and 28.35% among ovine origin samples. Meanwhile, the prevalence of methicillin-resistant S. aureus was found to be meager in both hosts: only 6.8% of isolates tested positive for methicillin resistance. Biofilm formation was quantified and the variation compared between hosts; a Welch two-sample t-test was statistically significant (t=2.3179, df=28.103, p=0.02795). The chick embryo model was found effective for testing the pathogenicity of the strains. The study helped to conclude that healthy bovines can act as S. aureus reservoirs and that bovine origin S. aureus strains are more virulent than ovine origin strains. Bovine origin strains have a high probability of becoming a zoonotic pathogen. Further gene knock-out studies may be conducted to establish the zoonotic potential of bovine origin strains.
Ariyama, Kaoru; Horita, Hiroshi; Yasui, Akemi
2004-09-22
The composition of concentration ratios of 19 inorganic elements to Mg (hereinafter referred to as 19-element/Mg composition) was applied to chemometric techniques to determine the geographic origin (Japan or China) of Welsh onions (Allium fistulosum L.). Using a composition of element ratios has the advantage of simplified sample preparation, and it was possible to determine the geographic origin of a Welsh onion within 2 days. The classical technique based on 20 element concentrations was also used along with the new, simpler one based on 19 elements/Mg in order to validate the new technique. Twenty elements, Na, P, K, Ca, Mg, Mn, Fe, Cu, Zn, Sr, Ba, Co, Ni, Rb, Mo, Cd, Cs, La, Ce, and Tl, in 244 Welsh onion samples were analyzed by flame atomic absorption spectroscopy, inductively coupled plasma atomic emission spectrometry, and inductively coupled plasma mass spectrometry. Linear discriminant analysis (LDA) on the 20-element concentrations and on the 19-element/Mg composition was applied to these analytical data, as was soft independent modeling of class analogy (SIMCA) on the 19-element/Mg composition. The results showed that techniques based on the 19-element/Mg composition were effective. LDA based on the 19-element/Mg composition, for classification of samples from Japan and from Shandong, Shanghai, and Fujian in China, classified 101 samples used for modeling 97% correctly and predicted another 119 samples (excluding 24 nonauthentic samples) 93% correctly. In ten rounds of SIMCA discrimination based on the 19-element/Mg composition modeled using 101 samples, 220 samples from known production areas (including samples used for modeling and excluding 24 nonauthentic samples) were predicted 92% correctly.
Hard and soft acids and bases: atoms and atomic ions.
Reed, James L
2008-07-07
The structural origin of hard-soft behavior in atomic acids and bases has been explored using a simple orbital model. The Pearson principle of hard and soft acids and bases has been taken to be the defining statement about hard-soft behavior and as a definition of chemical hardness. There are a number of conditions that are imposed on any candidate structure and associated property by the Pearson principle, which have been exploited. The Pearson principle itself has been used to generate a thermodynamically based scale of relative hardness and softness for acids and bases (operational chemical hardness), and a modified Slater model has been used to discern the electronic origin of hard-soft behavior. Whereas chemical hardness is a chemical property of an acid or base and the operational chemical hardness is an experimental measure of it, the absolute hardness is a physical property of an atom or molecule. A critical examination of chemical hardness, which has been based on a more rigorous application of the Pearson principle and the availability of quantitative measures of chemical hardness, suggests that the origin of hard-soft behavior for both acids and bases resides in the relaxation of the electrons not undergoing transfer during the acid-base interaction. Furthermore, the results suggest that the absolute hardness should not be taken as synonymous with chemical hardness but that the relationship is somewhat more complex. Finally, this work provides additional groundwork for a better understanding of chemical hardness that will inform the understanding of hardness in molecules.
GrayStar: Web-based pedagogical stellar modeling
NASA Astrophysics Data System (ADS)
Short, C. Ian
2017-01-01
GrayStar is a web-based pedagogical stellar model. It approximates stellar atmospheric and spectral line modeling in JavaScript with visualization in HTML. It is suitable for a wide range of education and public outreach levels depending on which optional plots and print-outs are turned on. All plots and renderings are pure basic HTML and the plotting module contains original HTML procedures for automatically scaling and graduating x- and y-axes.
ERIC Educational Resources Information Center
Schweizer, Karl
2006-01-01
A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain using a Boolean algebra of solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested on several shapes of calculus domain, in order to identify the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD applications that compute these geometrical characteristics, from which interesting conclusions are drawn. The tests also target the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of an original software tool of more than 1700 lines of code. Compared with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, unlike the spline approximation, this method does not produce very large numbers, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
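The solid/hollow polygon decomposition described above can be sketched with the shoelace formula, where clockwise (hollow) polygons contribute negative area and first moments. This is an illustrative reconstruction, not the authors' software; the function name and the CCW-solid/CW-hollow convention are assumptions:

```python
def area_and_centroid(polys):
    """Signed-area (shoelace) accumulation over a list of polygons.

    Each polygon is a list of (x, y) vertices; counter-clockwise rings
    count as solid material, clockwise rings as hollow (negative area),
    mimicking the solid/hollow decomposition described in the abstract.
    """
    A = Sx = Sy = 0.0
    for poly in polys:
        n = len(poly)
        for i in range(n):
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            A += cross / 2.0          # signed area
            Sx += (x0 + x1) * cross / 6.0   # first moment about y-axis
            Sy += (y0 + y1) * cross / 6.0   # first moment about x-axis
    return A, Sx / A, Sy / A

# 4x4 solid square with a 2x2 hole (clockwise ring) centred at (2, 2):
solid = [(0, 0), (4, 0), (4, 4), (0, 4)]
hole = [(1, 1), (1, 3), (3, 3), (3, 1)]   # clockwise -> negative area
A, cx, cy = area_and_centroid([solid, hole])
```

Net area is 16 - 4 = 12 and the centroid stays at the centre of symmetry, which is the kind of cross-section property (area, static moments) the paper computes.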
An original traffic additional emission model and numerical simulation on a signalized road
NASA Astrophysics Data System (ADS)
Zhu, Wen-Xing; Zhang, Jing-Yu
2017-02-01
Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically divided into two parts: basic emission and additional emission. An original additional-emission model was presented to calculate a vehicle's emissions due to signal-control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating, and idling at a signalized intersection. Simulations were conducted for two situations: a single intersection and two adjacent intersections, each with its own control policy. The results are in good agreement with the theoretical analysis. It is also shown that the additional-emission model may be used to design signal-control policies in modern traffic systems that mitigate serious environmental problems.
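The VSP quantity behind this emission split can be sketched with the widely cited light-duty parameterisation; the coefficients below are assumed from that common form, not taken from this paper:

```python
import math

def vsp(v, a, grade=0.0):
    """Vehicle Specific Power in kW/tonne for a light-duty vehicle.

    v: speed (m/s), a: acceleration (m/s^2), grade: road grade (radians).
    Coefficients follow the commonly cited light-duty parameterisation
    (rolling-resistance term 0.132, aerodynamic term 0.000302); treat
    them as illustrative assumptions.
    """
    return v * (1.1 * a + 9.81 * math.sin(grade) + 0.132) + 0.000302 * v ** 3

# Idling (v = 0) contributes no VSP; cruising at 15 m/s on a flat road:
idle = vsp(0.0, 0.0)
cruise = vsp(15.0, 0.0)
```

Binning VSP values like these per driving mode (idling, cruising, accelerating, decelerating) is what lets the basic and signal-induced additional emissions be separated.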
Yu, Lu; Xie, Dong; Shek, Daniel T. L.
2012-01-01
This study examined the factor structure of a scale based on the four-dimensional gender identity model (Egan and Perry, 2001) in 726 Chinese elementary school students. Exploratory factor analyses suggested a three-factor model, two of which corresponded to “Felt Pressure” and “Intergroup Bias” in the original model. The third factor “Gender Compatibility” appeared to be a combination of “Gender Typicality” and “Gender Contentment” in the original model. Follow-up confirmatory factor analysis (CFA) indicated that, relative to the initial four-factor structure, the three-factor model fits the current Chinese sample better. These results are discussed in light of cross-cultural similarities and differences in development of gender identity. PMID:22701363
Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES
NASA Astrophysics Data System (ADS)
Aniszewski, Wojciech
2016-12-01
In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ model, in both simple and complex flows.
Diamantides, N D; Constantinou, S T
1989-07-01
"A model is presented of international migration that is based on the concept of a pool of potential emigrants at the origin created by push-pull forces and by the establishment of information feedback between origin and destination. The forces can be economic, political, or both, and are analytically expressed by the 'mediating factor'. The model is macrodynamic in nature and provides both for the main secular component of the migratory flow and for transient components caused by extraordinary events. The model is expressed in a Bernoulli-type differential equation through which quantitative weights can be derived for each of the operating causes. Out-migration from the Republic of Cyprus is used to test the tenets of the model." excerpt
Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G
2010-06-01
We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model at the required level of detail. We also present results of comparative modeling of stability, adaptability, and biodiversity dynamics in populations of unicellular haploid organisms that form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation are discussed: a biota built on a few generalist taxa versus a biota built on high biodiversity.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
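A memetic algorithm of the kind used here couples an evolutionary population search with a local refinement of each offspring. Below is a toy sketch on a one-dimensional fitting problem; every name and parameter is illustrative, and this is not the NCSTAR fitting procedure itself:

```python
import random

def memetic_minimise(f, bounds, pop_size=20, gens=40, seed=0):
    """Toy memetic algorithm: evolutionary search plus hill-climbing
    local refinement of each offspring (a sketch, not the authors' code)."""
    rng = random.Random(seed)
    lo, hi = bounds

    def local_search(x, step=0.1, iters=25):
        # simple hill climber: keep random moves that improve f
        for _ in range(iters):
            cand = min(hi, max(lo, x + rng.uniform(-step, step)))
            if f(cand) < f(x):
                x = cand
        return x

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                       # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0, 0.2)  # crossover + mutation
            children.append(local_search(min(hi, max(lo, child))))
        pop = parents + children
    return min(pop, key=f)

best = memetic_minimise(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

The defining memetic ingredient is the `local_search` call applied to each child, which is what distinguishes the approach from a plain genetic algorithm.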
BehavePlus fire modeling system, version 5.0: Variables
Patricia L. Andrews
2009-01-01
This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July, 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...
Chaos in a dynamic model of traffic flows in an origin-destination network.
Zhang, Xiaoyan; Jarrett, David F.
1998-06-01
In this paper we investigate the dynamic behavior of road traffic flows in an area represented by an origin-destination (O-D) network. Probably the most widely used model for estimating the distribution of O-D flows is the gravity model [J. de D. Ortuzar and L. G. Willumsen, Modelling Transport (Wiley, New York, 1990)], which originated from an analogy with Newton's gravitational law. The conventional gravity model, however, is static. The investigation in this paper is based on a dynamic version of the gravity model proposed by Dendrinos and Sonis, obtained by modifying the conventional gravity model [D. S. Dendrinos and M. Sonis, Chaos and Social-Spatial Dynamics (Springer-Verlag, Berlin, 1990)]. The dynamic model describes the variations of O-D flows over discrete time periods, such as each day, each week, and so on. It is shown that when the dimension of the system is one or two, the O-D flow pattern either approaches an equilibrium or oscillates. When the dimension is higher, the behavior found in the model includes equilibria, oscillations, period doubling, and chaos. Chaotic attractors are characterized by (positive) Liapunov exponents and fractal dimensions. (c) 1998 American Institute of Physics.
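A discrete-time relative dynamics of the Dendrinos-Sonis type can be illustrated with two competing O-D flows whose shares are updated through power-law "gravity" attractivities. The parameters below are arbitrary illustrative choices, not values from the paper:

```python
def ds_step(x, A=(1.0, 1.2), expo=((0.3, 0.2), (0.2, 0.3))):
    """One step of a two-flow relative dynamics in the spirit of the
    Dendrinos-Sonis map: each flow's next share is proportional to a
    power-law attractivity of the current shares (parameters invented)."""
    shares = (x, 1.0 - x)
    F = [Ai * shares[0] ** e0 * shares[1] ** e1
         for Ai, (e0, e1) in zip(A, expo)]
    return F[0] / (F[0] + F[1])   # normalisation keeps shares in (0, 1)

x = 0.3
traj = []
for _ in range(200):              # one iteration per day/week/period
    x = ds_step(x)
    traj.append(x)
```

Because the update is a normalised ratio of positive attractivities, the share stays strictly inside the unit interval; with other exponent choices the same map family produces the oscillatory and chaotic regimes the paper studies.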
NASA Astrophysics Data System (ADS)
Darko, Deborah; Adjei, Kwaku A.; Appiah-Adjei, Emmanuel K.; Odai, Samuel N.; Obuobie, Emmanuel; Asmah, Ruby
2018-06-01
The extent to which statistical bias-adjusted outputs of two regional climate models alter the projected change signals for the mean (and extreme) rainfall and temperature over the Volta Basin is evaluated. The outputs from two regional climate models in the Coordinated Regional Climate Downscaling Experiment for Africa (CORDEX-Africa) are bias adjusted using the quantile mapping technique. Annual maxima rainfall and temperature with their 10- and 20-year return values for the present (1981-2010) and future (2051-2080) climates are estimated using extreme value analyses. Moderate extremes are evaluated using extreme indices (viz. percentile-based, duration-based, and intensity-based). Bias adjustment of the original (bias-unadjusted) models improves the reproduction of mean rainfall and temperature for the present climate. However, the bias-adjusted models poorly reproduce the 10- and 20-year return values for rainfall and maximum temperature whereas the extreme indices are reproduced satisfactorily for the present climate. Consequently, projected changes in rainfall and temperature extremes were weak. The bias adjustment results in the reduction of the change signals for the mean rainfall while the mean temperature signals are rather magnified. The projected changes for the original mean climate and extremes are not conserved after bias adjustment with the exception of duration-based extreme indices.
Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos
2018-04-15
Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine learning based risk maps using ensemble-based modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree, and support vector regression (SVR), applied to multiple aquifer systems (e.g. unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized into no-risk, low-vulnerability, and moderate-vulnerability classes. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). The seven DRASTIC parameters served as the inputs, and the GCR values as the outputs, of the individual machine learning models feeding the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. The r and d metrics for the committee-based ANN multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (ensemble) machine learning models over the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems, with the ANN committee-based model yielding high accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
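The DRASTIC Vulnerability Index adjusted in the study above is a weighted sum of seven parameter ratings. A minimal sketch, assuming the standard DRASTIC weights from the original Aller et al. parameterisation (the example ratings are invented):

```python
# Standard DRASTIC weights (assumed here; not taken from the paper above):
# Depth to water, net Recharge, Aquifer media, Soil media, Topography,
# Impact of vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_vi(ratings):
    """Vulnerability Index = weighted sum of the seven parameter ratings
    (each rating normally on a 1-10 scale)."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# One hypothetical grid cell:
cell = {"D": 7, "R": 6, "A": 5, "S": 4, "T": 2, "I": 6, "C": 3}
vi = drastic_vi(cell)
```

With all ratings at their minimum of 1 the index is 23, and at 10 it is 230, so the 56.7-128.1 range reported above sits comfortably inside the attainable interval.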
Information-Flow-Based Access Control for Web Browsers
NASA Astrophysics Data System (ADS)
Yoshihama, Sachiko; Tateishi, Takaaki; Tabuchi, Naoshi; Matsumoto, Tsutomu
The emergence of Web 2.0 technologies such as Ajax and Mashup has revealed the weakness of the same-origin policy[1], the current de facto standard for the Web browser security model. We propose a new browser security model that allows fine-grained access control in client-side Web applications for secure mashups and user-generated content. The model is based on information-flow-based access control (IBAC), to cope with the dynamic nature of client-side Web applications and to accurately determine the privileges of scripts in the event-driven programming model.
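The core IBAC idea of tracking which origins have influenced a value can be sketched with set-valued labels: combining data joins (unions) the labels, and a send is permitted only to an origin that dominates the label. This is a toy illustration of label propagation, not the proposed model's actual policy rules:

```python
# A label is the set of origins whose data has flowed into a value.
def join(*labels):
    """Least upper bound on the label lattice: union of origin sets."""
    out = frozenset()
    for lab in labels:
        out = out | lab
    return out

def may_send(label, origin):
    """Toy policy: a value may be sent to `origin` only if no other
    origin's data has flowed into it."""
    return label <= {origin}

secret = frozenset({"https://bank.example"})     # hypothetical origins
public = frozenset({"https://mashup.example"})
mixed = join(secret, public)                     # e.g. string concatenation
```

Once bank data has mixed into a value, the mashup origin can no longer receive it; this is exactly the fine-grained, per-script restriction a same-origin policy cannot express.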
NASA Technical Reports Server (NTRS)
Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.
2015-01-01
This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), whose fidelity of detail accounts for airline and consumer behaviors and the interdependencies they share with each other and with the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we use these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to data from some other demand forecast model. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
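The calibrate-to-target idea can be sketched, for a single O-D market, as a bisection search for the fare at which simulated demand matches the target. The demand curve and elasticity below are invented stand-ins, not the Airline EVOS simulator:

```python
def calibrate_fare(demand_fn, target, lo, hi, tol=1e-6):
    """Bisection search for the fare at which simulated O-D demand hits
    a target value, assuming demand decreases as fare rises. A generic
    sketch of calibrate-to-target, not the paper's algorithm."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        d = demand_fn(mid)
        if abs(d - target) < tol:
            return mid
        if d > target:        # too much demand -> the fare is too low
            lo = mid
        else:                 # too little demand -> the fare is too high
            hi = mid
    return 0.5 * (lo + hi)

# Toy market: constant-elasticity demand with elasticity -1.5
demand = lambda fare: 1_000_000 * fare ** -1.5
fare = calibrate_fare(demand, target=5000.0, lo=10.0, hi=1000.0)
```

In the real framework each market's "demand_fn" would be one run of the agent-based simulation, and the search would repeat per O-D market until all markets sit within the threshold of their TSAM/DB1B targets.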
Enhanced simulation software for rocket turbopump, turbulent, annular liquid seals
NASA Technical Reports Server (NTRS)
Padavala, Satya; Palazzolo, Alan
1994-01-01
One of the main objectives of this work is to develop a new dynamic analysis for liquid annular seals with arbitrary profile and to analyze a general distorted interstage seal of the space shuttle main engine high pressure oxygen turbopump (SSME-ATD-HPOTP). The dynamic analysis developed is based on a method originally proposed by Nelson and Nguyen. A simpler scheme based on cubic splines is found to be computationally more efficient and has better convergence properties at higher eccentricities. The first order solution of the original analysis is modified by including a more exact solution that takes into account the variation of perturbed variables along the circumference. A new set of equations for dynamic analysis are derived based on this more general model. A unified solution procedure that is valid for both Moody's and Hirs' friction models is presented. Dynamic analysis is developed for three different models: constant properties, variable properties, and thermal effects with variable properties. Arbitrarily varying seal profiles in both axial and circumferential directions are considered. An example case of an elliptical seal with varying degrees of axial curvature is analyzed in detail. A case study based on predicted clearances of an interstage seal of the SSME-ATD-HPOTP is presented. Dynamic coefficients based on external specified load are introduced to analyze seals that support a preload. The other objective of this work is to study the effect of large rotor displacements of SSME-ATD-HPOTP on the dynamics of the annular seal and the resulting transient motion. One task is to identify the magnitude of motion of the rotor about the centered position and establish limits of effectiveness of using current linear models. This task is accomplished by solving the bulk flow model seal governing equations directly for transient seal forces for any given type of motion, including motion with large eccentricities. 
Based on the above study, an equivalence is established between linearized coefficients based transient motion and the same motion as predicted by the original governing equations. An innovative method is developed to model nonlinearities in an annular seal based on dynamic coefficients computed at various static eccentricities. This method is thoroughly tested for various types of transient motion using bulk flow model results as a benchmark.
Finch, Kristen; Espinoza, Edgard; Jones, F Andrew; Cronn, Richard
2017-05-01
We investigated whether wood metabolite profiles from direct analysis in real time (time-of-flight) mass spectrometry (DART-TOFMS) could be used to determine the geographic origin of Douglas-fir wood cores originating from two regions in western Oregon, USA. Three annual ring mass spectra were obtained from 188 adult Douglas-fir trees, and these were analyzed using random forest models to determine whether samples could be classified to geographic origin, growth year, or growth year and geographic origin. Specific wood molecules that contributed to geographic discrimination were identified. Douglas-fir mass spectra could be differentiated into two geographic classes with an accuracy between 70% and 76%. Classification models could not accurately classify sample mass spectra based on growth year. Thirty-two molecules were identified as key for classifying western Oregon Douglas-fir wood cores to geographic origin. DART-TOFMS is capable of detecting minute but regionally informative differences in wood molecules over a small geographic scale, and these differences made it possible to predict the geographic origin of Douglas-fir wood with moderate accuracy. Studies involving DART-TOFMS, alone and in combination with other technologies, will be relevant for identifying the geographic origin of illegally harvested wood.
Origin of Money: Dynamic Duality Between Necessity and Unnecessity
NASA Astrophysics Data System (ADS)
Tauchi, Yuka; Kamiura, Moto; Haruna, Taichi; Gunji, Yukio-Pegio
2008-10-01
We propose a mathematical model of economic agents to study the origin of money. This multi-agent model is based on the commodity theory of money, which holds that a commodity used as money emerges from barter transactions. Each agent has a different value system, given by a Heyting algebra, and exchanges its commodities based on that value system. In each value system, necessity and unnecessity of commodities are expressed by elements and their complements in a Heyting algebra. Moreover, the concept of the complement is extended. Consequently, the duality of necessity and unnecessity is weakened, and exchanges of commodities are promoted. A commodity that keeps being exchanged for a long time corresponds to money.
Original data preprocessor for Femap/Nastran
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra
2016-12-01
Automatic data processing and visualization in the finite element analysis of structural problems is a long-standing concern in mechanical engineering. The paper presents the 'common database' concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. In this way, input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, and the geometry of the model, the loads, and the constraints are generated automatically. The original API computer codes are general, so the input data of any model can be generated. In the next stages, the user may create the discretization of the model, set the boundary conditions, and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, perform other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program that creates the 'common database', so that new, more accurate models are easily generated. Another facility is the ability to control the CSV input files, so that several loading scenarios can be generated in Femap/Nastran. In this way, using original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, while the repetitive and time-consuming activities are performed by the original computer-based instruments. This data processing technique follows Asimov's principle of 'minimum change required / maximum desired response'.
Credit WCT. Original 2¼" x 2¼" color negative is housed ...
Credit WCT. Original 2¼" x 2¼" color negative is housed in the JPL Photography Laboratory, Pasadena, California. JPL staff members Harold Anderson and John Morrow cast grain from the 1-gallon Baker-Perkins model 4-PU mixer. A 1-pint Baker-Perkins model 2-PX mixer stands to the left in this view (JPL negative no. JPL-10295BC, 27 January 1989) - Jet Propulsion Laboratory Edwards Facility, Mixer & Casting Building, Edwards Air Force Base, Boron, Kern County, CA
Annual Geocenter Motion from Space Geodesy and Models
NASA Astrophysics Data System (ADS)
Ries, J. C.
2013-12-01
Ideally, the origin of the terrestrial reference frame and the center of mass of the Earth are always coincident. By construction, the origin of the reference frame is coincident with the mean Earth center of mass, averaged over the time span of the satellite laser ranging (SLR) observations used in the reference frame solution, within some level of uncertainty. At shorter time scales, tidal and non-tidal mass variations result in an offset between the origin and the geocenter, called geocenter motion. Currently, there is a conventional model for the tidally coherent diurnal and semi-diurnal geocenter motion, but there is no model for the non-tidal annual variation. This annual motion reflects the largest-scale mass redistribution in the Earth system, so it is essential to observe it for a complete description of the total mass transport. Failing to model it can also cause false signals in geodetic products such as sea height observations from satellite altimeters. In this paper, a variety of estimates for the annual geocenter motion are presented based on several different geodetic techniques and models, and a 'consensus' model from SLR is suggested.
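Estimating an annual geocenter term amounts to projecting a time series onto an annual cosine/sine pair. A minimal sketch on synthetic data, assuming uniform sampling over whole years (where the basis functions are orthogonal); this is not any analysis centre's actual algorithm:

```python
import math

def fit_annual(t, y):
    """Project a time series (t in years) onto an annual cosine/sine
    pair. Exact for uniform sampling over whole years, where the basis
    is orthogonal; otherwise full normal equations would be needed."""
    n = len(t)
    c = [math.cos(2 * math.pi * ti) for ti in t]
    s = [math.sin(2 * math.pi * ti) for ti in t]
    mean = sum(y) / n
    A = 2.0 * sum(yi * ci for yi, ci in zip(y, c)) / n   # cosine amplitude
    B = 2.0 * sum(yi * si for yi, si in zip(y, s)) / n   # sine amplitude
    amp = math.hypot(A, B)
    return mean, A, B, amp

# Synthetic Z-geocenter series: 1 mm offset plus a 3 mm annual cosine
t = [i / 36.0 for i in range(108)]          # 3 years, 36 samples/year
y = [1.0 + 3.0 * math.cos(2 * math.pi * ti) for ti in t]
mean, A, B, amp = fit_annual(t, y)
```

The recovered (A, B) pair is the quantity compared across techniques when assembling a "consensus" annual geocenter model.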
A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.
2015-11-15
Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by patient-specific breathing motion model using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning.
Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.
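The patient-specific motion model behind 5DCT is commonly written as a displacement linear in the breathing surrogate's amplitude v and rate f, x(v, f) = x0 + a*v + b*f; recovering the per-voxel coefficients from the 25 scans is then an ordinary least-squares problem. A sketch on synthetic, noiseless data (the surrogate values and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 25                                   # free-breathing scans
v = rng.uniform(0.0, 1.0, N)             # surrogate amplitude per scan
f = rng.uniform(-1.0, 1.0, N)            # surrogate rate per scan
true = np.array([0.5, 8.0, 1.5])         # x0, a, b (mm) for one voxel
x = true[0] + true[1] * v + true[2] * f  # observed voxel displacements

# Ordinary least squares for [x0, a, b]:
D = np.column_stack([np.ones(N), v, f])  # design matrix
coef, *_ = np.linalg.lstsq(D, x, rcond=None)
```

In the study, displacements x come from deformable registration against the reference scan, and the fitted (a, b) fields are what generate the deformation vector fields mapping the reference image to any breathing phase.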
Porting marine ecosystem model spin-up using transport matrices to GPUs
NASA Astrophysics Data System (ADS)
Siewertsen, E.; Piwonski, J.; Slawig, T.
2013-01-01
We have ported an implementation of the spin-up for marine ecosystem models based on transport matrices to graphics processing units (GPUs). The original implementation was designed for distributed-memory architectures and uses the Portable, Extensible Toolkit for Scientific Computation (PETSc) library that is based on the Message Passing Interface (MPI) standard. The spin-up computes a steady seasonal cycle of ecosystem tracers with climatological ocean circulation data as forcing. Since the transport is linear with respect to the tracers, the resulting operator is represented by matrices. Each iteration of the spin-up involves two matrix-vector multiplications and the evaluation of the used biogeochemical model. The original code was written in C and Fortran. On the GPU, we use the Compute Unified Device Architecture (CUDA) standard, a customized version of PETSc and a commercial CUDA Fortran compiler. We describe the extensions to PETSc and the modifications of the original C and Fortran codes that had to be done. Here we make use of freely available libraries for the GPU. We analyze the computational effort of the main parts of the spin-up for two exemplar ecosystem models and compare the overall computational time to those necessary on different CPUs. The results show that a consumer GPU can compete with a significant number of cluster CPUs without further code optimization.
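Each spin-up iteration described above applies explicit and implicit transport matrices around a biogeochemical source/sink term, looping until the tracer field stops changing. A toy fixed-point sketch, with small random column-stochastic matrices standing in for the (mass-conserving) climatological transport and a conservative placeholder biogeochemistry:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # toy number of grid boxes

def stochastic(m):
    """Random column-stochastic matrix: a stand-in for a mass-conserving
    transport matrix (real ones are large and sparse)."""
    M = rng.random((m, m))
    return M / M.sum(axis=0, keepdims=True)

A_exp, A_imp = stochastic(n), stochastic(n)
q = lambda y: 0.0 * y                    # placeholder biogeochemistry

y = rng.random(n)                        # initial tracer field
mass0 = y.sum()
for step in range(500):                  # one "model year" per iteration
    y_new = A_imp @ (A_exp @ y + q(y))   # the two matrix-vector products
    if np.linalg.norm(y_new - y) < 1e-12:
        y = y_new
        break                            # steady state reached
    y = y_new
```

The two matrix-vector products per step are exactly the kernels that dominate the cost and were offloaded to the GPU in the ported code; column-stochasticity keeps total tracer mass conserved throughout the iteration.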
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
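The dimensionality-reduction step, PCA of the curvelet texture features, can be sketched via the SVD of the centred feature matrix; the data below are synthetic, with 12 retained components as in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, k = 200, 30, 12                    # samples, features, components
latent = rng.normal(size=(n, 3))         # 3 underlying factors
loadings = rng.normal(size=(3, p))
X = latent @ loadings + 0.01 * rng.normal(size=(n, p))  # feature matrix

Xc = X - X.mean(axis=0)                  # centre each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                   # first k principal components
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

The `scores` matrix is what would replace the raw curvelet features as classifier input (here the classifier itself, LASSO/SVM/multilevel, is omitted); `explained` reports the fraction of variance the retained components carry.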
Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou
2017-07-01
In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.
Vicher: A Virtual Reality Based Educational Module for Chemical Reaction Engineering.
ERIC Educational Resources Information Center
Bell, John T.; Fogler, H. Scott
1996-01-01
A virtual reality application for undergraduate chemical kinetics and reactor design education, Vicher (Virtual Chemical Reaction Model) was originally designed to simulate a portion of a modern chemical plant. Vicher now consists of two programs: Vicher I that models catalyst deactivation and Vicher II that models nonisothermal effects in…
A New Transcendence Model of Identity Construction
ERIC Educational Resources Information Center
Elliott, Christopher L. Wilcox
2012-01-01
What does it mean for college men to be authentic? How can we support their efforts to transcend their own immediate needs? And what roles do trusted others play in one's construction of identity? This article describes a preliminary new theoretical model--a Transcendence Model of Identity Construction--based on original research studying…
Introductory Biology Students' Conceptual Models and Explanations of the Origin of Variation
ERIC Educational Resources Information Center
Bray Speth, Elena; Shaw, Neil; Momsen, Jennifer; Reinagel, Adam; Le, Paul; Taqieddin, Ranya; Long, Tammy
2014-01-01
Mutation is the key molecular mechanism generating phenotypic variation, which is the basis for evolution. In an introductory biology course, we used a model-based pedagogy that enabled students to integrate their understanding of genetics and evolution within multiple case studies. We used student-generated conceptual models to assess…
The Sanctuary Model of Trauma-Informed Organizational Change
ERIC Educational Resources Information Center
Bloom, Sandra L.; Sreedhar, Sarah Yanosy
2008-01-01
This article features the Sanctuary Model[R], a trauma-informed method for creating or changing an organizational culture. Although the model is based on trauma theory, its tenets have application in working with children and adults across a wide diagnostic spectrum. Originally developed in a short-term, acute inpatient psychiatric setting for…
Wang, Fuyu; Xu, Bainan; Sun, Zhenghui; Liu, Lei; Wu, Chen; Zhang, Xiaojun
2012-10-01
To establish an individualized fluid-solid coupled model of intracranial aneurysms based on computed tomography angiography (CTA) image data. The original DICOM-format image data from a patient with an intracranial aneurysm were imported into Mimics software to construct the 3D model. The fluid-solid coupled model was simulated with ANSYS and CFX software, and the sensitivity of the model was analyzed. The difference between the rigid model and the fluid-solid coupled model was also compared. The fluid-solid coupled model of the intracranial aneurysm was established successfully, allowing direct simulation of the blood flow in the aneurysm and the deformation of the solid wall. The pressure field, stress field, and distributions of von Mises stress and deformation of the aneurysm could be exported from the model. A small Young's modulus led to obvious deformation of the vascular wall, and walls with greater thickness had smaller deformations. The rigid model and the fluid-solid coupled model differed more in wall shear stress and blood flow velocity than in pressure. The fluid-solid coupled model represents the actual condition of the intracranial aneurysm more accurately than the rigid model, and the results of numerical simulation with the model are reliable for studying the origin, growth, and rupture of aneurysms.
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. These samples were used to fit the conventional landslide "volume-area" power-law relationship and the three optimized models we proposed. Two data-fitting methods, log-transformed linear least squares and nonlinear least squares on the original data, were applied to the four models. Results show that nonlinear least squares on the original data, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, performs best. This model was then applied to the database of landslides triggered by the quake, excluding the two largest landslides with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10^10 m^3 in deposit materials and 1 × 10^10 m^3 in source areas. The total volume estimated from published relationships between earthquake magnitude and total landslide volume is much smaller than that from this study, underscoring the need to update the power-law relationship. PMID:27404212
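The two fitting approaches the abstract compares (linear least squares on log-transformed data vs. nonlinear least squares on the original data) can be sketched on synthetic data; the coefficients and noise level below are illustrative assumptions, not the paper's dataset or fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic landslide areas (m^2) and volumes generated from an assumed
# power law V = a * A^b with multiplicative log-normal noise.
a_true, b_true = 0.05, 1.3
A = rng.uniform(1e2, 1e5, 500)
V = a_true * A**b_true * rng.lognormal(0.0, 0.2, A.size)

# Method 1: linear least squares on log-transformed data.
slope, intercept = np.polyfit(np.log(A), np.log(V), 1)
b_log, a_log = slope, np.exp(intercept)

# Method 2: nonlinear least squares on the original data.
(a_nl, b_nl), _ = curve_fit(lambda x, a, b: a * x**b, A, V,
                            p0=(a_log, b_log), maxfev=10000)

print(f"log-linear: a={a_log:.4f}, b={b_log:.3f}")
print(f"nonlinear : a={a_nl:.4f}, b={b_nl:.3f}")
```

With multiplicative noise the log-linear fit is unbiased in log space, while the nonlinear fit weights the largest landslides most heavily, which is why the two methods can disagree when extrapolating a total volume.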
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Increasing frequency in red tide outbreaks has been reported around the world. Red tides are of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high-accuracy detection method based on a fully-connected deep CNN detection model with 8 layers to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image increased by 8.62% and 6.37%. The results also indicate that image glint seriously affects the accuracy of red tide detection.
Value of eddy-covariance data for individual-based, forest gap models
NASA Astrophysics Data System (ADS)
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2014-05-01
Individual-based forest gap models simulate tree growth and carbon fluxes on large time scales. They are a well-established tool to predict forest dynamics and succession. However, the effect of climatic variables on the processes of such individual-based models is uncertain (e.g. the effect of temperature or soil moisture on gross primary production (GPP)). Commonly, the functional relationships and parameter values that describe the effect of climate variables on model processes are gathered from various vegetation models of different spatial scales, yet their accuracy and parameter values have not been validated at the specific scales of individual-based forest gap models. In this study, we address this uncertainty by linking eddy-covariance (EC) data and a forest gap model. The forest gap model FORMIND is applied to the Norway spruce monoculture forest at Wetzstein in Thuringia, Germany for the years 2003-2008. The original parameterizations of the climatic functions are adapted according to the EC data. The time step of the model is reduced to one day in order to match the high-resolution EC data. The FORMIND model uses functional relationships at the individual level, whereas the EC method measures eco-physiological responses at the ecosystem level; however, we assume that in homogeneous stands such as in our study, the functional relationships of both methods are comparable. The model is then validated at the spruce forest Waldstein, Germany. Results show that the functional relationships used in the model are similar to those observed with the EC method. The temperature reduction curve is well reflected in the EC data, though parameter values differ from those originally expected. For example, at the freezing point the observed GPP is 30% higher than predicted by the forest gap model. The response of observed GPP to soil moisture shows that the permanent wilting point is 7 vol-% lower than the value derived from the literature.
The light response curve, integrated over the canopy and the forest stand, is underestimated compared to the measured data. The EC method measures a yearly carbon balance of 13 mol(CO2) m^-2 for the Wetzstein site. The model with the original parameterization overestimates the yearly carbon balance by nearly 5 mol(CO2) m^-2, while the model with an EC-based parameterization fits the measured data very well. The parameter values derived from EC data are applied to the spruce forest Waldstein and clearly improve estimates of the carbon balance.
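The kinds of climate-response functions being recalibrated here can be sketched with generic functional forms; both the shapes and the parameter values below are illustrative assumptions, not FORMIND's actual parameterization.

```python
import math

def light_response(par, gpp_max, alpha):
    """Rectangular-hyperbola light response curve, a common choice in
    vegetation models (assumed form, not FORMIND's exact one).
    par: photosynthetically active radiation; alpha: initial slope."""
    return gpp_max * alpha * par / (alpha * par + gpp_max)

def temperature_reduction(t_c, t_opt=18.0, width=12.0):
    """Bell-shaped temperature reduction factor in [0, 1];
    parameter values are illustrative only."""
    return math.exp(-((t_c - t_opt) / width) ** 2)

# Daily GPP = potential GPP scaled by light and temperature limitation.
par = 800.0   # umol m-2 s-1, example value
gpp = light_response(par, gpp_max=30.0, alpha=0.05) * temperature_reduction(5.0)
print(round(gpp, 2))
```

Calibrating against EC data then amounts to adjusting parameters such as t_opt or alpha so that modeled GPP tracks the observed ecosystem-level fluxes.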
Validation of the NASA Dryden X-31 simulation and evaluation of mechanization techniques
NASA Technical Reports Server (NTRS)
Dickes, Edward; Kay, Jacob; Ralston, John
1994-01-01
This paper shall discuss the evaluation of the original Dryden X-31 aerodynamic math model, processes involved in the justification and creation of the modified data base, and comparison time history results of the model response with flight test.
Sources and Trends of Nitrogen Loading to New England Estuaries
A database of nitrogen (N) loading components to estuaries of the conterminous United States has been developed through application of regional SPARROW models. The original SPARROW models predict average detrended loads by source based on average flow conditions and 2002 source t...
Status and Trends of Nitrogen Loads to Estuaries of the Conterminous U.S.
We applied regional SPARROW (SPAtially Referenced Regressions On Watershed attributes) models to estimate status and trends of potential nitrogen loads to estuaries of the conterminous United States. The original SPARROW models predict average detrended loads by source based on ...
Origin of band gap bowing in dilute GaAs1-xNx and GaP1-xNx alloys: A real-space view
NASA Astrophysics Data System (ADS)
Virkkala, Ville; Havu, Ville; Tuomisto, Filip; Puska, Martti J.
2013-07-01
The origin of the band gap bowing in dilute nitrogen-doped gallium-based III-V semiconductors is widely debated. In this paper we show, using dilute GaAs1-xNx and GaP1-xNx as representative examples, that the nitrogen-induced states close to the conduction band minimum propagate along the zigzag chains on the {110} planes. States originating from different N atoms thereby interact with each other, resulting in a broadening of the nitrogen-induced states that narrows the band gap. Our modeling, based on ab initio theoretical calculations, explains the experimentally observed N-concentration-dependent band gap narrowing both qualitatively and quantitatively.
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. By illustrating the replication, by a different author, of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and reimplemented in NetLogo, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
Extended cooperative control synthesis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1994-01-01
This paper reports on research for extending the Cooperative Control Synthesis methodology to include a more accurate modeling of the pilot's controller dynamics. Cooperative Control Synthesis (CCS) is a methodology that addresses the problem of how to design control laws for piloted, high-order, multivariate systems and/or non-conventional dynamic configurations in the absence of flying qualities specifications. This is accomplished by emphasizing the parallel structure inherent in any pilot-controlled, augmented vehicle. The original CCS methodology is extended to include the Modified Optimal Control Model (MOCM), which is based upon the optimal control model of the human operator developed by Kleinman, Baron, and Levison in 1970. This model provides a modeling of the pilot's compensation dynamics that is more accurate than the simplified pilot dynamic representation currently in the CCS methodology. Inclusion of the MOCM into the CCS also enables the modeling of pilot-observation perception thresholds and pilot-observation attention allocation effects. This Extended Cooperative Control Synthesis (ECCS) allows for the direct calculation of pilot and system open- and closed-loop transfer functions in pole/zero form and is readily implemented in current software capable of analysis and design for dynamic systems. Example results based upon synthesizing an augmentation control law for an acceleration command system in a compensatory tracking task using the ECCS are compared with a similar synthesis performed by using the original CCS methodology. The ECCS is shown to provide augmentation control laws that yield more favorable, predicted closed-loop flying qualities and tracking performance than those synthesized using the original CCS methodology.
Waters, Theodore E. A.; Ruiz, Sarah K.; Roisman, Glenn I.
2016-01-01
Increasing evidence suggests that attachment representations take at least two forms—a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample, and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Further, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. PMID:27302650
Prediction of Protein Structure by Template-Based Modeling Combined with the UNRES Force Field.
Krupa, Paweł; Mozolewska, Magdalena A; Joo, Keehyoung; Lee, Jooyoung; Czaplewski, Cezary; Liwo, Adam
2015-06-22
A new approach to the prediction of protein structures that uses distance and backbone virtual-bond dihedral angle restraints derived from template-based models and simulations with the united residue (UNRES) force field is proposed. The approach combines the accuracy and reliability of template-based methods for the segments of the target sequence with high similarity to those having known structures with the ability of UNRES to pack the domains correctly. Multiplexed replica-exchange molecular dynamics with restraints derived from template-based models of a given target, in which each restraint is weighted according to the accuracy of the prediction of the corresponding section of the molecule, is used to search the conformational space, and the weighted histogram analysis method and cluster analysis are applied to determine the families of the most probable conformations, from which candidate predictions are selected. To test the capability of the method to recover template-based models from restraints, five single-domain proteins with structures that have been well-predicted by template-based methods were used; it was found that the resulting structures were of the same quality as the best of the original models. To assess whether the new approach can improve template-based predictions with incorrectly predicted domain packing, four such targets were selected from the CASP10 targets; for three of them the new approach resulted in significantly better predictions compared with the original template-based models. The new approach can be used to predict the structures of proteins for which good templates can be found for sections of the sequence or an overall good template can be found for the entire sequence but the prediction quality is remarkably weaker in putative domain-linker regions.
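The accuracy-weighted restraints described above can be illustrated with a simple weighted harmonic form; the function, weights, and constants here are hypothetical illustrations, and the actual UNRES restraint energy may differ.

```python
def restraint_energy(distances, targets, weights, k=1.0):
    """Accuracy-weighted harmonic distance restraints,
    E = sum_i w_i * k * (d_i - d0_i)^2. A generic illustration of
    weighting restraints by the confidence of the template-based
    prediction; not the UNRES code's actual restraint function."""
    return sum(w * k * (d - d0) ** 2
               for d, d0, w in zip(distances, targets, weights))

# A restraint from a confidently predicted template region (w = 1.0)
# dominates one from a poorly predicted region (w = 0.2).
e = restraint_energy([3.8, 5.1], [3.7, 4.0], [1.0, 0.2])
print(round(e, 3))
```

Down-weighting restraints in low-confidence regions is what lets the force field repack domains while the well-predicted segments stay near their template geometry.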
Bovine origin Staphylococcus aureus: A new zoonotic agent?
Rao, Relangi Tulasi; Jayakumar, Kannan; Kumar, Pavitra
2017-01-01
Aim: The study aimed to assess the nature of animal-origin Staphylococcus aureus strains. The study has zoonotic importance and aimed to compare virulence between two different hosts, i.e., bovine and ovine origin. Materials and Methods: Conventional polymerase chain reaction-based methods were used for the characterization of S. aureus strains, and a chick embryo model was employed for the assessment of the virulence capacity of strains. All statistical tests were carried out in the R program, version 3.0.4. Results: After initial screening and molecular characterization, the prevalence of S. aureus was found to be 42.62% in bovine-origin samples and 28.35% in ovine-origin samples, while the prevalence of methicillin-resistant S. aureus was found to be meager in both hosts: only 6.8% of isolates tested positive for methicillin resistance. Biofilm formation was quantified and the variation compared between the hosts; a Welch two-sample t-test was statistically significant (t=2.3179, df=28.103, p=0.02795). The chicken embryo model was found effective for testing the pathogenicity of the strains. Conclusion: The study supports the conclusion that healthy bovines can act as S. aureus reservoirs, that bovine-origin S. aureus strains are more virulent than ovine-origin strains, and that bovine-origin strains have a high probability of becoming zoonotic pathogens. Further gene knockout studies may be conducted to confirm the zoonotic potential of the bovine-origin strains. PMID:29184376
A Novel Antibody Humanization Method Based on Epitopes Scanning and Molecular Dynamics Simulation
Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen
2013-01-01
1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods such as complementarity-determining region (CDR) grafting often lead to a decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Then virtual mutation and molecular dynamics (MD) simulation were used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs; these residues need to be back-mutated to rescue the antibody binding affinity. Finally, we constructed the antibodies in vitro and compared their binding affinity by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results establish a novel method based on epitope scanning and MD simulation for antibody humanization. PMID:24278299
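The RMSD comparison underlying the back-mutation decision can be sketched as follows; the coordinates are made up for illustration, and a real workflow would superpose the structures (e.g. Kabsch alignment) before computing the deviation.

```python
import numpy as np

def rmsd(a, b):
    """Plain (unaligned) RMSD between two equally sized coordinate sets,
    in the same units as the inputs. Structural superposition, normally
    done first, is omitted here for brevity."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Hypothetical CDR backbone coordinates (angstroms) before and after a
# virtual framework mutation; a large RMSD would flag a back-mutation.
cdr_before = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.1]])
cdr_after = cdr_before + np.array([0.1, -0.1, 0.05])  # small rigid shift
print(round(rmsd(cdr_before, cdr_after), 3))  # → 0.15
```

In the paper's scheme, residues whose mutation pushed the CDR RMSD far from the original conformation were the ones selected for back-mutation.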
Dececchi, T Alexander; Larsson, Hans C E
2011-01-01
The origin of avian flight is a classic macroevolutionary transition with research spanning over a century. Two competing models explaining this locomotory transition have been discussed for decades: ground up versus trees down. Although it is impossible to directly test either of these theories, it is possible to test one of the requirements for the trees-down model, that of an arboreal paravian. We test for arboreality in non-avian theropods and early birds with comparisons to extant avian, mammalian, and reptilian scansors and climbers using a comprehensive set of morphological characters. Non-avian theropods, including the small, feathered deinonychosaurs, and Archaeopteryx, consistently and significantly cluster with fully terrestrial extant mammals and ground-based birds, such as ratites. Basal birds, more advanced than Archaeopteryx, cluster with extant perching ground-foraging birds. Evolutionary trends immediately prior to the origin of birds indicate skeletal adaptations opposite that expected for arboreal climbers. Results reject an arboreal capacity for the avian stem lineage, thus lending no support for the trees-down model. Support for a fully terrestrial ecology and origin of the avian flight stroke has broad implications for the origin of powered flight for this clade. A terrestrial origin for the avian flight stroke challenges the need for an intermediate gliding phase, presents the best resolved series of the evolution of vertebrate powered flight, and may differ fundamentally from the origin of bat and pterosaur flight, whose antecedents have been postulated to have been arboreal and gliding.
NASA Astrophysics Data System (ADS)
Mackay, Sean Leland
Antarctic debris-covered glaciers are potential archives of long-term climate change. However, the geomorphic response of these systems to climate forcing is not well understood. To address this concern, I conducted a series of field-based and numerical modeling studies in the McMurdo Dry Valleys of Antarctica (MDV), with a focus on Mullins and Friedman glaciers. I used data and results from geophysical surveys, ice-core collection and analysis, geomorphic mapping, micro-meteorological stations, and numerical-process models to (1) determine the precise origin and distribution of englacial and supraglacial debris within these buried-ice systems, (2) quantify the fundamental processes and feedbacks that govern interactions among englacial and supraglacial debris, (3) establish a process-based model to quantify the inventory of cosmogenic nuclides within englacial and supraglacial debris, and (4) isolate the governing relationships between the evolution of englacial/supraglacial debris and regional climate forcing. Results from 93 field excavations, 21 ice cores, and 24 km of ground-penetrating radar data show that Mullins and Friedman glaciers contain vast areas of clean glacier ice interspersed with inclined layers of concentrated debris. The similarity in the pattern of englacial debris bands across both glaciers, along with model results that call for negligible basal entrainment, is best explained by episodic environmental change at valley headwalls. To better constrain the timing of debris-band formation, I developed a modeling framework that tracks the accumulation of cosmogenic 3He in englacial and supraglacial debris. Results imply that ice within Mullins Glacier increases in age non-linearly from 12 ka to ~220 ka in areas of active flow (up to >>1.6 Ma in areas of slow-moving-to-stagnant ice) and that englacial debris bands originate with a periodicity of ~41 ka.
Modeling studies suggest that debris bands originate in synchronicity with changes in obliquity-paced, total integrated summer insolation. The implication is that the englacial structure and surface morphology of some cold-based, debris-covered glaciers can preserve high-resolution climate archives that exceed the typical resolution of Antarctic terrestrial deposits and moraine records.
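The cosmogenic-nuclide bookkeeping can be sketched for the simplest case of a stable nuclide such as 3He at a fixed depth; the production rate, density, and attenuation length below are generic illustrative values, not the site-specific parameters of the thesis model.

```python
import math

def he3_concentration(exposure_ka, depth_cm, p0=120.0, rho=2.0, lam=160.0):
    """Cosmogenic 3He accumulated in a stable mineral held at fixed depth:
    N = P0 * exp(-rho * z / Lambda) * t. p0 is a surface production rate
    (atoms g^-1 yr^-1), rho a bulk density (g cm^-3), lam an attenuation
    length (g cm^-2); all three are placeholder values."""
    p = p0 * math.exp(-rho * depth_cm / lam)  # production rate at depth
    return p * exposure_ka * 1000.0           # atoms per gram

# Debris at the surface accumulates 3He much faster than debris still
# buried under a meter of till, which is what lets the inventory encode
# exhumation and exposure history.
surface = he3_concentration(41.0, 0.0)
buried = he3_concentration(41.0, 100.0)
print(surface, buried)
```

A full model like the one described above integrates this production along each particle's time-varying depth path through the glacier rather than assuming a fixed depth.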
Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.
Yang, Delian; Wang, Qiang
2015-09-28
We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.
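The coarse-graining objective named in the title can be illustrated on toy discrete distributions; real RE-based coarse graining minimizes this quantity over ensembles of polymer configurations, not three-state distributions.

```python
import math

def relative_entropy(p, q):
    """S_rel = sum_i p_i * ln(p_i / q_i): the information lost when the
    coarse-grained ensemble q stands in for the original ensemble p.
    Toy normalized discrete distributions, not configuration ensembles."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

p = [0.5, 0.3, 0.2]           # "original" ensemble
good_cg = [0.45, 0.35, 0.2]   # close coarse-grained match
poor_cg = [0.1, 0.1, 0.8]     # poor match
print(relative_entropy(p, good_cg), relative_entropy(p, poor_cg))
```

A smaller S_rel means a better coarse-grained model, which is what makes it usable as a quantitative criterion for choosing among candidate analytic CG pair-potential forms.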
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Rautman, Christopher Arthur
2004-02-01
The geologic model implicit in the original site characterization report for the Bayou Choctaw Strategic Petroleum Reserve site near Baton Rouge, Louisiana, has been converted to a numerical, computer-based three-dimensional model. The original site characterization model was successfully converted with minimal modifications and use of new information. The geometries of the salt diapir, selected adjacent sedimentary horizons, and a number of faults have been modeled. Models of a partial set of the several storage caverns that have been solution-mined within the salt mass are also included. Collectively, the converted model appears to be a relatively realistic representation of the geology of the Bayou Choctaw site as known from existing data. A small number of geometric inconsistencies and other problems inherent in 2-D vs. 3-D modeling have been noted; most of the major inconsistencies involve faults inferred from drill-hole data only. Modern computer software allows visualization of the resulting site model and its component submodels with a degree of detail and flexibility that was not possible with conventional, two-dimensional, paper-based geologic maps and cross sections. The enhanced visualizations may be of particular value in conveying geologic concepts involved in the Bayou Choctaw Strategic Petroleum Reserve site to a lay audience. A Microsoft Windows PC-based viewer and user-manipulable model files illustrating selected features of the converted model are included in this report.
Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project
Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian
2012-01-01
Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 database, which is considerably expanded from the original NGA-West database and contains about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models which made them inappropriate for modeling large-magnitude events on long strike-slip faults. Two models are explicitly, and one is implicitly, 'narrowband' models, in which the effect of directivity does not monotonically increase with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband functional forms are capable of simulating directivity over a wider range of earthquake magnitudes than previous models. The functional forms of the five models are presented.
International migration beyond gravity: A statistical model for use in population projections
Cohen, Joel E.; Roig, Marta; Reuman, Daniel C.; GoGwilt, Cai
2008-01-01
International migration will play an increasing role in the demographic future of most nations if fertility continues to decline globally. We developed an algorithm to project future numbers of international migrants from any country or region to any other. The proposed generalized linear model (GLM) used geographic and demographic independent variables only (the population and area of origins and destinations of migrants, the distance between origin and destination, the calendar year, and indicator variables to quantify nonrandom characteristics of individual countries). The dependent variable, yearly numbers of migrants, was quantified by 43653 reports from 11 countries of migration from 228 origins and to 195 destinations during 1960–2004. The final GLM based on all data was selected by the Bayesian information criterion. The number of migrants per year from origin to destination was proportional to (population of origin)^0.86 × (area of origin)^−0.21 × (population of destination)^0.36 × (distance)^−0.97, multiplied by functions of year and country-specific indicator variables. The number of emigrants from an origin depended on both its population and its population density. For a variable initial year and a fixed terminal year 2004, the parameter estimates appeared stable. Multiple R^2, the fraction of variation in log numbers of migrants accounted for by the starting model, improved gradually with recentness of the data: R^2 = 0.57 for data from 1960 to 2004, R^2 = 0.59 for 1985–2004, R^2 = 0.61 for 1995–2004, and R^2 = 0.64 for 2000–2004. The migration estimates generated by the model may be embedded in deterministic or stochastic population projections. PMID:18824693
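The fitted power-law core of the GLM can be turned directly into a point-estimate function; the overall scale and the year/country indicator terms are folded into a placeholder constant here, since the abstract does not report them.

```python
def migrants_per_year(pop_o, area_o, pop_d, dist_km, scale=1.0):
    """Geographic/demographic core of the fitted GLM: flow proportional to
    (pop origin)^0.86 * (area origin)^-0.21 * (pop dest)^0.36 * (dist)^-0.97.
    `scale` stands in for the constant and the year and country indicator
    terms, which are not given in the abstract."""
    return (scale * pop_o ** 0.86 * area_o ** -0.21
            * pop_d ** 0.36 * dist_km ** -0.97)

# Elasticities follow directly from the exponents: doubling the origin
# population multiplies the flow by 2^0.86, while doubling the distance
# divides it by 2^0.97.
base = migrants_per_year(1e7, 5e5, 5e7, 2000.0)
r = migrants_per_year(2e7, 5e5, 5e7, 2000.0) / base
print(round(r, 3))
```

Because the model is multiplicative, relative comparisons between origin-destination pairs do not depend on the unreported scale factor.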
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of specifying the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing image reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
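The alternating idea, estimating the sparse coefficients with the dictionary fixed and then the dictionary with the coefficients fixed, can be sketched in a few lines; this is an illustrative toy (synthetic data, an ISTA sparse-coding step, a least-squares dictionary step), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (promotes sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Synthetic signals that are sparse in an unknown dictionary.
n, k, m = 20, 30, 200                     # signal dim, atoms, samples
D_true = rng.standard_normal((n, k))
S_true = soft_threshold(rng.standard_normal((k, m)), 1.0)
X = D_true @ S_true

# Alternate between the sparse codes S and the dictionary D.
D = rng.standard_normal((n, k))
S = np.zeros((k, m))
for _ in range(50):
    # Sparse-coding step: one ISTA iteration on S with D fixed.
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    S = soft_threshold(S - step * D.T @ (D @ S - X), 0.01 * step)
    # Dictionary step: least squares on D with S fixed.
    D = X @ np.linalg.pinv(S)

err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

In a compressed-sensing setting the data fidelity term would involve a measurement matrix (X observed only through undersampled projections); the alternation structure stays the same.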
Emond, Claude; Ruiz, Patricia; Mumtaz, Moiz
2017-01-15
Chlorinated dibenzo-p-dioxins (CDDs) are a series of mono- to octa-chlorinated homologous chemicals commonly referred to as polychlorinated dioxins. One of the most potent, well-known, and persistent members of this family is 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). As part of translational research to make computerized models accessible to health risk assessors, we present a Berkeley Madonna recoded version of the human physiologically based pharmacokinetic (PBPK) model used by the U.S. Environmental Protection Agency (EPA) in its recent dioxin assessment. This model incorporates CYP1A2 induction, an important metabolic vector that drives dioxin distribution in the human body, and it uses a variable elimination half-life that is body-burden dependent. To evaluate the model's accuracy, the recoded model's predictions were compared with those of the original published model; the simulations performed with the recoded model matched those of the original model well. The recoded model was then applied to available data sets from real-life exposure studies. The recoded model can describe acute and chronic exposures and can be useful for interpreting human biomonitoring data as part of an overall risk assessment of dioxin and/or dioxin-like compounds. Copyright © 2016. Published by Elsevier Inc.
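The model's key feature, an elimination half-life that depends on body burden, can be sketched with a one-compartment toy model; the functional form and all numbers below are illustrative assumptions, not the EPA model's parameterization.

```python
import math

def half_life_days(body_burden_ng):
    """Elimination half-life shrinking with body burden (a stand-in for
    CYP1A2 induction): about 7 years at low burden, shorter at high
    burden. Form and constants are illustrative only."""
    return 2555.0 / (1.0 + body_burden_ng / 1.0e4)

def simulate(daily_intake_ng, days):
    """One-compartment Euler integration of body burden with a
    burden-dependent elimination rate k = ln(2) / t_half(burden)."""
    burden = 0.0
    for _ in range(days):
        k_elim = math.log(2.0) / half_life_days(burden)
        burden += daily_intake_ng - k_elim * burden
    return burden

# A chronic low-level intake approaches a steady state where daily
# intake balances daily elimination; because the half-life shortens as
# burden rises, the steady state sits below the constant-half-life value.
b = simulate(0.5, 20 * 365)
print(round(b, 1))
```

A full PBPK model adds tissue compartments (liver, fat) and partitioning, but the burden-dependent elimination shown here is the nonlinearity the abstract highlights.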
Chang, Shih-Hua
2018-04-01
The purpose of this study was to test a model of codependency based on Bowen's concept of differentiation for college students in Taiwan. The relations between family-of-origin dysfunction, differentiation of self, codependency traits and related symptoms including low self-esteem, relationship distress and psychological adjustment problems were examined. Data were collected from 567 college students from 2 large, urban universities in northern Taiwan. Results indicated a significantly negative relationship between levels of codependency and self-differentiation and that self-differentiation partially mediated the relationship between family-of-origin dysfunction and codependency. The implications of these findings for counselling Taiwanese college students who experience codependency traits and related symptoms as well as suggestions for future research are discussed. © 2016 International Union of Psychological Science.
NASA Astrophysics Data System (ADS)
Lyu, Dandan; Li, Shaofan
2017-10-01
Crystal defects have microstructure, and this microstructure should be related to the microstructure of the original crystal. Hence each type of crystal may have similar defects due to the same failure mechanism originating from the same microstructure, if they are under the same loading conditions. In this work, we propose a multiscale crystal defect dynamics (MCDD) model that models defects by considering their intrinsic microstructure derived from the microstructure or material genome of the original perfect crystal. The main novelties of the present work are: (1) the discrete exterior calculus and algebraic topology theory are used to construct a scale-up (coarse-grained) dual lattice model for crystal defects, which may represent all possible defect modes inside a crystal; (2) a higher order Cauchy-Born rule (up to the fourth order) is adopted to construct atomistic-informed constitutive relations for various defect process zones, and (3) a hierarchical strain gradient theory based finite element formulation is developed to support a hierarchical multiscale cohesive (process) zone model for various defects in a unified formulation. The efficiency of the MCDD computational algorithm allows us to simulate dynamic defect evolution at large scale while taking into account atomistic interactions. The MCDD model has been validated by comparing the results of MCDD simulations with those of molecular dynamics (MD) in the cases of nanoindentation and uniaxial tension. Numerical simulations have shown that the MCDD model can predict dislocation nucleation induced instability and inelastic deformation, and thus it may provide an alternative solution to study crystal plasticity.
Computational Systems Biology in Cancer: Modeling Methods and Applications
Materi, Wayne; Wishart, David S.
2007-01-01
In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081
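As a concrete illustration of one formalism surveyed above, here is a minimal 2-D cellular-automaton sketch of tumor growth. It is our toy, not a model from the review: each occupied lattice site divides with fixed probability into a random empty von Neumann neighbor, producing compact, perimeter-limited growth.

```python
import random

random.seed(1)
N = 41                                   # lattice size
tumor = {(N // 2, N // 2)}               # start from a single transformed cell

def step(tumor, p_divide=0.3):
    new = set(tumor)
    for (i, j) in list(tumor):
        if random.random() < p_divide:
            # daughter cell goes to a random empty von Neumann neighbor, if any
            neighbors = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            random.shuffle(neighbors)
            for ni, nj in neighbors:
                if 0 <= ni < N and 0 <= nj < N and (ni, nj) not in new:
                    new.add((ni, nj))
                    break
    return new

sizes = [len(tumor)]
for _ in range(30):
    tumor = step(tumor)
    sizes.append(len(tumor))
```

Interior cells quickly run out of empty neighbors, so growth is confined to the rim, a qualitative behavior that richer CA tumor models refine with nutrient fields and cell death.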
Origin of the moon - The collision hypothesis
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.
Vandervert, Larry
2017-01-01
Mathematicians and scientists have struggled to adequately describe the ultimate foundations of mathematics. Nobel laureates Albert Einstein and Eugene Wigner were perplexed by this issue, with Wigner concluding that the workability of mathematics in the real world is a mystery we cannot explain. In response to this classic enigma, the major purpose of this article is to provide a theoretical model of the ultimate origin of mathematics and "number sense" (as defined by S. Dehaene) that is proposed to involve the learning of inverse dynamics models through the collaboration of the cerebellum and the cerebral cortex (but prominently cerebellum-driven). This model is based upon (1) the modern definition of mathematics as the "science of patterns," (2) cerebellar sequence (pattern) detection, and (3) findings that the manipulation of numbers is automated in the cerebellum. This cerebro-cerebellar approach does not necessarily conflict with mathematics or number sense models that focus on brain functions associated with especially the intraparietal sulcus region of the cerebral cortex. A direct corollary purpose of this article is to offer a cerebellar inner speech explanation for difficulty in developing "number sense" in developmental dyscalculia. It is argued that during infancy the cerebellum learns (1) a first tier of internal models for a primitive physics that constitutes the foundations of visual-spatial working memory, and (2) a second (and more abstract) tier of internal models based on (1) that learns "number" and relationships among dimensions across the primitive physics of the first tier. Within this context it is further argued that difficulty in the early development of the second tier of abstraction (and "number sense") is based on the more demanding attentional requirements imposed on cerebellar inner speech executive control during the learning of cerebellar inverse dynamics models. 
Finally, it is argued that finger counting improves (does not originate) "number sense" by extending the focus of attention in executive control of silent cerebellar inner speech. It is suggested that (1) the origin of mathematics has historically been an enigma only because it is learned below the level of conscious awareness in cerebellar internal models, and (2) understanding of the development of "number sense" and developmental dyscalculia can be advanced by first recognizing that the ultimate foundations of number and mathematics do not simply originate in the cerebral cortex, but rather in cerebro-cerebellar collaboration (predominantly driven by the cerebellum). It is concluded that difficulty with "number sense" results from the extended demands on executive control in learning inverse dynamics models associated with cerebellar inner speech related to the second tier of abstraction (numbers) of the infant's primitive physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be constructed adaptively at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated into the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
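The key idea above, inflating the observation variance by the GP surrogate's own predictive variance so the posterior is not over-confident, can be sketched in a few lines. This is an illustrative stand-in (RBF kernel, a cheap analytic "forward model", invented data), not the authors' implementation.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_train, y_train, x_star, jitter=1e-6):
    """Standard GP regression; returns posterior mean and (clipped) variance."""
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf(x_train, x_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.clip(1.0 - np.sum(v * v, axis=0), 0.0, None)
    return mean, var

f = lambda x: np.sin(3 * x)              # cheap stand-in for the expensive original model
x_train = np.linspace(0.0, 2.0, 8)       # design points where the model was run
mu, var = gp_posterior(x_train, f(x_train), np.array([1.0, 3.0]))

# error-aware likelihood: the surrogate variance inflates the observation
# variance, so points where the surrogate is uncertain contribute less sharply
y_obs, sigma_obs = 0.9, 0.1
total_var = sigma_obs**2 + var
loglik = -0.5 * np.log(2 * np.pi * total_var) - 0.5 * (y_obs - mu) ** 2 / total_var
```

At the interpolation point (x = 1.0) the surrogate variance is tiny and the likelihood is sharp; at the extrapolation point (x = 3.0) the variance is large, which is exactly what prevents over-confident posteriors.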
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Launder model for free shear layers. The second version of the model is called the Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
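The blending rule underlying the BSL/SST construction, phi = F1*phi1 + (1 - F1)*phi2 with F1 near 1 at the wall and near 0 at the boundary-layer edge, can be sketched as follows. The beta and sigma_w values are the published SST inner/outer coefficient sets, but the F1 shown is a schematic function of y/delta of our own making, not Menter's actual F1 (which depends on local flow variables).

```python
import math

# the two coefficient sets blended by the BSL/SST construction
inner = {"beta": 0.075, "sigma_w": 0.5}      # Wilcox k-omega branch (near wall)
outer = {"beta": 0.0828, "sigma_w": 0.856}   # transformed k-epsilon branch (outer flow)

def blended(F1):
    """phi = F1 * phi_inner + (1 - F1) * phi_outer -- the BSL/SST blending rule."""
    return {key: F1 * inner[key] + (1.0 - F1) * outer[key] for key in inner}

def F1_schematic(y_over_delta):
    # schematic blending function: ~1 near the wall, ~0 at the boundary-layer edge
    return 0.5 * (1.0 - math.tanh(8.0 * (y_over_delta - 0.5)))

near_wall = blended(F1_schematic(0.05))   # recovers the k-omega coefficients
edge = blended(F1_schematic(0.95))        # recovers the k-epsilon coefficients
```

The smooth switch is what removes the freestream sensitivity: the omega-equation coefficients active at the edge are those of the transformed k-epsilon model, which is insensitive to freestream omega.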
Toward Improved Fidelity of Thermal Explosion Simulations
NASA Astrophysics Data System (ADS)
Nichols, A. L.; Becker, R.; Howard, W. M.; Wemhoff, A.
2009-12-01
We will present results of an effort to improve the thermal/chemical/mechanical modeling of HMX-based explosives like LX04 and LX10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The current results were built to remedy the deficiencies of that original model. We concentrated our efforts in four areas. The first area was the addition of porosity to the chemical material model framework in ALE3D that is used to model the HMX explosive formulation. This is needed to handle the roughly 2% porosity in solid explosives. The second area was the improvement of the HMX reaction network, which included a reactive phase change model based on work by Henson et al. The third area required adding early decomposition gas species to the CHEETAH material database to develop more accurate equations of state for gaseous intermediates and products. Finally, it was necessary to improve the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
A modified Galam’s model for word-of-mouth information exchange
NASA Astrophysics Data System (ADS)
Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria
2009-09-01
In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003), 571-580], for information spreading in a ‘word-of-mouth’ process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the ‘word-of-mouth’ process and compare the results with the theoretical values predicted by Galam’s model. Some dissimilarities arise, in particular when a small number of agents is considered. We identify the causes of these dissimilarities and suggest some enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme which is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical evidence that the modified model often outperforms the original one.
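Galam's communication step — agents are reshuffled into discussion groups of random sizes, and each group adopts its local majority, with ties broken in favor of one opinion — is easy to simulate directly. The sketch below is our minimal rendition of that kind of simulation (group-size distribution and all parameters are illustrative, not those of the paper).

```python
import random

random.seed(42)

def galam_step(opinions, group_sizes=(1, 2, 3, 4), tie_breaker=1):
    """One cycle: reshuffle agents, split into random groups, apply local majority."""
    random.shuffle(opinions)
    out, i = [], 0
    while i < len(opinions):
        group = opinions[i:i + random.choice(group_sizes)]
        s = sum(group)
        winner = tie_breaker if s == 0 else (1 if s > 0 else -1)
        out.extend([winner] * len(group))   # everyone in the group adopts the majority
        i += len(group)
    return out

N = 1000
opinions = [1] * 700 + [-1] * 300        # 70% initially hold opinion +1
for _ in range(30):
    opinions = galam_step(opinions)
frac_plus = opinions.count(1) / N
```

Starting well above the unstable threshold, repeated majority cycles amplify the majority opinion toward consensus; with few agents, the finite-size fluctuations visible in such runs are exactly where the simulations and the mean-field theory diverge.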
A Harris-Todaro Agent-Based Model to Rural-Urban Migration
NASA Astrophysics Data System (ADS)
Espíndola, Aquino L.; Silveira, Jaylson J.; Penna, T. J. P.
2006-09-01
The Harris-Todaro model of the rural-urban migration process is revisited under an agent-based approach. The migration of the workers is interpreted as a process of social learning by imitation, formalized by a computational model. By simulating this model, we observe a transitional dynamics with continuous growth of the urban fraction of overall population toward an equilibrium. Such an equilibrium is characterized by stabilization of rural-urban expected wages differential (generalized Harris-Todaro equilibrium condition), urban concentration and urban unemployment. These classic results obtained originally by Harris and Todaro are emergent properties of our model.
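A stripped-down version of the mechanism just described can be iterated directly: the urban expected wage is the institutional wage times the employment probability, the rural wage rises as workers leave the land, and each period a small flow of workers imitates the better-off sector. All functional forms and parameter values below are illustrative stand-ins, not those of the agent-based model above.

```python
w_u, jobs_u = 2.0, 0.3      # institutional urban wage; fixed number of urban jobs
n_u = 0.1                   # initial urban share of a workforce normalized to 1

def rural_wage(n_r):
    return 0.9 * n_r ** -0.5    # marginal product rises as workers leave agriculture

for _ in range(2000):
    expected_urban = w_u * min(1.0, jobs_u / n_u)   # wage x employment probability
    # imitation dynamics: a small flow toward the sector with higher expected wage
    n_u += 0.001 * (expected_urban - rural_wage(1.0 - n_u))
    n_u = min(max(n_u, 0.01), 0.99)

# generalized Harris-Todaro condition: expected wages equalize at equilibrium
gap = abs(w_u * min(1.0, jobs_u / n_u) - rural_wage(1.0 - n_u))
```

The run settles at an urban share above the number of urban jobs, i.e., persistent urban unemployment alongside equalized expected wages, which is the classic Harris-Todaro equilibrium the abstract recovers as an emergent property.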
Efficient FFT Algorithm for Psychoacoustic Model of the MPEG-4 AAC
NASA Astrophysics Data System (ADS)
Lee, Jae-Seong; Lee, Chang-Joon; Park, Young-Cheol; Youn, Dae-Hee
This paper proposes an efficient FFT algorithm for the Psycho-Acoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution, at approximately half the computational complexity of the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.
Nallikuzhy, Jiss J; Dandapat, S
2017-06-01
In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads are evaluated using various diagnostic measures. Repeatability and diagnosability are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Finch, Kristen; Espinoza, Edgard; Jones, F. Andrew; Cronn, Richard
2017-01-01
Premise of the study: We investigated whether wood metabolite profiles from direct analysis in real time (time-of-flight) mass spectrometry (DART-TOFMS) could be used to determine the geographic origin of Douglas-fir wood cores originating from two regions in western Oregon, USA. Methods: Three annual ring mass spectra were obtained from 188 adult Douglas-fir trees, and these were analyzed using random forest models to determine whether samples could be classified to geographic origin, growth year, or growth year and geographic origin. Specific wood molecules that contributed to geographic discrimination were identified. Results: Douglas-fir mass spectra could be differentiated into two geographic classes with an accuracy between 70% and 76%. Classification models could not accurately classify sample mass spectra based on growth year. Thirty-two molecules were identified as key for classifying western Oregon Douglas-fir wood cores to geographic origin. Discussion: DART-TOFMS is capable of detecting minute but regionally informative differences in wood molecules over a small geographic scale, and these differences made it possible to predict the geographic origin of Douglas-fir wood with moderate accuracy. Studies involving DART-TOFMS, alone and in combination with other technologies, will be relevant for identifying the geographic origin of illegally harvested wood. PMID:28529831
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
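The wavelet detail-transfer idea behind MMA can be illustrated in 1-D with a single-level Haar decomposition: keep the low-resolution (functional) approximation coefficients and import the high-resolution (anatomical) detail coefficients scaled by a global least-squares factor — which is precisely the global-correlation model whose local, 3D refinement the paper proposes. The signals and the Haar transform below are our minimal stand-ins.

```python
import numpy as np

def haar_fwd(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-resolution) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-resolution) band
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

t = np.arange(64)
anatomical = (t > 20) * 1.0 + (t > 44) * 0.5                  # sharp structural edges
functional = np.convolve(anatomical, np.ones(8) / 8, "same")  # blurred "PET" profile

a_f, d_f = haar_fwd(functional)
_, d_a = haar_fwd(anatomical)

# global correlation model: one least-squares scale for the anatomical details
rho = (d_f @ d_a) / (d_a @ d_a)
enhanced = haar_inv(a_f, rho * d_a)   # functional approximation + imported details
```

A single global rho works only where anatomy and function actually correlate; where they do not, this is exactly the artifact mechanism that motivates the paper's local analysis.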
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-01-01
Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037
Maranhão, Mara Fernandes; Estella, Nara Mendes; Cogo-Moreira, Hugo; Schmidt, Ulrike; Campbell, Iain C; Claudino, Angélica Medeiros
2018-01-01
"Craving" is a motivational state that promotes an intense desire related to consummatory behaviors. Despite growing interest in the concept of food craving, there is a lack of available instruments to assess it in Brazilian Portuguese. The objectives were to translate and adapt the Trait and the State Food Craving Questionnaire (FCQ-T and FCQ-S) to Brazilian Portuguese and to evaluate the psychometric properties of these versions.The FCQ-T and FCQ-S were translated and adapted to Brazilian Portuguese and administered to students at the Federal University of São Paulo. Both questionnaires in their original models were examined considering different estimators (frequentist and bayesian). The goodness of fit underlying the items from both scales was assessed through the following fit indices: χ2, WRMR residual, comparative fit index, Tucker-Lewis index and RMSEA. Data from 314 participants were included in the analyses. Poor fit indices were obtained for both of the original questionnaires regardless of the estimator used and original structural model. Thus, three eating disorder experts reviewed the content of the instruments and selected the items which were considered to assess the core aspects of the craving construct. The new and reduced models (questionnaires) generated good fit indices. Our abbreviated versions of FCQ-S and FCQ-T considerably diverge from the conceptual framework of the original questionnaires. Based on the results of this study, we propose a possible alternative, i.e., to assess craving for food as a unidimensional construct.
Oxidative peptide (and amide) formation from Schiff base complexes
NASA Technical Reports Server (NTRS)
Strehler, B. L.; Li, M. P.; Martin, K.; Fliss, H.; Schmid, P.
1982-01-01
One hypothesis of the origin of pre-modern forms of life is that the original replicating molecules were specific polypeptides which acted as templates for the assembly of poly-Schiff bases complementary to the template, and that these polymers were then oxidized to peptide linkages, probably by photo-produced oxidants. A double cycle of such anti-parallel complementary replication would yield the original peptide polymer. If this model were valid, the Schiff base between an N-acyl alpha-amino aldehyde and an amino acid should yield a dipeptide in aqueous solution in the presence of an appropriate oxidant. In the present study it is shown that the substituted dipeptide, N-acetyl-tyrosyl-tyrosine, is produced in high yield in aqueous solution at pH 9 through the action of H2O2 on the Schiff-base complex between N-acetyl-tyrosinal and tyrosine, and that a great variety of N-acyl amino acids are formed from amino acids and aliphatic aldehydes under similar conditions.
Kopyt, Paweł; Celuch, Małgorzata
2007-01-01
A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.
Population-based human exposure models predict the distribution of personal exposures to pollutants of outdoor origin using a variety of inputs, including air pollution concentrations; human activity patterns, such as the amount of time spent outdoors versus indoors, commuting, w...
Reformulating the Depression Model of Learned Hopelessness for Academic Outcomes
ERIC Educational Resources Information Center
Au, Raymond C. P.; Watkins, David; Hattie, John; Alexander, Patricia
2009-01-01
This review explores developments in the construct of learned hopelessness, which originated in the clinical literature dealing with depression. In that context, the model developed by Abramson, Metalsky, and Alloy [Abramson, L. Y., Metalsky, G. I., & Alloy, L. B. (1989). "Hopelessness depression: A theory-based subtype of depression."…
[Neither Descartes nor Freud? current pain models in psychosomatic medicine].
Egloff, N; Egle, U T; von Känel, R
2008-05-14
Models that explain chronic pain solely by the presence or absence of peripheral somatic findings, or that attribute pain to a psychological origin whenever a somatic explanation is lacking, have their shortcomings. Current scientific knowledge calls for distinct pain concepts that integrate neurobiological and neuropsychological aspects of pain processing.
Ecophysiological parameters for Pacific Northwest trees.
Amy E. Hessl; Cristina Milesi; Michael A. White; David L. Peterson; Robert E. Keane
2004-01-01
We developed a species- and location-specific database of published ecophysiological variables typically used as input parameters for biogeochemical models of coniferous and deciduous forested ecosystems in the Western United States. Parameters are based on the requirements of Biome-BGC, a widely used biogeochemical model that was originally parameterized for the...
Prediction of Radial Vibration in Switched Reluctance Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, CJ; Fahimi, B
2013-12-01
Origins of vibration in switched reluctance machines (SRMs) are investigated. Accordingly, an input-output model based on the mechanical impulse response of the SRM is developed. The proposed model is derived using an experimental approach. Using the proposed approach, vibration of the stator frame is captured and experimentally verified.
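An input-output model of this kind predicts the frame vibration as the convolution of the radial-force excitation with the identified impulse response. The sketch below substitutes a synthetic decaying-sinusoid impulse response and an impulse train at the commutation rate for the experimental data (all numbers are illustrative); the predicted response is dominated by the structural resonance, as expected.

```python
import numpy as np

fs = 10_000                              # sample rate, Hz
t = np.arange(1000) / fs                 # 0.1 s window

# stand-in for the experimentally identified impulse response of the stator
# frame: a lightly damped sinusoid at a dominant structural resonance
f_res, zeta = 1200.0, 0.02
h = np.exp(-zeta * 2 * np.pi * f_res * t) * np.sin(2 * np.pi * f_res * t)

# excitation: radial-force impulses at the commutation instants (80 per second)
u = np.zeros_like(t)
u[::125] = 1.0

y = np.convolve(u, h)[: len(t)]          # input-output model: y = u * h

freqs = np.fft.rfftfreq(len(y), 1 / fs)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(y)))]
```

Because the excitation spectrum is a comb at multiples of the commutation frequency, the predicted vibration concentrates at the comb line nearest the structural resonance.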
NASA Astrophysics Data System (ADS)
Wang, Jun; Yue, Yun; Wang, Yi; Ichoku, Charles; Ellison, Luke; Zeng, Jing
2018-01-01
Largely used in several independent estimates of fire emissions, fire products based on MODIS sensors aboard the Terra and Aqua polar-orbiting satellites have a number of inherent limitations, including (a) inability to detect fires below clouds, (b) significant decrease of detection sensitivity at the edge of scan, where pixel sizes are much larger than at nadir, and (c) gaps between adjacent swaths in tropical regions. To remedy these limitations, an empirical method is developed here and applied to correct fire emission estimates based on MODIS pixel-level fire radiative power measurements and emission coefficients from the Fire Energetics and Emissions Research (FEER) biomass burning emission inventory. The analysis was performed for January 2010 over the northern sub-Saharan African region. Simulations from the WRF-Chem model using original and adjusted emissions are compared with the aerosol optical depth (AOD) products from MODIS and AERONET as well as the aerosol vertical profile from CALIOP data. The comparison confirmed a 30-50% improvement in model simulation performance (in terms of correlation, bias, and spatial pattern of AOD with respect to observations) from the adjusted emissions, which not only increase the original emission amount by a factor of two but also yield spatially continuous estimates of instantaneous fire emissions at daily time scales. Such improvement cannot be achieved by simply scaling the original emissions across the study domain. Even with this improvement, a factor-of-two underestimation still exists in the modeled AOD, which is within the current global fire emissions uncertainty envelope.
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Correia, Gilson; de Brito, Gustavo P.; de Almeida, José; Herdy, Wallace
2017-03-01
Simple prescriptions for computing the D-dimensional classical potential of electromagnetic and gravitational models, based on the generating functional, are worked out. These recipes are then employed to probe the premise that renormalizable higher-order systems have a finite classical potential at the origin. It is also shown that the converse of the conjecture above is not true. In other words, if a higher-order model is renormalizable, it is necessarily endowed with a finite classical potential at the origin, but the reverse of this statement is untrue. The systems used to check the conjecture were D-dimensional fourth-order Lee-Wick electrodynamics, and the D-dimensional fourth- and sixth-order gravity models. Special attention is devoted to New Massive Gravity (NMG), since it was the analysis of this model that inspired our surmise. In particular, we made use of our premise to resolve trivially the issue of the renormalizability of NMG, which was initially considered to be renormalizable but was shown some years later to be non-renormalizable. We remark that our analysis is restricted to local models in which the propagator has simple and real poles.
On theoretical and experimental modeling of metabolism forming in prebiotic systems
NASA Astrophysics Data System (ADS)
Bartsev, S. I.; Mezhevikin, V. V.
Recently, the search for extraterrestrial life has attracted more and more attention. However, the search can hardly be effective without a sufficiently universal concept of the origin of life, which incidentally also bears on the problem of the origin of life on Earth. A concept of the initial stages of the origin of life, including the origin of prebiotic metabolism, is stated in this paper. The suggested concept eliminates key difficulties in the problem of the origin of life and allows experimental verification. According to the concept, the predecessor of living beings has to be sufficiently simple to provide a non-zero probability of self-assembly during a short (on geological or cosmic scales) time. In addition, the predecessor has to be capable of autocatalysis and further complication (evolution). A possible scenario of the initial stage of the origin of life, which can be realized both on other planets and inside an experimental facility, is considered. Within this scenario, a theoretical model of a multivariate oligomeric autocatalyst coupled with a phase-separated particle is presented. Results of computer simulation of a possible initial stage of chemical evolution are shown. The estimates conducted show that the origin of an autocatalytic oligomeric phase-separated system is possible at reasonable values of the kinetic parameters of the involved chemical reactions in a small-scale flow reactor. The accepted statements, which eliminate key problems of the origin of life, imply an important consequence: organisms emerging outside the Earth or inside a reactor would have to be based on a biochemistry different from the terrestrial one.
NASA Astrophysics Data System (ADS)
Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.
2001-05-01
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within existing atrial and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, size, shape, and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
Probabilistic prediction of barrier-island response to hurricanes
Plant, Nathaniel G.; Stockdon, Hilary F.
2012-01-01
Prediction of barrier-island response to hurricane attack is important for assessing the vulnerability of communities, infrastructure, habitat, and recreational assets to the impacts of storm surge, waves, and erosion. We have demonstrated that a conceptual model intended to make qualitative predictions of the type of beach response to storms (e.g., beach erosion, dune erosion, dune overwash, inundation) can be reformulated in a Bayesian network to make quantitative predictions of the morphologic response. In an application of this approach at Santa Rosa Island, FL, predicted dune-crest elevation changes in response to Hurricane Ivan explained about 20% to 30% of the observed variance. An extended Bayesian network based on the original conceptual model, which included dune elevations, storm surge, and swash, but with the addition of beach and dune widths as input variables, showed improved skill compared to the original model, explaining 70% of dune elevation change variance and about 60% of dune and shoreline position change variance. This probabilistic approach accurately represented prediction uncertainty (measured with the log likelihood ratio), and it outperformed the baseline prediction (i.e., the prior distribution based on the observations). Finally, sensitivity studies demonstrated that degrading the resolution of the Bayesian network or removing data from the calibration process reduced the skill of the predictions by 30% to 40%. The reduction in skill did not change conclusions regarding the relative importance of the input variables, and the extended model's skill always outperformed the original model.
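The Bayesian-network idea in the abstract above can be illustrated with a toy discrete network. The variables, states, and probability tables below are invented for illustration; they are not the calibrated Santa Rosa Island network from the study.

```python
import numpy as np

# Toy discrete Bayesian network: P(erosion_level | surge_exceeds_dune).
# All structure and numbers are illustrative assumptions.

# Prior over the binary parent node: does surge exceed the dune crest?
p_exceed = np.array([0.7, 0.3])            # [no, yes]

# Conditional probability table: rows = parent state,
# columns = erosion level (low, moderate, severe).
cpt = np.array([[0.6, 0.3, 0.1],           # surge below crest
                [0.1, 0.3, 0.6]])          # surge above crest

# Marginal prediction (the "prior" baseline a skillful model must beat).
p_erosion = p_exceed @ cpt

# Posterior over the parent after observing severe erosion (Bayes' rule).
likelihood = cpt[:, 2]
posterior = p_exceed * likelihood
posterior /= posterior.sum()
```

Conditioning on an observation sharpens the distribution relative to the prior, which is the sense in which the paper's network "outperformed the baseline prediction".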
A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.
Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe
2014-01-01
Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model has recently been published. It combines regression models with tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure with the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure, as compared to the original procedure. The novel procedure was used to analyze patterns of copy-number alterations in lung adenocarcinomas with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. The results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
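The BIC-based selection step can be sketched on a stand-in regression problem. The data, the two candidate model forms, and the Gaussian-residual BIC formula below are illustrative assumptions, not the PLTR implementation.

```python
import numpy as np

# Sketch of BIC-based selection between two nested regression models,
# standing in for the model-selection step in a PLTR-style procedure.

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # the truth is linear

def gaussian_bic(y, y_hat, k):
    """BIC = k*ln(n) - 2*ln(L) under Gaussian residuals (up to constants)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return k * np.log(n) + n * np.log(rss / n)

# Model A: linear fit (2 parameters). Model B: cubic fit (4 parameters).
coef_a = np.polyfit(x, y, 1)
coef_b = np.polyfit(x, y, 3)
bic_a = gaussian_bic(y, np.polyval(coef_a, x), k=2)
bic_b = gaussian_bic(y, np.polyval(coef_b, x), k=4)
# The simpler model wins (lower BIC) when the extra terms add nothing,
# which is how BIC guards against over-complex tree structures.
```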
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information in minimizing the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is introduced to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original method and obtains desirable results in segmenting infrared images.
A Model-Based Expert System for Space Power Distribution Diagnostics
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Schlegelmilch, Richard F.
1994-01-01
When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.
Nijhuis, Rogier L; Stijnen, Theo; Peeters, Anna; Witteman, Jacqueline C M; Hofman, Albert; Hunink, M G Myriam
2006-01-01
To determine the apparent and internal validity of the Rotterdam Ischemic heart disease & Stroke Computer (RISC) model, a Monte Carlo-Markov model designed to evaluate the impact of cardiovascular disease (CVD) risk factors and their modification on life expectancy (LE) and cardiovascular disease-free LE (DFLE) in a general population (hereinafter, these will be referred to together as (DF)LE). The model is based on data from the Rotterdam Study, a cohort follow-up study of 6871 subjects aged 55 years and older who visited the research center for risk factor assessment at baseline (1990-1993) and completed a follow-up visit 7 years later (original cohort). The transition probabilities and risk factor trends used in the RISC model were based on data from 3501 subjects (the study cohort). To validate the RISC model, the numbers of simulated CVD events during 7 years' follow-up were compared with the observed numbers of events in the study cohort and the original cohort, respectively, and simulated (DF)LEs were compared with the (DF)LEs calculated from multistate life tables. Both in the study cohort and in the original cohort, the simulated distribution of CVD events was consistent with the observed number of events (CVD deaths: 7.1% v. 6.6% and 7.4% v. 7.6%, respectively; non-CVD deaths: 11.2% v. 11.5% and 12.9% v. 13.0%, respectively). The distribution of (DF)LEs estimated with the RISC model consistently encompassed the (DF)LEs calculated with multistate life tables. The simulated events and (DF)LE estimates from the RISC model are consistent with observed data from a cohort follow-up study.
A likelihood ratio model for the determination of the geographical origin of olive oil.
Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz
2015-01-01
Food fraud or food adulteration may be of forensic interest, for instance in cases of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, determination of the geographical origin of olive oil may be of special interest. The use of a likelihood ratio (LR) model has certain advantages over typical chemometric methods, because the LR model takes into account information about the rarity of a sample in a relevant population. Such properties are of particular interest to forensic scientists, and it has therefore been the aim of this study to examine the issue of olive oil classification using different LR models and to assess their pertinence under selected data pre-processing methods (logarithm-based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems, related to three regions of Italy (South, North, and Sardinia), were considered with the use of LR models. The correct classification rate and the empirical cross entropy were taken into account as measures of the performance of each model. The application of LR models to determining the geographical origin of olive oil proved satisfactory for the issues considered, analysed in terms of many variants of data pre-processing, since the rates of correct classification were close to 100% and a considerable reduction of information loss was observed. The work also presents a comparative study of the performance of linear discriminant analysis on the considered classification problems. An approach to the choice of the value of the smoothing parameter for the kernel density estimation based LR models is highlighted as well.
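The kernel-density LR idea can be sketched with one synthetic feature. The data, class labels, and fixed bandwidth below are invented stand-ins for the olive oil measurements; the study's own bandwidth selection is more careful.

```python
import numpy as np

# Kernel-density likelihood-ratio sketch for a two-class origin question,
# using one synthetic "fatty acid" feature. Simulated stand-in data only.

rng = np.random.default_rng(1)
south = rng.normal(14.0, 1.0, 300)    # feature under H1: southern origin
north = rng.normal(11.0, 1.0, 300)    # feature under H2: northern origin

def kde(data, x, h=0.5):
    """Gaussian kernel density estimate at x with smoothing parameter h."""
    z = (x - data) / h
    return np.mean(np.exp(-0.5 * z ** 2)) / (h * np.sqrt(2.0 * np.pi))

def likelihood_ratio(x):
    """LR = p(x | H1) / p(x | H2); values > 1 support a southern origin."""
    return kde(south, x) / kde(north, x)
```

Unlike a hard classifier, the LR reports how much more probable the measurement is under one origin hypothesis than the other, which is the quantity of forensic interest.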
Fuzzy model-based servo and model following control for nonlinear systems.
Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O
2009-12-01
This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.
Oparin's coacervates as an important milestone in chemical evolution
NASA Astrophysics Data System (ADS)
Kolb, Vera M.
2015-09-01
Although Oparin's coacervate model for the origin of life by chemical evolution is almost 100 years old, it is still valid. However, the structure of his originally proposed coacervate is not considered prebiotic, based on some recent developments in prebiotic chemistry. We have remedied this deficiency of Oparin's model by substituting his coacervate with a prebiotically feasible one. Oparin's coacervates are aqueous structures, yet they have a boundary with the rest of the aqueous medium. They exhibit properties of self-replication and provide a path to a primitive metabolism via chemical competition and thus a primitive selection. Coacervates are therefore good models for proto-cells. We review here some salient points of Oparin's model and also address some philosophical views on the beginning of natural selection in primitive chemical systems.
A New Freshwater Biodiversity Indicator Based on Fish Community Assemblages
Clavel, Joanne; Poulet, Nicolas; Porcher, Emmanuelle; Blanchet, Simon; Grenouillet, Gaël; Pavoine, Sandrine; Biton, Anne; Seon-Massin, Nirmala; Argillier, Christine; Daufresne, Martin; Teillac-Deschamps, Pauline; Julliard, Romain
2013-01-01
Biodiversity has reached a critical state. In this context, stakeholders need indicators that both provide a synthetic view of the state of biodiversity and can be used as communication tools. Using river fishes as a model, we developed community indicators that aim to integrate various components of biodiversity, including interactions between species and, ultimately, the processes influencing ecosystem functions. We developed indices at the species level based on (i) the concept of specialization, directly linked to niche theory, and (ii) the concept of originality, measuring the overall degree of difference between a species and all other species in the same clade. Five major types of originality indices, based on phylogeny, habitat-linked and diet-linked morphology, life history traits, and ecological niche, were analyzed. In a second step, we tested the relationship between all biodiversity indices and land use as a proxy of human pressures. Fish communities showed no significant temporal trend for most of these indices, but both originality indices based on diet- and habitat-linked morphology showed a significant increase through time. From a spatial point of view, all indices clearly singled out Corsica Island as having higher average originality and specialization. Finally, we observed that the originality index based on niche traits might be used as an informative biodiversity indicator, because we showed it is sensitive to different land use classes along a landscape artificialization gradient. Moreover, its response remained unchanged over two other land use classifications at the global scale and also at the regional scale. PMID:24278356
Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.
Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui
2017-01-01
To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method with the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduced the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a regularization that is self-adaptive both spatially and temporally with the change of temperature. Further, to decrease sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment, and dynamic in-vivo model validation on liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model, and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
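The core mechanism, a Kalman filter correcting a model-based temperature prediction with noisy measurements, can be sketched in one dimension. The cooling model, noise variances, and all constants below are illustrative assumptions, not the paper's BHTE parameters.

```python
import numpy as np

# 1-D Kalman filter sketch: a scalar cooling model predicts temperature,
# and noisy "thermometry" measurements correct the prediction.

dt, k_cool, t_amb = 1.0, 0.1, 37.0      # time step, cooling rate, baseline
q, r = 0.05, 4.0                        # process / measurement noise variance

rng = np.random.default_rng(2)
truth = 50.0                            # true hot-spot temperature (deg C)
x, p = 45.0, 10.0                       # initial state estimate and variance

for _ in range(30):
    truth = t_amb + (truth - t_amb) * (1 - k_cool * dt)   # true dynamics
    z = truth + rng.normal(0, np.sqrt(r))                 # noisy measurement
    # Predict: propagate the estimate through the same cooling model.
    x = t_amb + (x - t_amb) * (1 - k_cool * dt)
    p = (1 - k_cool * dt) ** 2 * p + q
    # Update: blend prediction and measurement by the Kalman gain.
    gain = p / (p + r)
    x = x + gain * (z - x)
    p = (1 - gain) * p
```

The gain weighs how much the filter trusts the model versus the data, which is the same trade-off the KalBHT window performs on the regularization term.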
Darwin's diagram of divergence of taxa as a causal model for the origin of species.
Bouzat, Juan L
2014-03-01
On the basis that Darwin's theory of evolution encompasses two logically independent processes (common descent and natural selection), the only figure in On the Origin of Species (the Diagram of Divergence of Taxa) is often interpreted as illustrative of only one of these processes: the branching patterns representing common ancestry. Here, I argue that Darwin's Diagram of Divergence of Taxa represents a broad conceptual model of Darwin's theory, illustrating the causal efficacy of natural selection in producing well-defined varieties and ultimately species. The Tree Diagram encompasses the idea that natural selection explains common descent and the origin of organic diversity, thus representing a comprehensive model of Darwin's theory on the origin of species. I describe Darwin's Tree Diagram in relation to his argumentative strategy under the vera causa principle, and suggest that the testing of his theory based on the evidence from the geological record, the geographical distribution of organisms, and the mutual affinities of organic beings can be framed under the hypothetico-deductive method. Darwin's Diagram of Divergence of Taxa therefore represents a broad conceptual model that helps in understanding the causal construction of Darwin's theory of evolution, the structure of his argumentative strategy, and the nature of his scientific methodology.
Improvement of Mars Surface Snow Albedo Modeling in LMD Mars GCM With SNICAR
NASA Astrophysics Data System (ADS)
Singh, D.; Flanner, M. G.; Millour, E.
2018-03-01
The current version of Laboratoire de Météorologie Dynamique (LMD) Mars GCM (original-MGCM) uses annually repeating (prescribed) CO2 snow albedo values based on the Thermal Emission Spectrometer observations. We integrate the Snow, Ice, and Aerosol Radiation (SNICAR) model with MGCM (SNICAR-MGCM) to prognostically determine H2O and CO2 snow albedos interactively in the model. Using the new diagnostic capabilities of this model, we find that cryospheric surfaces (with dust) increase the global surface albedo of Mars by 0.022. Over snow-covered regions, SNICAR-MGCM simulates mean albedo that is higher by about 0.034 than prescribed values in the original-MGCM. Globally, shortwave flux into the surface decreases by 1.26 W/m2, and net CO2 snow deposition increases by about 4% with SNICAR-MGCM over one Martian annual cycle as compared to the original-MGCM simulations. SNICAR integration reduces the mean global surface temperature and the surface pressure of Mars by about 0.87% and 2.5%, respectively. Changes in albedo also show a similar distribution to dust deposition over the globe. The SNICAR-MGCM model generates albedos with higher sensitivity to surface dust content as compared to original-MGCM. For snow-covered regions, we improve the correlation between albedo and optical depth of dust from -0.91 to -0.97 with SNICAR-MGCM as compared to the original-MGCM. Dust substantially darkens Mars's cryosphere, thereby reducing its impact on the global shortwave energy budget by more than half, relative to the impact of pure snow.
Research on Nonlinear Time Series Forecasting of Time-Delay NN Embedded with Bayesian Regularization
NASA Astrophysics Data System (ADS)
Jiang, Weijin; Xu, Yusheng; Xu, Yuhui; Wang, Jianmin
Based on the idea of nonlinear prediction via phase-space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. Furthermore, the model is applied to forecast the import and export trade in one industry. The results showed that the improved model has excellent generalization capability: it not only learned the historical curve, but efficiently predicted the trend of the business. Comparing with common evaluations of forecasts, we conclude that nonlinear forecasting can not only focus on data combination and precision improvement; it can also vividly reflect the nonlinear characteristics of the forecasting system. While analyzing the forecasting precision of the model, we give a model judgment by calculating the nonlinear characteristic value of the combined series and the original series, proving that the forecasting model can reasonably 'catch' the dynamic characteristics of the nonlinear system which produced the original series.
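The phase-space reconstruction feeding the time-delay network can be sketched as a delay embedding. The series, delay, and embedding dimension below are illustrative choices, not the paper's trade data or settings.

```python
import numpy as np

# Phase-space reconstruction by time-delay embedding: each network input
# is a delay vector (x_t, x_{t+tau}, ..., x_{t+(m-1)*tau}).

def delay_embed(series, m, tau):
    """Return the (n_vectors, m) matrix of delay vectors."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

t = np.arange(500)
x = np.sin(0.1 * t)                 # stand-in for a trade time series
X = delay_embed(x, m=3, tau=5)

# Row k of X is (x[k], x[k+5], x[k+10]); a network trained to map row k
# to the next sample performs one-step nonlinear prediction.
```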
An intermittency model for predicting roughness induced transition
NASA Astrophysics Data System (ADS)
Ge, Xuan; Durbin, Paul
2014-11-01
An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
2003-01-01
Our research objective is to understand and model the chemical processes on the primitive Earth that generated the first autocatalytic molecules and microstructures involved in the origin of life. Our approach involves: (a) investigation of a model origin-of-life process named the Sugar Model, which is based on the reaction of formaldehyde-derived sugars (trioses and tetroses) with ammonia, and (b) elucidation of the constraints imposed on the chemistry of the origin of life by the fixed energies and rates of C,H,O-organic reactions under mild aqueous conditions. Recently, we demonstrated that under mild aqueous conditions the Sugar Model process yields autocatalytic products and generates organic microspherules (2-20 microns in diameter) that exhibit budding, size uniformity, and chain formation. We also discovered that the sugar substrates of the Sugar Model are capable of reducing nitrite to ammonia under mild aqueous conditions. In addition, studies done in collaboration with Sandra Pizzarello (Arizona State University) revealed that chiral amino acids (including meteoritic isovaline) catalyze both the synthesis and the specific handedness of chiral sugars. Our systematic survey of the energies and rates of reactions of C,H,O-organic substrates under mild aqueous conditions revealed several general principles (rules) that govern the direction and rate of organic reactions. These reactivity principles constrain the structure of the chemical pathways used in the origin of life, and in modern and primitive metabolism.
40 CFR 1042.104 - Exhaust emission standards for Category 3 engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for other testing. (2) NOX standards apply based on the engine's model year and maximum in-use engine... Engines (g/kW-hr) Emission standards Model year Maximum in-use engine speed Less than130 RPM 130-2000RPM a... Tier 1 NOX standards apply as specified in 40 CFR part 94 for engines originally manufactured in model...
40 CFR 1042.104 - Exhaust emission standards for Category 3 engines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for other testing. (2) NOX standards apply based on the engine's model year and maximum in-use engine... Engines (g/kW-hr) Emission standards Model year Maximum in-use engine speed Less than130 RPM 130-2000RPM a... Tier 1 NOX standards apply as specified in 40 CFR part 94 for engines originally manufactured in model...
40 CFR 1042.104 - Exhaust emission standards for Category 3 engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for other testing. (2) NOX standards apply based on the engine's model year and maximum in-use engine... Engines (g/kW-hr) Emission standards Model year Maximum in-use engine speed Less than130 RPM 130-2000RPM a... standards apply as specified in 40 CFR part 94 for engines originally manufactured in model years 2004...
40 CFR 1042.104 - Exhaust emission standards for Category 3 engines.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for other testing. (2) NOX standards apply based on the engine's model year and maximum in-use engine... Engines (g/kW-hr) Emission standards Model year Maximum in-use engine speed Less than130 RPM 130-2000RPM a... standards apply as specified in 40 CFR part 94 for engines originally manufactured in model years 2004...
40 CFR 1042.104 - Exhaust emission standards for Category 3 engines.
Code of Federal Regulations, 2012 CFR
2012-07-01
... for other testing. (2) NOX standards apply based on the engine's model year and maximum in-use engine... Engines (g/kW-hr) Emission standards Model year Maximum in-use engine speed Less than130 RPM 130-2000RPM a... Tier 1 NOX standards apply as specified in 40 CFR part 94 for engines originally manufactured in model...
ERIC Educational Resources Information Center
Liu, Xun
2010-01-01
This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments such as shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is treated as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.
Genetic coding and gene expression - new Quadruplet genetic coding model
NASA Astrophysics Data System (ADS)
Shankar Singh, Rama
2012-07-01
Successful completion of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating RNA bases into a sequence of amino acids, leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including its relevance to the origin of life, will be presented.
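Standard triplet decoding can be sketched as a table lookup; the quadruplet proposal would replace 3-base keys with 4-base keys. The table below is a small fragment of the standard genetic code, and the decoding loop is a generic illustration, not the paper's model.

```python
# Tiny fragment of the standard genetic code (RNA triplets -> amino acids).
TRIPLET = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate_triplet(rna):
    """Decode an RNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(rna) - 2, 3):
        aa = TRIPLET.get(rna[i:i + 3], "?")   # "?" marks an unknown codon
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide
```

A quadruplet scheme would step in strides of 4 over a (hypothetical, not yet existing) 256-entry table; the lookup mechanics are otherwise identical.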
Peng, Xiang; King, Irwin
2008-01-01
The Biased Minimax Probability Machine (BMPM) constructs a classifier for imbalanced learning tasks. It provides a worst-case bound on the probability of misclassification of future data points based on reliable estimates of the means and covariance matrices of the classes from the training data samples, and achieves promising performance. In this paper, we develop a novel and critical extension of the training algorithm for BMPM that is based on Second-Order Cone Programming (SOCP). Moreover, we apply the biased classification model to medical diagnosis problems to demonstrate its usefulness. By removing some crucial assumptions in the original solution to this model, we make the new method more accurate and robust. We outline the theoretical derivations of the biased classification model and reformulate it into an SOCP problem that can be solved efficiently with a global optimum guarantee. We evaluate our proposed SOCP-based BMPM (BMPMSOCP) scheme in comparison with traditional solutions on medical diagnosis tasks where the objective is to improve the sensitivity (the accuracy of the more important class, say "ill" samples) rather than the overall accuracy of the classification. Empirical results have shown that our method is more effective and robust in handling imbalanced classification problems than traditional classification approaches and than the original Fractional Programming-based BMPM (BMPMFP).
Dececchi, T. Alexander; Larsson, Hans C. E.
2011-01-01
The origin of avian flight is a classic macroevolutionary transition with research spanning over a century. Two competing models explaining this locomotory transition have been discussed for decades: ground up versus trees down. Although it is impossible to directly test either of these theories, it is possible to test one of the requirements for the trees-down model, that of an arboreal paravian. We test for arboreality in non-avian theropods and early birds with comparisons to extant avian, mammalian, and reptilian scansors and climbers using a comprehensive set of morphological characters. Non-avian theropods, including the small, feathered deinonychosaurs, and Archaeopteryx, consistently and significantly cluster with fully terrestrial extant mammals and ground-based birds, such as ratites. Basal birds, more advanced than Archaeopteryx, cluster with extant perching ground-foraging birds. Evolutionary trends immediately prior to the origin of birds indicate skeletal adaptations opposite that expected for arboreal climbers. Results reject an arboreal capacity for the avian stem lineage, thus lending no support for the trees-down model. Support for a fully terrestrial ecology and origin of the avian flight stroke has broad implications for the origin of powered flight for this clade. A terrestrial origin for the avian flight stroke challenges the need for an intermediate gliding phase, presents the best resolved series of the evolution of vertebrate powered flight, and may differ fundamentally from the origin of bat and pterosaur flight, whose antecedents have been postulated to have been arboreal and gliding. PMID:21857918
NASA Astrophysics Data System (ADS)
Korotenko, K.
2003-04-01
An ultra-high-resolution version of DieCAST was adapted to the Adriatic Sea and coupled with an oil spill model. The hydrodynamic module was developed on the basis of the low-dissipation, fourth-order-accurate version of DieCAST with a resolution of ~2 km. The oil spill model was developed using a particle-tracking technique. The effect of evaporation is modeled with an original method based on the pseudo-component approach. A special dialog interface of this hybrid system allows direct coupling to meteorological data collection systems and/or meteorological models. Experiments with a hypothetical oil spill are analyzed for the Northern Adriatic Sea. Results (animations) of mesoscale circulation and oil slick modeling are presented at the website http://thayer.dartmouth.edu/~cushman/adriatic/movies/
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
[Garbled table residue from a Federal Register airworthiness-directive notice: a listing of Airbus Service Bulletins (A300-29-6064; A310-29-2099, including Appendix 01; A310-29-2100) for Model A300-600 and A310 airplanes, each at revision "Original," dated August 12, 2010; the table layout is not recoverable.]
Economic communication model set
NASA Astrophysics Data System (ADS)
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at investigating economic communications using agent-based models. An agent-based model set was engineered to simulate economic communications. Money, in the form of internal and external currencies, was introduced into the models to support exchanges in communications. Every model, while based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and a real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics, and system macroparameters were estimated. This research confirmed that combining an agent-based model with a mathematical model can produce a synergetic effect.
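The static Leontief equilibrium mentioned above can be sketched in a few lines: gross output x satisfies x = Ax + d, so x = (I − A)⁻¹d. The technical-coefficient matrix and final-demand vector below are hypothetical, not the authors' data.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A and final demand d
A = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
d = np.array([100.0, 50.0, 80.0])

# Static Leontief equilibrium: gross output x satisfies x = A x + d
x = np.linalg.solve(np.eye(3) - A, d)
# each sector's output covers intermediate use plus final demand
```

Outputs like x can then seed the exchange volumes that the agent-based simulations trace dynamically.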
Multi-Level Building Reconstruction for Automatic Enhancement of High Resolution Dsms
NASA Astrophysics Data System (ADS)
Arefi, H.; Reinartz, P.
2012-07-01
In this article a multi-level approach is proposed for reconstruction-based improvement of high-resolution Digital Surface Models (DSMs). The concept of Levels of Detail (LOD) defined by the CityGML standard has been considered as the basis for abstraction levels of building roof structures. Here, LOD1 and LOD2, which relate to prismatic and parametric roof shapes, are reconstructed. Besides proposing a new approach for automatic LOD1 and LOD2 generation from high-resolution DSMs, the algorithm contains two generalization levels, namely horizontal and vertical. Both generalization levels are applied to the prismatic model of buildings. The horizontal generalization allows controlling the approximation level of building footprints, similar to the cartographic generalization concept for urban maps. In vertical generalization, the prismatic model is formed using an individual building height and continues by including all flat structures located at different height levels. The concept of LOD1 generation is based on approximation of the building footprints by rectangular or non-rectangular polygons. For a rectangular building with one dominant orientation, a method based on the Minimum Bounding Rectangle (MBR) is employed. In contrast, a Combined Minimum Bounding Rectangle (CMBR) approach is proposed for regularization of non-rectilinear polygons, i.e., buildings without perpendicular edge directions. Both MBR- and CMBR-based approaches are iteratively applied to building segments to reduce the original building footprints to a minimum number of nodes with maximum similarity to the original shapes. A model-driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed for LOD2 generation. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines.
The 3D model is derived for each building part and, finally, a complete parametric model is formed by merging all the 3D models of the individual parts and adjusting the nodes after the merging step. In order to provide an enhanced DSM, a surface model is created for each building by interpolation of the internal points of the generated models. All interpolated models are situated on a Digital Terrain Model (DTM) of the corresponding area to shape the enhanced DSM. The proposed DSM enhancement approach has been tested on a dataset from the Munich central area. The original DSM was created using robust stereo matching of WorldView-2 stereo images. A quantitative assessment of the new DSM, comparing the heights of ridges and eaves, shows a standard deviation of better than 50 cm.
Origin of asteroids and the missing planet
NASA Technical Reports Server (NTRS)
Opik, E. J.
1977-01-01
Consideration is given to Ovenden's (1972) theory concerning the existence of a planet of 90 earth masses which existed from the beginning of the solar system and then disappeared 16 million years ago, leaving only asteroids. His model for secular perturbations is reviewed along with the principle of least interaction action (1972, 1973, 1975) on which the model is based. It is suggested that the structure of the asteroid belt and the origin of meteorites are associated with the vanished planet. A figure of 0.001 earth masses is proposed as a close estimate of the mass of the asteroidal belt. The hypothesis that the planet was removed through an explosion is discussed, noting the possible origin of asteroids in such a manner. Various effects of the explosion are postulated, including the direct impact of fragments on the earth, their impact on the sun and its decreased radiation, and the direct radiation of the explosion. A model for the disappearance of the planet by ejection in a gravitational encounter with a passing mass is also described.
2014-01-01
Background Identifying human and malaria parasite movements is important for control planning across all transmission intensities. Imported infections can reintroduce infections into areas previously free of infection, maintain ‘hotspots’ of transmission and import drug resistant strains, challenging national control programmes at a variety of temporal and spatial scales. Recent analyses based on mobile phone usage data have provided valuable insights into population and likely parasite movements within countries, but these data are restricted to sub-national analyses, leaving important cross-border movements neglected. Methods National census data were used to analyse and model cross-border migration and movement, using East Africa as an example. ‘Hotspots’ of origin-specific immigrants from neighbouring countries were identified for Kenya, Tanzania and Uganda. Populations of origin-specific migrants were compared to distance from origin country borders and population size at destination, and regression models were developed to quantify and compare differences in migration patterns. Migration data were then combined with existing spatially-referenced malaria data to compare the relative propensity for cross-border malaria movement in the region. Results The spatial patterns and processes for immigration were different between each origin and destination country pair. Hotspots of immigration, for example, were concentrated close to origin country borders for most immigrants to Tanzania, but for Kenya, a similar pattern was only seen for Tanzanian and Ugandan immigrants. Regression model fits also differed between specific migrant groups, with some migration patterns more dependent on population size at destination and distance travelled than others. 
With these differences between immigration patterns and processes, and heterogeneous transmission risk in East Africa and the surrounding region, propensities to import malaria infections also likely show substantial variations. Conclusion This was a first attempt to quantify and model cross-border movements relevant to malaria transmission and control. With national census data available worldwide, this approach can be translated to construct a cross-border human and malaria movement evidence base for other malaria-endemic countries. The outcomes of this study will feed into wider efforts to quantify and model human and malaria movements in endemic regions to facilitate improved intervention planning, resource allocation and collaborative policy decisions. PMID:24886389
The Mine Locomotive Wireless Network Strategy Based on Successive Interference Cancellation
Wu, Liaoyuan; Han, Jianghong; Wei, Xing; Shi, Lei; Ding, Xu
2015-01-01
We consider a wireless network strategy based on successive interference cancellation (SIC) for mine locomotives. We first build the original mathematical model for the strategy, which is non-convex. We then examine this model intensively and find certain regularities embedded in it. Based on these findings, we are able to reformulate the model into a new form and design a simple algorithm that can assign each locomotive a proper transmitting scheme over the whole schedule procedure. Simulation results show that the outcomes obtained through this algorithm are improved by around 50% compared with those that do not apply the SIC technique. PMID:26569240
Modeling Musical Context With Word2Vec
NASA Astrophysics Data System (ADS)
Herremans, Dorien; Chuan, Ching-Hua
2017-05-01
We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical content of the slices. Second, an excerpt of Beethoven's Moonlight Sonata was altered by replacing slices based on context similarity. The resulting music shows that a slice selected for its similar word2vec context also has a relatively short tonal distance from the original slice.
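The slice-replacement step can be sketched as a nearest-neighbour query by cosine similarity in the embedded space; the slice labels and vectors below are hypothetical stand-ins for learned word2vec embeddings, not the paper's model.

```python
import numpy as np

def most_similar(query, vocab):
    """Return the key in `vocab` whose embedding has the highest
    cosine similarity to `query` (hypothetical slice embeddings)."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(vocab, key=lambda k: cos(vocab[k], query))

# Toy 2-D embeddings standing in for word2vec vectors of polyphonic slices
vocab = {
    "C-E-G":  np.array([1.0, 0.1]),
    "G-B-D":  np.array([0.9, 0.3]),
    "F#-A-C": np.array([-1.0, 0.5]),
}
best = most_similar(np.array([0.95, 0.25]), vocab)
```

In the study, the query vector would come from the context surrounding the slice being replaced, so tonally related slices rank highest.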
A user interface for the Kansas Geological Survey slug test model.
Esling, Steven P; Keller, John E
2009-01-01
The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results.
Representing Practice: Practice Models, Patterns, Bundles
ERIC Educational Resources Information Center
Falconer, Isobel; Finlay, Janet; Fincher, Sally
2011-01-01
This article critiques learning design as a representation for sharing and developing practice, based on synthesis of three projects. Starting with the findings of the Mod4L Models of Practice project, it argues that the technical origins of learning design, and the consequent focus on structure and sequence, limit its usefulness for sharing…
A Model for Administrative Evaluation by Subordinates.
ERIC Educational Resources Information Center
Budig, Jeanne E.
Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…
Understanding MOOC Students: Motivations and Behaviours Indicative of MOOC Completion
ERIC Educational Resources Information Center
Pursel, B. K.; Zhang, L.; Jablokow, K. W.; Choi, G. W.; Velegol, D.
2016-01-01
Massive open online courses (MOOCs) continue to appear across the higher education landscape, originating from many institutions in the USA and around the world. MOOCs typically have low completion rates, at least when compared with traditional courses, as this course delivery model is very different from traditional, fee-based models, such as…
A Computational Model of Learners Achievement Emotions Using Control-Value Theory
ERIC Educational Resources Information Center
Muñoz, Karla; Noguez, Julieta; Neri, Luis; Mc Kevitt, Paul; Lunney, Tom
2016-01-01
Game-based Learning (GBL) environments make instruction flexible and interactive. Positive experiences depend on personalization. Student modelling has focused on affect. Three methods are used: (1) recognizing the physiological effects of emotion, (2) reasoning about emotion from its origin and (3) an approach combining 1 and 2. These have proven…
Predicting Mercury's precession using simple relativistic Newtonian dynamics
NASA Astrophysics Data System (ADS)
Friedman, Y.; Steiner, J. M.
2016-03-01
We present a new, simple relativistic model for planetary motion that accurately describes the anomalous precession of the perihelion of Mercury and its origin. The model is based on transforming Newton's classical equation for planetary motion from absolute to real spacetime influenced by the gravitational potential, and on introducing the concept of influenced direction.
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
A Simplified Biosphere Model for Global Climate Studies.
NASA Astrophysics Data System (ADS)
Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.
1991-03-01
The Simple Biosphere Model (SiB), as described in Sellers et al., is a biophysically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to achieve greater computational efficiency and, more importantly, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study. The diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation. Since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. The effect of root-zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive. We have developed approximations which simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB. The surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level have been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson. We have developed a linear relationship between the Richardson number and aerodynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced-parameter biosphere model with those from the original formulation in a GCM and in a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.
Lastra-Mejías, Miguel; Torreblanca-Zanca, Albertina; Aroca-Santos, Regina; Cancilla, John C; Izquierdo, Jesús G; Torrecilla, José S
2018-08-01
A set of 10 honeys spanning a diverse range of botanical origins has been successfully characterized through fluorescence spectroscopy using inexpensive light-emitting diodes (LEDs) as light sources. Each LED-honey combination tested was shown to give rise to a unique emission spectrum, which enables the authentication of every honey and allows it to be correctly labelled with its botanical origin. Furthermore, the analysis was backed up by a mathematical analysis based on partial least squares models, which led to a correct classification rate of over 95% for each type of honey. Finally, the same approach was followed to analyze rice syrup, a common honey adulterant that is challenging to identify when mixed with honey. A LED-dependent and unique fluorescence spectrum was found for the syrup, which presumably qualifies this approach for the design of uncomplicated, fast, and cost-effective quality control and adulteration-assessment tools for different types of honey. Copyright © 2018 Elsevier B.V. All rights reserved.
The first eukaryote cell: an unfinished history of contestation.
O'Malley, Maureen A
2010-09-01
The eukaryote cell is one of the most radical innovations in the history of life, and the circumstances of its emergence are still deeply contested. This paper will outline the recent history of attempts to reveal these origins, with special attention to the argumentative strategies used to support claims about the first eukaryote cell. I will focus on two general models of eukaryogenesis: the phagotrophy model and the syntrophy model. As their labels indicate, they are based on claims about metabolic relationships. The first foregrounds the ability to consume other organisms; the second the ability to enter into symbiotic metabolic arrangements. More importantly, however, the first model argues for the autogenous or self-generated origins of the eukaryote cell, and the second for its exogenous or externally generated origins. Framing cell evolution this way leads each model to assert different priorities in regard to cell-biological versus molecular evidence, cellular versus environmental influences, plausibility versus evolutionary probability, and irreducibility versus the continuity of cell types. My examination of these issues will conclude with broader reflections on the implications of eukaryogenesis studies for a philosophical understanding of scientific contestation. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
The systems found in the universe often have a large order. Thus, the mathematical model has many state variables, which affects the computation time. In addition, generally not all variables are known, so estimation is needed for quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system in order to measure the water level. Model reduction is a method for approximating a system with one of lower order, without significant error, whose dynamic behaviour is similar to that of the original system. The Singular Perturbation Approximation method is one such model reduction method, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, with estimates computed by predicting the state variables based on the system dynamics and measurement data. Kalman filters are used to estimate state variables in both the original system and the reduced system. We then compare the estimation results and computation time between the original and reduced systems.
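A minimal scalar Kalman filter illustrates the predict/update cycle behind the water-level estimates; the model parameters and noisy measurements below are hypothetical, not the paper's river system.

```python
import numpy as np

def kalman_1d(z, a=1.0, q=1e-3, r=0.5 ** 2, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x_k = a*x_{k-1} + w, z_k = x_k + v,
    with process variance q and measurement variance r."""
    x, p, out = x0, p0, []
    for zk in z:
        # predict step
        x, p = a * x, a * p * a + q
        # update step with measurement zk
        k = p / (p + r)
        x = x + k * (zk - x)
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
truth = 5.0                                   # constant level (hypothetical)
z = truth + 0.5 * rng.standard_normal(200)    # noisy gauge readings
est = kalman_1d(z)
```

The same predict/update structure applies to the reduced-order system; only the state dimension and matrices change.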
Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models
NASA Astrophysics Data System (ADS)
Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter
Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.
Bhattacharyya, Dhananjay; Halder, Sukanya; Basu, Sankar; Mukherjee, Debasish; Kumar, Prasun; Bansal, Manju
2017-02-01
Comprehensive analyses of the structural features of non-canonical base pairs within a nucleic acid double helix are limited by the availability of only a small number of three-dimensional structures. Therefore, a procedure for model building of double helices containing any given nucleotide sequence and base pairing information, either canonical or non-canonical, is greatly needed. Here we describe a program, RNAHelix, which is an updated version of our widely used software NUCGEN. The program can regenerate duplexes using the dinucleotide step and base pair orientation parameters for a given double-helical DNA or RNA sequence with defined Watson-Crick or non-Watson-Crick base pairs. The original and regenerated structures of double helices were found to be very close, as indicated by the small RMSD values between the positions of corresponding atoms. Structures of several usual and unusual double helices have been regenerated and compared with their original structures in terms of base pair RMSD, torsion angles and electrostatic potentials, and very high agreement has been noted. RNAHelix can also be used to generate a structure with a sequence completely different from an experimentally determined one, or to introduce single or multiple mutations with the same set of parameters, and hence can also be an important tool in homology modeling and in the study of mutation-induced structural changes.
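The RMSD comparison between original and regenerated structures follows the standard definition, RMSD = sqrt(mean over atoms of squared deviation); a minimal sketch on hypothetical, pre-aligned coordinates (no superposition step):

```python
import numpy as np

def rmsd(xyz_a, xyz_b):
    """Root-mean-square deviation between matched atom coordinate sets,
    assuming the structures are already aligned."""
    d = np.asarray(xyz_a) - np.asarray(xyz_b)
    return np.sqrt((d ** 2).sum(axis=1).mean())

a = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
b = a + np.array([0.3, 0.0, 0.0])   # rigid 0.3 A shift (hypothetical coordinates)
r = rmsd(a, b)
# every atom deviates by exactly 0.3, so the RMSD is 0.3
```

In practice the comparison is preceded by an optimal superposition (e.g., the Kabsch algorithm), which this sketch omits.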
ERIC Educational Resources Information Center
Johnston, Keith; Conneely, Claire; Murchan, Damian; Tangney, Brendan
2015-01-01
Bridge21 is an innovative approach to learning for secondary education that was originally conceptualised as part of a social outreach intervention in the authors' third-level institution whereby participants attended workshops at a dedicated learning space on campus focusing on a particular model of technology-mediated group-based learning. This…
Evaluation of a lake whitefish bioenergetics model
Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.
2006-01-01
We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate this retention efficiency in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field estimate of 0.56, implying that this revised model underestimated consumption by 20%.
Model based manipulator control
NASA Technical Reports Server (NTRS)
Petrosky, Lyman J.; Oppenheim, Irving J.
1989-01-01
The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2017-07-01
Though Ogden et al. list several shortcomings of the original SCS-CN method, fit for purpose is a key consideration in hydrological modelling, as shown by the adoption of SCS-CN method in many design standards. The theoretical framework of Bartlett et al. [2016a] reveals a family of semidistributed models, of which the SCS-CN method is just one member. Other members include event-based versions of the Variable Infiltration Capacity (VIC) model and TOPMODEL. This general model allows us to move beyond the limitations of the original SCS-CN method under different rainfall-runoff mechanisms and distributions for soil and rainfall variability. Future research should link this general model approach to different hydrogeographic settings, in line with the call for action proposed by Ogden et al.
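For reference, the original SCS-CN event-runoff relation discussed above is Q = (P − Ia)² / (P − Ia + S), with potential retention S = 1000/CN − 10 (inches) and commonly Ia = 0.2S; a minimal sketch:

```python
def scs_cn_runoff(p_in, cn, ia_ratio=0.2):
    """Event runoff depth (inches) from the original SCS-CN method:
    S = 1000/CN - 10, Ia = ia_ratio * S, Q = (P - Ia)^2 / (P - Ia + S)."""
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_in <= ia:
        return 0.0          # all rainfall absorbed by initial abstraction
    return (p_in - ia) ** 2 / (p_in - ia + s)

# 4 inches of rain on a CN = 80 catchment (hypothetical event)
q = scs_cn_runoff(4.0, 80.0)
```

The semidistributed family described by Bartlett et al. generalizes this curve by changing the assumed distributions of soil storage and rainfall, recovering VIC- and TOPMODEL-like event responses as other members.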
NASA Technical Reports Server (NTRS)
Abbas, Khaled A.; Fattah, Nabil Abdel; Reda, Hala R.
2003-01-01
This research is concerned with developing passenger demand models for international aviation to and from Egypt. In this context, the aviation sector in Egypt is represented by its biggest and main airport, Cairo airport, and by the main Egyptian international air carrier, Egyptair. The developed models use two variables to represent aviation demand, namely the total number of international flights originating from and attracted to Cairo airport, and the total number of passengers using Egyptair international flights originating from and attracted to Cairo airport. These demand variables were related, using different functional forms, to several explanatory variables including population, GDP and the number of foreign tourists. Finally, two models were selected based on their logical acceptability, goodness of fit and statistical significance. To demonstrate the usefulness of the developed models, they were used to forecast future demand patterns.
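A multiplicative (log-linear) demand model of the kind described can be fitted by ordinary least squares on log-transformed variables; the yearly figures below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical yearly series: population (millions), GDP index, tourists (millions)
pop  = np.array([60.0, 62.0, 64.1, 66.3, 68.6])
gdp  = np.array([100.0, 104.0, 109.0, 115.0, 120.0])
tour = np.array([4.0, 4.4, 5.1, 5.6, 6.2])
pax  = np.array([6.1, 6.6, 7.4, 8.1, 8.9])    # passengers (millions)

# Multiplicative demand model: ln(pax) = b0 + b1*ln(pop) + b2*ln(gdp) + b3*ln(tour)
X = np.column_stack([np.ones_like(pop), np.log(pop), np.log(gdp), np.log(tour)])
beta, *_ = np.linalg.lstsq(X, np.log(pax), rcond=None)
fitted = np.exp(X @ beta)
```

Forecasting then amounts to plugging projected explanatory variables into the fitted equation; coefficient significance and functional-form comparisons would follow as in the study.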
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y; Mazur, T; Green, O
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian’s KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next-generation MR-IGRT systems.
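A simplified 1-D version of the gamma analysis behind the 2%/2 mm passing rates can be sketched as follows (global dose normalization, Low et al.-style index; an illustration, not the vendor's or the authors' implementation):

```python
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dta=2.0, dd=0.02):
    """1-D global gamma analysis: an evaluated point passes if the minimum
    over reference points of sqrt((dx/dta)^2 + (dD/(dd*Dmax))^2) is <= 1."""
    d_max = d_ref.max()
    passes = []
    for xi, di in zip(x, d_eval):
        g = np.sqrt(((x - xi) / dta) ** 2
                    + ((d_ref - di) / (dd * d_max)) ** 2).min()
        passes.append(g <= 1.0)
    return np.mean(passes)

x = np.linspace(0.0, 40.0, 81)                 # positions in mm
d_ref = np.exp(-((x - 20.0) / 8.0) ** 2)       # hypothetical dose profile
rate_same = gamma_pass_rate(x, d_ref, d_ref)   # identical profiles: all pass
```

Clinical gamma analysis extends this to 3-D dose grids with interpolation between grid points; the criteria values (2%, 2 mm) match those quoted in the abstract.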
Yudthavorasit, Soparat; Wongravee, Kanet; Leepipatpiboon, Natchanun
2014-09-01
Chromatographic fingerprints of gingers from five ginger-producing countries (China, India, Malaysia, Thailand and Vietnam) were newly established to discriminate the origin of ginger. The pungent bioactive principles of ginger, the gingerols, and six other gingerol-related compounds were determined and identified. Their variations in HPLC profiles create a characteristic pattern for each origin under similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and linear discriminant analysis (LDA). As a result, the ginger profiles tended to group and separate on the basis of the geographical closeness of the countries of origin. An effective mathematical model with high predictive ability was obtained, and chemical markers for each origin were identified as the characteristic active compounds that differentiate ginger origin. The proposed method is useful for quality control of ginger with respect to origin labelling and for assessing food authenticity. Copyright © 2014 Elsevier Ltd. All rights reserved.
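A PCA step like the one applied to the fingerprints can be sketched with a plain SVD; the peak-area data below are synthetic stand-ins for HPLC profiles, not the study's measurements.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred samples onto the leading principal components
    computed via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

# Synthetic peak-area fingerprints (rows: samples, cols: gingerol-related peaks)
rng = np.random.default_rng(1)
origin_a = rng.normal([10.0, 2.0, 5.0, 1.0], 0.2, size=(5, 4))
origin_b = rng.normal([4.0, 6.0, 1.0, 3.0], 0.2, size=(5, 4))
scores = pca_scores(np.vstack([origin_a, origin_b]))
# well-separated origins fall on opposite sides of PC1
```

HCA and LDA would then operate on these scores or on the raw peak areas to cluster and classify samples by origin.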
Evolutionary Origins of Cancer Driver Genes and Implications for Cancer Prognosis
Chu, Xin-Yi; Zhou, Xiong-Hui; Cui, Ze-Jia; Zhang, Hong-Yu
2017-01-01
The cancer atavistic theory suggests that carcinogenesis is a reverse evolution process. It is thus of great interest to explore the evolutionary origins of cancer driver genes and the relevant mechanisms underlying the carcinogenesis. Moreover, the evolutionary features of cancer driver genes could be helpful in selecting cancer biomarkers from high-throughput data. In this study, through analyzing the cancer endogenous molecular networks, we revealed that the subnetwork originating from eukaryota could control the unlimited proliferation of cancer cells, and the subnetwork originating from eumetazoa could recapitulate the other hallmarks of cancer. In addition, investigations based on multiple datasets revealed that cancer driver genes were enriched in genes originating from eukaryota, opisthokonta, and eumetazoa. These results have important implications for enhancing the robustness of cancer prognosis models through selecting the gene signatures by the gene age information. PMID:28708071
Evolutionary Origins of Cancer Driver Genes and Implications for Cancer Prognosis.
Chu, Xin-Yi; Jiang, Ling-Han; Zhou, Xiong-Hui; Cui, Ze-Jia; Zhang, Hong-Yu
2017-07-14
The cancer atavistic theory suggests that carcinogenesis is a reverse evolution process. It is thus of great interest to explore the evolutionary origins of cancer driver genes and the relevant mechanisms underlying the carcinogenesis. Moreover, the evolutionary features of cancer driver genes could be helpful in selecting cancer biomarkers from high-throughput data. In this study, through analyzing the cancer endogenous molecular networks, we revealed that the subnetwork originating from eukaryota could control the unlimited proliferation of cancer cells, and the subnetwork originating from eumetazoa could recapitulate the other hallmarks of cancer. In addition, investigations based on multiple datasets revealed that cancer driver genes were enriched in genes originating from eukaryota, opisthokonta, and eumetazoa. These results have important implications for enhancing the robustness of cancer prognosis models through selecting the gene signatures by the gene age information.
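The enrichment of cancer driver genes among genes of particular evolutionary ages can be tested with a one-sided hypergeometric test. A minimal sketch follows; the counts are hypothetical placeholders, not the datasets analyzed in the abstract:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n genes from a pool of N, of which K are
    'old' (e.g. eukaryota-origin); X counts old genes in the draw."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hypothetical counts (illustrative only): 20000 genes, 7000 dated to
# eukaryota origin, 500 driver genes of which 260 are eukaryota-origin.
p = hypergeom_sf(260, 20000, 7000, 500)
print(p < 0.05)  # True: drivers enriched in eukaryota-origin genes
```

Under the null, about 175 of the 500 drivers would be eukaryota-origin, so observing 260 gives a vanishingly small p-value.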
Waters, Theodore E A; Ruiz, Sarah K; Roisman, Glenn I
2017-01-01
Increasing evidence suggests that attachment representations take at least two forms: a secure base script and an autobiographical narrative of childhood caregiving experiences. This study presents data from the first 26 years of the Minnesota Longitudinal Study of Risk and Adaptation (N = 169), examining the developmental origins of secure base script knowledge in a high-risk sample and testing alternative models of the developmental sequencing of the construction of attachment representations. Results demonstrated that secure base script knowledge was predicted by observations of maternal sensitivity across childhood and adolescence. Furthermore, findings suggest that the construction of a secure base script supports the development of a coherent autobiographical representation of childhood attachment experiences with primary caregivers by early adulthood. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Toward Improved Fidelity of Thermal Explosion Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, A L; Becker, R; Howard, W M
2009-07-17
We will present results of an effort to improve the thermal/chemical/mechanical modeling of HMX-based explosives like LX-04 and LX-10 for thermal cook-off. The original HMX model and analysis scheme were developed by Yoh et al. for use in the ALE3D modeling framework. The current results were built to remedy the deficiencies of that original model. We concentrated our efforts in four areas. The first area was addition of porosity to the chemical material model framework in ALE3D that is used to model the HMX explosive formulation. This is needed to handle the roughly 2% porosity in solid explosives. The second area was the improvement of the HMX reaction network, which included the inclusion of a reactive phase change model based on work by Henson et al. The third area required adding early decomposition gas species to the CHEETAH material database to develop more accurate equations of state for gaseous intermediates and products. Finally, it was necessary to improve the implicit mechanics module in ALE3D to more naturally handle the long time scales associated with thermal cook-off. The application of the resulting framework to the analysis of the Scaled Thermal Explosion (STEX) experiments will be discussed.
Adopting and Teaching Evidence-Based Practice in Master's-Level Social Work Programs
ERIC Educational Resources Information Center
Drake, Brett; Hovmand, Peter; Jonson-Reid, Melissa; Zayas, Luis H.
2007-01-01
This article makes specific suggestions for teaching evidence-based practice (EBP) in the master's-in-social-work (MSW) curriculum. The authors use the model of EBP as it was originally conceived: a process for posing empirically answerable questions, finding and evaluating the best available evidence, and applying that evidence in conjunction…
"Walkabout: Looking In, Looking Out": A Mindfulness-Based Art Therapy Program
ERIC Educational Resources Information Center
Peterson, Caroline
2015-01-01
This brief report describes a mindfulness-based art therapy (MBAT) intervention, "Walkabout: Looking In, Looking Out," which was piloted in 2010 and has since been offered at the Abramson Cancer Center at Pennsylvania Hospital in Philadelphia. The author adapted the original MBAT intervention using a walkabout conceptual model, which was…
USDA-ARS?s Scientific Manuscript database
Adaptive waveform interpretation with Gaussian filtering (AWIGF) and second order bounded mean oscillation operator Z square 2(u,t,r) are TDR analysis methods based on second order differentiation. AWIGF was originally designed for relatively long probe (greater than 150 mm) TDR waveforms, while Z s...
Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo
2015-05-01
Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with the ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique for attaining deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series from Biuliuhe and Mopanshan in China are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over the ANN approach in medium and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
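The decompose-forecast-recombine scheme above can be sketched compactly. For brevity this sketch substitutes an exponential moving average for EEMD and a naive AR(1) fit for the per-component ANNs; both are stand-ins for the paper's methods, and the runoff series is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
runoff = 100 + 20 * np.sin(np.arange(60) / 5) + rng.normal(0, 3, 60)

def ema(x, alpha=0.2):
    """Exponential moving average: crude low-frequency component
    (a stand-in for the EEMD residue, which is not computed here)."""
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def ar1_forecast(x):
    """Naive AR(1) one-step forecast (a stand-in for a per-component ANN)."""
    x0, x1 = x[:-1] - x.mean(), x[1:] - x.mean()
    phi = (x0 @ x1) / (x0 @ x0)
    return x.mean() + phi * (x[-1] - x.mean())

trend = ema(runoff)        # low-frequency component
detail = runoff - trend    # high-frequency component (IMF stand-in)

# forecast each component separately, then sum into the ensemble forecast
forecast = ar1_forecast(trend) + ar1_forecast(detail)
print(round(float(forecast), 1))
```

The key structural point carried over from the paper is that each component is predicted independently and the predictions are summed.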
NASA Astrophysics Data System (ADS)
Parada, Carolina; Colas, Francois; Soto-Mendoza, Samuel; Castro, Leonardo
2012-01-01
An individual-based model (IBM) of anchoveta (Engraulis ringens) larvae was coupled to a climatological hydrodynamic (Regional Oceanic Modeling System, ROMS) model for central-southern Chile to answer the question as to whether or not across- and alongshore transport off central-southern Chile enhances retention in the spawning areas during the winter and summer reproductive periods, using model-based pre-recruitment indices (simulated transport success to nursery areas). The hydrodynamic model validation showed that ROMS captures the mean Sea Surface Temperature and Eddy Kinetic Energy observed in satellite-based data over the entire region. The IBM was used to simulate the transport of eggs and larvae from spawning zones in central Chile (Constitución, Dichato, Gulf of Arauco and Lebu-Corral) to historical nursery areas (HRZ, region between 35°S and 37°S). Model results corroborated HRZ as the most successful pre-recruitment zone (particles originated in the Dichato and Gulf of Arauco spawning areas), as well as identifying Lebu-Corral as a zone of high retention with a high associated pre-recruitment index (particles originated in the Lebu-Corral spawning zone). The highest pre-recruitment values were mainly found in winter. The Constitución and Dichato spawning zones displayed a typical summer upwelling velocity pattern, while the Gulf of Arauco in summertime showed strong offshore and alongshore velocity components. The Lebu-Corral region in winter presented important near-surface cross-shore transport towards the coast (associated with downwelling events), which might be one of the major mechanisms leading to high retention levels and a high pre-recruitment index for the Lebu-Corral spawning zone. The limitations of the modeling approach are discussed and put into perspective for future work.
A viable logarithmic f(R) model for inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amin, M.; Khalil, S.; Salah, M.
2016-08-18
Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from Planck satellite (2015). The model predicts scalar spectral index 0.9615
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2011-12-01
In the past few years, several contributions have begun to appear in the hydrologic literature that introduced and analyzed the benefits of using a signature-based approach to watershed analysis. This signature-based approach abandons the standard single-criterion model-data fitting paradigm in favor of a diagnostic approach that better extracts the available information from the data. Despite the prospects of this new viewpoint, rather ad-hoc criteria have hitherto been proposed to improve watershed modeling. Here, we aim to provide a proper mathematical foundation for signature-based analysis. We analyze the information content of different data transformations by analyzing their convergence speed with Markov Chain Monte Carlo (MCMC) simulation using the generalized likelihood function of Schoups and Vrugt (2010). We compare the information content of the original discharge data against a simple square root and Box-Cox transformation of the streamflow data. We benchmark these results against wavelet and flow duration curve transformations that temporally disaggregate the discharge data. Our results conclusively demonstrate that wavelet transformations and flow duration curves significantly reduce the information content of the streamflow data and consequently unnecessarily increase the uncertainty of the HYMOD model parameters. Hydrologic signatures thus need to be found in the original data, without temporal disaggregation.
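The square-root and Box-Cox transformations compared in this abstract are simple to state. A minimal sketch, with hypothetical discharge values and an assumed Box-Cox lambda of 0.3:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 gives the log transform as a limit."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

q = np.array([0.5, 1.0, 2.0, 8.0, 32.0])  # hypothetical discharge values

sqrt_q = np.sqrt(q)      # the simple square-root transform
bc_q = boxcox(q, 0.3)    # Box-Cox with an assumed lambda of 0.3

# both transforms compress high flows relative to low flows, changing
# how strongly peaks dominate a squared-error style likelihood
print(bool(np.ptp(sqrt_q) < np.ptp(q) and np.ptp(bc_q) < np.ptp(q)))  # True
```

Unlike the wavelet and flow-duration-curve transformations criticized in the abstract, these point transforms preserve the temporal ordering of the discharge data.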
An Application on Merton Model in the Non-efficient Market
NASA Astrophysics Data System (ADS)
Feng, Yanan; Xiao, Qingxian
The Merton Model is one of the most famous credit risk models. This model presumes that the only source of uncertainty in equity prices is the firm’s net asset value. But this condition holds only when the market is efficient, which has often been ignored in modern research. Moreover, the original Merton Model assumes that in the event of default absolute priority holds, renegotiation is not permitted, and liquidation of the firm is costless; in addition, in the Merton Model and most of its modified versions the default boundary is assumed to be constant, which does not correspond with reality. These assumptions can limit the predictive power of the model. In this paper, we have made some extensions to some of the assumptions underlying the original model. The model is virtually a modification of Merton’s model. In a non-efficient market, we use stock data to analyze this model. The result shows that the modified model can evaluate credit risk well in a non-efficient market.
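In the original Merton framework, equity is a European call on firm assets and default occurs only at the debt's maturity, so the default probability follows from lognormal asset dynamics. A minimal sketch of that baseline (all parameter values hypothetical; the paper's modifications are not reproduced here):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default(V, D, mu, sigma, T):
    """Distance to default d2 and default probability P(V_T < D) under the
    original Merton assumptions (lognormal assets, default only at T)."""
    d2 = (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return d2, norm_cdf(-d2)

# hypothetical firm: assets 120, debt face value 100, one-year horizon
dd, pdef = merton_default(V=120.0, D=100.0, mu=0.05, sigma=0.25, T=1.0)
print(round(dd, 2), round(pdef, 2))
```

A constant default boundary corresponds to the fixed face value D above; the abstract's critique is precisely that D and the other assumptions are too rigid in practice.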
Comparison of Models for Ball Bearing Dynamic Capacity and Life
NASA Technical Reports Server (NTRS)
Gupta, Pradeep K.; Oswald, Fred B.; Zaretsky, Erwin V.
2015-01-01
Generalized formulations for dynamic capacity and life of ball bearings, based on the models introduced by Lundberg and Palmgren and Zaretsky, have been developed and implemented in the bearing dynamics computer code, ADORE. Unlike the original Lundberg-Palmgren dynamic capacity equation, where the elastic properties are part of the life constant, the generalized formulations permit variation of elastic properties of the interacting materials. The newly updated Lundberg-Palmgren model allows prediction of life as a function of elastic properties. For elastic properties similar to those of AISI 52100 bearing steel, both the original and updated Lundberg-Palmgren models provide identical results. A comparison between the Lundberg-Palmgren and the Zaretsky models shows that at relatively light loads the Zaretsky model predicts a much higher life than the Lundberg-Palmgren model. As the load increases, the Zaretsky model provides a much faster drop off in life. This is because the Zaretsky model is much more sensitive to load than the Lundberg-Palmgren model. The generalized implementation where all model parameters can be varied provides an effective tool for future model validation and enhancement in bearing life prediction capabilities.
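The differing load sensitivity described above can be illustrated with the classical load-life relation L10 ∝ (C/P)^p, taking p = 3 for the Lundberg-Palmgren ball-bearing form and p = 4 for the Zaretsky form. These exponent values are cited from the general bearing literature, not from this paper, and the constants are arbitrary:

```python
def life_ratio(C, P, p):
    """Relative L10 life from the load-life relation L ∝ (C/P)**p."""
    return (C / P) ** p

C = 10.0  # dynamic capacity (arbitrary units)
for P in (2.0, 5.0, 8.0):
    lp = life_ratio(C, P, 3)   # Lundberg-Palmgren exponent p = 3
    z = life_ratio(C, P, 4)    # Zaretsky exponent p = 4 (assumed value)
    print(P, z / lp)           # ratio equals C/P and shrinks as load grows
```

The printed ratio, which equals C/P, shows the pattern in the abstract: the higher-exponent model predicts much longer life at light loads and its advantage falls off quickly as load increases.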
Abbott, D.H.; Levine, J.E.; Dumesic, D.A.
2017-01-01
Genetics-based studies of women with polycystic ovary syndrome (PCOS) implicate >20 PCOS risk genes that collectively account for <10% of PCOS. Clinicians now consider that either rare alleles or non-genetic, potentially epigenetic, developmental origins may contribute key pathogenic components to >90% of PCOS cases. Animal models convincingly demonstrate excess fetal testosterone exposure in females as a reliable, epigenetic, developmental origin for PCOS-like traits. In particular, nonhuman primates (NHPs) provide the most faithful emulation of PCOS-like pathophysiology, likely because of close similarities to humans in genomic, developmental, reproductive and metabolic characteristics, as well as aging. Recent appreciation of potential molecular mechanisms contributing to enhanced LH action in both PCOS women (GWAS-based) and PCOS-like monkeys (DNA methylation-based) suggest commonality in pathogenic origins. This review examines the translational relevance of NHP studies to PCOS, identifying characteristics of newborn females at risk for PCOS-like traits and potential prepubertal treatment interventions to ameliorate PCOS onset. PMID:27426126
Abbott, David H; Levine, Jon E; Dumesic, Daniel A
2016-01-01
Genetics-based studies of women with polycystic ovary syndrome (PCOS) implicate >20 PCOS risk genes that collectively account for <10% of PCOS. Clinicians now consider that either rare alleles or non-genetic, potentially epigenetic, developmental origins may contribute key pathogenic components to >90% of PCOS cases. Animal models convincingly demonstrate excess fetal testosterone exposure in females as a reliable, epigenetic, developmental origin for PCOS-like traits. In particular, nonhuman primates (NHPs) provide the most faithful emulation of PCOS-like pathophysiology, likely because of close similarities to humans in genomic, developmental, reproductive and metabolic characteristics, as well as aging. Recent appreciation of potential molecular mechanisms contributing to enhanced LH action in both PCOS women (GWAS-based) and PCOS-like monkeys (DNA methylation-based) suggest commonality in pathogenic origins. This review examines the translational relevance of NHP studies to PCOS, identifying characteristics of newborn females at risk for PCOS-like traits and potential prepubertal treatment interventions to ameliorate PCOS onset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yuhe; Mazur, Thomas R.; Green, Olga
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. 
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold
2016-01-01
Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. 
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123
Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold
2016-07-01
The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. 
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
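The 2%/2 mm gamma criterion quoted throughout these abstracts can be illustrated in 1D: an evaluated point passes if some nearby reference point lies within the combined dose-difference/distance-to-agreement tolerance. A minimal sketch on a synthetic profile (global normalization, fine uniform grid; not the MRIdian data):

```python
import numpy as np

def gamma_1d(x, ref, ev, dd_pct=2.0, dta_mm=2.0):
    """1D global gamma analysis; returns the passing rate in percent."""
    dd = dd_pct / 100.0 * ref.max()
    gammas = []
    for xe, de in zip(x, ev):
        g2 = ((x - xe) / dta_mm) ** 2 + ((ref - de) / dd) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.arange(0.0, 50.1, 0.1)                  # 0.1 mm grid, positions in mm
ref = np.exp(-((x - 25.0) / 8.0) ** 2)         # reference dose profile
ev = 1.005 * np.exp(-((x - 25.5) / 8.0) ** 2)  # 0.5% scaled, 0.5 mm shifted
print(gamma_1d(x, ref, ev))  # small perturbations pass: 100.0
```

Clinical gamma implementations add dose thresholds and fine interpolation of the reference distribution; this sketch keeps only the core min-over-neighborhood search.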
3D Modeling of the Archaic Amphoras of Ionia
NASA Astrophysics Data System (ADS)
Denker, A.; Oniz, H.
2015-04-01
Few other regions offer as rich a collection of amphoras as the cities of Ionia. Throughout history, amphoras of these cities were spread all over the Mediterranean. Despite their common characteristics, amphora manufacturing cities of Ionia had their own distinctive styles that can be identified. They differed in details of shape and decoration. Each city produced an authentic type of amphora which served as a trademark of itself and enabled its attribution to where it originated from. Amphoras thus provide important insight into commerce of old ages and yield evidence into ancient sailing routes. Owing to this, our knowledge of the ancient trade is profoundly enriched. The following is based on the finds of amphoras which originated from the Ionian cities of Chios, Clazomenai, Lesbos, Miletus, and Samos. Starting from city-specific forms which offer interpretative advantages in provenancing, this article surveys the salient features of the regional forms and styles of those Ionian cities. 3D modeling is utilized with the aim of bringing fresh glimpses of the investigated amphoras by showing how they originally looked. Due to their virtual indestructibility these models offer interpretative advantages by enabling experimental testing of hypotheses upon the finds without risking them. The 3D models in the following sections were reconstructed from numerous fragments of necks, handles, body sherds and bases. They convey in color (unlike the monochrome drawings we were accustomed to) the texture, decoration, tint and vitality of the amphoras of Ionia.
NASA Astrophysics Data System (ADS)
Krčmár, Roman; Šamaj, Ladislav
2018-01-01
The partition function of the symmetric (zero electric field) eight-vertex model on a square lattice can be formulated either in the original "electric" vertex format or in an equivalent "magnetic" Ising-spin format. In this paper, both electric and magnetic versions of the model are studied numerically by using the corner transfer matrix renormalization-group method, which provides reliable data. The emphasis is put on the calculation of four specific critical exponents, related by two scaling relations, and of the central charge. The numerical method is first tested in the magnetic format; the obtained dependencies of critical exponents on the model's parameters agree with Baxter's exact solution, and weak universality is confirmed within the accuracy of the method due to the finite size of the system. In particular, the critical exponents η and δ are constant as required by weak universality. On the other hand, in the electric format, analytic formulas based on the scaling relations are derived for the critical exponents ηe and δe which agree with our numerical data. These exponents depend on the model's parameters, which is evidence for the full nonuniversality of the symmetric eight-vertex model in the original electric formulation.
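The abstract does not spell out which two scaling relations are meant; a standard pair in two dimensions, consistent with weak universality (constant η and δ while the thermal exponents vary), is assumed below:

```latex
% Assumed standard scaling relations (the paper may use an equivalent set):
\begin{aligned}
\delta &= \frac{d + 2 - \eta}{d - 2 + \eta}
        \;\xrightarrow{\;d = 2\;}\; \frac{4 - \eta}{\eta},\\[2pt]
\beta\,\delta &= \beta + \gamma .
\end{aligned}
```

The first relation shows directly why constancy of η implies constancy of δ in d = 2, the statement of weak universality made in the abstract.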
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and finally resampling the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
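The embed-into-detail-coefficients idea generalizes beyond meshes. A minimal sketch on a 1D signal with a single-level Haar transform standing in for the spherical wavelet transform (illustrative only; extraction here is non-blind, using the original detail coefficients as reference):

```python
import numpy as np

def haar_fwd(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
signal = rng.normal(0, 1, 16)             # stand-in for mesh geometry data
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0]) # watermark payload
alpha = 0.01                              # embedding strength

a, d = haar_fwd(signal)
d_marked = d + alpha * (2 * bits - 1)     # embed ±alpha into the details
marked = haar_inv(a, d_marked)            # watermarked signal

# extraction: re-transform and compare details against the originals
_, d2 = haar_fwd(marked)
recovered = (d2 - d > 0).astype(int)
print((recovered == bits).all())          # True: watermark recovered
```

Because the Haar transform is orthogonal, the ±alpha perturbation survives reconstruction exactly while the signal distortion stays below alpha.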
[The improvement of mixed human serum-induced anaphylactic reaction death model in guinea pigs].
Chen, Jiong-Yuan; Lai, Yue; Li, Dang-Ri; Yue, Xia; Wang, Hui-Jun
2012-12-01
To increase the death rate of fatal anaphylaxis in guinea pigs and the detectable level of the tryptase of mast cells in blood serum. Seventy-four guinea pigs were randomly divided into five groups: original model group, original model control group, improved model group, improved model control group, improved model with non-anaphylaxis group. Using mixed human serum as the allergen, the methods of injection, sensitization and induction were improved. ELISA was used to detect the serum mast cell tryptase and total IgE in guinea pigs of each group. The death rate of fatal anaphylaxis in the original model group was 54.2% with different degrees of hemopericardium. Severe pericardial tamponade appeared in 9 guinea pigs in the original model group and original model control group. The death rate of fatal anaphylaxis in the improved model group was 75% without pericardial tamponade. The concentration of the serum total IgE showed no statistically significant difference between the original model group and original model control group (P > 0.05), but the serum mast cell tryptase level was higher in the original model group than that in the original model control group (P > 0.05). The concentration of the serum total IgE and the serum mast cell tryptase level were significantly higher in the improved model group than those in the improved model control group (P < 0.05). The death rate of the improved model significantly increases, which can provide an effective animal model for the study of serum total IgE and mast cell tryptase.
2012-01-01
Background Members of the phylum Proteobacteria are most prominent among bacteria causing plant diseases that result in a diminution of the quantity and quality of food produced by agriculture. To ameliorate these losses, there is a need to identify infections in early stages. Recent developments in next generation nucleic acid sequencing and mass spectrometry open the door to screening plants by the sequences of their macromolecules. Such an approach requires the ability to recognize the organismal origin of unknown DNA or peptide fragments. There are many ways to approach this problem but none have emerged as the best protocol. Here we attempt a systematic way to determine organismal origins of peptides by using a machine learning algorithm. The algorithm that we implement is a Support Vector Machine (SVM). Results The amino acid compositions of proteobacterial proteins were found to be different from those of plant proteins. We developed an SVM model based on amino acid and dipeptide compositions to distinguish between a proteobacterial protein and a plant protein. The amino acid composition (AAC) based SVM model had an accuracy of 92.44% with 0.85 Matthews correlation coefficient (MCC) while the dipeptide composition (DC) based SVM model had a maximum accuracy of 94.67% and 0.89 MCC. We also developed SVM models based on a hybrid approach (AAC and DC), which gave a maximum accuracy 94.86% and a 0.90 MCC. The models were tested on unseen or untrained datasets to assess their validity. Conclusion The results indicate that the SVM based on the AAC and DC hybrid approach can be used to distinguish proteobacterial from plant protein sequences. PMID:23046503
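The amino acid composition (AAC) feature used above is a 20-dimensional frequency vector. A minimal sketch of feature extraction plus classification; the sequences are invented and a nearest-centroid rule stands in for the paper's SVM:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """20-dim amino acid composition feature vector (frequencies)."""
    v = np.array([seq.count(a) for a in AA], dtype=float)
    return v / v.sum()

# toy training sequences (invented, not from the paper's dataset)
bacteria = ["MKTAYIAKQR", "MKKLLAVAVA", "MKAIFVLKGS"]
plant    = ["MASSMLSSAT", "MAPSVMASSA", "MASSTMALSS"]

cb = np.mean([aac(s) for s in bacteria], axis=0)  # class centroids
cp = np.mean([aac(s) for s in plant], axis=0)

def predict(seq):
    """Nearest-centroid rule in AAC space (a stand-in for the SVM)."""
    f = aac(seq)
    return "bacteria" if np.linalg.norm(f - cb) < np.linalg.norm(f - cp) else "plant"

print(predict("MKKAYIAVLA"), predict("MASSSMLSAT"))
```

A dipeptide composition (DC) feature would replace the 20-vector with a 400-vector of adjacent-pair frequencies; the hybrid model in the abstract concatenates both.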
Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models
NASA Astrophysics Data System (ADS)
Lammers, R. W.; Bledsoe, B. P.
2016-12-01
Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields predictive accuracy similar to other stream power based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available and remote sensing data.
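The stream-power framing can be sketched as a generic excess-stream-power law built on specific discharge. The coefficient, exponent, and critical value below are placeholders, not fitted values from this work:

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def unit_stream_power(q, S):
    """Stream power per unit bed area, omega = rho * g * q * S (W/m^2),
    with q the specific discharge (m^2/s) and S the energy slope."""
    return RHO * G * q * S

def bedload_flux(q, S, omega_c=5.0, k=1e-4, a=1.5):
    """Generic excess-stream-power bedload relation (placeholder constants);
    transport is zero below the critical stream power omega_c."""
    omega = unit_stream_power(q, S)
    return k * max(omega - omega_c, 0.0) ** a

print(bedload_flux(0.5, 0.002))  # moderate flow: nonzero transport
print(bedload_flux(0.1, 0.002))  # below critical: zero transport
```

The point of the recast equation in the abstract is that q, unlike flow depth, can be estimated network-wide from discharge and remotely sensed channel width.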
Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics
NASA Astrophysics Data System (ADS)
Chen, Yu-Zhong; Lai, Ying-Cheng
2018-03-01
Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulate its dynamical behavior with high precision.
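The compressive-sensing step, which amounts to solving one sparse regression per node to recover that node's incoming links, can be illustrated with a toy coordinate-descent Lasso. This is a stand-in for the paper's solver, with hypothetical variable names and a synthetic regression problem.

```python
def lasso_cd(X, y, lam=0.1, n_iter=200):
    """Minimal coordinate-descent Lasso (pure Python).
    In a compressive-sensing network reconstruction, X holds observed node
    states, y the target node's response, and the sparse weight vector w
    indicates which links exist. Toy illustration, not the paper's method."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Residual with feature j excluded from the current fit.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            if z == 0:
                continue
            # Soft-thresholding promotes sparsity (zero weight = no link).
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w
```

On data where the response depends on only one input, the L1 penalty drives the irrelevant weight to zero, which is exactly the sparsity that lets the SDBM attain the original system's topology.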
Chiral Gold Nanoclusters: Atomic Level Origins of Chirality.
Zeng, Chenjie; Jin, Rongchao
2017-08-04
Chiral nanomaterials have received wide interest in many areas, but the exact origin of chirality at the atomic level remains elusive in many cases. With recent significant progress in atomically precise gold nanoclusters (e.g., thiolate-protected Au_n(SR)_m), several origins of chirality have been unveiled based upon atomic structures determined by using single-crystal X-ray crystallography. The reported chiral Au_n(SR)_m structures explicitly reveal a predominant origin of chirality that arises from the Au-S chiral patterns at the metal-ligand interface, as opposed to the chiral arrangement of metal atoms in the inner core (i.e., kernel). In addition, chirality can also be introduced by a chiral ligand, manifested in the circular dichroism response from metal-based electronic transitions other than the ligand's own transition(s). Lastly, the chiral arrangement of the carbon tails of the ligands has also been discovered in very recent work on chiral Au_133(SR)_52 and Au_246(SR)_80 nanoclusters. Overall, the origins of chirality discovered in Au_n(SR)_m nanoclusters may provide models for the understanding of chirality origins in other types of nanomaterials and also constitute the basis for the development of various applications of chiral nanoparticles. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
EFFECTIVE REMOVAL METHOD OF ILLEGAL PARKING BICYCLES BASED ON THE QUANTITATIVE CHANGE AFTER REMOVAL
NASA Astrophysics Data System (ADS)
Toi, Satoshi; Kajita, Yoshitaka; Nishikawa, Shuichirou
This study aims to identify an effective method for removing illegally parked bicycles, based on an analysis of how their numbers change after removal. We built a time-and-space quantitative distribution model of illegally parked bicycles after removal, considering their logistic increase and several rider behaviors (direct return to the original parking place, indirect return, and avoidance of the original place), based on a survey of actual illegal bicycle parking in the TENJIN area of FUKUOKA city. We then built a simulation model incorporating the above and calculated the number of illegally parked bicycles as the removal frequency and the number removed at one time were varied. Four noteworthy results were obtained. (1) The speed of recovery after removal differs by zone. (2) Thorough removal is effective for keeping the number of illegally parked bicycles at a lower level. (3) Removal in one zone increases the number of bicycles in other zones where the level of illegal parking is lower. (4) The relationship between the effects and the costs of removing illegally parked bicycles was clarified.
Curriculum Guides for Level I and Level II: National Manpower Model.
ERIC Educational Resources Information Center
National Inst. on Mental Retardation, Toronto (Ontario).
Curriculum guides to levels I and II of the Canadian National Manpower Model, which elaborate on content originally presented in 1971, are provided for personnel training programs in the field of mental retardation and related handicapping areas. The guides are said to be based on a philosophy that demands society's acceptance of retarded and…
The TESSA OER Experience: Building Sustainable Models of Production and User Implementation
ERIC Educational Resources Information Center
Wolfenden, Freda
2008-01-01
This paper offers a review of the origins, design strategy and implementation plans of the Teacher Education in Sub-Saharan Africa (TESSA) research and development programme. The programme is working to develop new models of teacher education, particularly school based training, including the creation of a programme webspace and an extensive bank…
Integrated Formal Analysis of Time-Triggered Ethernet
NASA Technical Reports Server (NTRS)
Dutertre, Bruno; Shankar, Natarajan; Owre, Sam
2012-01-01
We present new results related to the verification of the Time-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
The Metabolic World: Sugars as an Energized Carbon Substrate for Prebiotic and Biotic Synthesis
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
1996-01-01
To understand the origin of metabolism and biopolymer synthesis we investigated the energy sources that drive anabolic metabolism. We found that biosynthesis of amino acids and lipids from sugars is driven by the free energy of redox disproportionation of carbon (see discussion on next page). The indispensable role of sugar disproportionation in the biosynthesis of amino acids and lipids suggests that the origin of life used the same chemical engine, and was therefore based on nonenzymatic redox disproportionation reactions of sugars that occurred in the presence of ammonia and hydrogen sulfide. The chemistry of this 'metabolic' model of the origin of life is described.
Research on potential user identification model for electric energy substitution
NASA Astrophysics Data System (ADS)
Xia, Huaijian; Chen, Meiling; Lin, Haiying; Yang, Shuo; Miao, Bo; Zhu, Xinzhi
2018-01-01
The implementation of energy substitution plays an important role in promoting energy conservation and emission reduction in China. Based on data from an energy service management platform for potential electric energy substitution users, indicators such as enterprise production value, product output, and consumption of coal and other energy sources are taken as the potential evaluation index. A principal component analysis model simplifies these indicators into composite indices that retain the information of the original variables, and a fuzzy clustering model provides a flexible classification of users within the same industry. Based on the composite indices and the user clusters, a particle-swarm-optimized neural network classification model is constructed to predict users' electric energy substitution potential. The results of an example show that the model can effectively predict this potential.
ERIC Educational Resources Information Center
Visagan, Ravindran; Xiang, Ally; Lamar, Melissa
2012-01-01
We compared the original deck-based model of advantageous decision making assessed with the Iowa Gambling Task (IGT) with a trial-based approach across behavioral and physiological outcomes in 33 younger adults (15 men, 18 women; 22.2 [plus or minus] 3.7 years of age). One administration of the IGT with simultaneous measurement of skin conductance…
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Local-world and cluster-growing weighted networks with controllable clustering
NASA Astrophysics Data System (ADS)
Yang, Chun-Xia; Tang, Min-Xuan; Tang, Hai-Qiang; Deng, Qiang-Qiang
2014-12-01
We constructed an improved weighted network model by introducing a local-world selection mechanism and a triangle coupling mechanism based on the traditional BBV model. The model gives power-law distributions of degree, strength and edge weight and presents linear relationships both between degree and strength and between degree and the clustering coefficient. In particular, the model allows node strength to grow faster than degree. Besides, the model is more sound and efficient in tuning the clustering coefficient than the original BBV model. Finally, based on our improved model, we analyze the virus spreading process and find that reducing the size of the local-world has a strong inhibitory effect on virus spread.
Kaganov, A Sh; Kir'yanov, P A
2015-01-01
The objective of the present publication was to discuss the possibility of applying cybernetic modeling methods to overcome the apparent discrepancy between two kinds of speech records, viz. the initial ones (e.g., obtained in the course of special investigative activities) and the voice prints obtained from the persons subjected to criminalistic examination. The paper is based on literature sources and on materials from original criminalistic examinations performed by the authors.
Beyond single-stream with the Schrödinger method
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael
2016-10-01
We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
Interest-Driven Model for Human Dynamics
NASA Astrophysics Data System (ADS)
Shang, Ming-Sheng; Chen, Guan-Xiong; Dai, Shuang-Xing; Wang, Bing-Hong; Zhou, Tao
2010-04-01
Empirical observations indicate that the interevent time distribution of human actions exhibits heavy-tailed features. The queuing model based on task priorities is to some extent successful in explaining the origin of such heavy tails, however, it cannot explain all the temporal statistics of human behavior especially for the daily entertainments. We propose an interest-driven model, which can reproduce the power-law distribution of interevent time. The exponent can be analytically obtained and is in good accordance with the simulations. This model well explains the observed relationship between activities and power-law exponents, as reported recently for web-based behavior and the instant message communications.
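A toy variant of an interest-driven mechanism can be simulated in a few lines. The update rules below (halve the occurrence probability after acting, let it recover multiplicatively during inactivity) are an illustrative assumption, not the authors' exact model, and the parameter names are hypothetical.

```python
import random

def simulate_interest_driven(steps=20000, p0=0.5, up=1.2, down=0.5, seed=1):
    """Toy interest-driven event model: acting damps the occurrence
    probability (interest wanes), inactivity slowly restores it.
    Returns the list of interevent times (gaps between events)."""
    rng = random.Random(seed)
    p, last, gaps = p0, 0, []
    for t in range(1, steps + 1):
        if rng.random() < p:
            gaps.append(t - last)
            last = t
            p = max(p * down, 1e-4)   # interest drops right after acting
        else:
            p = min(p * up, 1.0)      # interest recovers while inactive
    return gaps
```

Plotting a histogram of the returned gaps (not shown) is the usual way to check whether such rules produce the broad, heavy-tailed interevent-time distributions the abstract discusses.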
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaohu; Shi, Di; Wang, Zhiwei
Shunt FACTS devices, such as a Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce the real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.
Centrosome-Based Mechanisms, Prognostics and Therapeutics in Prostate Cancer
2006-12-01
progression of prostate carcinomas. The specific aims of the original proposal were designed to test several features of this model: 1. Are centrosome defects present in early prostate cancer and can they predict aggressive disease? 2. Do pericentrin's… cells, supports this model. The ability to block the cell cycle in prostate cells by depletion of any of 14 centrosome proteins identifies several…
Pan, Yu; Zhang, Ji; Li, Hong; Wang, Yuan-Zhong; Li, Wan-Yi
2016-10-01
Macamides with a benzylalkylamide nucleus are characteristic and major bioactive compounds in the functional food maca (Lepidium meyenii Walp). The aim of this study was to explore variations in macamide content among maca from China and Peru. Twenty-seven batches of maca hypocotyls with different phenotypes, sampled from different geographical origins, were extracted and profiled by liquid chromatography with ultraviolet detection/tandem mass spectrometry (LC-UV/MS/MS). Twelve macamides were identified by MS operated in multiple scanning modes. Similarity analysis showed that maca samples differed significantly in their macamide fingerprinting. Partial least squares discriminant analysis (PLS-DA) was used to differentiate samples according to their geographical origin and to identify the most relevant variables in the classification model. The prediction accuracy for raw maca was 91% and five macamides were selected and considered as chemical markers for sample classification. When combined with a PLS-DA model, characteristic fingerprinting based on macamides could be recommended for labelling for the authentication of maca from different geographical origins. The results provided potential evidence for the relationships between environmental or other factors and distribution of macamides. © 2016 Society of Chemical Industry.
Higham, Charles F W; Douka, Katerina; Higham, Thomas F G
2015-01-01
There are two models for the origins and timing of the Bronze Age in Southeast Asia. The first centres on the sites of Ban Chiang and Non Nok Tha in Northeast Thailand. It places the first evidence for bronze technology in about 2000 B.C., and identifies the origin by means of direct contact with specialists of the Seima Turbino metallurgical tradition of Central Eurasia. The second is based on the site of Ban Non Wat, 280 km southwest of Ban Chiang, where extensive radiocarbon dating places the transition into the Bronze Age in the 11th century B.C. with likely origins in a southward expansion of technological expertise rooted in the early states of the Yellow and Yangtze valleys, China. We have redated Ban Chiang and Non Nok Tha, as well as the sites of Ban Na Di and Ban Lum Khao, and here present 105 radiocarbon determinations that strongly support the latter model. The statistical analysis of the results using a Bayesian approach allows us to examine the data at a regional level, elucidate the timing of arrival of copper base technology in Southeast Asia and consider its social impact.
A System of Two Polymerases - A Model for the Origin of Life
NASA Astrophysics Data System (ADS)
Kunin, Victor
2000-10-01
What was the first living molecule - RNA or protein? This question embodies the major disagreement in studies on the origin of life. The fact that in contemporary cells RNA polymerase is a protein and peptidyl transferase consists of RNA suggests the existence of a mutual catalytic dependence between these two kinds of biopolymers. I suggest that this dependence is a `frozen accident', a remnant from the first living system. This system is proposed to be a combination of an RNA molecule capable of catalyzing amino acid polymerization and the resulting protein functioning as an RNA-dependent RNA polymerase. The specificity of the protein synthesis is thought to be achieved by the composition of the surrounding medium and the specificity of the RNA synthesis - by Watson - Crick base pairing. Despite its apparent simplicity, the system possesses a great potential to evolve into a primitive ribosome and further to life, as it is seen today. This model provides a possible explanation for the origin of the interaction between nucleic acids and protein. Based on the suggested system, I propose a new definition of life as a system of nucleic acid and protein polymerases with a constant supply of monomers, energy and protection.
Maximum Entropy Principle for Transportation
NASA Astrophysics Data System (ADS)
Bilich, F.; DaSilva, R.
2008-11-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
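The classical entropy-maximizing, doubly constrained trip-distribution model that the dependence formulation is contrasted with can be sketched via iterative (Furness) balancing. The code below shows that standard constrained formulation for concreteness, not the paper's dependence-based one; the inputs are hypothetical.

```python
import math

def doubly_constrained_trips(O, D, cost, beta=0.1, n_iter=50):
    """Entropy-maximizing trip distribution T_ij = A_i * O_i * B_j * D_j * f_ij
    with deterrence f_ij = exp(-beta * c_ij), solved by iterative (Furness)
    balancing so row sums match origins O and column sums match destinations D."""
    n, m = len(O), len(D)
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    A = [1.0] * n
    B = [1.0] * m
    for _ in range(n_iter):
        for i in range(n):
            A[i] = 1.0 / sum(B[j] * D[j] * f[i][j] for j in range(m))
        for j in range(m):
            B[j] = 1.0 / sum(A[i] * O[i] * f[i][j] for i in range(n))
    return [[A[i] * O[i] * B[j] * D[j] * f[i][j] for j in range(m)]
            for i in range(n)]
```

In the dependence formulation described above, the information carried here by the balancing factors and constraints would instead be encoded in regression-estimated dependence coefficients, removing the explicit constraints.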
[Determination of wine origin regions using information fusion of NIR and MIR spectroscopy].
Xiang, Ling-Li; Li, Meng-Hua; Li, Jing-Mingz; Li, Jun-Hui; Zhang, Lu-Da; Zhao, Long-Lian
2014-10-01
Geographical origins of wine grapes are significant factors affecting wine quality and wine prices. Tasters' evaluation is a good method but has some limitations, so it is important to discriminate different wine origin regions quickly and accurately. The present paper proposes a method to determine wine origin regions based on Bayesian information fusion that combines near-infrared (NIR) transmission spectra and mid-infrared (MIR) ATR spectra of wines. This method improves the determination results by expanding the sources of analysis information. NIR and MIR spectra of 153 wine samples from four different grape-growing regions were collected separately by near-infrared and mid-infrared Fourier transform spectrometers. These four regions, Huailai, Yantai, Gansu and Changli, are all typical geographical origins for Chinese wines. NIR and MIR discriminant models for wine regions were established using partial least squares discriminant analysis (PLS-DA) based on the NIR and MIR spectra separately. In PLS-DA, the regions of wine samples are represented as groups of binary codes; with four wine regions in this paper, four output nodes stand for the categorical variables. The output node values for each sample in the NIR and MIR models were first normalized. These values represent the probabilities of each sample belonging to each category and served as a priori probability values input to the Bayesian discriminant formula. The probabilities were substituted into the Bayesian formula to get posterior probabilities, by which we can judge the class membership of these samples. Considering the stability of the PLS-DA models, all the wine samples were randomly divided into calibration and validation sets ten times.
The results of the NIR and MIR discriminant models for the four wine regions were as follows: the average accuracy rates of the calibration sets were 78.21% (NIR) and 82.57% (MIR), and the average accuracy rates of the validation sets were 82.50% (NIR) and 81.98% (MIR). With the fusion method proposed in this paper, the accuracy rates of calibration and validation rose to 87.11% and 90.87% respectively, both better than either spectroscopy alone. These results suggest that Bayesian information fusion of NIR and MIR spectra is feasible for fast identification of wine origin regions.
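The fusion step described above, treating each model's normalized output-node values as class probabilities and combining them through Bayes' formula, can be sketched as follows. The sketch assumes conditional independence of the NIR and MIR evidence (a common simplification; the paper's exact prior handling may differ).

```python
def fuse_posteriors(p_nir, p_mir, prior=None):
    """Fuse per-class probabilities from two spectral models by Bayes' rule,
    assuming the NIR and MIR evidence are conditionally independent:
        P(c | NIR, MIR) is proportional to prior_c * P_nir(c) * P_mir(c).
    A uniform prior over the classes is used when none is given."""
    k = len(p_nir)
    prior = prior or [1.0 / k] * k
    joint = [prior[c] * p_nir[c] * p_mir[c] for c in range(k)]
    s = sum(joint)
    return [j / s for j in joint]
```

When both models lean toward the same region, the fused posterior sharpens toward it; when they disagree, the fused result is pulled toward whichever model is more confident, which is the mechanism behind the accuracy gains reported above.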
Utilizing NX Advanced Simulation for NASA's New Mobile Launcher for Ares-I
NASA Technical Reports Server (NTRS)
Brown, Christopher
2010-01-01
This slide presentation reviews the use of NX to simulate the new Mobile Launcher (ML) for the Ares-I. It includes: a comparison of the sizes of the Saturn V, the Space Shuttle, the Ares I, and the Ares V, with height and payload capability; the loads control plan; drawings of the base framing, the underside of the ML, the beam arrangement, and the finished base; and the origin of the 3D CAD data. It also reviews the modeling approach, meshing, the assembly Finite Element Modeling, the model summary, and beam improvements.
NASA Astrophysics Data System (ADS)
Singh, Subhash; Mohapatra, Y. N.
2017-06-01
We have investigated switch-on drain-source current transients in fully solution-processed thin film transistors based on 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS-pentacene) using cross-linked poly-4-vinylphenol as a dielectric. We show that the nature of the transient (increasing or decreasing) depends on both the temperature and the amplitude of the switching pulse at the gate. The isothermal transients are analyzed spectroscopically in a time domain to extract the degree of non-exponentiality and its possible origin in trap kinetics. We propose a phenomenological model in which the exchange of electrons between interfacial ions and traps controls the nature of the drain current transients dictated by the Fermi level position. The origin of interfacial ions is attributed to the essential fabrication step of UV-ozone treatment of the dielectric prior to semiconductor deposition.
A Problem-Based Approach to Elastic Wave Propagation: The Role of Constraints
ERIC Educational Resources Information Center
Fazio, Claudio; Guastella, Ivan; Tarantino, Giovanni
2009-01-01
A problem-based approach to the teaching of mechanical wave propagation, focused on observation and measurement of wave properties in solids and on modelling of these properties, is presented. In particular, some experimental results, originally aimed at measuring the propagation speed of sound waves in metallic rods, are used in order to deepen…
ERIC Educational Resources Information Center
Oh, Jun-Young
2014-01-01
Constructing explanations and participating in argumentative discourse are seen as essential practices of scientific inquiry. The objective of this study was to explore the elements and origins of pre-service secondary science teachers' alternative conceptions of tidal phenomena based on the elements used in Toulmin's Argument Model through…
A Process Approach to Community-Based Education: The People's Free University of Saskatchewan
ERIC Educational Resources Information Center
Woodhouse, Howard
2005-01-01
On the basis of insights provided by Whitehead and John Cobb, I show how the People's Free University of Saskatchewan (PFU) is a working model of free, open, community-based education that embodies several characteristics of Whitehead's philosophy of education. Formed in opposition to the growing commercialization at the original "people's…
Auger, Jean-Philippe; Fittipaldi, Nahuel; Benoit-Biancamano, Marie-Odile; Segura, Mariela; Gottschalk, Marcelo
2016-01-01
Multilocus sequence typing previously identified three predominant sequence types (STs) of Streptococcus suis serotype 2: ST1 strains predominate in Eurasia while North American (NA) strains are generally ST25 and ST28. However, ST25/ST28 and ST1 strains have also been isolated in Asia and NA, respectively. Using a well-standardized mouse model of infection, the virulence of strains belonging to different STs and different geographical origins was evaluated. Results demonstrated that although a certain tendency may be observed, S. suis serotype 2 virulence is difficult to predict based on ST and geographical origin alone; strains belonging to the same ST presented important differences of virulence and did not always correlate with origin. The only exception appears to be NA ST28 strains, which were generally less virulent in both systemic and central nervous system (CNS) infection models. Persistent and high levels of bacteremia accompanied by elevated CNS inflammation are required to cause meningitis. Although widely used, in vitro tests such as phagocytosis and killing assays require further standardization in order to be used as predictive tests for evaluating virulence of strains. The use of strains other than archetypal strains has increased our knowledge and understanding of the S. suis serotype 2 population dynamics. PMID:27409640
The quantization of the chiral Schwinger model based on the BFT-BFV formalism
NASA Astrophysics Data System (ADS)
Kim, Won T.; Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
1997-03-01
We apply the newly improved Batalin-Fradkin-Tyutin (BFT) Hamiltonian method to the chiral Schwinger model in the case of the regularization ambiguity a>1. We show that one can systematically construct the first class constraints by the BFT Hamiltonian method, and also show that the well-known Dirac brackets of the original phase space variables are exactly the Poisson brackets of the corresponding modified fields in the extended phase space. Furthermore, we show that the first class Hamiltonian is simply obtained by replacing the original fields in the canonical Hamiltonian by these modified fields. Performing the momentum integrations, we obtain the corresponding first class Lagrangian in the configuration space.
Reporter Concerns in 300 Mode-Related Incident Reports from NASA's Aviation Safety Reporting System
NASA Technical Reports Server (NTRS)
McGreevy, Michael W.
1996-01-01
A model has been developed which represents prominent reporter concerns expressed in the narratives of 300 mode-related incident reports from NASA's Aviation Safety Reporting System (ASRS). The model objectively quantifies the structure of concerns which persist across situations and reporters. These concerns are described and illustrated using verbatim sentences from the original narratives. Report accession numbers are included with each sentence so that concerns can be traced back to the original reports. The results also include an inventory of mode names mentioned in the narratives, and a comparison of individual and joint concerns. The method is based on a proximity-weighted co-occurrence metric and object-oriented complexity reduction.
Hagerty, Thomas A; Samuels, William; Norcini-Pala, Andrea; Gigliotti, Eileen
2017-04-01
A confirmatory factor analysis of data from the responses of 12,436 patients to 16 items on the Consumer Assessment of Healthcare Providers and Systems-Hospital survey was used to test a latent factor structure based on Peplau's middle-range theory of interpersonal relations. A two-factor model based on Peplau's theory fit these data well, whereas a three-factor model also based on Peplau's theory fit them excellently and provided a suitable alternate factor structure for the data. Though neither the two- nor three-factor model fit as well as the original factor structure, these results support using Peplau's theory to demonstrate nursing's extensive contribution to the experiences of hospitalized patients.
Electrostatic interaction map reveals a new binding position for tropomyosin on F-actin.
Rynkiewicz, Michael J; Schott, Veronika; Orzechowski, Marek; Lehman, William; Fischer, Stefan
2015-12-01
Azimuthal movement of tropomyosin around the F-actin thin filament is responsible for muscle activation and relaxation. Recently a model of αα-tropomyosin, derived from molecular-mechanics and electron microscopy of different contractile states, showed that tropomyosin is rather stiff and pre-bent to present one specific face to F-actin during azimuthal transitions. However, a new model based on cryo-EM of troponin- and myosin-free filaments proposes that the interacting-face of tropomyosin can differ significantly from that in the original model. Because resolution was insufficient to assign tropomyosin side-chains, the interacting-face could not be unambiguously determined. Here, we use structural analysis and energy landscapes to further examine the proposed models. The observed bend in seven crystal structures of tropomyosin is much closer in direction and extent to the original model than to the new model. Additionally, we computed the interaction map for repositioning tropomyosin over the F-actin surface, but now extended over a much larger surface than previously (using the original interacting-face). This map shows two energy minima: one corresponding to the "blocked-state" as in the original model, and the other related to it by a simple 24 Å translation of tropomyosin parallel to the F-actin axis. The tropomyosin-actin complex defined by the second minimum fits perfectly into the recent cryo-EM density, without requiring any change in the interacting-face. Together, these data suggest that movement of tropomyosin between regulatory states does not require interacting-face rotation. Further, they imply that thin filament assembly may involve an interplay between initially seeded tropomyosin molecules growing from distinct binding-site regions on actin.
NASA Astrophysics Data System (ADS)
He, Yi; Liwo, Adam; Scheraga, Harold A.
2015-12-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible from the original all-atom representation of the biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force-field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
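The least-squares side of this optimization, fitting model parameters to an experimental heat-capacity curve, can be illustrated with a toy two-state (van't Hoff) melting model; the functional form, parameter values, and the crude peak-based recovery below are illustrative assumptions, not the actual NARES-2P machinery.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol K)

def two_state_cp(T, Tm, dH):
    """Excess heat capacity of a two-state (e.g. duplex/single-strand)
    transition with melting temperature Tm and enthalpy dH (van't Hoff)."""
    K = np.exp(-(dH / R) * (1.0 / T - 1.0 / Tm))  # equilibrium constant
    return (dH ** 2 / (R * T ** 2)) * K / (1.0 + K) ** 2

# synthetic "experimental" melting curve with known parameters
T = np.linspace(300.0, 380.0, 201)
cp_exp = two_state_cp(T, Tm=340.0, dH=50.0)

# crude least-squares-style recovery: Tm from the peak position,
# dH from the peak height (at T = Tm, K = 1 so cp = dH^2 / (4 R Tm^2))
Tm_fit = float(T[np.argmax(cp_exp)])
dH_fit = float(2.0 * Tm_fit * np.sqrt(R * cp_exp.max()))
```

A real calibration would minimize the squared misfit over the whole curve; the peak-based shortcut here just shows that the curve determines both parameters.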
Scientific and Technical Development of the Next Generation Space Telescope
NASA Technical Reports Server (NTRS)
Burg, Richard
2003-01-01
The Next Generation Space Telescope (NGST) is part of the Origins program and is the key mission to discover the origins of galaxies in the Universe. It is essential that scientific requirements be translated into technical specifications at the beginning of the program and that astronomers participate technically in the design and modeling of the observatory. During the active period of this grant, the PI contributed to the NGST program at GSFC by participating in the development of the Design Reference Mission, the development of the full end-to-end model of the observatory, design trade-offs based on the modeling, the Science Instrument Module definition and modeling, and the study of proto-mission and test-bed development, as well as in meetings including quarterly reviews and support of the NGST SWG. This work was documented in a series of NGST Monographs that are available on the NGST web site.
Kollanus, Virpi; Prank, Marje; Gens, Alexandra; Soares, Joana; Vira, Julius; Kukkonen, Jaakko; Sofiev, Mikhail; Salonen, Raimo O.; Lanki, Timo
2016-01-01
Background: Vegetation fires can release substantial quantities of fine particles (PM2.5), which are harmful to health. The fire smoke may be transported over long distances and can cause adverse health effects over wide areas. Objective: We aimed to assess annual mortality attributable to short-term exposures to vegetation fire–originated PM2.5 in different regions of Europe. Methods: PM2.5 emissions from vegetation fires in Europe in 2005 and 2008 were evaluated based on Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data on fire radiative power. Atmospheric transport of the emissions was modeled using the System for Integrated modeLling of Atmospheric coMposition (SILAM) chemical transport model. Mortality impacts were estimated for 27 European countries based on a) modeled daily PM2.5 concentrations and b) population data, both presented in a 50 × 50 km2 spatial grid; c) an exposure–response function for short-term PM2.5 exposure and daily nonaccidental mortality; and d) country-level data for background mortality risk. Results: In the 27 countries overall, an estimated 1,483 and 1,080 premature deaths were attributable to the vegetation fire–originated PM2.5 in 2005 and 2008, respectively. Estimated impacts were highest in southern and eastern Europe. However, all countries were affected by fire-originated PM2.5, and even the lower concentrations in western and northern Europe contributed substantially (~ 30%) to the overall estimate of attributable mortality. Conclusions: Our assessment suggests that air pollution caused by PM2.5 released from vegetation fires is a notable risk factor for public health in Europe. Moreover, the risk can be expected to increase in the future as climate change proceeds. This factor should be taken into consideration when evaluating the overall health and socioeconomic impacts of these fires. Citation: Kollanus V, Prank M, Gens A, Soares J, Vira J, Kukkonen J, Sofiev M, Salonen RO, Lanki T. 2017. 
Mortality due to vegetation fire–originated PM2.5 exposure in Europe—assessment for the years 2005 and 2008. Environ Health Perspect 125:30–37; http://dx.doi.org/10.1289/EHP194 PMID:27472655
Marti, Sarah; Straumann, Dominik; Glasauer, Stefan
2005-04-01
Various hypotheses on the origin of cerebellar downbeat nystagmus (DBN) have been presented; the exact pathomechanism, however, is still not known. Based on previous anatomical and electrophysiological studies, we propose that an asymmetry in the distribution of on-directions of vertical gaze-velocity Purkinje cells leads to spontaneous upward ocular drift in cerebellar disease, and therefore, to DBN. Our hypothesis is supported by a computational model for vertical eye movements.
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness; this was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase in the difference between the smoothed and the original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we found the estimated travel-time error to be at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we found the Gaussian beams and Gaussian packets to be at the verge of applicability even in models sufficiently smoothed for ray tracing.
Contextualising primate origins--an ecomorphological framework.
Soligo, Christophe; Smaers, Jeroen B
2016-04-01
Ecomorphology - the characterisation of the adaptive relationship between an organism's morphology and its ecological role - has long been central to theories of the origin and early evolution of the primate order. This is exemplified by two of the most influential theories of primate origins: Matt Cartmill's Visual Predation Hypothesis and Bob Sussman's Angiosperm Co-Evolution Hypothesis. However, the study of primate origins is constrained by the absence of data directly documenting the events under investigation, and must rely instead on a fragmentary fossil record and the methodological assumptions inherent in phylogenetic comparative analyses of extant species. These constraints introduce particular challenges for inferring the ecomorphology of primate origins, as morphology and environmental context must first be inferred before the relationship between the two can be considered. Fossils can be integrated in comparative analyses, and observations of extant model species and laboratory experiments on form-function relationships are critical for the functional interpretation of the morphology of extinct species. Recent developments have led to important advancements, including phylogenetic comparative methods based on more realistic models of evolution, improved methods for the inference of clade divergence times, and an improved fossil record. This contribution reviews current perspectives on the origin and early evolution of primates, paying particular attention to their phylogenetic (including cladistic relationships and character evolution) and environmental (including chronology, geography, and physical environments) contextualisation, before attempting an up-to-date ecomorphological synthesis of primate origins. © 2016 Anatomical Society.
Kovács, R; Miháltz, P; Csikor, Zs
2007-01-01
The application of an ASM1-based mathematical model to the modeling of autothermal thermophilic aerobic digestion (ATAD) is demonstrated. Based on former experimental results, the original ASM1 was extended with the activation of facultative thermophiles from the feed sludge, and a new component, the thermophilic biomass, was introduced. The resulting model was calibrated over the temperature range of 20-60 degrees C. The temperature dependence of the growth and decay rates in the model is given in terms of slightly modified Arrhenius and Topiwala-Sinclair equations. The capabilities of the calibrated model in realistic ATAD scenarios are demonstrated, with a focus on the autothermal properties of ATAD systems under different conditions.
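The Topiwala-Sinclair form mentioned above, an Arrhenius activation term minus a steeper thermal-inactivation term so that the growth rate peaks and then collapses at high temperature, can be sketched as follows; all constants are illustrative, not the calibrated ATAD values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def topiwala_sinclair(T, A=1.0e11, E1=60000.0, B=1.0e48, E2=290000.0):
    """Growth rate vs absolute temperature T: an Arrhenius activation
    term minus a much steeper thermal-inactivation term, so the rate
    rises, peaks, and then collapses (illustrative constants only)."""
    return A * math.exp(-E1 / (R * T)) - B * math.exp(-E2 / (R * T))

# rate across the 20-60 degrees C calibration range
rates = {T: topiwala_sinclair(T) for T in range(293, 334, 5)}
```

Because E2 >> E1, the inactivation term is negligible at mesophilic temperatures but dominates near the upper end of the range, which is what lets such a model capture both thermophilic growth and washout of activity at excessive temperature.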
NASA Technical Reports Server (NTRS)
Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)
2001-01-01
The Global Modeling Initiative (GMI) Team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, three different meteorological data sets were used to run an offline chemistry and transport model (CTM). Observationally based grading criteria were derived and applied to these simulations, various aspects of stratospheric transport were evaluated, and grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme. The FV-Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations, including an evaluation of high-latitude transport, which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations, and areas of improvement will be identified.
Bending of an infinite beam on a base with two parameters in the absence of a part of the base
NASA Astrophysics Data System (ADS)
Aleksandrovskiy, Maxim; Zaharova, Lidiya
2018-03-01
Currently, in connection with the rapid development of high-rise construction and the refinement of models of the joint operation of high-rise structures and their bases, questions connected with the choice of calculation method have become topical. The rigor of analytical methods allows a more detailed and accurate characterization of structural behavior, which improves the reliability of designed objects and can reduce their cost. In this article, a model with two parameters is used as the computational model of the base; it can effectively capture the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter voided base. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are solved analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution for a beam loaded by a concentrated force applied at the origin, with a fixed length of the voided section. Finally, the obtained results are analyzed for various values of the coefficient accounting for cohesion of the ground.
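For comparison with the two-parameter voided base treated in the paper, the classical closed-form deflection of an infinite beam on the simpler one-parameter (Winkler) base under a concentrated force at the origin can be sketched; parameter values are illustrative.

```python
import math

def winkler_deflection(x, P, EI, k):
    """Deflection of an infinite beam (flexural rigidity EI) resting on
    a Winkler base of modulus k, loaded by a point force P at x = 0:
    w(x) = (P*lam / 2k) * exp(-lam|x|) * (cos(lam|x|) + sin(lam|x|)),
    with the characteristic wavenumber lam = (k / 4EI)^(1/4)."""
    lam = (k / (4.0 * EI)) ** 0.25
    a = lam * abs(x)
    return (P * lam / (2.0 * k)) * math.exp(-a) * (math.cos(a) + math.sin(a))

# symmetric deflection profile for illustrative parameters
profile = [winkler_deflection(x, P=10e3, EI=2.0e7, k=5.0e6) for x in range(-5, 6)]
```

The two-parameter model of the paper adds a shear interaction between adjacent springs; the Winkler case above is the limiting solution when that shear coefficient vanishes.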
Query-seeded iterative sequence similarity searching improves selectivity 5–20-fold
Li, Weizhong; Lopez, Rodrigo
2017-01-01
Abstract Iterative similarity search programs, like psiblast, jackhmmer, and psisearch, are much more sensitive than pairwise similarity search methods like blast and ssearch because they build a position specific scoring model (a PSSM or HMM) that captures the pattern of sequence conservation characteristic to a protein family. But models are subject to contamination; once an unrelated sequence has been added to the model, homologs of the unrelated sequence will also produce high scores, and the model can diverge from the original protein family. Examination of alignment errors during psiblast PSSM contamination suggested a simple strategy for dramatically reducing PSSM contamination. psiblast PSSMs are built from the query-based multiple sequence alignment (MSA) implied by the pairwise alignments between the query model (PSSM, HMM) and the subject sequences in the library. When the original query sequence residues are inserted into gapped positions in the aligned subject sequence, the resulting PSSM rarely produces alignment over-extensions or alignments to unrelated sequences. This simple step, which tends to anchor the PSSM to the original query sequence and slightly increase target percent identity, can reduce the frequency of false-positive alignments more than 20-fold compared with psiblast and jackhmmer, with little loss in search sensitivity. PMID:27923999
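The query-seeding step itself, substituting the original query residue wherever the aligned subject has a gap, can be sketched as follows; the helper name and toy alignments are hypothetical, not the actual psiblast code.

```python
def seed_with_query(query_aln, subject_aln):
    """Build an MSA row for the PSSM from one pairwise alignment:
    wherever the aligned subject has a gap, substitute the query residue
    at that column, anchoring the model to the original query sequence.
    Columns where the query has a gap (insertions in the subject) are
    not PSSM columns and are dropped."""
    assert len(query_aln) == len(subject_aln)
    row = []
    for q, s in zip(query_aln, subject_aln):
        if q == '-':          # insertion relative to the query: skip
            continue
        row.append(q if s == '-' else s)
    return ''.join(row)

# query:   MKV-LT  (the '-' is an insertion in the subject)
# subject: MR-AVT  (the '-' is a gap filled from the query)
print(seed_with_query("MKV-LT", "MR-AVT"))  # → MRVVT
```

Filling gapped columns from the query, rather than leaving them empty, is what keeps the growing PSSM anchored to the original query and slightly raises target percent identity, as the abstract notes.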
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation originally proposed by Tomczak is presented that offsets the distortions caused by the trend. This article reports on numerical experiments that compare the predictive and classification performance of local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance are reasonably robust in the presence of trends. Tests based on other model performance metrics, such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
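The Input-Output logic behind such GDP-loss estimates, propagating a final-demand shock through the Leontief inverse so that indirect losses are captured along with direct ones, can be sketched as follows; the two-sector technical-coefficients matrix is illustrative, not REAcct data.

```python
import numpy as np

def total_output_loss(A, demand_loss):
    """Total (direct + indirect) output loss for a final-demand shock d:
    x = (I - A)^(-1) d, where A is the technical-coefficients matrix
    (A[i, j] = input from sector i per unit of output of sector j)."""
    n = A.shape[0]
    leontief_inverse = np.linalg.inv(np.eye(n) - A)
    return leontief_inverse @ demand_loss

# illustrative two-sector economy
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # final-demand loss by sector
loss = total_output_loss(A, d)
```

Because each sector buys inputs from the others, the total output loss exceeds the direct demand loss; that multiplier effect is why Input-Output accounting gives larger (and more realistic) disruption estimates than the direct losses alone.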
ERIC Educational Resources Information Center
Nikolaidou, Georgia N.
2012-01-01
This exploratory work describes and analyses the collaborative interactions that emerge during computer-based music composition in the primary school. The study draws on socio-cultural theories of learning, originated within Vygotsky's theoretical context, and proposes a new model, namely Computer-mediated Praxis and Logos under Synergy (ComPLuS).…
ERIC Educational Resources Information Center
Bidin, Zainol; Hashim, Mohd Farid Asraf Md; Sharif, Zakiyah; Shamsudin, Faridahwati Mohd
2011-01-01
Purpose: This study sought to investigate the factors that influence students' intention to use the Internet for academic purposes in Universiti Utara Malaysia. This study applies theory of planned behaviour (TPB) as the base model. The model employed the original variables from the theory i.e. attitudes, subjective norms, perceived behavioural…
Mathematical modeling of the aerodynamic characteristics in flight dynamics
NASA Technical Reports Server (NTRS)
Tobak, M.; Chapman, G. T.; Schiff, L. B.
1984-01-01
Basic concepts involved in the mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers are reviewed. The original formulation of an aerodynamic response in terms of nonlinear functionals is shown to be compatible with a derivation based on the use of nonlinear functional expansions. Extensions of the analysis through its natural connection with ideas from bifurcation theory are indicated.
Oirat Tones and Break Indices (O-ToBI): Intonational Structure of the Oirat Language
ERIC Educational Resources Information Center
Indjieva, Elena
2009-01-01
This doctoral dissertation describes intonation patterns in Spoken Oirat (SO) and proposes a model of the intonational structure of Oirat. The proposed prosodic model is represented in the framework of Oirat Tones and Break Indices (O-ToBI), which is based on the design principles of the original English ToBI (Beckman & Ayers 1994; Beckman…
McKemmish, Laura K; Reimers, Jeffrey R; McKenzie, Ross H; Mark, Alan E; Hush, Noel S
2009-08-01
Penrose and Hameroff have argued that conventional models of brain function based on neural networks alone cannot account for human consciousness, claiming that quantum-computation elements are also required. Specifically, in their Orchestrated Objective Reduction (Orch OR) model [R. Penrose and S. R. Hameroff, J. Conscious. Stud. 2, 99 (1995)], it is postulated that microtubules act as quantum processing units, with individual tubulin dimers forming the computational elements. This model requires that the tubulin is able to switch between alternative conformational states in a coherent manner, and that this process be rapid on the physiological time scale. Here, the biological feasibility of the Orch OR proposal is examined in light of recent experimental studies on microtubule assembly and dynamics. It is shown that the tubulins do not possess essential properties required for the Orch OR proposal, as originally proposed, to hold. Further, we also consider recent progress in the understanding of long-lived coherent motions in biological systems, a feature critical to Orch OR, and show that no reformulation of the proposal based on known physical paradigms could lead to quantum computing within microtubules. Hence, the Orch OR model is not a feasible explanation of the origin of consciousness.
Yamamoto, H; Kojima, Y; Okuyama, T; Abasolo, W P; Gril, J
2002-08-01
In this study, a basic model is introduced to describe the biomechanical properties of the wood from the viewpoint of the composite structure of its cell wall. First, the mechanical interaction between the cellulose microfibril (CMF) as a bundle framework and the lignin-hemicellulose as a matrix (MT) skeleton in the secondary wall is formulated based on "the two phase approximation." Thereafter, the origins of (1) tree growth stress, (2) shrinkage or swelling anisotropy of the wood, and (3) moisture dependency of the Young's modulus of wood along the grain were simulated using the newly introduced model. Through the model formulation: (1) the behavior of the cellulose microfibril (CMF) and the matrix substance (MT) during cell wall maturation was estimated; (2) the moisture reactivity of each cell wall constituent was investigated; and (3) a realistic model of the fine composite structure of the matured cell wall was proposed. Thus, it is expected that the fine structure and internal property of each cell wall constituent can be estimated through the analyses of the macroscopic behaviors of wood based on the two phase approximation.
[Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].
Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang
2016-02-01
Medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near-infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with second derivative (SD) and Norris derivative (ND) filtering was employed in spectral pretreatment. The spectral range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectral range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MC-UVE), genetic algorithm (GA), and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, respectively, with 250, 240, 250, and 70 spectral variables, respectively. According to the spectral variables filtered, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as training set and the other 40 samples as validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA; RMSEC and RMSEP: GA < MC-UVE < CARS
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff, was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of the selection of an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the selected calibration sets using PLS-DA, ranging between 1.06 and 2.91% (w/w).
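The core idea, calibrating a separate regression per class instead of one pooled model spanning all classes, can be sketched on synthetic data; ordinary least squares is used below as a simple stand-in for PLS, and all data, coefficients, and class structure are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ls(X, y):
    """Least-squares calibration with intercept (a stand-in for PLS)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def rmsep(coef, X, y):
    """Root mean square error of prediction for a fitted calibration."""
    Xb = np.column_stack([X, np.ones(len(X))])
    return float(np.sqrt(np.mean((Xb @ coef - y) ** 2)))

# two "oil classes" whose spectrum-to-analyte relationships differ
X1 = rng.normal(size=(60, 3)); y1 = X1 @ [1.0, 0.5, -0.2] + 2.0
X2 = rng.normal(size=(60, 3)); y2 = X2 @ [-0.8, 1.2, 0.4] - 1.0

pooled = fit_ls(np.vstack([X1, X2]), np.concatenate([y1, y2]))
per_class = fit_ls(X1, y1)

# predicting class-1 samples: class-specific calibration wins
err_pooled = rmsep(pooled, X1, y1)
err_class = rmsep(per_class, X1, y1)
```

When the classes genuinely follow different relationships, the pooled model is forced into a compromise fit, which is the effect the abstract quantifies via the improved RMSEP of the class-selected PLS calibrations.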
Characterizing hydrochemical properties of springs in Taiwan based on their geological origins.
Jang, Cheng-Shin; Chen, Jui-Sheng; Lin, Yun-Bin; Liu, Chen-Wuing
2012-01-01
This study was performed to characterize hydrochemical properties of springs based on their geological origins in Taiwan. Stepwise discriminant analysis (DA) was used to establish a linear classification model of springs using hydrochemical parameters. Two hydrochemical datasets, ion concentrations and relative proportions of equivalents per liter of major ions, were used to predict the geological origins of springs. The results reveal that DA using relative proportions of equivalents per liter of major ions yields a 95.6% correct assignment rate, which is superior to DA using ion concentrations. This indicates that relative proportions of equivalents of major hydrochemical parameters in spring water are more closely associated with the geological origins than are ion concentrations. Low percentages of Na(+) equivalents are common properties of springs emerging from acid-sulfate and neutral-sulfate igneous rock. Springs emerging from metamorphic rock show low percentages of Cl(-) equivalents and high percentages of HCO3(-) equivalents, and springs emerging from sedimentary rock exhibit high Cl(-)/SO4(2-) ratios.
Improving mixing efficiency of a polymer micromixer by use of a plastic shim divider
NASA Astrophysics Data System (ADS)
Li, Lei; Lee, L. James; Castro, Jose M.; Yi, Allen Y.
2010-03-01
In this paper, a critical modification to a polymer-based, affordable split-and-recombination static micromixer is described. To evaluate the improvement, both the original and the modified design were carefully investigated using an experimental setup and a numerical modeling approach. The structure of the micromixer was designed to take advantage of the process capabilities of both ultraprecision micromachining and microinjection molding. Specifically, the original and the modified design were numerically simulated using the commercial finite element method software ANSYS CFX to assist the redesign of the micromixers. The simulation results showed that both designs are capable of performing mixing, while the modified design has a much improved performance. Mixing experiments with two different fluids, carried out using both the original and the modified mixers, again showed significantly improved mixing uniformity for the latter. The measured mixing coefficient was 0.11 for the original design and 0.065 for the improved design. The developed manufacturing process, based on ultraprecision machining and microinjection molding for device fabrication, has the advantages of high dimensional precision, low cost, and manufacturing flexibility.
Prediction of stock markets by the evolutionary mix-game model
NASA Astrophysics Data System (ADS)
Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping
2008-06-01
This paper presents the efforts of using the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We improve the original mix-game model in three ways by giving agents the ability to evolve their strategies, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.
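A minimal minority-game-style round with the kind of strategy evolution described above (periodically replacing each agent's worst-scoring strategy with a random one) might look as follows; this is a generic sketch with invented parameters, not the authors' mix-game implementation.

```python
import random

random.seed(1)
M, N_AGENTS, N_STRAT, ROUNDS = 3, 11, 2, 200

def new_strategy():
    # a strategy maps each of the 2**M possible histories to an action
    return [random.randint(0, 1) for _ in range(2 ** M)]

agents = [{"strats": [new_strategy() for _ in range(N_STRAT)],
           "scores": [0] * N_STRAT} for _ in range(N_AGENTS)]
history = random.randint(0, 2 ** M - 1)

for t in range(ROUNDS):
    # each agent plays its currently best-scoring strategy
    choices = [a["strats"][a["scores"].index(max(a["scores"]))][history]
               for a in agents]
    minority = int(sum(choices) < N_AGENTS / 2)   # the minority action wins
    for a in agents:
        for i, s in enumerate(a["strats"]):       # virtual scoring of all strategies
            a["scores"][i] += 1 if s[history] == minority else -1
    if t % 50 == 49:                              # evolution step: cull the worst strategy
        for a in agents:
            worst = a["scores"].index(min(a["scores"]))
            a["strats"][worst] = new_strategy()
            a["scores"][worst] = 0
    history = ((history << 1) | minority) % (2 ** M)
```

For prediction, the winning (minority) side of each round is compared against the sign of the real price series; the evolution step keeps the strategy pool adaptive instead of frozen at initialization.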
Evaluation of speaker de-identification based on voice gender and age conversion
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich
2018-03-01
Two basic tasks are covered in this paper. The first consists in the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after the applied voice de-identification. The performed experiments confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. Original-speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing language independence of the proposed method.
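The GMM machinery underlying such a classifier can be illustrated with a tiny one-dimensional EM fit on synthetic data; this numpy-only sketch is illustrative and is not the paper's speaker-modeling pipeline, which would use multivariate mixtures over spectral features.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=200):
    """Two-component 1-D Gaussian mixture fitted by EM."""
    mu = np.array([x.min(), x.max()])        # crude initialization
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (
            sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, sigma = fit_gmm_1d(x)
```

In a speaker-detection setting, one such mixture is fitted per enrolled speaker, and an utterance is scored by its log-likelihood under each model, with a threshold handling the open-set (unknown speaker) case.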
Peters, Kenneth E.; Magoon, Leslie B.; Lampe, Carolyn; Scheirer, Allegra Hosford; Lillis, Paul G.; Gautier, Donald L.
2008-01-01
A calibrated numerical model depicts the geometry and three-dimensional (3-D) evolution of petroleum systems through time (4-D) in a 249 x 309 km (155 x 192 mi) area covering all of the San Joaquin Basin Province of California. Model input includes 3-D structural and stratigraphic data for key horizons and maps of unit thickness, lithology, paleobathymetry, heat flow, original total organic carbon, and original Rock-Eval pyrolysis hydrogen index for each source rock. The four principal petroleum source rocks in the basin are the Miocene Antelope shale of Graham and Williams (1985; hereafter referred to as Antelope shale), the Eocene Kreyenhagen Formation, the Eocene Tumey formation of Atwill (1935; hereafter referred to as Tumey formation), and the Cretaceous to Paleocene Moreno Formation. Due to limited Rock-Eval/total organic carbon data, the Tumey formation was modeled using constant values of original total organic carbon and original hydrogen index. Maps of original total organic carbon and original hydrogen index were created for the other three source rocks. The Antelope shale was modeled using Type IIS kerogen kinetics, whereas Type II kinetics were used for the other source rocks. Four-dimensional modeling and geologic field evidence indicate that maximum burial of the three principal Cenozoic source rocks occurred in latest Pliocene to Holocene time. For example, a 1-D extraction of burial history from the 4-D model in the Tejon depocenter shows that the bottom of the Antelope shale source rock began expulsion (10 percent transformation ratio) about 4.6 Ma and reached peak expulsion (50 percent transformation ratio) about 3.6 Ma. Except on the west flank of the basin, where steep dips in outcrop and seismic data indicate substantial uplift, little or no section has been eroded. 
Most petroleum migration occurred during late Cenozoic time in distinct stratigraphic intervals along east-west pathways from pods of active petroleum source rock in the Tejon and Buttonwillow depocenters to updip sandstone reservoirs. Satisfactory runs of the model required about 18 hours of computation time for each simulation using parallel processing on a Linux-based cluster.
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
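The link-graph transformation and recovery phases described above can be sketched as follows; this is a minimal illustration in which NetworkX label propagation stands in for the paper's distance-dynamics (Attractor) step, which is not implemented here.

```python
# Sketch of the three-phase L-Attractor pipeline from the abstract. Phase 2
# uses label propagation as a stand-in for the distance-dynamics model.
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

def overlapping_communities(G):
    # Phase 1: transform the original graph into its link (edge) graph,
    # so that each original edge becomes a node.
    L = nx.line_graph(G)
    # Phase 2 (stand-in): find disjoint communities of the link graph.
    link_comms = label_propagation_communities(L)
    # Phase 3: recover overlapping node communities -- a node belongs to
    # every community that contains one of its incident edges.
    node_comms = []
    for comm in link_comms:
        members = set()
        for u, v in comm:          # each link-graph node is an edge (u, v)
            members.update((u, v))
        node_comms.append(members)
    return node_comms

G = nx.karate_club_graph()
comms = overlapping_communities(G)
# A node may appear in more than one community (overlap).
```

Because each original node inherits membership from all of its edges, the recovered communities naturally overlap wherever a node's edges fall into different link-graph communities.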
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.
Estimation of Coriolis Force and Torque Acting on Ares-1
NASA Technical Reports Server (NTRS)
Mackey, Ryan M.; Kulikov, Igor K.; Smelyanskiy, Vadim; Luchinsky, Dmitry; Orr, Jeb
2011-01-01
A document describes work on the origin of Coriolis force and estimating Coriolis force and torque applied to the Ares-1 vehicle during its ascent, based on an internal ballistics model for a multi-segmented solid rocket booster (SRB).
Raman/LIBS Data Fusion via Two-Way Variational Autoencoders
NASA Astrophysics Data System (ADS)
Parente, M.; Gemp, I.
2018-04-01
We propose an original solution to extracting mineral abundances from Raman spectra by combining Raman data with LIBS using a novel deep learning model based on variational autoencoders and data fusion, which outperforms the current state of the art.
Life-Game, with Glass Beads and Molecules, on the Principles of the Origin of Life
ERIC Educational Resources Information Center
Eigen, Manfred; Haglund, Herman
1976-01-01
Discusses a theoretical model that uses a game as a base for studying processes of a stochastic nature, which involve chemical reactions, molecular systems, biological processes, cells, or people in a population. (MLH)
A New ’Availability-Payment’ Model for Pricing Performance-Based Logistics Contracts
2014-04-30
maintenance network connected to the inventory and Original Equipment Manufacturer (OEM) used in this paper. The input to the Petri net in Figure 2 is the...contract structures. The model developed in this paper uses an affine controller to drive a discrete event simulator (Petri net) that produces availability and cost measures. The model is used to explore the optimum availability assessment
[Chronic insomnia: treatment methods based on the current "3P" model of insomnia].
Poluektov, M G; Pchelina, P V
2015-01-01
The authors consider one of the popular models of the pathogenesis of chronic insomnia, the "3P" model. It explains the origin and course of insomnia on the basis of the interaction of three factors: predisposing, precipitating, and perpetuating. The role of each group of factors and its connection to the cerebral hyperarousal state is discussed. Different variants of cognitive-behavioral therapy and pharmacological treatment of chronic insomnia are described.
1988-06-01
became apparent. ESC originally planned to confect a dedicated model, i.e., one specifically designed to address Korea. However, it reconsidered the...s) and should not be construed as an official US Department of the Army position, policy, or decision unless so designated by other official...model based on object-oriented programming design techniques, and uses the process view of simulation to achieve its purpose. As a direct con
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050
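As a rough illustration of a kernel-based human model trained on bivalent (like/dislike) feedback, the sketch below fits an RBF-kernel SVM, which is linear in the induced feature space, to synthetic preference labels and uses its decision value as a surrogate fitness for ranking candidates. The data, the pseudo-user rule, and the use of scikit-learn's SVC are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: binary (like/dislike) user feedback trains an
# RBF-kernel SVM standing in for the "human model"; the decision value
# then ranks new candidates in an IEC-style search step.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 2))        # candidate designs already shown
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # pseudo-user "likes" one half

model = SVC(kernel="rbf", gamma="scale").fit(X, y)  # linear in feature space
candidates = rng.uniform(-1, 1, size=(10, 2))       # new offspring
scores = model.decision_function(candidates)        # surrogate fitness
best = candidates[np.argmax(scores)]                # candidate to promote
```

In a real IEC loop the pseudo-user rule would be replaced by accumulated human evaluations, and the surrogate scores would steer evolution control between user interactions.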
2013-01-01
We have previously demonstrated the unique migration behavior of Ge quantum dots (QDs) through Si3N4 layers during high-temperature oxidation. Penetration of these QDs into the underlying Si substrate however, leads to a completely different behavior: the Ge QDs ‘explode,’ regressing back almost to their origins as individual Ge nuclei as formed during the oxidation of the original nanopatterned SiGe structures used for their generation. A kinetics-based model is proposed to explain the anomalous migration behavior and morphology changes of the Ge QDs based on the Si flux generated during the oxidation of Si-containing layers. PMID:23618165
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
A Link between ORC-Origin Binding Mechanisms and Origin Activation Time Revealed in Budding Yeast
Hoggard, Timothy; Shor, Erika; Müller, Carolin A.; Nieduszynski, Conrad A.; Fox, Catherine A.
2013-01-01
Eukaryotic DNA replication origins are selected in G1-phase when the origin recognition complex (ORC) binds chromosomal positions and triggers molecular events culminating in the initiation of DNA replication (a.k.a. origin firing) during S-phase. Each chromosome uses multiple origins for its duplication, and each origin fires at a characteristic time during S-phase, creating a cell-type specific genome replication pattern relevant to differentiation and genome stability. It is unclear whether ORC-origin interactions are relevant to origin activation time. We applied a novel genome-wide strategy to classify origins in the model eukaryote Saccharomyces cerevisiae based on the types of molecular interactions used for ORC-origin binding. Specifically, origins were classified as DNA-dependent when the strength of ORC-origin binding in vivo could be explained by the affinity of ORC for origin DNA in vitro, and, conversely, as ‘chromatin-dependent’ when the ORC-DNA interaction in vitro was insufficient to explain the strength of ORC-origin binding in vivo. These two origin classes differed in terms of nucleosome architecture and dependence on origin-flanking sequences in plasmid replication assays, consistent with local features of chromatin promoting ORC binding at ‘chromatin-dependent’ origins. Finally, the ‘chromatin-dependent’ class was enriched for origins that fire early in S-phase, while the DNA-dependent class was enriched for later firing origins. Conversely, the latest firing origins showed a positive association with the ORC-origin DNA paradigm for normal levels of ORC binding, whereas the earliest firing origins did not. These data reveal a novel association between ORC-origin binding mechanisms and the regulation of origin activation time. PMID:24068963
TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases
1990-09-01
IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of...conventional and chemical attacks on sortie generation. In the first version of TSARINA [1 2], several key additions were made to the AIDA model so that (1...various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version
Mathematical model of solar radiation based on climatological data from NASA SSE
NASA Astrophysics Data System (ADS)
Obukhov, S. G.; Plotnikov, I. A.; Masolov, V. G.
2018-05-01
An original model of solar radiation arriving at the arbitrarily oriented surface has been developed. The peculiarity of the model is that it uses numerical values of the atmospheric transparency index and the surface albedo from the NASA SSE database as initial data. The model is developed in the MatLab/Simulink environment to predict the main characteristics of solar radiation for any geographical point in Russia, including those for territories with no regular actinometric observations.
Modeling of Relation between Transaction Network and Production Activity for Firms
NASA Astrophysics Data System (ADS)
Iino, T.; Iyetomi, H.
Bak et al. [Ricerche Economiche 47 (1993), 3] proposed a self-organizing model for the production activity of interacting firms to illustrate how large fluctuations can be triggered by small independent shocks in the aggregate economy. This paper extends the original transaction model, based on a regular network with layered order flow, to accommodate more realistic networks. Simulations of the generalized model are then carried out for various networks to examine the influence of changes in the network structure.
Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut
2015-01-01
The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-1970s. Originally, HC statistics were used in connection with goodness of fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
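For readers unfamiliar with the statistic, a minimal sketch of the classical HC computation (Donoho-Jin form, evaluated over the conventional i/n <= 1/2 range) is given below; the paper's local-level-based refinement is not reproduced, and the synthetic p-values are illustrative.

```python
# Minimal higher criticism (HC) statistic on sorted p-values:
# HC = max over the intermediate range of sqrt(n)*(i/n - p_(i)) /
#      sqrt(p_(i)*(1 - p_(i))).
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))      # restrict to the intermediate range
    return hc[:k].max()

rng = np.random.default_rng(1)
null_p = rng.uniform(size=1000)            # global null: uniform p-values
signal_p = null_p.copy()
signal_p[:20] = rng.uniform(0, 1e-4, 20)   # sparse signals: tiny p-values
# HC is markedly larger for the sparse-signal sample than under the null.
```

The restriction to an intermediate range of order statistics is exactly the point the abstract raises: the statistic's behavior is driven by how that range is chosen.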
NASA Astrophysics Data System (ADS)
Turrini, D.; Svetsov, V.; Consolmagno, G.; Sirono, S.; Pirani, S.
2016-12-01
The survival of asteroid Vesta during the violent early history of the Solar System is a pivotal constraint on theories of planetary formation. Particularly important from this perspective is the amount of olivine excavated from the vestan mantle by impacts, as this constrains both the interior structure of Vesta and the number of major impacts the asteroid suffered during its life. The NASA Dawn mission revealed that olivine is present on Vesta's surface in limited quantities, concentrated in small patches at a handful of sites not associated with the two large impact basins Rheasilvia and Veneneia. The first detections were interpreted as the result of the excavation of endogenous olivine, even if the depth at which the detected olivine originated was a matter of debate. Later works instead raised the possibility that the olivine had an exogenous origin, based on the geologic and spectral features of the deposits. In this work, we quantitatively explore the proposed scenario of an exogenous origin for the detected vestan olivine to investigate whether its presence on Vesta can be explained as a natural outcome of the collisional history of the asteroid over the last one or more billion years. To perform this study we took advantage of the impact contamination model previously developed to study the origin and amount of dark and hydrated materials observed by Dawn on Vesta, a model we updated by performing dedicated hydrocode impact simulations. We show that the exogenous delivery of olivine by the same impacts that shaped the vestan surface can offer a viable explanation for the currently identified olivine-rich sites without violating the constraint posed by the lack of global olivine signatures on Vesta.
Our results indicate that no mantle excavation is in principle required to explain the observations of the Dawn mission and support the idea that the vestan crust could be thicker than indicated by simple geochemical models based on the Howardite-Eucrite-Diogenite family of meteorites.
Constitutive modeling of glassy shape memory polymers
NASA Astrophysics Data System (ADS)
Khanolkar, Mahesh
The aim of this research is to develop constitutive models for non-linear materials. Here, issues related to developing a constitutive model for glassy shape memory polymers are addressed in detail. Shape memory polymers are novel materials that can be easily formed into complex shapes, retaining memory of their original shape even after undergoing large deformations. The temporary shape is stable, and return to the original shape is triggered by a suitable mechanism such as heating the polymer above a transition temperature. Glassy shape memory polymers are called glassy because the temporary shape is fixed by the formation of a glassy solid, while return to the original shape is due to the melting of this glassy phase. The constitutive model has been developed to capture the thermo-mechanical behavior of glassy shape memory polymers using elements of nonlinear mechanics and polymer physics. The key feature of this framework is that a body can exist stress free in numerous natural configurations, the underlying natural configuration of the body changing during the process, with the response of the body being elastic from these evolving natural configurations. The aim is to formulate a constitutive model for glassy shape memory polymers (GSMP) that takes into account the fact that the stress-strain response depends on the thermal expansion of the polymer. The model developed covers the original amorphous phase, the temporary glassy phase, and the transition between these phases. The glass transition process has been modeled using a framework that was developed recently for studying crystallization in polymers and is based on the theory of multiple natural configurations. Using the same framework, the melting of the glassy phase, which captures the return of the polymer to its original shape, is also modeled. The effect of nanoreinforcement on the response of glassy shape memory polymers is also studied and a model is developed.
In addition to modeling and solving boundary value problems for GSMPs, problems of importance for crystallizable shape memory polymers (CSMPs), specifically a shape memory cycle (torsion of a cylinder), are solved using the developed crystallizable shape memory polymer model. To solve complex boundary value problems in realistic geometries, a user material subroutine (UMAT) for the GSMP model has been developed for use in conjunction with the commercial finite element software ABAQUS. The accuracy of the UMAT has been verified by testing it against problems for which the results are known.
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
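A production-constrained gravity model of the kind the corrected IVF model reduces to can be sketched as follows; the zone data and the exponential deterrence function are illustrative assumptions, not values from the study.

```python
# Production-constrained gravity model for trip distribution:
# T_ij = P_i * A_j f(c_ij) / sum_k A_k f(c_ik), with f(c) = exp(-beta*c).
import numpy as np

def gravity_trips(productions, attractions, cost, beta=0.1):
    f = np.exp(-beta * cost)                       # deterrence function
    weights = attractions * f                      # A_j * f(c_ij)
    weights /= weights.sum(axis=1, keepdims=True)  # normalize per origin zone
    return productions[:, None] * weights          # T_ij

P = np.array([1000.0, 500.0])       # trips produced by each origin zone
A = np.array([300.0, 700.0])        # attractiveness of destination zones
c = np.array([[5.0, 15.0],          # travel cost between zones (minutes)
              [15.0, 5.0]])
T = gravity_trips(P, A, c)
# Row sums of T reproduce each origin zone's productions exactly.
```

The per-origin normalization is what makes the model production-constrained: each row of T sums to that zone's production regardless of the deterrence parameter.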
NASA Astrophysics Data System (ADS)
Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.
2013-12-01
To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: change in active layer depth on permafrost and phenology of tundra vegetation. The parameters of the modified BIOME-BGC were then optimized. The model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated with upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests. We will discuss how optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with results from the original BIOME-BGC with default ecophysiological parameters. This suggests that incorporating active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems.
Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena
2007-11-05
A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method for determining the actual ethanol concentration of different samples of wholemeal bread with known amounts of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those able to discriminate between samples of different ethanol concentrations. With the selected variables, a multivariate calibration model was then obtained by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, so that the number of original variables was further reduced.
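The selection-then-regression pipeline can be sketched as below, with scikit-learn's sequential feature selection standing in for stepwise discriminant analysis and synthetic spectra standing in for the NIR data; none of these choices come from the paper.

```python
# Sketch: select a few informative "wavelengths", fit a multiple linear
# regression on them, and check it with leave-one-out cross-validation.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 50))      # 30 samples x 50 synthetic wavelengths
# Pseudo-ethanol response driven by two wavelengths plus small noise.
ethanol = X[:, 5] * 0.8 + X[:, 20] * 0.5 + rng.normal(0, 0.05, 30)

sel = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5)
sel.fit(X, ethanol)
X_sel = sel.transform(X)           # reduced variable set

model = LinearRegression().fit(X_sel, ethanol)
loo_mse = -cross_val_score(model, X_sel, ethanol, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
```

Leave-one-out is used here only to estimate prediction error on the reduced variable set; the paper's "leave one out" procedure additionally prunes variables, which this sketch does not reproduce.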
Fabrication of custom-shaped grafts for cartilage regeneration.
Koo, Seungbum; Hargreaves, Brian A; Gold, Garry E; Dragoo, Jason L
2010-10-01
The aim of this study was to create a custom-shaped graft through 3D tissue shape reconstruction and rapid-prototype molding methods using MRI data, and to test the accuracy of the custom-shaped graft against the original anatomical defect. An iatrogenic defect on the distal femur was identified with a 1.5 Tesla MRI and its shape was reconstructed into a three-dimensional (3D) computer model by processing the 3D MRI data. First, the accuracy of the MRI-derived 3D model was tested against a laser-scan based 3D model of the defect. A custom-shaped polyurethane graft was fabricated from the laser-scan based 3D model by creating custom molds through computer aided design and rapid-prototyping methods. The polyurethane tissue was laser-scanned again to calculate the accuracy of this process compared to the original defect. The volumes of the defect models from MRI and laser-scan were 537 mm3 and 405 mm3, respectively, implying that the MRI model was 33% larger than the laser-scan model. The average (±SD) distance deviation of the exterior surface of the MRI model from the laser-scan model was 0.4 ± 0.4 mm. The custom-shaped tissue created from the molds was qualitatively very similar to the original shape of the defect. The volume of the custom-shaped cartilage tissue was 463 mm3, which was 15% larger than the laser-scan model. The average (±SD) distance deviation between the two models was 0.04 ± 0.19 mm. This investigation proves the concept that custom-shaped engineered grafts can be fabricated from standard sequence 3D MRI data with the use of CAD and rapid-prototyping technology. The accuracy of this technology may help solve the interfacial problem between native cartilage and graft, if the grafts are custom made for the specific defect.
The major source of error in fabricating a 3D custom-shaped cartilage graft appears to be the accuracy of the MRI data itself; however, the precision of the model is expected to increase with the use of advanced MR sequences at higher magnet strengths.
NASA Astrophysics Data System (ADS)
Yuniarto, Budi; Kurniawan, Robert
2017-03-01
PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses an approach based on variance or components; PLS-PM is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS path modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a back-propagation neural network approach. The result shows that the MBPLS-PM algorithm can be modified by using a back-propagation neural network to replace the iterative backward and forward steps that compute the matrix t and the matrix u. With this modification, the model parameters obtained are not significantly different from those obtained by the original MBPLS-PM algorithm.
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting the model to an original image remains a challenging task. The state of the art shows that optimization methods can resolve this problem, but applying them effectively poses its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.
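A bare-bones artificial bee colony loop of the kind adapted in the paper is sketched below on a toy objective (the AAM fitting error would take its place); the onlooker-bee phase and the paper's adaptive modification are omitted, and all parameter values are illustrative.

```python
# Minimal ABC-style minimizer: employed-bee neighborhood moves with greedy
# replacement, plus a scout phase that restarts stagnant food sources.
import numpy as np

def abc_minimize(f, dim, n_food=10, limit=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-5, 5, (n_food, dim))      # candidate solutions
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food)
    for _ in range(iters):
        for i in range(n_food):                    # employed-bee phase
            k = rng.integers(n_food - 1)
            k += k >= i                            # random partner != i
            j = rng.integers(dim)                  # perturb one dimension
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            fc = f(cand)
            if fc < fit[i]:                        # greedy replacement
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        worst = trials.argmax()                    # scout phase
        if trials[worst] > limit:
            foods[worst] = rng.uniform(-5, 5, dim)
            fit[worst] = f(foods[worst])
            trials[worst] = 0
    best = fit.argmin()
    return foods[best], fit[best]

x, fx = abc_minimize(lambda v: ((v - 1.0) ** 2).sum(), dim=2)
```

In the AAM setting, each food source would encode shape/appearance parameters and f would be the model-to-image fitting error; the paper's adaptation modifies how the search step sizes evolve, which this sketch leaves fixed.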
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The U.S. Department of Energy (DOE) Grand Junction Projects Office (GJPO) and its contractor, Rust Geotech, support the Kirtland Area Office by assisting Sandia National Laboratories/New Mexico (Sandia/NM) with remedial action, remedial design, and technical support of its Environmental Restoration Program. To aid in determining groundwater origins and flow paths, the GJPO was tasked to provide interpretation of groundwater geochemical data. The purpose of this investigation was to describe and analyze the groundwater geochemistry of the Sandia/NM Kirtland Air Force Base (KAFB). Interpretations of groundwater origins are made by using these data and the results of "mass balance" and "reaction path" modeling. Additional maps and plots were compiled to more fully comprehend the geochemical distributions. A more complete set of these data representations is provided in the appendices. Previous interpretations of groundwater-flow paths that were based on well-head, geologic, and geochemical data are presented in various reports and were used as the basis for developing the models presented in this investigation.
Uncovering urban human mobility from large scale taxi GPS data
NASA Astrophysics Data System (ADS)
Tang, Jinjun; Liu, Fang; Wang, Yinhai; Wang, Hua
2015-11-01
Taxi GPS trajectory data contain massive spatial and temporal information on urban human activity and mobility. Treating taxis as mobile sensors, the information derived from taxi trips benefits city and transportation planning. The original data used in this study were collected from more than 1100 taxi drivers in Harbin city. We first divide the city area into 400 different transportation districts and analyze the origin and destination distributions in the urban area on weekdays and weekends. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to cluster pick-up and drop-off locations. Furthermore, four spatial interaction models are calibrated and compared based on trajectories in the shopping center of Harbin city to study pick-up location searching behavior. By extracting taxi trips from GPS data, travel distance, time, and average speed in occupied and non-occupied status are then used to investigate human mobility. Finally, we use the observed OD matrix of the center area of Harbin city to model traffic distribution patterns based on the entropy-maximizing method, and the estimation performance verifies its effectiveness in a case study.
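The DBSCAN clustering step can be illustrated as follows on synthetic pick-up coordinates (the Harbin GPS data are not reproduced here); the eps and min_samples values are assumptions chosen for this toy geometry.

```python
# DBSCAN on synthetic pick-up coordinates: two dense hotspots plus sparse
# noise; -1 labels mark points DBSCAN leaves unclustered.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
hotspot_a = rng.normal([126.63, 45.75], 0.002, (100, 2))  # dense pick-up area
hotspot_b = rng.normal([126.68, 45.77], 0.002, (80, 2))
noise = rng.uniform([126.60, 45.72], [126.72, 45.80], (20, 2))
points = np.vstack([hotspot_a, hotspot_b, noise])

labels = DBSCAN(eps=0.005, min_samples=10).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

In practice eps would be set in metric units (after projecting longitude/latitude) and tuned to the spatial density of real pick-up events.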
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
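A toy single-channel analogue of DBS is sketched below: each pixel is toggled and the change is kept when it lowers the error filtered by a Gaussian stand-in for the HVS model. The paper's luminance/chrominance metric, swap moves, printer calibration, and dot-interaction model are all omitted.

```python
# One toggle-only DBS pass on a grayscale patch: accept a pixel flip when it
# reduces the HVS-filtered (here: Gaussian-filtered) squared error.
import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_pass(cont, halftone, sigma=1.0):
    cost = (gaussian_filter(halftone - cont, sigma) ** 2).sum()
    for idx in np.ndindex(halftone.shape):
        trial = halftone.copy()
        trial[idx] = 1 - trial[idx]                 # toggle one dot
        trial_cost = (gaussian_filter(trial - cont, sigma) ** 2).sum()
        if trial_cost < cost:                       # keep improving toggles
            halftone, cost = trial, trial_cost
    return halftone, cost

rng = np.random.default_rng(4)
cont = np.full((16, 16), 0.3)                       # flat 30% gray patch
ht = (rng.uniform(size=cont.shape) < 0.3).astype(float)  # initial halftone
ht2, cost2 = dbs_pass(cont, ht)
```

Real DBS implementations update the filtered error incrementally instead of refiltering the whole image per toggle, and alternate toggles with pairwise swaps; this sketch trades that efficiency for brevity.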
Characterisation of the n-colour printing process using the spot colour overprint model.
Deshpande, Kiran; Green, Phil; Pointer, Michael R
2014-12-29
This paper is aimed at reproducing solid spot colours using n-colour separation. A simplified numerical method, called the spot colour overprint (SCOP) model, was used for characterising the n-colour printing process. This model was originally developed for estimating spot colour overprints. It was extended to be used as a generic forward characterisation model for the n-colour printing process. An inverse printer model based on a look-up table was implemented to obtain the colour separation for the n-colour printing process. Finally, real-world spot colours were reproduced using 7-colour separation on a lithographic offset printing process. The colours printed with 7 inks were compared against the original spot colours to evaluate the accuracy. The results show good accuracy, with a mean CIEDE2000 value between the target colours and the printed colours of 2.06. The proposed method can be used successfully to reproduce spot colours, potentially saving significant time and cost in the printing and packaging industry.
[Lossless ECG compression algorithm with anti-electromagnetic interference].
Guan, Shu-An
2005-03-01
Based on a study of ECG signal features, a new lossless ECG compression algorithm is put forward here. We apply a second-order difference operation with anti-electromagnetic-interference properties to the original ECG signals and then compress the result with an escape-based coding model. Even under serious 50 Hz interference, the algorithm is still capable of achieving a high compression ratio.
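The differencing stage described above can be sketched as follows. This shows only the second-order difference operator and its exact inverse (which is what makes the scheme lossless); the escape-based entropy coder that follows it in the paper is not reproduced here.

```python
def second_order_difference(x):
    """d[n] = x[n] - 2*x[n-1] + x[n-2]; the first two samples pass through."""
    d = list(x[:2])
    for n in range(2, len(x)):
        d.append(x[n] - 2 * x[n - 1] + x[n - 2])
    return d

def reconstruct(d):
    """Exact inverse of the difference operator, so the round trip is lossless."""
    x = list(d[:2])
    for n in range(2, len(d)):
        x.append(d[n] + 2 * x[n - 1] - x[n - 2])
    return x
```

On smooth ECG-like samples the residuals have a much smaller dynamic range than the raw signal, which is what lets the entropy coder achieve a high compression ratio.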
ERIC Educational Resources Information Center
Anderson, Nancy; And Others
This is one of a set of five handbooks compiled by the Northwest Regional Educational Laboratory that describes the processes for planning and operating a total experience-based career education (EBCE) program. Processes and material are those developed by the original EBCE model--Community Experience in Career Education (CE)2. The area of…
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
Cognitive control predicts use of model-based reinforcement learning.
Otto, A Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D
2015-02-01
Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in utilization of goal-related contextual information--in the service of overcoming habitual, stimulus-driven responses--in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior.
Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo
2008-12-01
Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake in biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments, these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on the determination of local environmental availabilities of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key force influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in top layers was almost fully attributed to anthropogenic enrichment. Anthropogenic enrichments of Cu and Zn were furthermore well represented by cold 2M HNO3 extraction of site samples; for Pb, the extractions over-estimated the enrichments. Metal partitioning was measured, and the measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits within about one order of magnitude, though a large inter-sample variability was found for Pb.
Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría
2018-04-01
Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H3O+, NO+ and O2+ as precursor ions, and the results were subjected to chemometric treatments. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to country of origin and region (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% for cross validation and 97.30-100% for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all cases).
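The preliminary PCA step can be sketched with a plain SVD. This is a generic sketch, not the authors' chemometric pipeline: the supervised PLS-DA models would sit on top of a decomposition like this, and the variable names here are illustrative only.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred data onto its leading principal components."""
    Xc = X - X.mean(axis=0)                      # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores for exploratory plots
```

Plotting the first two score columns, coloured by region of origin, is the usual way such fingerprint data are first inspected before building discriminant models.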
A comparison of arcjet plume properties to model predictions
NASA Technical Reports Server (NTRS)
Cappelli, M. A.; Liebeskind, J. G.; Hanson, R. K.; Butler, G. W.; King, D. Q.
1993-01-01
This paper describes an experimental study of the plasma plume properties of a 1 kW class hydrogen arcjet thruster and a comparison of the measured temperature and velocity fields to model predictions. The experiments are based on laser-induced fluorescence excitation of the Balmer-alpha transition. The model is based on a single-fluid magnetohydrodynamic description of the flow originally developed to predict arcjet thruster performance. Excellent agreement between model predictions and experimental velocities is found, despite the complex nature of the flow. Measured and predicted exit-plane temperatures disagree by as much as 2000 K over a range of operating conditions. Possible sources for this discrepancy are discussed.
Users guide: The LaRC human-operator-simulator-based pilot model
NASA Technical Reports Server (NTRS)
Bogart, E. H.; Waller, M. C.
1985-01-01
A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling
NASA Astrophysics Data System (ADS)
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-02-01
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This helps with the accurate visualization and motion estimation of tumors on organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing methods based on both qualitative and quantitative evaluations.
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery under emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship established between the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experiments show that image registration between panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is also corrected efficiently.
Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
An approach to the origin of self-replicating system. I - Intermolecular interactions
NASA Technical Reports Server (NTRS)
Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.
1978-01-01
The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
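The division of labour described above — the cheap proxy fills the Jacobian, the expensive original model tests each parameter upgrade — can be sketched as a Gauss-Newton loop. This is a generic sketch, not PEST's actual implementation; `model` and `proxy` are hypothetical callables standing in for the expensive simulator and its analytical surrogate.

```python
import numpy as np

def calibrate(model, proxy, p0, obs, iters=20, eps=1e-6):
    """Gauss-Newton where the Jacobian comes from the cheap proxy model."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = obs - model(p)                        # residuals: expensive model
        J = np.empty((r.size, p.size))
        for j in range(p.size):                   # finite differences on proxy
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (proxy(p + dp) - proxy(p - dp)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        for _ in range(8):                        # test upgrade on the original
            if np.sum((obs - model(p + step)) ** 2) < np.sum(r ** 2):
                p = p + step
                break
            step = step / 2                       # halve the step if fit degrades
        else:
            break                                 # no acceptable upgrade found
    return p
```

Because only the step-acceptance tests hit the expensive model, the per-iteration cost drops from (2n + a few) expensive runs to a handful, and a slightly biased proxy Jacobian still yields a usable search direction.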
NASA Astrophysics Data System (ADS)
Zbiciak, M.; Grabowik, C.; Janik, W.
2015-11-01
Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine design tasks are highly susceptible to automation. Design automation is usually carried out with API tools which allow building original software to support different engineering activities. This paper presents original software developed to automate engineering tasks at the stage of a product's geometrical shape design. The software works exclusively in the Siemens NX CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio using the .NET technology and the NX SNAP library. The software allows designing and modelling of spur and helicoidal involute gears, and it can also estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison with those drawn in the standard tools of specialised CAD systems. This is because an involute curve in CAD systems is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear and the base circle, respectively. In the Generator module the involute curve is drawn through 11 points located on and above the base and addendum circles, so the 3D gear wheel models are highly accurate. The Generator module also makes the modelling process very rapid: gear wheel modelling time is reduced to several seconds. During the research, the differences between the standard 3-point and the 11-point involutes were analysed; the results and conclusions drawn from this analysis are presented in detail.
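Sampling an involute at 11 points between the base and tip circles reduces to the parametric equations x = r_b(cos t + t sin t), y = r_b(sin t - t cos t). The sketch below shows only that geometric step (tooth thickness, fillets and the SNAP modelling calls are outside its scope); the radii are illustrative values.

```python
import math

def involute_points(base_radius, tip_radius, n_points=11):
    """Sample the circle involute from the base circle out to the tip circle."""
    # roll angle at which the involute reaches radius r: t = sqrt((r/rb)^2 - 1)
    t_max = math.sqrt((tip_radius / base_radius) ** 2 - 1.0)
    pts = []
    for i in range(n_points):
        t = t_max * i / (n_points - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

A spline fitted through these 11 points tracks the true involute far more closely than one forced through only the three radii a typical CAD sketch uses.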
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated the relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale; these tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects from spectral ratios of the observed seismograms, and we estimated empirical relationships among depths of burial and yields based on theoretical source models.
Tweets and Facebook Posts, the Novelty Techniques in the Creation of Origin-Destination Models
NASA Astrophysics Data System (ADS)
Malema, H. K.; Musakwa, W.
2016-06-01
Social media and big data have emerged as useful sources of information for planning purposes, particularly transportation planning and trip-distribution studies. Cities in developing countries such as South Africa often struggle with outdated, unreliable and cumbersome techniques such as traffic counts and household surveys to conduct origin and destination studies. The emergence of ubiquitous crowd-sourced data, big data, social media and geolocation-based services has shown huge potential in providing useful information for origin and destination studies. Perhaps such information can be utilised to determine the origins and destinations of commuters using the Gautrain, a high-speed railway in Gauteng province, South Africa. To date little is known about the origins and destinations of Gautrain commuters. Accordingly, this study assesses the viability of using geolocation-based services, namely Facebook and Twitter, to map out the network movements of Gautrain commuters. Exploratory Spatial Data Analysis (ESDA), Echo-social and ArcGIS software were used to extract social media data, i.e. tweets and Facebook posts, and to visualize the concentration of Gautrain commuters. The results demonstrate that big data and geolocation-based services have significant potential to predict the network movement patterns of commuters, and this information can thus be used to inform and improve transportation planning. Nevertheless, the use of crowd-sourced data and big data raises privacy concerns that still need to be addressed.
Gray matter correlates of creative potential: A latent variable voxel-based morphometry study
Jauk, Emanuel; Neubauer, Aljoscha C.; Dunst, Beate; Fink, Andreas; Benedek, Mathias
2015-01-01
There is increasing research interest in the structural and functional brain correlates underlying creative potential. Recent investigations found that interindividual differences in creative potential relate to volumetric differences in brain regions belonging to the default mode network, such as the precuneus. Yet, the complex interplay between creative potential, intelligence, and personality traits and their respective neural bases is still under debate. We investigated regional gray matter volume (rGMV) differences that can be associated with creative potential in a heterogeneous sample of N = 135 individuals using voxel-based morphometry (VBM). By means of latent variable modeling and consideration of recent psychometric advancements in creativity research, we sought to disentangle the effects of ideational originality and fluency as two independent indicators of creative potential. Intelligence and openness to experience were considered as common covariates of creative potential. The results confirmed and extended previous research: rGMV in the precuneus was associated with ideational originality, but not with ideational fluency. In addition, we found ideational originality to be correlated with rGMV in the caudate nucleus. The results indicate that the ability to produce original ideas is tied to default-mode as well as dopaminergic structures. These structural brain correlates of ideational originality were apparent throughout the whole range of intellectual ability and thus not moderated by intelligence. In contrast, structural correlates of ideational fluency, a quantitative marker of creative potential, were observed only in lower intelligent individuals in the cuneus/lingual gyrus. PMID:25676914
Effects of urban microcellular environments on ray-tracing-based coverage predictions.
Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing
2016-09-01
The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach of studying wave-propagation characteristics. Under urban microcellular environments, the RT method highly depends on detailed environmental information. The aim of this paper is to provide help in selecting the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After familiarization with the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometry information (building layout) on RT-based coverage prediction, two different artificial erroneous maps are generated based on the original digital map, and systematic analysis is performed by comparing the predictions with the erroneous maps and measurements or the predictions with the original digital map. To make the conclusion more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the electrical parameters' effect on the accuracy of the predicted results of the RT model, the dielectric constant and conductivity of building materials are set with different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.
Maximum entropy principle for transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilich, F.; Da Silva, R.
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
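For contrast with the constraint-free dependence formulation, the standard constrained maximum-entropy solution is the doubly constrained gravity model, which is solved by iterative proportional fitting. The sketch below is that textbook baseline, not the dependence model of this work.

```python
import numpy as np

def gravity_trips(origins, destinations, cost, beta=0.3, iters=200):
    """Doubly constrained gravity model: T_ij = a_i * b_j * exp(-beta * c_ij)."""
    f = np.exp(-beta * cost)          # deterrence function of travel cost
    a = np.ones(len(origins))
    b = np.ones(len(destinations))
    for _ in range(iters):            # balance row and column totals in turn
        a = origins / (f @ b)
        b = destinations / (a @ f)
    return np.outer(a, b) * f
```

The balancing factors a_i and b_j play the role of the Lagrange multipliers on the trip-end constraints; in the dependence formulation that information is carried instead by the estimated dependence coefficients.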
NASA Astrophysics Data System (ADS)
Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.
2014-09-01
Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands due to mixed pixels when a medium-spatial-resolution sensor is considered, we propose to base the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables of original bands, fraction imagery, Normalized Difference Vegetation Index and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (shortwave infrared 5, 2.295-2.365 μm) and the green vegetation fraction (from LSMA) was the best AGB predictor (adjusted R² = 0.632; the cross-validated root-mean-squared error of estimated AGB was 13.3 Mg ha⁻¹, or 37.7%), outperforming other combinations of the above-cited independent variables. Results indicated that using ASTER fraction images in regression models improves AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information by forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation or monitoring of management practices.
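The final predictor is a multiple linear regression, which reduces to ordinary least squares on a design matrix of the selected variables. This is a generic sketch with synthetic predictors, not the actual ASTER bands or field data.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_ols(coef, X):
    """Apply the fitted coefficients to new predictor rows."""
    return np.column_stack([np.ones(X.shape[0]), X]) @ coef
```

In a stepwise setting, candidate predictors (bands, fractions, indices) are added or dropped one at a time, refitting with the same least-squares step and keeping the combination with the best cross-validated error.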
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano
2017-09-01
This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales, are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.
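The Monte Carlo structure of such a map — sample a vent location and a flow scale, mark the cells the flow reaches, and average over simulations — can be sketched in miniature. Everything here is a toy assumption: the Gaussian vent prior, the lognormal runout distribution and the circular footprint stand in for the paper's vent-opening maps, box-model invasion areas and Cox-process temporal term.

```python
import numpy as np

def invasion_probability(grid_x, grid_y, n_sims=2000, seed=0):
    """Monte Carlo invasion map: fraction of simulated flows reaching each cell."""
    rng = np.random.default_rng(seed)
    vx = rng.normal(0.0, 2.0, n_sims)            # vent easting (toy prior)
    vy = rng.normal(0.0, 2.0, n_sims)            # vent northing (toy prior)
    runout = rng.lognormal(1.0, 0.5, n_sims)     # flow runout (toy scale law)
    X, Y = np.meshgrid(grid_x, grid_y)
    hits = np.zeros(X.shape)
    for x0, y0, r in zip(vx, vy, runout):        # circular invasion footprint
        hits += np.hypot(X - x0, Y - y0) <= r
    return hits / n_sims
```

Conditional maps follow by filtering the samples (e.g. keeping only vents inside one caldera sector, or runouts in one scale class) before averaging.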
Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.
Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea
2017-08-18
Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to bias outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, which include explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which the signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to the 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. These biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR remained. The new method appears to remove the bias introduced by the absence of blood-volume correction in regular graphical analysis and can be considered for clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
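The underestimation described above can be reproduced with a toy reference-tissue Logan analysis: the DVR is the late-time slope of the running integral of the tissue curve (normalized by the tissue curve) against the corresponding reference-tissue quantity. The curves, rate constants, and blood-volume fraction below are synthetic illustrations, not real tracer data.

```python
# Sketch of reference-tissue Logan graphical analysis and the bias from
# uncorrected blood signal in the measured tissue curve.
import math

def logan_slope(ct, cref, dt):
    # running trapezoidal integrals of tissue and reference curves
    int_t, int_r, xs, ys = 0.0, 0.0, [], []
    for i in range(1, len(ct)):
        int_t += 0.5 * (ct[i] + ct[i - 1]) * dt
        int_r += 0.5 * (cref[i] + cref[i - 1]) * dt
        xs.append(int_r / ct[i])
        ys.append(int_t / ct[i])
    # least-squares slope over the late, linear part of the plot
    xs, ys = xs[len(xs) // 2:], ys[len(ys) // 2:]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

dt, n, DVR, vB = 0.5, 120, 2.0, 0.05
t = [i * dt for i in range(n)]
cref = [math.exp(-0.05 * ti) - math.exp(-0.5 * ti) for ti in t]
ct_true = [DVR * c for c in cref]                     # idealized tissue curve
cblood = [math.exp(-0.3 * ti) for ti in t]            # whole-blood curve
ct_meas = [(1 - vB) * c + vB * b for c, b in zip(ct_true, cblood)]

biased = logan_slope(ct_meas, cref, dt)               # no blood-volume correction
corrected = logan_slope(
    [(m - vB * b) / (1 - vB) for m, b in zip(ct_meas, cblood)], cref, dt)
print(round(biased, 3), round(corrected, 3))          # corrected ≈ 2.0
```

With the blood contribution subtracted, the slope recovers the true DVR; without it, the slope is biased low, mirroring the [11C]raclopride result.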
ERIC Educational Resources Information Center
Georgia Univ., Athens. Coll. of Family and Consumer Sciences.
This outreach project is based on the validated Developmental Therapy-Developmental Teaching model originally designed for young children with severe emotional/behavioral problems and their families. It is an approach that emphasizes the teaching skills that foster a child's social-emotional-behavioral competence. The model has proven effective in…
Examination of the wind speed limit function in the Rothermel surface fire spread model
Patricia L. Andrews; Miguel G. Cruz; Richard C. Rothermel
2013-01-01
The Rothermel surface fire spread model includes a wind speed limit, above which predicted rate of spread is constant. Complete derivation of the wind limit as a function of reaction intensity is given, along with an alternate result based on a changed assumption. Evidence indicates that both the original and the revised wind limits are too restrictive. Wind limit is...
Code of Federal Regulations, 2010 CFR
2010-10-01
... originally manufactured for importation into and sale in the United States and of the same model year as the model for which petition is made, and is capable of being readily modified to conform to all applicable... standards, shall pay a fee based upon the direct and indirect costs of processing and acting upon such...
Translations From Kommunist, Number 7, May 1978
1978-07-12
greater attention to this problem. In recent years the production of several tens of good engine models has been organized. They include, for...internal contra- dictions as the models of his bourgeois opponents. The radical differences between Marxism and neo- Ricardianism are manifested... production relations under capitalism which conceal its nature, and the "secret" of capitalist exploitation. The origin of surplus value based on
The numerical modelling of MHD astrophysical flows with chemistry
NASA Astrophysics Data System (ADS)
Kulikov, I.; Chernykh, I.; Protasov, V.
2017-10-01
This paper presents a new code for the numerical simulation of magnetohydrodynamic astrophysical flows with chemical reactions. At the heart of the code is a new, original low-dissipation numerical method based on a combination of an operator-splitting approach and a piecewise-parabolic method on a local stencil. The chemodynamics of hydrogen during the turbulent formation of molecular clouds is modeled.
Bartolucci, Chiara; Lombardo, Giovanni Pietro
2017-01-01
This article examines research on hypnosis and suggestion, starting with the nineteenth-century model proposed by Enrico Morselli (1852-1929), an illustrious Italian psychiatrist and psychologist. Morselli conducted an original psychophysiological analysis of hypnosis, distancing his work from the neuropathological conceptions of the time and proposing a model based on a naturalistic approach to investigating mental processes. The issues investigated by Morselli, including the definition of hypnosis and the analysis of specific mental processes such as attention and memory, are reviewed in light of modern research. From the viewpoint of modern neuroscientific concepts, some problems that originated in the nineteenth century still appear to be present and pose still-open questions.
Evaluation of Existing Aircraft Operator Data Bases
1990-08-01
the Falcon F20, the CASA Model 212, the Lockheed LI011, and the Convair- Allison Model 580. A summary of the results of the inventory for each of the...was obtained from Allison , the engine manufacturer who did the conversions, and retained their original Model 340 and 440 serial numbers. Lundkvist... Berks RG13 4FJ England Phone: (06) 356-6929 Failure Analysis Associates 149 Commonwealth Drive Menlow Park, CA 94025 Phone: (415) 326-9400 Fax: (415
Fabian Uzoh; William W. Oliver
2006-01-01
A height increment model is developed and evaluated for individual trees of ponderosa pine throughout the species' range in the western United States. The data set used in this study came from long-term permanent research plots in even-aged, pure stands, both planted and of natural origin. The database consists of six levels-of-growing-stock studies supplemented by initial...
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General Expression Nonlinear AutoRegressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is used to define the linear terms in the GNAR model. This is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are computed to assess how both the newly introduced and the previously included variables improve the model, and these determine which variables to retain or eliminate. The optimal model is then obtained through measures of data-fitting quality or significance tests. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.
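The core idea — start from a simple model and greedily add the candidate term that most improves the fit, stopping when the improvement is negligible — can be sketched with a forward, stagewise selection of autoregressive lags. This is a simplified stand-in for the paper's multiple stepwise regression with significance tests; the 1% improvement threshold and the stagewise (one-term-at-a-time) refit are assumptions for brevity.

```python
# Greedy forward selection of AR lag terms on a synthetic AR(2) series:
# repeatedly add the lag that most reduces the residual sum of squares.
import random

random.seed(1)
# Synthetic AR(2) series: x[t] = 0.6*x[t-1] - 0.3*x[t-2] + noise
x = [0.0, 0.0]
for _ in range(500):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + random.gauss(0, 1))

max_lag = 6
y = x[max_lag:]
lags = {k: x[max_lag - k:-k] for k in range(1, max_lag + 1)}

def rss_after_adding(resid, z):
    """RSS, slope and means after a univariate LS fit of resid on z."""
    mz = sum(z) / len(z); mr = sum(resid) / len(resid)
    num = sum((zi - mz) * (ri - mr) for zi, ri in zip(z, resid))
    den = sum((zi - mz) ** 2 for zi in z)
    b = num / den
    rss = sum((ri - mr - b * (zi - mz)) ** 2 for zi, ri in zip(z, resid))
    return rss, b, mz, mr

resid = [yi - sum(y) / len(y) for yi in y]   # start from the mean model
selected = []
while True:
    rss0 = sum(r * r for r in resid)
    best = min((rss_after_adding(resid, lags[k]) + (k,)
                for k in lags if k not in selected), key=lambda t: t[0])
    rss1, b, mz, mr, k = best
    if (rss0 - rss1) / rss0 < 0.01:          # stop: negligible improvement
        break
    selected.append(k)
    resid = [ri - mr - b * (zi - mz) for zi, ri in zip(lags[k], resid)]
print(sorted(selected))                      # should include the true lags 1 and 2
```

In the GNAR setting, the candidate pool would also contain nonlinear terms (products and powers of lagged values), and an F- or t-statistic would replace the crude relative-RSS threshold used here.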
Determination of origin and intended use of plutonium metal using nuclear forensic techniques.
Rim, Jung H; Kuhn, Kevin J; Tandon, Lav; Xu, Ning; Porterfield, Donivan R; Worley, Christopher G; Thomas, Mariam R; Spencer, Khalil J; Stanley, Floyd E; Lujan, Elmer J; Garduno, Katherine; Trellue, Holly R
2017-04-01
Nuclear forensics techniques, including micro-XRF, gamma spectrometry, trace elemental analysis and isotopic/chronometric characterization, were used to interrogate two potentially related plutonium metal foils. These samples were submitted for analysis with only limited production information, and a comprehensive suite of forensic analyses was performed. The resulting analytical data were paired with available reactor models and historical information to provide insight into the materials' properties, origins, and likely intended uses. Both were super-grade plutonium, containing less than 3% 240Pu, and age-dating suggested that the most recent chemical purification occurred in 1948 and 1955 for the respective metals. Additional consideration of reactor modeling feedback and trace elemental observables indicates a plausible U.S. reactor origin associated with the Hanford site production efforts. Based on this investigation, the most likely intended use for these plutonium foils was as 239Pu fission foil targets for physics experiments, such as cross-section measurements. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Glatzmaier, G. A.
2010-12-01
There has been considerable interest during the past few years in the banded zonal winds and global magnetic fields of Saturn (and Jupiter). Questions regarding the depth to which the intense winds extend below the surface and the role they play in maintaining the dynamo continue to be debated. The computer models employed to address these questions fall into two main classes: general circulation models (GCMs) based on hydrostatic shallow-water assumptions from the atmospheric and ocean modeling communities, and global non-hydrostatic deep convection models from the geodynamo and solar dynamo communities. The latter class can be further divided into Boussinesq models, which do not account for density stratification, and anelastic models, which do. Recent efforts to convert GCMs into deep-circulation anelastic models have succeeded in producing fluid flows similar to those obtained from the original deep convection anelastic models. We describe results from one of the original anelastic convective dynamo simulations and compare them to a recent anelastic dynamo benchmark for gas giant planets. This benchmark is based on a polytropic reference state that spans five density scale heights, with a radius and rotation rate similar to those of our solar system's gas giants. The resulting magnetic Reynolds number is about 3000. Better spatial resolution will be required to produce more realistic predictions that capture the effects of both the density and electrical conductivity stratifications and include enough of the turbulent kinetic energy spectrum. Important additional physics may also be needed in the models. However, the basic models used in all simulation studies of the global dynamics of giant planets will hopefully first be validated against these simpler benchmarks.
What drives patient mobility across Italian regions? Evidence from hospital discharge data.
Balia, Silvia; Brau, Rinaldo; Marrocu, Emanuela
2014-01-01
This chapter examines patient mobility across Italian regions using data on hospital discharges that occurred in 2008. The econometric analysis is based on Origin-Destination (OD) flow data. Since patient mobility is a central phenomenon in contexts of hospital competition based on quality and driven by patient choice, as is the case in Italy, understanding its determinants is crucial. What makes the Italian case particularly interesting is the decentralization of the National Health Service, which yields large regional variation in patient flows in favor of Centre-Northern regions, typically 'net exporters' of hospital treatments. We present results from gravity models estimated using count-data estimators, for total flows and for specific types of flows (ordinary admissions, surgical DRGs and medical DRGs). We model cross-section dependence by including features other than geographical distance for OD pairs, such as past migration flows and the share of surgical DRGs. Most of the explanatory variables exhibit the expected effect, with distance and GDP per capita at origin showing a negative impact on patient outflows. Past migrations and indicators of performance at destination are effective determinants of patient mobility. Moreover, we find evidence of regional externalities due to spatial proximity effects at both origin and destination.
Surrogate modeling of deformable joint contact using artificial neural networks.
Eskinazi, Ilan; Fregly, Benjamin J
2015-09-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
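The surrogate strategy above — sample an expensive model, then train a small feed-forward network to reproduce its outputs — can be sketched in miniature. The toy "contact force" law, the single input, and the network size below are illustrative assumptions; the paper's surrogate has multiple inputs, one ANN per force/torque component, and tens of thousands of samples.

```python
# Minimal surrogate sketch: a tiny one-hidden-layer tanh network is
# trained by stochastic gradient descent to mimic a (notionally slow)
# contact-force evaluation over the sampled input domain.
import math, random

random.seed(0)

def slow_contact_force(d):
    """Stand-in for an expensive elastic-foundation evaluation."""
    return 0.0 if d <= 0 else 100.0 * d ** 1.5   # zero when out of contact

# training samples over the input domain of interest (penetration depth)
xs = [i / 100 for i in range(-50, 101)]
ys = [slow_contact_force(x) for x in xs]
ymax = max(ys)
ys = [y / ymax for y in ys]                      # scale targets to ~[0, 1]

H = 8                                            # hidden units (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2, lr = 0.0, 0.02

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, b2 + sum(w2[j] * h[j] for j in range(H))

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

before = mse()
for epoch in range(500):
    for x, y in zip(xs, ys):
        h, out = forward(x)
        err = out - y
        for j in range(H):
            g = err * w2[j] * (1 - h[j] ** 2)    # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * err
after = mse()
print(round(before, 4), round(after, 4))         # training reduces the error
```

Once trained, `forward` replaces `slow_contact_force` inside a simulation loop; the speedup comes from the network being a few multiply-adds rather than a full contact evaluation.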
NASA Astrophysics Data System (ADS)
Wang, Huan-huan; Wang, Jian; Liu, Feng; Cao, Hai-juan; Wang, Xiang-jun
2014-12-01
A test environment was established to obtain experimental data for verifying a positioning model derived previously from the pinhole imaging model and the theory of binocular stereo vision measurement. The model requires that the optical axes of the two cameras meet at one point, which is defined as the origin of the world coordinate system, thus simplifying and optimizing the positioning model. The experimental data are processed, and tables and charts compare object positions measured with DGPS (with a measurement accuracy of 10 cm), taken as the reference, against those obtained with the positioning model. Error sources of the visual measurement model are analyzed, and the effects of camera- and system-parameter errors on the accuracy of the positioning model are examined using error transfer and synthesis rules. It is concluded that the measurement accuracy of surface surveillance based on binocular stereo vision is better than that of surface movement radars, ADS-B (Automatic Dependent Surveillance-Broadcast) and MLAT (multilateration).
Li, Zhong; Liu, Ming-de; Ji, Shou-xiang
2016-03-01
Fourier transform infrared spectroscopy (FTIR) was used to establish a rapid method for identifying the geographic origin of Chinese wolfberry. In this paper, 45 wolfberry samples from different locations in Qinghai Province were examined by FTIR. The original FTIR data matrix was pretreated with common preprocessing methods and with the wavelet transform. Compared with window-shifting smoothing, standard normal variate correction, and multiplicative scatter correction, the wavelet transform proved an effective method for preprocessing the spectral data. Before modeling with artificial neural networks, the spectral variables were compressed by the wavelet transform to speed up network training; the relevant parameters of the neural network model are also discussed in detail. The survey shows that even when the infrared spectral data are compressed to 1/8 of their original size, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables were used as inputs for a back-propagation artificial neural network (BP-ANN) model, with the geographic origin of the wolfberry as the output. A three-layer network built with the MATLAB neural network toolbox (error back-propagation) was used to predict 10 unknown samples. The hidden layer has 5 neurons and the output layer has 1; the hidden-layer transfer function is tansig and the output-layer transfer function is purelin. The training function is trainl, and the learning function for weights and thresholds is learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% was achieved. It can be concluded that the method is well suited for rapid discrimination of the producing areas of Chinese wolfberry.
Infrared spectral analysis combined with artificial neural networks is shown to be a reliable new method for identifying the geographic origin of Traditional Chinese Medicine materials.
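The compression step — three levels of a discrete wavelet transform, keeping only the approximation coefficients, shrinks each spectrum to 1/8 of its length — can be sketched with a Haar transform. The synthetic "spectrum" below is an illustrative stand-in for FTIR data; the study does not specify which wavelet family was used.

```python
# Sketch of wavelet compression: three Haar levels keep only the
# approximation coefficients, giving 1/8 of the original length while
# preserving the overall shape of a smooth spectrum.
import math

def haar_approx(signal):
    """One Haar level: pairwise (scaled) averages as approximation coeffs."""
    return [(signal[i] + signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal), 2)]

n = 1024
spectrum = [math.exp(-((i - 400) / 120) ** 2) + 0.5 * math.exp(-((i - 700) / 60) ** 2)
            for i in range(n)]

approx = spectrum
for _ in range(3):                 # 3 levels -> length n / 8
    approx = haar_approx(approx)
print(len(spectrum), len(approx))  # 1024 128

# crude reconstruction: undo the sqrt(2) scaling and hold each value over
# its block of 8 samples, to check how much shape is lost
recon = [a / (math.sqrt(2) ** 3) for a in approx for _ in range(8)]
err = max(abs(r - s) for r, s in zip(recon, spectrum))
print(round(err, 3))               # small for a smooth spectrum
```

The 128 approximation coefficients, not the raw 1024-point spectrum, would then be fed to the BP-ANN, which is what speeds up training without losing analytical accuracy.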
Archaeal “Dark Matter” and the Origin of Eukaryotes
Williams, Tom A.; Embley, T. Martin
2014-01-01
Current hypotheses about the history of cellular life are mainly based on analyses of cultivated organisms, but these represent only a small fraction of extant biodiversity. The sequencing of new environmental lineages therefore provides an opportunity to test, revise, or reject existing ideas about the tree of life and the origin of eukaryotes. According to the textbook three domains hypothesis, the eukaryotes emerge as the sister group to a monophyletic Archaea. However, recent analyses incorporating better phylogenetic models and an improved sampling of the archaeal domain have generally supported the competing eocyte hypothesis, in which core genes of eukaryotic cells originated from within the Archaea, with important implications for eukaryogenesis. Given this trend, it was surprising that a recent analysis incorporating new genomes from uncultivated Archaea recovered a strongly supported three domains tree. Here, we show that this result was due in part to the use of a poorly fitting phylogenetic model and also to the inclusion by an automated pipeline of genes of putative bacterial origin rather than nucleocytosolic versions for some of the eukaryotes analyzed. When these issues were resolved, analyses including the new archaeal lineages placed core eukaryotic genes within the Archaea. These results are consistent with a number of recent studies in which improved archaeal sampling and better phylogenetic models agree in supporting the eocyte tree over the three domains hypothesis. PMID:24532674
Review and evaluation of models that produce trip tables from ground counts : interim report.
DOT National Transportation Integrated Search
1996-01-01
This research effort was motivated by the desires of planning agencies to seek alternative methods of deriving current or base year Origin-Destination (O-D) trip tables without adopting conventional O-D surveys that are expensive, time consuming and ...
A Single-Display Groupware Collaborative Language Laboratory
ERIC Educational Resources Information Center
Calderón, Juan Felipe; Nussbaum, Miguel; Carmach, Ignacio; Díaz, Juan Jaime; Villalta, Marco
2016-01-01
Language learning tools have evolved to take into consideration new teaching models of collaboration and communication. While second language acquisition tasks have been taken online, the traditional language laboratory has remained unchanged. By continuing to follow its original configuration based on individual work, the language laboratory…
Kolostova, Katarina; Zhang, Yong; Hoffman, Robert M; Bobek, Vladimir
2014-09-01
In the present study, we demonstrate an animal model and a recently introduced size-based exclusion method for the isolation of circulating tumor cells (CTCs). The methodology enables subsequent in vitro CTC culture and characterization. The human lung cancer cell line H460, expressing red fluorescent protein (H460-RFP), was orthotopically implanted in nude mice. CTCs were isolated by a size-based filtration method and successfully cultured in vitro on the separating membrane (MetaCell®), then analyzed by time-lapse imaging. The cultured CTCs were heterogeneous in size and morphology even though they originated from a single tumor. The outer CTC membranes generally showed blebbing. Abnormal mitosis resulting in three daughter cells was frequently observed. The expression of RFP confirmed that the CTCs originated from the lung tumor. These readily isolatable, identifiable and cultivable CTCs can be used to characterize individual patients' cancers and to screen for more effective treatments.
Local nature of impurity induced spin-orbit torques
NASA Astrophysics Data System (ADS)
Nikolaev, Sergey; Kalitsov, Alan; Chshiev, Mairbec; Mryasov, Oleg
Spin-orbit torques are of great interest due to their potential applications in spin electronics. Generally, spin-orbit torque originates from the strong spin-orbit coupling of heavy 4d/5d elements, and its mechanism is usually attributed either to the spin Hall effect or to Rashba spin-orbit coupling. We have developed a quantum-mechanical approach based on the non-equilibrium Green's function formalism and a tight-binding Hamiltonian model to study spin-orbit torques, and have extended the theory to the case of extrinsic spin-orbit coupling induced by impurities. For simplicity, we consider a magnetic material on a two-dimensional lattice with a single non-magnetic impurity; however, the model can easily be extended to three-dimensional layered heterostructures. Based on our calculations, we present a detailed analysis of the origin of local spin-orbit torques and of the persistent charge currents around the impurity, which give rise to spin-orbit torques even in equilibrium and explain the existence of anisotropy.
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
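The procedure above is concrete enough to sketch end to end: bin the data, estimate the parameter by maximum likelihood on a bootstrap resample, and compare the observed counts of the original data with the expected counts under that bootstrap-sample MLE. The Poisson model and the bin boundaries below are illustrative assumptions, not the paper's examples.

```python
# Sketch of the bootstrap-adjusted Pearson statistic for a Poisson model:
# the expected counts use the MLE from a bootstrap resample, while the
# observed counts always come from the ORIGINAL data.
import math, random

def poisson_sample(lam):
    """Knuth's method for a single Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pearson_stat(data, lam, edges=4):
    """Pearson X^2 with bins {0, 1, 2, 3, >=4} under Poisson(lam)."""
    n = len(data)
    obs = [sum(1 for x in data if x == k) for k in range(edges)]
    obs.append(n - sum(obs))                        # tail bin >= edges
    exp = [n * pmf(k, lam) for k in range(edges)]
    exp.append(n - sum(exp))
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

random.seed(7)
n = 300
data = [poisson_sample(2.0) for _ in range(n)]

mle = sum(data) / n                                  # MLE from original data
boot = [random.choice(data) for _ in range(n)]       # bootstrap resample
boot_mle = sum(boot) / n                             # bootstrap-sample MLE

classic = pearson_stat(data, mle)                    # unknown null distribution
adjusted = pearson_stat(data, boot_mle)              # the proposed statistic
print(round(classic, 2), round(adjusted, 2))
```

Per the paper's result, `adjusted` recovers a chi-squared null distribution (here with bins − 1 = 4 degrees of freedom) across repetitions, whereas `classic`, which plugs in the original-data MLE, does not.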
An improved design method of a tuned mass damper for an in-service footbridge
NASA Astrophysics Data System (ADS)
Shi, Weixing; Wang, Liangkun; Lu, Zheng
2018-03-01
The tuned mass damper (TMD) has a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. First, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field testing; the original FEM is then updated. The TMD is designed according to the updated FEM and optimized through simulation of its vibration control effects. Finally, the TMD is installed and field measurements are carried out. The improved design method can be applied to both in-service and newly built footbridges, and is illustrated here with an engineering example. The frequency identification results from the field test and the original FEM differ considerably, and the TMD designed from the updated FEM has a better vibration control effect than one designed from the original FEM. Site tests show that the TMD effectively controls human-induced vibrations.
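The sizing step can be sketched with the classical Den Hartog optimum, which is a common starting point for footbridge TMDs (the abstract does not state which tuning rule the authors used, so this is an assumption). The point of the paper translates directly: feeding the field-identified frequency, rather than the original FEM frequency, into the same formulas changes the TMD spring and damper values. The modal mass, mass ratio, and the two frequencies below are illustrative numbers.

```python
# Sketch: size a TMD with the classical Den Hartog optimal frequency and
# damping ratios, for an assumed modal mass and mass ratio.
import math

def den_hartog_tmd(modal_mass, f_structure, mu=0.02):
    """Optimal TMD parameters for mass ratio mu (Den Hartog tuning)."""
    m_tmd = mu * modal_mass
    f_opt = f_structure / (1 + mu)                        # optimal TMD frequency (Hz)
    zeta_opt = math.sqrt(3 * mu / (8 * (1 + mu) ** 3))    # optimal damping ratio
    k_tmd = m_tmd * (2 * math.pi * f_opt) ** 2            # spring stiffness (N/m)
    c_tmd = 2 * zeta_opt * m_tmd * (2 * math.pi * f_opt)  # dashpot coefficient (N·s/m)
    return m_tmd, f_opt, zeta_opt, k_tmd, c_tmd

# original FEM says 2.10 Hz; field identification gives 1.95 Hz (assumed values)
for f in (2.10, 1.95):
    m, fo, z, k, c = den_hartog_tmd(modal_mass=40000.0, f_structure=f)
    print(f, round(fo, 3), round(z, 4), round(k), round(c))
```

A TMD built for 2.10 Hz would sit about 8% off the true 1.95 Hz mode, which is exactly the mistuning that the model-updating step is meant to prevent.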
Singh, Jay; Chattterjee, Kalyan; Vishwakarma, C B
2018-01-01
A load frequency controller has been designed for reduced-order models of single-area and two-area reheat hydro-thermal power systems using internal model control-proportional integral derivative (IMC-PID) techniques. The controller design is based on two-degree-of-freedom (2DOF) internal model control combined with a model-order reduction technique. Here, instead of the full-order system model, a reduced-order model is used for the 2DOF-IMC-PID design, and the resulting controller is applied directly to the full-order system model. A logarithm-based model-order reduction technique is proposed to reduce the high-order single- and two-area power systems for controller design. The proposed IMC-PID design based on the reduced-order model achieves good dynamic response and robustness against load disturbance with the original high-order system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to modeling plating processes aimed at reducing the unevenness of the coating thickness over the plated surface. These approaches, however, do not take into account the experience, knowledge, and intuition of decision-makers when searching for optimal conditions of the electroplating process. An original approach to the search for optimal electroplating conditions is proposed, which uses a rule-based knowledge model and makes it possible to reduce the unevenness of product thickness. Block diagrams of a conventional control system for a galvanic process, and of a system based on a production (rule-based) knowledge model, are considered. It is shown that a fuzzy production knowledge model in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The experimental results described confirm the theoretical conclusions.
Assessment of State-of-the-Art Dust Emission Scheme in GEOS
NASA Technical Reports Server (NTRS)
Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine
2017-01-01
The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. (2014). The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD), and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we conclude that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources, although it leads to overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required to achieve a more definitive and uniform improvement in dust modeling skill.
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships among pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
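The logistic-map diffusion step can be sketched in classical (non-quantum) form. This is a minimal illustration of chaotic-keystream diffusion only, not the authors' quantum implementation, and the key values `x0` and `r` below are assumptions:

```python
def logistic_sequence(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k).
    For r close to 4 the orbit is chaotic and key-sensitive."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def diffuse(pixels, x0=0.3, r=3.99):
    """XOR each 8-bit pixel with a byte derived from the chaotic orbit.
    (x0, r) play the role of the secret key."""
    keystream = [int(x * 255) for x in logistic_sequence(x0, r, len(pixels))]
    return [p ^ k for p, k in zip(pixels, keystream)]
```

Because XOR is its own inverse, applying `diffuse` a second time with the same key restores the original pixel values.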
Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that takes advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the hybrid models reconstructed using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.
Zhou, Fei; Zhao, Yajing; Peng, Jiyu; Jiang, Yirong; Li, Maiquan; Jiang, Yuan; Lu, Baiyi
2017-07-01
Osmanthus fragrans flowers are used as folk medicine and as additives for teas, beverages and foods. The metabolites of O. fragrans flowers from different geographical origins differ to some extent. Chromatography and mass spectrometry combined with multivariable analysis methods provide an approach for discriminating the origin of O. fragrans flowers. Our objective was to discriminate Osmanthus fragrans var. thunbergii flowers from different origins using the identified metabolites. GC-MS and UPLC-PDA were conducted to analyse the metabolites in O. fragrans var. thunbergii flowers (150 samples in total). Principal component analysis (PCA), soft independent modelling of class analogy (SIMCA) and random forest (RF) analysis were applied to group the GC-MS and UPLC-PDA data. GC-MS identified 32 compounds common to all samples while UPLC-PDA/QTOF-MS identified 16 common compounds. PCA of the UPLC-PDA data generated a better clustering than PCA of the GC-MS data. Ten metabolites (six from GC-MS and four from UPLC-PDA) were selected via PCA loadings as effective compounds for discrimination. SIMCA and RF analysis were used to build classification models, and the RF model, based on the four effective compounds (a caffeic acid derivative, acteoside, ligustroside and compound 15), yielded better results, with a classification rate of 100% in the calibration set and 97.8% in the prediction set. GC-MS and UPLC-PDA combined with multivariable analysis methods can discriminate the origin of Osmanthus fragrans var. thunbergii flowers. Copyright © 2017 John Wiley & Sons, Ltd.
A Protocol for Epigenetic Imprinting Analysis with RNA-Seq Data.
Zou, Jinfeng; Xiang, Daoquan; Datla, Raju; Wang, Edwin
2018-01-01
Genomic imprinting is an epigenetic regulatory mechanism in which certain genes are expressed from either the maternal or the paternal allele in a parent-of-origin-specific manner. Imprinted genes have been identified in diverse biological systems; they are implicated in some human diseases and in embryonic and seed developmental programs in plants. The molecular programs and mechanisms underlying imprinting have yet to be explored in depth in plants. Recent advances in RNA-Seq-based methods and technologies offer an opportunity to systematically analyze epigenetic imprinting at the whole-genome level in model and crop plants. Using the Arabidopsis model system, we investigate gene expression patterns associated with parent of origin and their implications for imprinting during embryo and seed development. Toward this, we have generated early-embryo-development RNA-Seq transcriptome datasets in F1s from a genetic cross between two diverse Arabidopsis thaliana ecotypes, Col-0 and Tsu-1. With these data, we developed a protocol for evaluating the maternal and paternal contributions of genes during the early stages of embryo development after fertilization. This protocol is also designed to consider contamination from other potential seed tissues, sequencing quality, proper processing of sequenced reads and variant calling, and appropriate inference of the parental contributions based on parent-of-origin-specific single-nucleotide polymorphisms within the expressed genes. The approach, methods, and protocol developed in this study can be used for evaluating the effects of epigenetic imprinting in plants.
NASA Astrophysics Data System (ADS)
Zhang, Qian; Ball, William P.
2017-04-01
Regression-based approaches are often employed to estimate riverine constituent concentrations and fluxes based on typically sparse concentration observations. One such approach is the recently developed WRTDS ("Weighted Regressions on Time, Discharge, and Season") method, which has been shown to provide more accurate estimates than prior approaches in a wide range of applications. Centered on WRTDS, this work was aimed at developing improved models for constituent concentration and flux estimation by accounting for antecedent discharge conditions. Twelve modified models were developed and tested, each of which contains one additional flow variable to represent antecedent conditions and which can be directly derived from the daily discharge record. High-resolution (∼daily) data at nine diverse monitoring sites were used to evaluate the relative merits of the models for estimation of six constituents - chloride (Cl), nitrate-plus-nitrite (NOx), total Kjeldahl nitrogen (TKN), total phosphorus (TP), soluble reactive phosphorus (SRP), and suspended sediment (SS). For each site-constituent combination, 30 concentration subsets were generated from the original data through Monte Carlo subsampling and then used to evaluate model performance. For the subsampling, three sampling strategies were adopted: (A) 1 random sample each month (12/year), (B) 12 random monthly samples plus additional 8 random samples per year (20/year), and (C) flow-stratified sampling with 12 regular (non-storm) and 8 storm samples per year (20/year). Results reveal that estimation performance varies with both model choice and sampling strategy. In terms of model choice, the modified models show general improvement over the original model under all three sampling strategies. Major improvements were achieved for NOx by the long-term flow-anomaly model and for Cl by the ADF (average discounted flow) model and the short-term flow-anomaly model. 
Moderate improvements were achieved for SS, TP, and TKN by the ADF model. By contrast, no improvement was achieved for SRP by any proposed model. In terms of sampling strategy, performance of all models (including the original) was generally best using strategy C and worst using strategy A, especially for SS, TP, and SRP, confirming the value of routinely collecting stormflow samples. Overall, this work provides a comprehensive set of statistical evidence supporting the incorporation of antecedent discharge conditions into the WRTDS model for estimation of constituent concentration and flux, thereby combining the advantages of two recent developments in water quality modeling.
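Sampling strategy A (one random sample per calendar month) can be sketched as a simple Monte Carlo subsampler. This is an illustrative reconstruction, not the authors' code, and the date format is an assumption:

```python
import random
from collections import defaultdict

def monthly_subsample(dates, seed=0):
    """Strategy A: pick one randomly chosen observation per calendar month.
    `dates` is a list of (year, month, day) tuples from the daily record."""
    rng = random.Random(seed)
    by_month = defaultdict(list)
    for y, m, d in dates:
        by_month[(y, m)].append((y, m, d))
    # one random pick per month, in chronological month order
    return [rng.choice(by_month[k]) for k in sorted(by_month)]

# Two months of daily observations -> a 2-sample subset
dates = [(2001, 1, d) for d in range(1, 32)] + [(2001, 2, d) for d in range(1, 29)]
picks = monthly_subsample(dates)
```

Repeating this with different seeds yields the kind of replicate concentration subsets (30 per site-constituent combination in the paper) used to evaluate model performance.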
NASA Astrophysics Data System (ADS)
O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.
2018-02-01
Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criterion for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5 ⩽ N ⩽ 9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. The minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criterion. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25-scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5 ± 4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while achieving mean model residual within 0.5 mm.
Guo, Jing; Yue, Tianli; Yuan, Yahong; Wang, Yutang
2013-07-17
To characterize and classify apple juices according to apple variety and geographical origin on the basis of their polyphenol composition, the polyphenolic profiles of 58 apple juice samples belonging to 5 apple varieties and from 6 regions in Shaanxi province of China were assessed. Fifty-one of the samples were from protected designation of origin (PDO) districts. Polyphenols were determined by high-performance liquid chromatography coupled to photodiode array detection (HPLC-PDA) and to a Q Exactive quadrupole-Orbitrap mass spectrometer. Chemometric techniques including principal component analysis (PCA) and stepwise linear discriminant analysis (SLDA) were carried out on the polyphenolic profiles of the samples to develop discrimination models. SLDA achieved satisfactory discrimination of apple juices according to variety and geographical origin, providing prediction success rates of 98.3% and 91.2%, respectively. This result demonstrated that polyphenols could serve as characteristic indices to verify the variety and geographical origin of apple juices.
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norms of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norms of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Agent-based modeling and systems dynamics model reproduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Macal, C. M.
2009-01-01
Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.
NASA Astrophysics Data System (ADS)
Nomoto, Ken'Ichi; Tolstov, Alexey; Sorokina, Elena; Blinnikov, Sergei; Bersten, Melina; Suzuki, Tomoharu
2017-11-01
The physical origin of Type-I (hydrogen-less) superluminous supernovae (SLSNe-I), whose luminosities are 10 to 500 times higher than those of normal core-collapse supernovae, still remains unknown. Thanks to their brightness, SLSNe-I would be useful probes of the distant Universe. As the power source of SLSNe-I light curves, radioactive decays, magnetars, and circumstellar interactions have been proposed, although no definitive conclusions have been reached yet. Since most light curve studies have been based on simplified semi-analytic models, we have constructed multi-color light curve models by means of detailed radiation hydrodynamical calculations for stars of various masses, including very massive ones, and large amounts of mass loss. We compare the rise time, peak luminosity, width, and decline rate of the model light curves with observations of SLSNe-I and obtain constraints on their progenitors and explosion mechanisms. We pay particular attention to the recently reported double peaks of the light curves. We discuss how to discriminate among the three models, the relevant model parameters, their evolutionary origins, and implications for the early evolution of the Universe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, Mi-Xiang; Dai, Zi-Gao; Wu, Xue-Feng, E-mail: dzg@nju.edu.cn
2016-08-01
The X-ray afterglows of almost one-half of gamma-ray bursts have been discovered by the Swift satellite to have a shallow decay phase whose origin remains mysterious. Two main models have been proposed to explain this phase: relativistic wind bubbles (RWBs) and structured ejecta, which could originate from millisecond magnetars and rapidly rotating black holes, respectively. Based on these models, we investigate polarization evolution in the shallow decay phase of X-ray and optical afterglows. We find that in the RWB model, a significant bump in the polarization degree evolution curve appears during the shallow decay phase of both optical and X-ray afterglows, while the polarization position angle abruptly changes its direction by 90°. In the structured ejecta model, however, the polarization degree does not evolve significantly during the shallow decay phase of afterglows, whether the magnetic field configuration in the ejecta is random or globally large-scale. Therefore, we conclude that these two models for the shallow decay phase, and the relevant central engines, would be testable with future polarization observations.
NASA Technical Reports Server (NTRS)
Mizukami, M.; Saunders, J. D.
1995-01-01
The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids, and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high-fidelity 2D simulation and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids, and bleed models. Overall, the k-e turbulence model and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, providing a higher-fidelity simulation of the inlet flowfield than inviscid methods in a reasonable turnaround time.
Inertial Movements of the Iris as the Origin of Postsaccadic Oscillations.
Bouzat, S; Freije, M L; Frapiccini, A L; Gasaneo, G
2018-04-27
Recent studies on the human eye indicate that the pupil moves inside the eyeball due to deformations of the iris. Here we show that this phenomenon can be originated by inertial forces undergone by the iris during the rotation of the eyeball. Moreover, these forces affect the iris in such a way that the pupil behaves effectively as a massive particle. To show this, we develop a model based on the Newton equation on the noninertial reference frame of the eyeball. The model allows us to reproduce and interpret several important findings of recent eye-tracking experiments on saccadic movements. In particular, we get correct results for the dependence of the amplitude and period of the postsaccadic oscillations on the saccade size and also for the peak velocity. The model developed may serve as a tool for characterizing eye properties of individuals.
Inertial Movements of the Iris as the Origin of Postsaccadic Oscillations
NASA Astrophysics Data System (ADS)
Bouzat, S.; Freije, M. L.; Frapiccini, A. L.; Gasaneo, G.
2018-04-01
Recent studies on the human eye indicate that the pupil moves inside the eyeball due to deformations of the iris. Here we show that this phenomenon can be originated by inertial forces undergone by the iris during the rotation of the eyeball. Moreover, these forces affect the iris in such a way that the pupil behaves effectively as a massive particle. To show this, we develop a model based on the Newton equation on the noninertial reference frame of the eyeball. The model allows us to reproduce and interpret several important findings of recent eye-tracking experiments on saccadic movements. In particular, we get correct results for the dependence of the amplitude and period of the postsaccadic oscillations on the saccade size and also for the peak velocity. The model developed may serve as a tool for characterizing eye properties of individuals.
Jiang, Weiqin; Shen, Yifei; Ding, Yongfeng; Ye, Chuyu; Zheng, Yi; Zhao, Peng; Liu, Lulu; Tong, Zhou; Zhou, Linfu; Sun, Shuo; Zhang, Xingchen; Teng, Lisong; Timko, Michael P; Fan, Longjiang; Fang, Weijia
2018-01-15
Synchronous multifocal tumors are common in the hepatobiliary and pancreatic system, but because of similarities in their histological features, oncologists have difficulty identifying their precise tissue clonal origin through routine histopathological methods. To address this problem and assist in more precise diagnosis, we developed a computational approach for tissue origin diagnosis based on the naive Bayes algorithm (TOD-Bayes) using ubiquitous RNA-Seq data. Massive tissue-specific RNA-Seq data sets were first obtained from The Cancer Genome Atlas (TCGA) and ∼1,000 feature genes were used to train and validate the TOD-Bayes algorithm. The accuracy of the model was >95% based on tenfold cross validation with the TCGA data. A total of 18 clinical cancer samples (including six negative controls) with definitive tissue origin were subsequently used for external validation, and 17 of the 18 samples were classified correctly in our study (94.4%). Furthermore, we included as case studies seven tumor samples, taken from two individuals who suffered from synchronous multifocal tumors across tissues, where efforts to make a definitive primary cancer diagnosis by traditional diagnostic methods had failed. Using our TOD-Bayes analysis, the two clinical test cases were successfully diagnosed as pancreatic cancer (PC) and cholangiocarcinoma (CC), respectively, in agreement with their clinical outcomes. Based on our findings, we believe that the TOD-Bayes algorithm is a powerful novel methodology for accurately identifying the tissue origin of synchronous multifocal tumors of unknown primary cancers using RNA-Seq data and an important step toward more precision-based medicine in cancer diagnosis and treatment. © 2017 UICC.
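A toy version of the underlying idea (a Gaussian naive Bayes classifier over expression features) can be sketched as below. The feature values and tissue labels are invented for illustration; the authors' TOD-Bayes pipeline, trained on ∼1,000 TCGA feature genes, is far more elaborate:

```python
import math

def fit_gaussian_nb(X, y):
    """Fit per-class log-prior, feature means, and variances
    (a minimal Gaussian naive Bayes, assuming feature independence)."""
    stats = {}
    for c in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-6  # smoothed
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (math.log(n / len(y)), means, vars_)
    return stats

def predict(stats, x):
    """Return the class with the highest posterior log-likelihood."""
    def loglik(c):
        prior, means, vars_ = stats[c]
        return prior + sum(-0.5 * math.log(2 * math.pi * v)
                           - (xi - m) ** 2 / (2 * v)
                           for xi, m, v in zip(x, means, vars_))
    return max(stats, key=loglik)

# Invented two-gene expression profiles for two hypothetical tissue classes
X = [[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.2, 4.8]]
y = ["liver", "liver", "pancreas", "pancreas"]
stats = fit_gaussian_nb(X, y)
```

The naive independence assumption is what keeps training tractable over thousands of feature genes.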
Ontology-based topic clustering for online discussion data
NASA Astrophysics Data System (ADS)
Wang, Yongheng; Cao, Kening; Zhang, Xiaoming
2013-03-01
With the rapid development of online communities, mining and extracting quality knowledge from online discussions has become very important for the industrial and marketing sectors, as well as for e-commerce applications and government. Most existing techniques model a discussion as a social network of users represented by a user-based graph, without considering the content of the discussion. In this paper we propose a new multilayered model to analyze online discussions, combining user-based and message-based representations. A novel clustering method based on frequent concept sets is used to cluster the original online discussion network into a topic space. Domain ontology is used to improve the clustering accuracy, and parallel methods make the algorithms scalable to very large data sets. Our experimental study shows that the model and algorithms are effective when analyzing large-scale online discussion data.
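The frequent-concept-sets step can be illustrated with a minimal support-counting sketch; the message contents, the support threshold, and the pair size `k` below are invented examples, not the authors' algorithm:

```python
from itertools import combinations
from collections import Counter

def frequent_concept_sets(messages, min_support, k=2):
    """Count k-concept combinations across messages and keep those whose
    support (number of messages containing the combination) meets the
    threshold. Each message is a set of concept labels."""
    counts = Counter()
    for concepts in messages:
        for combo in combinations(sorted(set(concepts)), k):
            counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}

# Three hypothetical messages, each reduced to its concept set
msgs = [{"price", "shipping"}, {"price", "shipping", "return"}, {"price", "return"}]
fs = frequent_concept_sets(msgs, min_support=2)
```

Messages sharing a frequent concept set can then be grouped into the same topic cluster; an ontology would additionally merge synonymous concepts before counting.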
A new model for fluid velocity slip on a solid surface.
Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong
2016-10-12
A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. The corroboration with the experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates and close agreement for liquid-solid interfaces in general.
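For context, the two classical slip boundary conditions that the paper unifies can be written as one-line formulas. This is textbook material (Navier slip for liquids, first-order Maxwell slip for gases), not the authors' new adsorption-based model; all numerical values below are illustrative:

```python
def navier_slip(b, shear_rate):
    """Liquid-solid side: u_s = b * (du/dy)|_wall, with slip length b (m)."""
    return b * shear_rate

def maxwell_slip(sigma, mfp, shear_rate):
    """Gas-solid side, first-order Maxwell model:
    u_s = ((2 - sigma)/sigma) * lambda * (du/dy)|_wall,
    with accommodation coefficient sigma and mean free path lambda (m)."""
    return (2.0 - sigma) / sigma * mfp * shear_rate
```

A unified model such as the one proposed must reduce to each of these limits for the appropriate fluid-solid pairing.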
Ashida, Sato; Wilkinson, Anna V.; Koehly, Laura M.
2011-01-01
Purpose To evaluate whether influence from social network members is associated with motivation to change dietary and physical activity behaviors. Design Baseline assessment followed by mailing of family health history-based personalized messages (2 weeks) and follow-up assessment (3 months). Setting Families from an ongoing population-based cohort in Houston, TX. Subjects 475 adults from 161 Mexican origin families. Out of 347 households contacted, 162 (47%) participated. Measures Family health history, social networks, and motivation to change behaviors. Analysis Two-level logistic regression modeling. Results Having at least one network member who encourages one to eat more fruits and vegetables (p=.010) and to engage in regular physical activity (p=.046) was associated with motivation to change the relevant behavior. About 40% of the participants did not have encouragers for these behaviors. Conclusions Identification of new encouragers within networks and targeting natural encouragers (e.g., children, spouses) may increase the efficacy of interventions to motivate behavioral changes among Mexican origin adults. PMID:22208416
Ritota, Mena; Casciani, Lorena; Valentini, Massimiliano
2013-05-01
Analytical traceability of PGI and PDO foods (Protected Geographical Indication and Protected Designation of Origin, respectively) is one of the most challenging tasks of current applied research. Here we propose a metabolomic approach based on the combination of (1)H high-resolution magic angle spinning-nuclear magnetic resonance (HRMAS-NMR) spectroscopy with multivariate analysis, i.e. PLS-DA, as a reliable tool for the traceability of Italian PGI chicories (Cichorium intybus L.), i.e. Radicchio Rosso di Treviso and Radicchio Variegato di Castelfranco, also known as red and red-spotted, respectively. The metabolic profile was obtained by means of HRMAS-NMR, and multivariate data analysis allowed us to build statistical models capable of providing clear discrimination between the two varieties and classification according to geographical origin. Based on Variable Importance in Projection values, molecular markers for classifying the different types of red chicories analysed were identified, accounting for both the cultivar and the place of origin. © 2012 Society of Chemical Industry.
Ashida, Sato; Wilkinson, Anna V; Koehly, Laura M
2012-01-01
To evaluate whether influence from social network members is associated with motivation to change dietary and physical activity behaviors. Baseline assessment followed by mailing of family health history-based personalized messages (2 weeks) and follow-up assessment (3 months). Families from an ongoing population-based cohort in Houston, Texas. 475 adults from 161 Mexican-origin families. Out of 347 households contacted, 162 (47%) participated. Family health history, social networks, and motivation to change behaviors. Two-level logistic regression modeling. Having at least one network member who encourages one to eat more fruits and vegetables (p = .010) and to engage in regular physical activity (p = .046) was associated with motivation to change the relevant behavior. About 40% of the participants did not have encouragers for these behaviors. Identification of new encouragers within networks and targeting natural encouragers (e.g., children, spouses) may increase the efficacy of interventions to motivate behavioral changes among Mexican-origin adults.
Nguyen, Huong Thi Thu; Kitaoka, Kazuyo; Sukigara, Masune; Thai, Anh Lan
2018-03-01
This study aimed to create a Vietnamese version of both the Maslach Burnout Inventory-General Survey (MBI-GS) and the Areas of Worklife Scale (AWS) to assess the burnout state of Vietnamese clinical nurses and to develop a causal model of burnout of clinical nurses. We conducted a descriptive design using a cross-sectional survey. The questionnaire was hand delivered by nursing departments to 500 clinical nurses in three hospitals. The Vietnamese MBI-GS and AWS were then examined for reliability and validity. We used the revised exhaustion +1 burnout classification to assess burnout state. We performed path analysis to develop a Vietnamese causal model based on the original model from Leiter and Maslach's theory. We found that both scales were reliable and valid for assessing burnout. Among nurse participants, the percentage with severe burnout was 0.7%, with burnout 15.8%, and 17.2% of nurses were exhausted. The best predictor of burnout was the "on-duty work schedule," in which clinical nurses have to work for 24 hours. In the causal model, we also found pathways both similar to and different from the original model. The Vietnamese MBI-GS and AWS were applicable to research on occupational stress. Nearly one-fifth of Vietnamese clinical nurses were working in a burnout state. The causal model suggested a range of factors resulting in burnout, and it is necessary to consider specific solutions to prevent the burnout problem. Copyright © 2018. Published by Elsevier B.V.
Anomalous neuronal responses to fluctuated inputs
NASA Astrophysics Data System (ADS)
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to fluctuated inputs, in which the irregularity of spike trains is inversely proportional to the input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability between the resting state and the repetitive firing state, which indicated that bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by the additional finding that the anomalous responses were reproduced by mimicking bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing, and for such decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain.
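The mixture argument can be illustrated numerically: mixing two perfectly regular (zero-CV) interspike-interval distributions, one for the firing state and one for the resting state, already yields a spike train whose coefficient of variation exceeds 1. A minimal sketch with invented ISI values (ms):

```python
import statistics

def cv(isi_samples):
    """Coefficient of variation of interspike intervals:
    population std dev divided by mean. CV = 0 for a perfectly
    regular train; CV near 1 for Poisson-like irregular firing."""
    return statistics.pstdev(isi_samples) / statistics.mean(isi_samples)

# Bistable caricature: 90 short ISIs from the repetitive-firing state,
# 10 long ISIs from dwelling in the resting state (values are invented)
burst_isis = [10.0] * 90
rest_isis = [200.0] * 10
mixed = burst_isis + rest_isis
```

Each component alone has `cv == 0`, yet `cv(mixed)` is well above 1, mirroring how bistability can make a spike train look highly irregular even under a weakly fluctuating drive.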
Modeling of Pedestrian Flows Using Hybrid Models of Euler Equations and Dynamical Systems
NASA Astrophysics Data System (ADS)
Bärwolff, Günter; Slawig, Thomas; Schwandt, Hartmut
2007-09-01
In recent years, various systems have been developed for controlling, planning, and predicting the traffic of persons and vehicles, in particular under security aspects. Going beyond pure counting and statistical models, approaches based on well-known concepts originally developed in very different research areas, namely continuum mechanics and computer science, have proved to be very adequate and accurate. In the present paper, we outline a continuum mechanical approach for the description of pedestrian flow.
Neuron Bifurcations in an Analog Electronic Burster
NASA Astrophysics Data System (ADS)
Savino, Guillermo V.; Formigli, Carlos M.
2007-05-01
Although bursting electrical activity is typical of some brain neurons and biological excitable systems, its functions and mechanisms of generation are still unknown. In modeling such complex oscillations, analog electronic models are faster than mathematical ones, whether phenomenologically or theoretically based. We show experimentally that bursting oscillator circuits can be greatly simplified by using the nonlinear characteristics of two bipolar transistors. Since our circuit qualitatively mimics the bursting activity of Hodgkin-Huxley model neurons, as well as the bifurcations that give rise to neuro-computational properties, it is not merely a caricature but a realistic model.
2012-01-19
specific dislocation reactions. Rae et al.[4,5,7] proposed micromechanisms for primary creep caused by SF shearing of γ′ precipitates by a⟨112⟩...near the [0 0 1] was done by Matan et al.[3] They proposed a phenomenological creep model, which was adopted from Gilman's dislocation density model...the original loading orientation). MacLachlan et al.[18-21] proposed a series of creep models for anisotropic creep of single-crystal superalloys. Their
A Grade 6 Project in the Social Studies: The Wall of Old Jerusalem.
ERIC Educational Resources Information Center
Ediger, Marlow
1993-01-01
Presents a classroom lesson based on the walls of old Jerusalem. Maintains that cooperative-learning techniques used to build a model of the wall helped students understand the meaning of the original wall and the division of modern-day Jerusalem. (CFR)
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a multi-purpose system. It was originally developed to check taper fits in the wind tunnel model support system. It was further developed to measure simultaneous pitch and roll angles using three orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and performs in-situ calibration of model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper discusses the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
Origin of condensation nuclei in the springtime polar stratosphere
NASA Technical Reports Server (NTRS)
Zhao, Jingxia; Toon, Owen B.; Turco, Richard P.
1995-01-01
An enhanced sulfate aerosol layer has been observed near 25 km accompanying springtime ozone depletion in the Antarctic stratosphere. We use a one-dimensional aerosol model that includes photochemistry, particle nucleation, condensational growth, coagulation, and sedimentation to study the origin of the layer. Annual cycles of sunlight, temperature, and ozone are incorporated into the model. Our results indicate that binary homogeneous nucleation leads to the formation of very small droplets of sulfuric acid and water under conditions of low temperature and production of H2SO4 following polar sunrise. Photodissociation of carbonyl sulfide (OCS) alone, however, cannot provide sufficient SO2 to create the observed condensation nuclei (CN) layer. When subsidence of SO2 from very high altitudes in the polar night vortex is incorporated into the model, the CN layer is reasonably reproduced. The model predictions, based on the subsidence in polar vortex, agree with in situ measurements of particle concentration, vertical distribution, and persistence during polar spring.
Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model
Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.
2008-01-01
This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and an AR model was then applied to the transformed data to generate AR model complex coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: the backward propagating neural network (BPNN), the radial basis function-principal component analysis (RBF-PCA) approach, and the radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This recognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
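As a rough sketch of the pipeline described above (inverse Fourier transform, AR fit, complex roots as classifier features), the following uses a synthetic chromatogram-like signal and a plain Yule-Walker AR fit; the signal shape, model order, and feature layout are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Hypothetical chromatogram-like signal: two Gaussian peaks plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 512)
signal = np.exp(-(t - 3)**2) + 0.5 * np.exp(-(t - 7)**2 / 0.5) \
         + 0.01 * rng.standard_normal(t.size)

# Step 1: inverse Fourier transform of the original sensor data.
x = np.fft.ifft(signal).real

# Step 2: fit an AR(p) model by solving the Yule-Walker equations.
def ar_coefficients(x, p):
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz autocovariances
    return np.linalg.solve(R, r[1:])          # AR coefficients a_1..a_p

p = 8
a = ar_coefficients(x, p)

# Step 3: complex roots of the AR polynomial 1 - a_1 z^-1 - ... - a_p z^-p,
# stacked into a real feature vector that a neural network could classify.
roots = np.roots(np.concatenate(([1.0], -a)))
features = np.concatenate([roots.real, roots.imag])
```

Each chromatogram thus collapses to a fixed-length feature vector, which is what makes the approach insensitive to time alignment of the raw GC signals.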
Evolving Ideas on the Origin and Evolution of Flowers: New Perspectives in the Genomic Era
Chanderbali, Andre S.; Berger, Brent A.; Howarth, Dianella G.; Soltis, Pamela S.; Soltis, Douglas E.
2016-01-01
The origin of the flower was a key innovation in the history of complex organisms, dramatically altering Earth’s biota. Advances in phylogenetics, developmental genetics, and genomics during the past 25 years have substantially advanced our understanding of the evolution of flowers, yet crucial aspects of floral evolution remain, such as the series of genetic and morphological changes that gave rise to the first flowers; the factors enabling the origin of the pentamerous eudicot flower, which characterizes ∼70% of all extant angiosperm species; and the role of gene and genome duplications in facilitating floral innovations. A key early concept was the ABC model of floral organ specification, developed by Elliott Meyerowitz and Enrico Coen and based on two model systems, Arabidopsis thaliana and Antirrhinum majus. Yet it is now clear that these model systems are highly derived species, whose molecular genetic-developmental organization must be very different from that of ancestral, as well as early, angiosperms. In this article, we will discuss how new research approaches are illuminating the early events in floral evolution and the prospects for further progress. In particular, advancing the next generation of research in floral evolution will require the development of one or more functional model systems from among the basal angiosperms and basal eudicots. More broadly, we urge the development of “model clades” for genomic and evolutionary-developmental analyses, instead of the primary use of single “model organisms.” We predict that new evolutionary models will soon emerge as genetic/genomic models, providing unprecedented new insights into floral evolution. PMID:27053123
Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong
2018-07-15
The main objective of the present study was to utilize Genetic Algorithms (GA) to obtain the optimal combination of forest-fire-related variables and to apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest-fire-related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network, and distance to road network. The Natural Break and Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding aspect, land use, heat load index, distance to river network, and mean annual rainfall from the analysis. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values, and the results showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility mapping, and data mining methods can be considered a valid approach for forest fire susceptibility modeling.
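A minimal sketch of GA-based feature selection wrapped around an RF classifier scored by cross-validated AUC, in the spirit of the approach above. The synthetic data, population size, generation count, and mutation rate are all assumptions; the paper's actual variables and GA configuration differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the study's setup: 13 fire-related predictors, binary label.
X, y = make_classification(n_samples=300, n_features=13, n_informative=6,
                           n_redundant=3, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # Cross-validated AUC of an RF restricted to the selected features.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=40, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3,
                           scoring="roc_auc").mean()

# Tiny GA over binary feature masks: truncation selection, uniform
# crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(10, 13))
for generation in range(4):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-5:]]            # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        child = np.where(rng.random(13) < 0.5, a, b)  # uniform crossover
        flip = rng.random(13) < 0.1                   # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
best_auc = fitness(best)
```

The GA searches mask space directly, so it can discard correlated or uninformative predictors that a filter method might keep.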
NASA Astrophysics Data System (ADS)
Bertagnolio, Franck; Madsen, Helge Aa.; Fischer, Andreas; Bak, Christian
2018-06-01
In the above-mentioned paper, two model formulae were tuned to fit experimental data of surface pressure spectra measured in various wind tunnels. They correspond to high and low Reynolds number flow scalings, respectively. It turns out that there exist typographical errors in both formulae numbered (9) and (10) in the original paper. There, these formulae read:
Art and science: geodesy in materials science.
Kroto, Harold
2010-09-01
A 3-dimensional model based on a molecular structural recipe having some unique and unexpected shape characteristics is demonstrated. The project was originally initiated to satisfy the aesthetic creative impulse to build a 3-dimensional model or sculpture. Further scientific investigation explained some important nanoscale structural observations that had been seen many years beforehand and mistakenly explained. This is a rare example of artistic creativity resulting in a key scientific advance.
A Model of the Vela Supernova Remnant
NASA Astrophysics Data System (ADS)
Gvaramadze, Vasilii
2000-10-01
A model of the Vela supernova remnant (SNR) based on a cavity explosion of a supernova (SN) star is proposed. It is suggested that the general structure of the remnant is determined by the interaction of the SN blast wave with a massive shell created by the SN progenitor (15-20 M_solar) star. A possible origin of the nebula of hard X-ray emission detected around the Vela pulsar is discussed.
Structural models of antibody variable fragments: A method for investigating binding mechanisms
NASA Astrophysics Data System (ADS)
Petit, Samuel; Brard, Frédéric; Coquerel, Gérard; Perez, Guy; Tron, François
1998-03-01
The value of comparative molecular modeling for elucidating structure-function relationships was demonstrated by analyzing six anti-nucleosome autoantibody variable fragments. Structural models were built using the automated procedure developed in the COMPOSER software, subsequently minimized with the AMBER force field, and validated according to several standard geometric and chemical criteria. Canonical class assignment from Chothia and Lesk's work [Chothia and Lesk, J. Mol. Biol., 196 (1987) 901; Chothia et al., Nature, 342 (1989) 877] was used as a supplementary validation tool for five of the six hypervariable loops. The analysis, based on the hypothesis that antigen binding could occur through electrostatic interactions, reveals a diversity of possible binding mechanisms of anti-nucleosome or anti-histone antibodies to their cognate antigen. These results lead us to postulate that anti-nucleosome autoantibodies could have different origins. Since both anti-DNA and anti-nucleosome autoantibodies are produced during the course of systemic lupus erythematosus, a non-organ-specific autoimmune disease, a comparative structural and electrostatic analysis of the two populations of autoantibodies may constitute a way to elucidate their origin and the role of the antigen in tolerance breakdown. The present study illustrates some of the interest, advantages, and limits of a methodology based on comparative modeling and analysis of molecular surface properties.
Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae).
Schwerk, Axel; Szyszko, Jan
2011-01-01
Degraded areas constitute challenging tasks with respect to sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The run of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape related aspects seem to be important with respect to these differences. As characteristic phases, a 'delay phase', an 'increase phase' and a 'stagnation phase' were identified. In general, the runs of succession could be described by four different parameters: (1) 'Initial degradation level', (2) 'delay', (3) 'increase rate' and (4) 'recovery level'. Applying the analytic solution of the logistic equation, characteristic values for the parameters were identified for each of the five area types. The model is of practical use, because it provides a possibility to compare the values of the parameters elaborated in different areas, to give hints for intervention and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession.
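The logistic run of succession with the four parameters named above can be written out directly; the parameterization below (initial level, recovery level, increase rate, and delay as the inflection point) is one plausible reading of the model, not necessarily the paper's exact formulation.

```python
import numpy as np

def mib_succession(t, initial, recovery, rate, delay):
    """Logistic run of succession for Mean Individual Biomass (MIB).

    initial  -- 'initial degradation level' (MIB at the start of succession)
    recovery -- 'recovery level' (asymptotic MIB)
    rate     -- 'increase rate'
    delay    -- 'delay' (inflection point, years after degradation)
    Parameter names are illustrative, not the paper's exact notation.
    """
    return initial + (recovery - initial) / (1.0 + np.exp(-rate * (t - delay)))

# Example trajectory for one hypothetical degraded-area type.
years = np.arange(0, 81, 5)
curve = mib_succession(years, initial=20.0, recovery=300.0, rate=0.15, delay=30.0)
```

Fitting these four parameters per area type is what allows the different types of degraded areas to be compared and prognoses to be made.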
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems - multicore/multiprocessor PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across all processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
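The load-balancing goal can be illustrated with a simple stand-in: assign per-subcatchment workloads so that each worker receives a near-equal share. The greedy scheme below is not the paper's binary-tree decomposition, only a compact illustration of balancing computation across processors; the workload numbers are invented.

```python
import heapq

def balance(loads, n_workers):
    # Longest-processing-time-first: assign the heaviest subcatchment to the
    # currently lightest worker. A stand-in sketch for balancing computation
    # across MPI ranks (the paper's actual scheme is a binary-tree
    # decomposition of the watershed).
    heap = [(0.0, i, []) for i in range(n_workers)]
    heapq.heapify(heap)
    for load in sorted(loads, reverse=True):
        total, i, items = heapq.heappop(heap)   # lightest worker so far
        items.append(load)
        heapq.heappush(heap, (total + load, i, items))
    return heap

# Hypothetical per-subcatchment cell counts split across two workers.
workers = balance([8, 7, 6, 5, 4, 3, 2, 1], n_workers=2)
totals = sorted(t for t, _, _ in workers)
```

With balanced totals, no MPI rank idles while another finishes its subcatchments, which is the point of the decomposition step.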
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.
2017-12-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). 
The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
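A single NMF factorization at a known rank, the building block that NMFk wraps with rank sweeps and semi-supervised clustering, might look like the following; the numbers of sources, constituents, and wells are invented for the sketch.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical setup: 3 groundwater end-member types ("sources"), 6
# geochemical constituents, 20 wells observing mixtures with unknown
# nonnegative mixing ratios.
rng = np.random.default_rng(42)
H_true = rng.uniform(0.1, 5.0, size=(3, 6))    # source concentration signatures
W_true = rng.dirichlet(np.ones(3), size=20)    # mixing ratios per well
X = W_true @ H_true                            # observed geochemical mixtures

# One NMF factorization at a fixed rank; NMFk additionally sweeps the rank
# and clusters repeated solutions to pick the number of sources (not shown).
model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)                     # estimated mixing ratios
H = model.components_                          # estimated source signatures
reconstruction_error = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because mixtures of nonnegative sources with nonnegative weights are exactly the NMF model, the factorization recovers a low reconstruction error without any flow-and-transport inverse modeling.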
Breeze, John; Clasper, J C
2013-12-01
Explosively propelled fragments are the most common cause of injury to soldiers on current operations. Researchers desire models to predict their injurious effects so as to refine methods of potential protection. Well validated physical and numerical models based on the penetration of standardised fragment simulating projectiles (FSPs) through muscle exist but not for skin, thereby reducing the utility of such models. A systematic review of the literature was undertaken using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology to identify all open source information quantifying the effects of postmortem human subject (PMHS) and animal skin on the retardation of metallic projectiles. Projectile sectional density (mass over presented cross-sectional area) was compared with the velocity required for skin perforation or penetration, with regard to skin origin (animal vs PMHS), projectile shape (sphere vs cylinder) and skin backing (isolated skin vs that backed by muscle). 17 original experimental studies were identified, predominantly using skin from the thigh. No statistical difference in the velocity required for skin perforation with regard to skin origin or projectile shape was found. A greater velocity was required to perforate intact skin on a whole limb than isolated skin alone (p<0.05). An empirical relationship describing the velocity required to perforate skin by metallic FSPs of a range of sectional densities was generated. Skin has a significant effect on the retardation of FSPs, necessitating its incorporation in future injury models. Perforation algorithms based on animal and PMHS skin can be used interchangeably as well as spheres and cylinders of matching sectional density. Future numerical simulations for skin perforation must match the velocity for penetration and also require experimental determination of mechanical skin properties, such as tensile strength, strain and elasticity at high strain rates.
Ferguson, Christopher J; Donnellan, M Brent
2017-12-01
Gabbiadini, A., Riva, P., Andrighetto, L., Volpato, C., & Bushman, B. (PLoS ONE, 2016) provided evidence for a connection between "sexist" video games and decreased empathy toward girls using an experimental paradigm. These claims are based on a moderated mediation model. They reported a three-way interaction between game condition, gender, and avatar identification when predicting masculine ideology in their original study. Masculine ideology was associated, in turn, with decreased empathy. However, there were no main experimental effects of video game condition on empathy. The current analysis considers the strength of the evidence for claims made in the original study on a sample of 153 adolescents (M age = 16.812, SD = 1.241; 44.2% male). We confirmed that there was little evidence for an overall effect of game condition on empathy toward girls or women. We tested the robustness of the original reported moderated mediation models against other, theoretically derived alternatives and found that effects differed based on how variables were measured (using alternatives in their public data file) and the statistical model used. The experimental groups differed significantly and substantially in terms of age, suggesting that there might have been issues with the procedures used to randomly assign participants to conditions. These results highlight the need for preregistration of experimental protocols in video game research and raise some concerns about how moderated mediation models are used to support causal inferences. They also call into question whether use of "sexist" video games is a causal factor in the development of reduced empathy toward girls and women among adolescents.
Chakraborty, Debojyoti; Wang, Tongli; Andre, Konrad; Konnert, Monika; Lexer, Manfred J; Matulla, Christoph; Schueler, Silvio
2015-01-01
Identifying populations within tree species potentially adapted to future climatic conditions is an important requirement for reforestation and assisted migration programmes. Such populations can be identified either by empirical response functions based on correlations of quantitative traits with climate variables or by climate envelope models that compare the climate of seed sources and potential growing areas. In the present study, we analyzed the intraspecific variation in the climate growth response of Douglas-fir planted within the non-analogous climate conditions of Central and continental Europe. With data from 50 common garden trials, we developed Universal Response Functions (URFs) for tree height and mean basal area and compared the growth performance of the selected best-performing populations with that of populations identified through a climate envelope approach. Climate variables of the trial location were found to be stronger predictors of growth performance than climate variables of the population origin. Although the precipitation regime of the population sources varied strongly, none of the precipitation-related climate variables of population origin was found to be significant within the models. Overall, the URFs explained more than 88% of the variation in growth performance. Populations identified by the URF models originate from the western Cascades and coastal areas of Washington and Oregon and show significantly higher growth performance than populations identified by the climate envelope approach under both current and climate change scenarios. The URFs predict decreasing growth performance at low and middle elevations of the case study area, but increasing growth performance on high-elevation sites.
Our analysis suggests that population recommendations based on empirical approaches should be preferred and population selections by climate envelope models without considering climatic constrains of growth performance should be carefully appraised before transferring populations to planting locations with novel or dissimilar climate.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to solve these problems, especially the two-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations in the Bag-of-Words representation by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz-02 dataset show that the recognition performance of our method, in precision and recall, is better than that of a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial and error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
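The model-free/model-based distinction can be made concrete on a toy problem: model-free values come from a look-up table updated by trial and error, while model-based values come from planning through a known transition model. The two-action MDP below is an invented illustration, not the paper's two-step task.

```python
import numpy as np

# Toy deterministic MDP: from state 0, action 0 leads to terminal state 1
# and action 1 to terminal state 2, each with a fixed reward.
P = {0: {0: 1, 1: 2}}          # transition model: state 0, action -> next state
R = {1: 1.0, 2: 0.3}           # terminal rewards

# Model-based values: plan through the known model (one-step lookahead here).
q_model_based = np.array([R[P[0][a]] for a in (0, 1)])

# Model-free values: learn the same quantities from sampled experience with
# a look-up table updated by a delta rule (no access to P or R directly).
rng = np.random.default_rng(0)
q_model_free = np.zeros(2)
alpha = 0.1                     # learning rate
for _ in range(2000):
    a = rng.integers(0, 2)      # random exploration
    r = R[P[0][a]]              # observed reward
    q_model_free[a] += alpha * (r - q_model_free[a])
```

Both strategies converge to the same values here because the environment is stationary; the trade-off the abstract discusses appears when rewards or transitions change and only the planner can adapt immediately.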
NASA Technical Reports Server (NTRS)
Kimball, John; Kang, Sinkyu
2003-01-01
The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wildfire) to the uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower- and field-based estimates of NEP and NPP; and 4) recommend and prioritize improvements to boreal ecosystem carbon models that will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving the Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities was quantifying the sensitivity of regional model predictions to land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.
Acuity-adaptable nursing care: exploring its place in designing the future patient room.
Kwan, Melissa A
2011-01-01
To substantiate the anticipated benefits of the original acuity-adaptable care delivery model as defined by innovator Ann Hendrich. In today's conveyor-belt approach to healthcare, from admission through discharge, patients are commonly transferred based on changing acuity needs. Wasted time and money and inefficiencies in hospital operations often result, in addition to jeopardized patient safety. In the last decade, a handful of hospitals pioneered the implementation of the acuity-adaptable care delivery model. Built on the concept of eliminating patient transfers, the projected outcomes of acuity-adaptable units (decreased average lengths of stay, increased patient safety and satisfaction, and increased nurses' satisfaction from reduced walking distances) make a good case for a model patient room. Although some hospitals experienced the projected benefits of the acuity-adaptable care delivery model, sustaining the outcomes proved difficult; hence, the original definition of acuity-adaptable units has not fared well. Variations on the original concept demonstrate that eliminating patient transfers has not been completely abandoned in healthcare redesign and construction initiatives. Terms such as flex-up, flex-down, universal room, and single-stay unit have since emerged. These variations complicate the search for empirical evidence to support the anticipated benefits of the original concept. To determine the future of this concept and its variants, a significant amount of outcome data must be generated by piloting the concept in different hospital settings. As further refinements and adjustments to the concept emerge, the acuity-adaptable room may find a place in future hospitals.
NASA Astrophysics Data System (ADS)
Stoch, B.; Anthonissen, C. J.; McCall, M.-J.; Basson, I. J.; Deacon, J.; Cloete, E.; Botha, J.; Britz, J.; Strydom, M.; Nel, D.; Bester, M.
2017-12-01
The Sishen deposit is one of the largest iron ore concentrations in current production. Hematite mineralization occurs along a strike length of 14 km, with a width of 3.2 km and a maximum vertical extent of 400 m below the original surface. The 986-Mt reserve incorporates a suite of individual orebodies, beneath a locally preserved tectonized unconformity, with a wide range of geometries, depths, and orientations. Fully constrained, implicit 3D modeling of the entire mining volume (> 70 km3), was undertaken to the original, pre-mining topography. The model incorporates 5287 mapping points and > 21,000 drillholes and provides exceptional insight into the original configuration of ore and its relationship to contacts, unconformities, and structures in the enclosing country rock. The bulk of ore occurs to the west of a strike-extensive, partially inverted normal fault (Sloep Fault), within an asymmetrical synclinal structure on its western flank. This linear, N-S distribution of deep, thick ore is punctuated by palaeosinkholes, wherein base-of-ore dips of greater than 45°, are concentrically arranged. Localized ore volumes also occur along faults and in fault-bounded, downthrown blocks, to the north of NW-SE- and NE-SW-trending strike-slip faults that show relatively minor uplift to the south, probably due to the Lomanian Namaqua-Natal Orogeny. The revised model demonstrates the proximity of ore to a tectonized unconformity and highlights the structural control on ore volumes, implying that Fe mineralization at Sishen cannot be exclusively attributed to supergene enrichment and concentric palaeosinkhole formation.
Adaptive Nonparametric Kinematic Modeling of Concentric Tube Robots.
Fagogenis, Georgios; Bergeles, Christos; Dupont, Pierre E
2016-10-01
Concentric tube robots comprise telescopic precurved elastic tubes. The robot's tip and shape are controlled via relative tube motions, i.e. tube rotations and translations. Non-linear interactions between the tubes, e.g. friction and torsion, as well as uncertainty in the physical properties of the tubes themselves, e.g. the Young's modulus, curvature, or stiffness, hinder accurate kinematic modelling. In this paper, we present a machine-learning-based methodology for kinematic modelling of concentric tube robots and in situ model adaptation. Our approach is based on Locally Weighted Projection Regression (LWPR). The model comprises an ensemble of linear models, each of which locally approximates the original complex kinematic relation. LWPR can accommodate model deviations by adjusting the respective local models at run-time, resulting in an adaptive kinematics framework. We evaluated our approach on data gathered from a three-tube robot, and report high accuracy across the robot's configuration space.
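The ensemble-of-local-linear-models idea behind LWPR can be illustrated with a deliberately simplified 1-D sketch. This is not the authors' implementation: the fixed Gaussian receptive-field centers, the normalized blending, and the gradient-style online update below are illustrative assumptions standing in for LWPR's incremental projection regression.

```python
import math
import random

class LocalLinearEnsemble:
    """Toy stand-in for LWPR: Gaussian receptive fields, each holding a
    local linear model that is adjusted online as new samples arrive."""

    def __init__(self, centers, width=0.4):
        self.centers = list(centers)
        self.width = width
        self.w = [0.0] * len(self.centers)  # local slopes
        self.b = [0.0] * len(self.centers)  # local offsets

    def _act(self, x):
        # Normalized Gaussian activation of each receptive field.
        a = [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
             for c in self.centers]
        s = sum(a)
        return [ai / s for ai in a]

    def predict(self, x):
        # Blend the local linear predictions by receptive-field activation.
        a = self._act(x)
        return sum(ai * (wi * (x - c) + bi)
                   for ai, wi, bi, c in zip(a, self.w, self.b, self.centers))

    def update(self, x, y, lr=0.2):
        # Online adaptation: a gradient step on the squared prediction error,
        # weighted so that only nearby local models change appreciably.
        a = self._act(x)
        err = y - self.predict(x)
        for i, (ai, c) in enumerate(zip(a, self.centers)):
            self.b[i] += lr * err * ai
            self.w[i] += lr * err * ai * (x - c)
```

Training such an ensemble on samples of a nonlinear map (e.g. sin on [0, 2π]) drives the prediction error down while each local model stays interpretable, which is the property that makes run-time adaptation cheap.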
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling formalisms such as Petri nets (PN) has become challenging in recent years due to the large number of states in the automata models and the presence of uncontrollable events. The uncontrollable events give rise to forbidden states, which can be removed by enforcing a set of linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system is difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is broken down into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. Results on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
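The constraint-enforcement step that such methods build on can be sketched with the classical monitor-place construction for a linear marking constraint l·m ≤ b. This is the textbook invariant-based construction, not the paper's decomposition method, and the two-place net in the example is hypothetical.

```python
def monitor_place(D, m0, l, b):
    """Monitor-place synthesis for the constraint l.m <= b.

    D  : incidence matrix as a list of rows, one per place (places x transitions)
    m0 : initial marking
    l, b : constraint weights and bound

    Returns the monitor place's incidence row Dc = -l.D and its initial
    marking mc0 = b - l.m0; adding this place to the net enforces the
    constraint by construction (place-invariant method).
    """
    n_t = len(D[0])
    Dc = [-sum(l[p] * D[p][t] for p in range(len(D))) for t in range(n_t)]
    mc0 = b - sum(l[p] * m0[p] for p in range(len(m0)))
    return Dc, mc0

# Example (hypothetical two-place net): t1 moves a token p1 -> p2, t2 back.
# Forbidding any token in p2 (l = [0, 1], b = 0) yields a monitor place
# that acts as an input condition of t1, disabling it.
D = [[-1, 1],   # p1
     [1, -1]]   # p2
Dc, mc0 = monitor_place(D, [1, 0], [0, 1], 0)
```

Decomposition-based approaches like the one described apply this kind of construction to the smaller submodels, where the constraint sets are far easier to reduce.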
Analytical investigation of the faster-is-slower effect with a simplified phenomenological model
NASA Astrophysics Data System (ADS)
Suzuno, K.; Tomoeda, A.; Ueyama, D.
2013-11-01
We analytically investigate the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies with a simplified phenomenological model. It is well known that, in simulations of the discharge of self-driven particles through a bottleneck using the social force model, the flow rate is maximized at a certain strength of the driving force. In this study, we propose a phenomenological and analytical model, based on mechanics-based modeling, to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, retains properties similar to those of the original many-particle system and that the effect arises from the competition between the driving force and the nonlinear friction in the model. Moreover, we qualitatively predict the parameter dependences of the effect from our model, and they are confirmed numerically using the social force model.
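For reference, the radial pairwise interaction at the heart of the social force model that the authors reduce can be sketched as follows. The parameter values are the commonly quoted Helbing-style defaults, used here only for illustration; the tangential friction term, which drives the faster-is-slower competition, is omitted from this 1-D sketch.

```python
import math

def pair_force(d, r_sum, A=2000.0, B=0.08, k=1.2e5):
    """Radial pedestrian-pedestrian force in a social-force-style model:
    a psychological repulsion A*exp((r_sum - d)/B) plus a body force
    k*g(r_sum - d) active only in contact, with g(x) = max(x, 0).

    d     : center-to-center distance between the two pedestrians
    r_sum : sum of their body radii
    """
    g = max(r_sum - d, 0.0)
    return A * math.exp((r_sum - d) / B) + k * g
```

The repulsion decays smoothly with distance, while the body force switches on abruptly at contact; it is this contact regime, together with friction, that the reduced model isolates to explain why a stronger drive can slow the discharge.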
Gerhardt, Natalie; Birkenmeier, Markus; Schwolow, Sebastian; Rohn, Sascha; Weller, Philipp
2018-02-06
This work describes a simple approach for the untargeted profiling of volatile compounds for the authentication of the botanical origins of honey based on resolution-optimized HS-GC-IMS combined with optimized chemometric techniques, namely PCA, LDA, and kNN. A direct comparison of the PCA-LDA models between the HS-GC-IMS and ¹H NMR data demonstrated that HS-GC-IMS profiling could be used as a complementary tool to NMR-based profiling of honey samples. Whereas NMR profiling still requires comparatively precise sample preparation, pH adjustment in particular, HS-GC-IMS fingerprinting may be considered an alternative approach for a truly fully automatable, cost-efficient, and in particular highly sensitive method. It was demonstrated that all tested honey samples could be distinguished on the basis of their botanical origins. Loading plots revealed the volatile compounds responsible for the differences among the monofloral honeys. The HS-GC-IMS-based PCA-LDA model was composed of two linear functions of discrimination and 10 selected PCs that discriminated canola, acacia, and honeydew honeys with a predictive accuracy of 98.6%. Application of the LDA model to an external test set of 10 authentic honeys clearly proved the high predictive ability of the model by correctly classifying them into three variety groups with 100% correct classifications. The constructed model presents a simple and efficient method of analysis and may serve as a basis for the authentication of other food types.
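The chemometric pipeline described (dimensionality reduction followed by classification) can be sketched generically. The synthetic two-class data below stand in for the honey IMS profiles, and the PCA-plus-kNN combination is a simplified stand-in for the paper's PCA-LDA model.

```python
import numpy as np

def pca_fit(X, n_pc):
    """Fit PCA via SVD; return the data mean and the first n_pc components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_pc]

def pca_transform(X, mu, components):
    """Project (possibly new) samples into the fitted PC space."""
    return (X - mu) @ components.T

def knn_predict(Ztr, ytr, z, k=3):
    """Majority vote among the k nearest training points in PC space."""
    d = np.linalg.norm(Ztr - z, axis=1)
    idx = np.argsort(d)[:k]
    vals, counts = np.unique(np.asarray(ytr)[idx], return_counts=True)
    return vals[np.argmax(counts)]
```

In a profiling workflow, each spectrum is flattened to a feature vector, the training set fixes the mean and loadings, and external samples are projected with the same transform before classification, mirroring how the external honey test set was scored.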
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The results showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
Role of core excitation in (d,p) transfer reactions
NASA Astrophysics Data System (ADS)
Deltuva, A.; Ross, A.; Norvaišas, E.; Nunes, F. M.
2016-10-01
Background: Recent work found that core excitations can be important in extracting structure information from (d,p) reactions. Purpose: Our objective is to systematically explore the role of core excitation in (d,p) reactions and to understand the origin of the dynamical effects. Method: Based on the particle-rotor model of n + 10Be, we generate a number of models with a range of separation energies (Sn = 0.1-5.0 MeV), while maintaining a significant core excited component. We then apply the latest extension of the momentum-space-based Faddeev method, including dynamical core excitation in the reaction mechanism to all orders, to the 10Be(d,p)11Be-like reactions, and study the excitation effects for beam energies Ed = 15-90 MeV. Results: We study the resulting angular distributions and the differences between the spectroscopic factor that would be extracted from the cross sections, when including dynamical core excitation in the reaction, and that of the original structure model. We also explore how different partial waves affect the final cross section. Conclusions: Our results show a strong beam-energy dependence of the extracted spectroscopic factors, which become smaller for intermediate beam energies. This dependence increases for loosely bound systems.
Ecosystem service provision: an operational way for marine biodiversity conservation and management.
Cognetti, Giuseppe; Maltagliati, Ferruccio
2010-11-01
Since no extensive conceptual framework has been developed on the issues of ecosystem service (ES) and service provider (SP) in the marine environment, we have made an attempt to apply these concepts to the conservation and management of marine biodiversity. Within this context, an accurate identification of SPs, namely the biological components of a given ecosystem that support human activities, is fundamental. SPs are the agents responsible for making the ES-based approach operational. The application of these concepts to the marine environment should be based on a model different from the terrestrial one. In the latter, the basic model envisages a matrix of a human-altered landscape with fragments of original biodiversity; conversely, in the marine environment the model provides fragments where human activities are carried out and the matrix is represented by the original biodiversity. We have identified three main classes of ES provision: in natural, disturbed, and human-controlled environments. Economic valuation of marine ESs is an essential condition for making conservation strategies financially sustainable, as it may stimulate the perceived need for investing in the protection and exploitation of marine resources. Copyright © 2010 Elsevier Ltd. All rights reserved.
Effects of waveform model systematics on the interpretation of GW150914
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.
2017-05-01
Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information
NASA Astrophysics Data System (ADS)
Ozbek, M. M.
2003-12-01
Fuzzy rule-based systems provide an efficient environment for modeling expert information in the context of risk management for groundwater contamination problems. In general, their use in the form of conditional pieces of knowledge has been either as a tool for synthesizing control laws from data (i.e., conjunction-based models) or in a knowledge representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that lead to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002) including: 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1) identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts by utilizing existing knowledge in an optimal fashion.
[Discussion on the botanical origin of Isatidis radix and Isatidis folium based on DNA barcoding].
Sun, Zhi-Ying; Pang, Xiao-Hui
2013-12-01
This paper aimed to investigate the botanical origins of Isatidis Radix and Isatidis Folium and to clarify the confusion in their classification. The second internal transcribed spacer (ITS2) of ribosomal DNA and the chloroplast matK gene of 22 samples from major production areas were amplified and sequenced. Sequence assembly and consensus sequence generation were performed using CodonCode Aligner. Phylogenetic analysis was performed using MEGA 4.0 software in accordance with the Kimura 2-parameter (K2P) model, and the phylogenetic tree was constructed using the neighbor-joining method. The results showed that the length of the ITS2 sequence of the botanical origins of Isatidis Radix and Isatidis Folium was 191 bp. Some samples had several SNP sites, and some had heterozygous sites. In the NJ tree based on the ITS2 sequence, the studied samples separated into two groups, one of which clustered with Isatis tinctoria L. The studied samples also divided clearly into two groups based on the chloroplast matK gene. In conclusion, our results support that the botanical origin of Isatidis Radix and Isatidis Folium is Isatis indigotica Fortune, and that Isatis indigotica and Isatis tinctoria are two distinct species. This study does not support the proposed combination of these two species in Flora of China.
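The Kimura 2-parameter distance underlying the NJ tree has a simple closed form, d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q), where P and Q are the observed proportions of transitions and transversions. A minimal sketch (the short sequences in the test are invented, not the ITS2 data):

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(s1, s2):
    """Kimura 2-parameter distance between two aligned, gap-free DNA
    sequences of equal length. Transitions are A<->G and C<->T; all other
    mismatches are transversions."""
    assert len(s1) == len(s2)
    n = len(s1)
    ts = tv = 0
    for a, b in zip(s1, s2):
        if a == b:
            continue
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            ts += 1
        else:
            tv += 1
    P, Q = ts / n, tv / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)
```

A neighbor-joining program then builds the tree from the pairwise matrix of such distances, which is the computation MEGA performs under the K2P model.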
SEIR Model of Rumor Spreading in Online Social Network with Varying Total Population Size
NASA Astrophysics Data System (ADS)
Dong, Suyalatu; Deng, Yan-Bin; Huang, Yong-Chang
2017-10-01
Based on an infectious disease model with disease latency, this paper proposes a new model for the rumor-spreading process in online social networks. We establish an SEIR rumor-spreading model that describes an online social network with a varying total number of users and a user deactivation rate. We calculate the exact equilibrium points and the reproduction number for this model. Furthermore, we simulate the rumor-spreading process in an online social network with increasing population size based on the original real-world Facebook network. The simulation results indicate that the SEIR model of rumor spreading in an online social network with a changing total number of users can accurately reveal the inherent characteristics of the rumor-spreading process. Supported by National Natural Science Foundation of China under Grant Nos. 11275017 and 11173028
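A minimal Euler-integration sketch of an SEIR system with a user influx lam and a deactivation rate mu shows how the total population varies over time. The equations and parameter values are generic SEIR assumptions for illustration, not the paper's exact formulation.

```python
def seir_step(state, dt, beta, sigma, gamma, lam, mu):
    """One Euler step of an SEIR-type rumor model with varying population.

    state : (S, E, I, R) = (susceptible, exposed, spreading, stifling) users
    lam   : influx of new users per unit time
    mu    : per-capita user deactivation rate
    """
    S, E, I, R = state
    N = S + E + I + R
    new_exposed = beta * S * I / N   # susceptibles who hear the rumor
    activated = sigma * E            # exposed users who start spreading
    stifled = gamma * I              # spreaders who stop spreading
    dS = lam - new_exposed - mu * S
    dE = new_exposed - activated - mu * E
    dI = activated - stifled - mu * I
    dR = stifled - mu * R
    return (S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt)
```

With influx balancing deactivation (lam = mu * N), the total population is conserved; setting lam above or below that balance reproduces the growing or shrinking networks the paper studies.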
Coon, William F.
2011-01-01
Simulation of streamflows in small subbasins was improved by adjusting model parameter values to match base flows, storm peaks, and storm recessions more precisely than had been done with the original model. Simulated recessional and low flows were either increased or decreased as appropriate for a given stream, and simulated peak flows generally were lowered in the revised model. The use of suspended-sediment concentrations rather than concentrations of the surrogate constituent, total suspended solids, resulted in increases in the simulated low-flow sediment concentrations and, in most cases, decreases in the simulated peak-flow sediment concentrations. Simulated orthophosphate concentrations in base flows generally increased but decreased for peak flows in selected headwater subbasins in the revised model. Compared with the original model, phosphorus concentrations simulated by the revised model were comparable in forested subbasins, generally decreased in developed and wetland-dominated subbasins, and increased in agricultural subbasins. A final revision to the model was made by the addition of the simulation of chloride (salt) concentrations in the Onondaga Creek Basin to help water-resource managers better understand the relative contributions of salt from multiple sources in this particular tributary. The calibrated revised model was used to (1) compute loading rates for the various land types that were simulated in the model, (2) conduct a watershed-management analysis that estimated the portion of the total load that was likely to be transported to Onondaga Lake from each of the modeled subbasins, (3) compute and assess chloride loads to Onondaga Lake from the Onondaga Creek Basin, and (4) simulate precolonization (forested) conditions in the basin to estimate the probable minimum phosphorus loads to the lake.
Adaptive surrogate model based multiobjective optimization for coastal aquifer management
NASA Astrophysics Data System (ADS)
Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin
2018-06-01
In this study, a novel surrogate-model-assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies in large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model, based on a Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, reducing a huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to large-scale coastal aquifer management in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains a quality of Pareto-optimal solutions equivalent to that of NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
Quantum Gravity and Cosmology: an intimate interplay
NASA Astrophysics Data System (ADS)
Sakellariadou, Mairi
2017-08-01
I will briefly discuss three cosmological models built upon three distinct quantum gravity proposals. I will first highlight the cosmological role of a vector field in the framework of a string/brane cosmological model. I will then present the resolution of the big bang singularity and the occurrence of an early era of accelerated expansion of geometric origin, in the framework of group field theory condensate cosmology. Finally, I will summarise results from an extended gravitational model based on non-commutative spectral geometry, a model that offers a purely geometric explanation for the standard model of particle physics.
Cognitive Control Predicts Use of Model-Based Reinforcement-Learning
Otto, A. Ross; Skatova, Anya; Madlon-Kay, Seth; Daw, Nathaniel D.
2015-01-01
Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or, more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system may dominate over the other in the control of behavior are a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in the utilization of goal-related contextual information—in the service of overcoming habitual, stimulus-driven responses—in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. The behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior. PMID:25170791
Adaptive correlation filter-based video stabilization without accumulative global motion estimation
NASA Astrophysics Data System (ADS)
Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil
2014-12-01
We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this stabilization model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel implementation, we implemented it with ease on a commercial field-programmable gate array and on a graphics processing unit board with the compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.
An option space for early neural evolution.
Jékely, Gáspár; Keijzer, Fred; Godfrey-Smith, Peter
2015-12-19
The origin of nervous systems has traditionally been discussed within two conceptual frameworks. Input-output models stress the sensory-motor aspects of nervous systems, while internal coordination models emphasize the role of nervous systems in coordinating multicellular activity, especially muscle-based motility. Here we consider both frameworks and apply them to describe aspects of each of three main groups of phenomena that nervous systems control: behaviour, physiology and development. We argue that both frameworks and all three aspects of nervous system function need to be considered for a comprehensive discussion of nervous system origins. This broad mapping of the option space enables an overview of the many influences and constraints that may have played a role in the evolution of the first nervous systems. © 2015 The Author(s).
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
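As an illustration of the Krylov idea, the sketch below runs Arnoldi on A = P^T to build an orthonormal basis of span(v, Av, ..., A^(m-1)v), then takes the Ritz vector whose Ritz value is closest to 1 as the approximate stationary distribution. The 3-state chain is an invented example, and this is a didactic sketch rather than the paper's algorithm.

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of the Krylov subspace span(v, Av, ..., A^(m-1)v)
    and the associated Hessenberg matrix H = V^T A V."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def stationary_krylov(P, m=10):
    """Approximate the stationary distribution of a row-stochastic matrix P:
    the Ritz vector of P^T whose Ritz value is closest to 1."""
    n = P.shape[0]
    V, H = arnoldi(P.T, np.ones(n) / n, m)
    vals, vecs = np.linalg.eig(H)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.abs(np.real(V @ vecs[:, k]))  # Perron vector of an irreducible chain has constant sign
    return pi / pi.sum()

# Invented 3-state chain: pi @ P == pi should hold at the solution
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
pi = stationary_krylov(P, m=3)
```

For m much smaller than N this avoids ever forming an N-dimensional eigenproblem, which is the point of the projection framework.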
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of failure times in a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are discussed.
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kassem Jebai, Al; Malrait, François; Martin, Philippe; Rouchon, Pierre
2016-03-01
Sensorless control of permanent-magnet synchronous motors at low velocity remains a challenging task. A now well-established method consists of injecting a high-frequency signal and using the rotor saliency, both geometric and magnetic-saturation induced. This paper proposes a clear and original analysis based on second-order averaging of how to recover the position information from signal injection; this analysis blends well with a general model of magnetic saturation. It also proposes a simple parametric model of the saturated motor, based on an energy function which simply encompasses saturation and cross-saturation effects. Experimental results on a surface-mounted motor and an interior magnet motor illustrate the relevance of the approach.
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved using modal analysis and, depending on the model's physical characteristics, a different skeleton is produced each time, generating in this way a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches without the need for any skeleton pruning method.
Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan
2018-04-01
We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
Agent based modeling in tactical wargaming
NASA Astrophysics Data System (ADS)
James, Alex; Hanratty, Timothy P.; Tuttle, Daniel C.; Coles, John B.
2016-05-01
Army staffs at division, brigade, and battalion levels often plan for contingency operations. As such, analysts consider the impact and potential consequences of actions taken. The Army Military Decision-Making Process (MDMP) dictates identification and evaluation of possible enemy courses of action; however, non-state actors often do not exhibit the same level and consistency of planned actions that the MDMP was originally designed to anticipate. The fourth MDMP step is a particular challenge, wargaming courses of action within the context of complex social-cultural behaviors. Agent-based Modeling (ABM) and its resulting emergent behavior is a potential solution to model terrain in terms of the human domain and improve the results and rigor of the traditional wargaming process.
Initially Researches for the Development of SSME under the Background of IOT
NASA Astrophysics Data System (ADS)
Han, Kun; Liu, Shurong; Zhang, Dacheng; Han, Ying
The Internet of Things (IoT) was proposed in the 1990s. Its original intention was that people-to-things and things-to-things communication could deliver information as readily as person-to-person communication. IoT broke with traditional human thinking. This paper investigates the practical functions of IoT in order to expand the theory of Services Sciences, Management and Engineering (SSME). Based on an analysis of the key technologies and models of IoT, of an event-driven SSME model based on IoT, and of an IoT framework based on SSME, it further studies the importance of IoT in the field of SSME.
Thickness optimization of auricular silicone scaffold based on finite element analysis.
Jiang, Tao; Shang, Jianzhong; Tang, Li; Wang, Zhuo
2016-01-01
An optimized thickness of a transplantable auricular silicone scaffold was researched. The original image data were acquired from CT scans, and reverse modeling technology was used to build a digital 3D model of an auricle. The transplant process was simulated in ANSYS Workbench by finite element analysis (FEA), solid scaffolds were manufactured based on the FEA results, and the transplantable artificial auricle was finally obtained with an optimized thickness, as well as sufficient intensity and hardness. This paper provides a reference for clinical transplant surgery. Copyright © 2015 Elsevier Ltd. All rights reserved.
A noise model for the evaluation of defect states in solar cells
Landi, G.; Barone, C.; Mauro, C.; Neitzert, H. C.; Pagano, S.
2016-01-01
A theoretical model, combining trapping/detrapping and recombination mechanisms, is formulated to explain the origin of random current fluctuations in silicon-based solar cells. In this framework, the comparison between dark and photo-induced noise allows the determination of important electronic parameters of the defect states. A detailed analysis of the electric noise, at different temperatures and for different illumination levels, is reported for crystalline silicon-based solar cells, in the pristine form and after artificial degradation with high energy protons. The evolution of the dominating defect properties is studied through noise spectroscopy. PMID:27412097
DOT National Transportation Integrated Search
2001-06-30
Freight movements within large metropolitan areas are much less studied and analyzed than personal travel. This casts doubt on the results of much conventional travel demand modeling and planning. With so much traffic overlooked, how plausible are th...
The Relevance and Efficacy of Metacognition for Instructional Design in the Domain of Mathematics
ERIC Educational Resources Information Center
Baten, Elke; Praet, Magda; Desoete, Annemie
2017-01-01
The efficacy of metacognition as theory-based instructional principle or technique in general, and particularly in mathematics, is explored. Starting with an overview of different definitions, conceptualizations, assessment and training models originating from cognitive information processing theory, the role of metacognition in teaching and…
TRANSPORT AND SURVIVAL OF VIRUSES IN THE SUBSURFACE PROCESSES, EXPERIMENTS, AND SIMULATION MODELS
The remediation of ground water contaminated with waterborne pathogens, in particular with viruses, is based on their probable or actual ability to be transported from the source of origin to a point of withdrawal while maintaining the capacity to cause infections. The transport ...
Multicultural Choral Music Pedagogy Based on the Facets Model
ERIC Educational Resources Information Center
Yoo, Hyesoo
2017-01-01
Multicultural choral music has distinct characteristics in that indigenous folk elements are frequently incorporated into a Western European tonal system. Because of this, multicultural choral music is often taught using Western styles (e.g., "bel canto") rather than through traditional singing techniques from their cultures of origin.…
Preparing Current and Future Practitioners to Integrate Research in Real Practice Settings
ERIC Educational Resources Information Center
Thyer, Bruce A.
2015-01-01
Past efforts aimed at promoting a better integration between research and practice are reviewed. These include the empirical clinical practice movement (ECP), originating within social work; the empirically supported treatment (EST) initiative of clinical psychology; and the evidence-based practice (EBP) model developed within medicine. The…
ERIC Educational Resources Information Center
Thomasgard, Michael; Warfield, Janeece
2005-01-01
Thomasgard, a physician, and Warfield, a psychologist, describe the multidisciplinary Collaborative Peer Supervision Group Project, originally developed and implemented in Columbus, Ohio. Collaborative Peer Supervision Groups (CPSGs) foster the development of case-based, interdisciplinary, continuing education. CPSGs are designed to improve the…
ERIC Educational Resources Information Center
Elliott, Shannon Snyder
2007-01-01
The purpose of this study is to first develop an 8-week college teaching module based on root competition literature. The split-root technique is adapted for the teaching laboratory, and the Sugar Ann English pea (Pisum sativum var. Sugar Ann English) is selected as the species of interest prior to designing experiments, either original or…
Cells of Origin of Epithelial Ovarian Cancers
2015-09-01
Lineage tracing and clonal analysis of oral cancer initiating cells. The goal of this project is to study cancer stem cells/cancer-initiating cells in oral squamous cell carcinomas by a novel pathway-based lineage tracing approach in a murine model. Specific aim: determine whether oral cancer cells genetically marked based on their activity of stem cell-related pathways exhibit cancer stem cell properties in vivo.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
RSAT: regulatory sequence analysis tools.
Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques
2008-07-01
The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundred researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.
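The word-based pattern-discovery idea behind oligo-analysis — score each k-mer by how far its observed count exceeds the expectation under a background model — can be sketched as follows. This simplified illustration uses a Bernoulli background and a normal approximation to the binomial; RSAT's actual implementation and statistics are more elaborate, and the sequences are invented.

```python
import math
from collections import Counter

def oligo_analysis(sequences, k=3):
    """Minimal word-based pattern discovery: z-score of each k-mer's
    observed count against a Bernoulli (independent-letter) background."""
    # Background letter frequencies estimated from the input itself
    letters = Counter("".join(sequences))
    total = sum(letters.values())
    p = {b: c / total for b, c in letters.items()}

    counts = Counter()
    positions = 0
    for s in sequences:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
        positions += max(len(s) - k + 1, 0)

    scores = {}
    for word, n in counts.items():
        pw = math.prod(p[b] for b in word)   # Bernoulli word probability
        exp = positions * pw                 # expected count over all positions
        # Normal approximation to the binomial count -> z-score
        scores[word] = (n - exp) / math.sqrt(exp * (1 - pw))
    return scores

# A motif "TGA" planted repeatedly in invented background sequence
seqs = ["AACTGACCTGAGTA", "TTGACGTGATGACC", "GGTGATTGAACGTG"]
scores = oligo_analysis(seqs, k=3)
top = max(scores, key=scores.get)
```

With a planted motif, the over-represented word surfaces with a z-score far above the background words.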
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. An accurate, robust, and rapid method to correctly predict cleavage sites in proteins is crucial to the search for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found, based on a jackknife test, from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase over the original dataset of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
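The CfsSubset criterion rewards features that correlate with the class but not with each other. The sketch below computes the standard CFS merit and uses a simple greedy forward search as a stand-in for the genetic algorithm search; the synthetic data and all names are illustrative, not from the article.

```python
import numpy as np

def merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k(k-1)*r_ff), balancing feature-class
    correlation (r_cf) against feature-feature redundancy (r_ff)."""
    k = len(subset)
    rcf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return rcf
    rff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                   for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

def cfs_forward(X, y, max_k=5):
    """Greedy forward search maximizing the CFS merit (stand-in for the GA)."""
    selected, remaining = [], list(range(X.shape[1]))
    best = -np.inf
    while remaining and len(selected) < max_k:
        s, f = max((merit(X, y, selected + [f]), f) for f in remaining)
        if s <= best:        # no candidate improves the merit -> stop
            break
        best = s
        selected.append(f)
        remaining.remove(f)
    return selected

# Synthetic data: only features 2 and 7 actually drive the target
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = X[:, 2] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=300)
chosen = cfs_forward(X, y)
```

The merit formula naturally stops the search once adding further (redundant or irrelevant) features no longer pays off.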
Stark, Renee G; John, Jürgen; Leidl, Reiner
2011-01-13
This study's aim was to develop a first quantification of the frequency and costs of adverse drug events (ADEs) originating in ambulatory medical practice in Germany. The frequencies and costs of ADEs were quantified for a base case, building on an existing cost-of-illness model for ADEs. The model originates from the U.S. health care system; its structure of treatment probabilities linked to ADEs was transferred to Germany. Sensitivity analyses based on values determined from a literature review were used to test the postulated results. For Germany, the base case postulated that about 2 million adults taking medications will have an ADE in 2007. Health care costs related to ADEs in this base case totalled 816 million Euros; mean costs per case were 381 Euros. About 58% of costs resulted from hospitalisations, 11% from emergency department visits and 21% from long-term care. Base case estimates of the frequency and costs of ADEs were lower than all estimates of the sensitivity analyses. The postulated frequency and costs of ADEs illustrate the possible size of the health problems and economic burden related to ADEs in Germany. The validity of the U.S. treatment structure used remains to be determined for Germany. The sensitivity analysis used assumptions from different studies and thus further quantified the information gap in Germany regarding ADEs. This study found the costs of ADEs in the ambulatory setting in Germany to be significant. Due to data scarcity, the results are only a rough indication.
A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony
Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such active appearance models, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, although applying optimization raises difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy. PMID:25165748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Schimpe, Michael; von Kuepach, Markus Edler
For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures, as well as the increased cycling degradation at high state of charge, are calculated separately. For parameterization, a lifetime test study is conducted, including storage and cycle tests. Additionally, the model is validated with a dynamic current profile based on real-world operation in a stationary energy storage system, confirming its accuracy. The model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing.
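The structure of such a semi-empirical model — separately parameterized calendar and cycle aging terms superimposed on the cell capacity, with Arrhenius-type temperature dependence — can be sketched as follows. All coefficients here are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def calendar_loss(t_days, T_K, soc, k_ref=1.2e-3, Ea=40e3, T_ref=298.15, soc_sens=0.5):
    """Calendar capacity loss (fraction): Arrhenius rate in temperature,
    sqrt-of-time kinetics, mild state-of-charge sensitivity.
    All coefficients are illustrative placeholders."""
    k = k_ref * np.exp(-Ea / R * (1 / T_K - 1 / T_ref)) * (1 + soc_sens * (soc - 0.5))
    return k * np.sqrt(t_days)

def cycle_loss(fec, T_K, k_cyc=6e-5, T_opt=298.15, curv=1.5e-3):
    """Cycle loss per full equivalent cycle (FEC); degradation grows away
    from an optimal temperature, mimicking separate high-/low-T mechanisms."""
    return k_cyc * (1 + curv * (T_K - T_opt) ** 2) * fec

def capacity(t_days, fec, T_K, soc):
    """Remaining relative capacity under superimposed calendar and cycle aging."""
    return 1.0 - calendar_loss(t_days, T_K, soc) - cycle_loss(fec, T_K)

# One year at 25 C, average SOC 0.5, 300 full equivalent cycles
q = capacity(365.0, 300.0, 298.15, 0.5)
```

A fitted version would calibrate k_ref, Ea and the cycle coefficients against the storage and cycle test matrix described in the abstract.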
@TOME-2: a new pipeline for comparative modeling of protein-ligand complexes.
Pons, Jean-Luc; Labesse, Gilles
2009-07-01
@TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein-protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein-ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Dascalescu, Anca-Elena; Sabau, Adrian
2016-12-01
The paper presents an original analytical model of the hydrodynamic loads applied on the half-bridge of a circular settling tank. The calculus domain is defined using analytical geometry. The local dynamic pressure is computed from the radius from the center of the settling tank to the current area, which determines the relative velocity of the fluid, and from the depth at which the current area is located, together with the density of the fluid. The calculus of the local drag forces uses the discrete frontal cross-sectional areas of the submerged structure in contact with the fluid. In the last stage, the local drag forces are reduced to the appropriate points belonging to the main beam. This class of loads produces flexure of the main beam in a horizontal plane and additional twisting moments along the structure. Taking the hydrodynamic loads into account, the results of the theoretical models, i.e. the analytical model and the finite element model, may achieve increased accuracy.
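The reduction of the distributed drag loads described above can be sketched numerically: under rigid rotation the local fluid velocity at radius r is v = omega*r, each submerged segment sees a quadratic drag force F = 0.5*rho*Cd*A*v^2, and the segment forces are reduced to a torque about the tank center. The numbers below are invented for illustration, and the depth-dependent pressure contribution of the paper's model is omitted here.

```python
import math

def bridge_drag_loads(omega, rho, cd, segment_areas, radii):
    """Local drag forces on submerged segments of a rotating settling-tank
    half-bridge and their reduction to a torque about the tank center.
    Assumes rigid rotation (v = omega * r) and a quadratic drag law."""
    forces = [0.5 * rho * cd * A * (omega * r) ** 2
              for A, r in zip(segment_areas, radii)]
    torque = sum(F * r for F, r in zip(forces, radii))
    return forces, torque

# Illustrative numbers: 15 m half-bridge in 5 segments, one revolution per hour
omega = 2 * math.pi / 3600.0          # angular velocity, rad/s
radii = [1.5, 4.5, 7.5, 10.5, 13.5]   # segment midpoint radii, m
areas = [0.4] * 5                      # frontal submerged area per segment, m^2
forces, torque = bridge_drag_loads(omega, 1000.0, 1.2, areas, radii)
```

Because v grows linearly with r, the outermost segments dominate both the bending and the twisting of the main beam, which is why the reduction to the beam matters.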
NASA Astrophysics Data System (ADS)
Pan, Yue; Cai, Yimao; Liu, Yefan; Fang, Yichen; Yu, Muxi; Tan, Shenghu; Huang, Ru
2016-04-01
TaOx-based resistive random access memory (RRAM) attracts considerable attention for the development of next-generation nonvolatile memories. However, read current noise in RRAM is one of the critical concerns for storage applications, and its microscopic origin is still under debate. In this work, the read current noise in TaOx-based RRAM was studied thoroughly. Based on a noise power spectral density analysis at room temperature and at the ultra-low temperature of 25 K, discrete random telegraph noise (RTN) and continuous average current fluctuation (ACF) are identified and decoupled from the total read current noise in TaOx RRAM devices. A statistical comparison of noise amplitude further reveals that ACF depends strongly on temperature, whereas RTN is independent of temperature. Measurement results combined with conduction mechanism analysis show that RTN in TaOx RRAM devices arises from the electron trapping/detrapping process in the hopping conduction, and ACF originates from the thermal activation of conduction centers that form the percolation network. Finally, a unified model in the framework of hopping conduction is proposed to explain the underlying mechanism of both RTN and ACF noise, which can provide meaningful guidelines for designing noise-immune RRAM devices.
Further evidence for a parent-of-origin effect at the NOP9 locus on language-related phenotypes.
Pettigrew, Kerry A; Frinton, Emily; Nudel, Ron; Chan, May T M; Thompson, Paul; Hayiou-Thomas, Marianna E; Talcott, Joel B; Stein, John; Monaco, Anthony P; Hulme, Charles; Snowling, Margaret J; Newbury, Dianne F; Paracchini, Silvia
2016-01-01
Specific language impairment (SLI) is a common neurodevelopmental disorder, observed in 5-10% of children. Family and twin studies suggest a strong genetic component, but relatively few candidate genes have been reported to date. A recent genome-wide association study (GWAS) described the first statistically significant association specifically for an SLI cohort, between a missense variant (rs4280164) in the NOP9 gene and language-related phenotypes under a parent-of-origin model. Replications of these findings are particularly challenging because parental DNA must be available. We used two independent family-based cohorts characterised with reading- and language-related traits: a longitudinal cohort (n = 106 informative families) including children with language and reading difficulties and a nuclear family cohort (n = 264 families) selected for dyslexia. We observed association with language-related measures when modelling for parent-of-origin effects at the NOP9 locus in both cohorts: minimum P = 0.001 for phonological awareness with a paternal effect in the first cohort and minimum P = 0.0004 for irregular word reading with a maternal effect in the second cohort. Allelic and parental trends were not consistent when compared to the original study. A parent-of-origin effect at this locus was detected in both cohorts, albeit with different trends. These findings contribute to the interpretation of the original GWAS report and support further investigation of the NOP9 locus and its role in language-related traits. A systematic evaluation of parent-of-origin effects in genetic association studies has the potential to reveal novel mechanisms underlying complex traits.
Evaluating a Control System Architecture Based on a Formally Derived AOCS Model
NASA Astrophysics Data System (ADS)
Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas
2010-08-01
Attitude & Orbit Control System (AOCS) refers to a class of control systems used to determine and control the attitude of a spacecraft in orbit, based on information obtained from various sensors. In this paper, we propose an approach to evaluating a typical (yet somewhat simplified) AOCS architecture through formal development based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of the original source-code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and for code generation from the derived models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, T
2004-09-03
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV), and the abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺, and gamma). It is expressed in analytic functions in which modulations due to solar activity and the Earth's geomagnetism are parameterized. Although the model is intended primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysical missions. The model has been validated via comparison with data from the GLAST Balloon Experiment.
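The abstract describes analytic flux functions with parameterized solar modulation and geomagnetic effects. As a minimal, hedged sketch of how such a parametrization is typically structured (a power-law interstellar spectrum, force-field solar modulation, and a smooth geomagnetic cutoff), not the published GLAST model itself, all constants below are illustrative:

```python
import math

def primary_proton_flux(E_kin, phi=0.5, R_cut=4.5):
    """Illustrative analytic cosmic-ray proton flux (arbitrary units).

    E_kin -- kinetic energy in GeV
    phi   -- force-field solar-modulation potential, GV (assumed form)
    R_cut -- geomagnetic cutoff rigidity, GV (assumed form)
    """
    m_p = 0.938  # proton rest mass, GeV
    # Force-field approximation: shift kinetic energy by the potential.
    E_lis = E_kin + phi
    # Local interstellar spectrum: simple power law (illustrative values).
    A, gamma = 23.9, 2.83
    flux_lis = A * E_lis ** (-gamma)
    # Liouville factor from the force-field approximation.
    liouville = (E_kin * (E_kin + 2 * m_p)) / (E_lis * (E_lis + 2 * m_p))
    # Rigidity of a proton with kinetic energy E_kin.
    R = math.sqrt(E_kin * (E_kin + 2 * m_p))
    # Smooth geomagnetic cutoff: suppress the flux below R_cut.
    cutoff = 1.0 / (1.0 + (R / R_cut) ** (-6.0))
    return flux_lis * liouville * cutoff
```

The modulation and cutoff terms show how solar activity (`phi`) and geomagnetic position (`R_cut`) enter as free parameters of a single closed-form expression, which is the structure the abstract refers to.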
When Does Model-Based Control Pay Off?
2016-01-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap because action values can be read from a look-up table constructed through trial and error, but they are sometimes inaccurate. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, does not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094
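The model-free/model-based distinction the abstract builds on can be made concrete with a toy one-step probabilistic task (the environment, names, and parameters below are illustrative, not the paper's two-step paradigm): the model-free controller caches values learned by trial and error, while the model-based controller computes the same values by planning in a known transition model.

```python
import random

# Toy environment: each action leads to a "common" state most of the
# time and a "rare" state otherwise; only state A is rewarded.
TRANSITIONS = {0: ('A', 'B'), 1: ('B', 'A')}  # (common, rare) outcomes
REWARD = {'A': 1.0, 'B': 0.0}
COMMON_PROB = 0.7

def step(action):
    common, rare = TRANSITIONS[action]
    return common if random.random() < COMMON_PROB else rare

def model_free_values(episodes=4000):
    """Cache action values by trial and error (incremental sample mean)."""
    Q = {0: 0.0, 1: 0.0}
    n = {0: 0, 1: 0}
    for _ in range(episodes):
        a = random.choice([0, 1])
        r = REWARD[step(a)]
        n[a] += 1
        Q[a] += (r - Q[a]) / n[a]
    return Q

def model_based_values():
    """Compute action values by planning in the known causal model."""
    return {a: COMMON_PROB * REWARD[common] + (1 - COMMON_PROB) * REWARD[rare]
            for a, (common, rare) in TRANSITIONS.items()}
```

The model-based values are exact expectations available immediately from the model; the model-free estimates converge to the same numbers only after many trials. That gap between cheap cached estimates and demanding but accurate planning is the accuracy-demand trade-off the paper analyses.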
Do theoretical calculations really predict nodes in Fe-based superconductors?
NASA Astrophysics Data System (ADS)
Mazin, Igor
2011-03-01
It is well established that calculations based on the LDA band structure and the Hubbard model, with the parameters U ~ 1.3 - 1.6 eV and J ~ 0.2 - 0.3 eV (a "UJ" model), yield strongly anisotropic, and sometimes nodal, gaps. The physical origin of this effect is well understood: the two leading terms in the model are U Σ_i n_{i↑} n_{i↓} and U' Σ_{i≠j} n_i n_j. The former ensures that the coupling to spin fluctuations proceeds only through like orbitals, and the latter, not being renormalized by the standard Tolmachev-Morel-Anderson logarithm, tends to equalize the positive and the negative order parameters. Both of these features are suspect on general physical grounds: the leading magnetic interaction in itinerant systems is the Hund's-rule coupling, which couples every orbital with all the others, and the pnictides, with order parameters smaller than 20 meV, should have nearly as strong a renormalization of the Coulomb pseudopotential as conventional superconductors. I will argue that, instead of the UJ model, in pnictides one should use the "I" model, derived from density functional theory (which is supposed to describe the static susceptibility at the mean-field level very accurately). The "I" here is simply the Stoner factor, the second variation of the LSDA magnetic energy. Unfortunately, this approach is very unlikely to produce gap nodes as easily as the UJ model, indicating that one has to look elsewhere for the origin of the nodes.
Hybrid Modeling Improves Health and Performance Monitoring
NASA Technical Reports Server (NTRS)
2007-01-01
Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared with what the design models predicted, so that deterioration or impending failure could be detected before it affected the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program studying propulsion-controlled aircraft under missile-damage conditions.
Fabrication of Custom-Shaped Grafts for Cartilage Regeneration
Koo, Seungbum; Hargreaves, Brian A.; Gold, Garry E.; Dragoo, Jason L.
2011-01-01
Transplantation of engineered cartilage grafts is a promising method to treat diseased articular cartilage. The interfacial areas between the graft and the native tissues play an important role in the successful integration of the graft to adjacent native tissues. The purposes of the study were to create a custom shaped graft through 3D tissue shape reconstruction and rapid-prototype molding methods using MRI data, and to test the accuracy of the custom shaped graft against the original anatomical defect. An iatrogenic defect on the distal femur was identified with a 1.5 Tesla MRI and its shape was reconstructed into a three-dimensional (3D) computer model by processing the 3D MRI data. First, the accuracy of the MRI-derived 3D model was tested against a laser-scan based 3D model of the defect. A custom-shaped polyurethane graft was fabricated from the laser-scan based 3D model by creating custom molds through computer aided design and rapid-prototyping methods. The polyurethane tissue was laser-scanned again to calculate the accuracy of this process compared to the original defect. The volumes of the defect models from MRI and laser-scan were 537 mm3 and 405 mm3, respectively, implying that the MRI model was 33% larger than the laser-scan model. The average (±SD) distance deviation of the exterior surface of the MRI model from the laser-scan model was 0.4±0.4 mm. The custom-shaped tissue created from the molds was qualitatively very similar to the original shape of the defect. The volume of the custom-shaped cartilage tissue was 463 mm3 which was 15% larger than the laser-scan model. The average (±SD) distance deviation between the two models was 0.04±0.19 mm. Custom-shaped engineered grafts can be fabricated from standard sequence 3-D MRI data with the use of CAD and rapid-prototyping technology, which may help solve the interfacial problem between native cartilage and graft, if the grafts are custom made for the specific defect. 
The major source of error in fabricating a 3D custom-shaped cartilage graft appears to be the accuracy of the MRI data itself; however, the precision of the model is expected to increase with the use of advanced MR sequences at higher magnet strengths. PMID:21058268
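The volume discrepancies reported above follow directly from the stated volumes; a quick check of the arithmetic (values taken from the abstract, which rounds the graft deviation to 15%):

```python
def percent_larger(volume, reference):
    """Relative volume difference, in percent of the reference volume."""
    return 100.0 * (volume - reference) / reference

# Volumes from the abstract, in mm^3.
mri_model, laser_model, fabricated_graft = 537.0, 405.0, 463.0

mri_vs_laser = percent_larger(mri_model, laser_model)        # ~33%
graft_vs_laser = percent_larger(fabricated_graft, laser_model)  # ~14-15%
```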
Water Absorption Behavior of Hemp Hurds Composites
Stevulova, Nadezda; Cigasova, Julia; Purcz, Pavol; Schwarzova, Ivana; Kacik, Frantisek; Geffert, Anton
2015-01-01
In this paper, the water sorption behavior of composites based on hemp hurds and an inorganic binder, hardened for 28 days, was studied. Two kinds of absorption tests were performed on dried cube specimens in a deionized water bath at laboratory temperature. Short-term (after one hour of water immersion) and long-term (up to 180 days) water absorption tests were carried out to study the composites' durability. The short-term water sorption behavior of the original hemp hurds composites depends on the mean particle length of the hemp and on the nature of the binder. A comparative study of the long-term water sorption behavior of composites reinforced with original and chemically modified hemp hurds (treated with three reagents) confirmed that surface treatment of the filler influences the sorption process. Based on evaluation of the sorption curves using a model for natural-fiber composites, the diffusion of water molecules in composites reinforced with original and chemically modified hemp hurds is anomalous, i.e., deviates from Fickian behavior. The most significant decrease in hydrophilicity was found for hemp hurds modified with NaOH, and it relates to changes in the chemical composition of the hemp hurds, especially a decrease in the average degree of cellulose polymerization as well as in hemicellulose content.
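The Fickian-versus-anomalous classification above is commonly made by fitting the initial sorption stage to a power law, M_t/M_∞ = k·t^n, where n ≈ 0.5 indicates Fickian diffusion and significant departures indicate anomalous transport. A minimal sketch of that exponent estimate (this is an illustrative stdlib-only fit, not the authors' procedure, and the data below are synthetic):

```python
import math

def diffusion_exponent(times, uptakes):
    """Estimate n in M_t/M_inf = k * t**n by least squares on log-log data.

    `times` and `uptakes` must be positive and cover the initial
    sorption stage, where the power law applies.
    """
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in uptakes]
    n_pts = len(xs)
    mean_x = sum(xs) / n_pts
    mean_y = sum(ys) / n_pts
    # Slope of the log-log regression line is the exponent n.
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))

# Synthetic example: uptake data generated with n = 0.5 (ideal Fickian).
times = [1, 4, 9, 16, 25]          # hours
uptakes = [0.1 * t ** 0.5 for t in times]
```

An estimated exponent well away from 0.5 on real sorption curves is the signature of the anomalous (non-Fickian) behavior reported for the hemp hurds composites.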