Construct the stable vendor managed inventory partnership through a profit-sharing approach
NASA Astrophysics Data System (ADS)
Li, S.; Yu, Z.; Dong, M.
2015-01-01
In real life, the vendor managed inventory (VMI) model is not always a stable supply chain partnership. This paper proposes a cooperative game based profit-sharing method to stabilize the VMI partnership. Specifically, in a B2C setting, we consider a VMI program including a manufacturer and multiple online retailers. The manufacturer provides the finished product at the equal wholesale price to multiple online retailers. The online retailers face the same customer demand information. We offer the model to compute the increased profits generated by information sharing for total possible VMI coalitions. Using the solution concept of Shapley value, the profit-sharing scheme is produced to fairly divide the total increased profits among the VMI members. We find that under a fair allocation scheme, the higher inventory cost of one VMI member increases the surplus of the other members. Furthermore, the manufacturer is glad to increase the size of VMI coalition, whereas, the retailers are delighted to limit the size of the alliance. Finally, the manufacturer can select the appropriate retailer to boost its surplus, which has no effect on the surplus of the other retailers. The numerical examples indicate that the grand coalition is stable under the proposed allocation scheme.
Leithner, Doris; Mahmoudi, Scherwin; Wichmann, Julian L; Martin, Simon S; Lenga, Lukas; Albrecht, Moritz H; Booz, Christian; Arendt, Christophe T; Beeres, Martin; D'Angelo, Tommaso; Bodelle, Boris; Vogl, Thomas J; Scholtz, Jan-Erik
2018-02-01
To investigate the impact of traditional (VMI) and noise-optimized virtual monoenergetic imaging (VMI+) algorithms on quantitative and qualitative image quality, and the assessment of stenosis in carotid and intracranial dual-energy CTA (DE-CTA). DE-CTA studies of 40 patients performed on a third-generation 192-slice dual-source CT scanner were included in this retrospective study. 120-kVp image-equivalent linearly-blended, VMI and VMI+ series were reconstructed. Quantitative analysis included evaluation of contrast-to-noise ratios (CNR) of the aorta, common carotid artery, internal carotid artery, middle cerebral artery, and basilar artery. VMI and VMI+ with highest CNR, and linearly-blended series were rated qualitatively. Three radiologists assessed artefacts and suitability for evaluation at shoulder height, carotid bifurcation, siphon, and intracranial using 5-point Likert scales. Detection and grading of stenosis were performed at carotid bifurcation and siphon. Highest CNR values were observed for 40-keV VMI+ compared to 65-keV VMI and linearly-blended images (P < 0.001). Artefacts were low in all qualitatively assessed series with excellent suitability for supraaortic artery evaluation at shoulder and bifurcation height. Suitability was significantly higher in VMI+ and VMI compared to linearly-blended images for intracranial and ICA assessment (P < 0.002). VMI and VMI+ showed excellent accordance for detection and grading of stenosis at carotid bifurcation and siphon with no differences in diagnostic performance. 40-keV VMI+ showed improved quantitative image quality compared to 65-keV VMI and linearly-blended series in supraaortic DE-CTA. VMI and VMI+ provided increased suitability for carotid and intracranial artery evaluation with excellent assessment of stenosis, but did not translate into increased diagnostic performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Embattled All Male Admissions Policy at VMI: Will the Fort Fall?
ERIC Educational Resources Information Center
Stokes, Jerome W. D.; Groves, Allen W.
1990-01-01
In March 1989, the Justice Department began investigating the admissions policy of the Virginia Military Institute (VMI). Summarizes the legal theories advanced by both the VMI Foundation and Virginia's woman attorney general in defense of VMI's all-male tradition. Compares past single-sex admission cases with the VMI arguments. (MLF)
Clinical value of the VMI supplemental tests: a modified replication study.
Avi-Itzhak, Tamara; Obler, Doris Richard
2008-10-01
To carry out a modified replication of the study performed by Kulp and Sortor evaluating the clinical value of the information provided by Beery's visual-motor supplemental tests of Visual Perception (VP) and Motor Coordination (MC) in normally developed children. The objectives were to (a) estimate the correlations among the three tests scores; (b) assess the predictive power of the VP and MC scores in explaining the variance in Visual-Motor Integration (VMI) scores; and (c) examine whether poor performance on the VMI is related to poor performance on VP or MC. METHODS.: A convenience sample of 71 children ages 4 and 5 years (M = 4.62 +/- 0.43) participated in the study. The supplemental tests significantly (F = 9.59; dF = 2; p < or = 0. 001) explained 22% of the variance in VMI performance. Only VP was significantly related to VMI (beta = 0.39; T = 3.49) accounting for the total amount of explained variance. Using the study population norms, 11 children (16% of total sample) did poorly on the VMI; of those 11, 73% did poorly on the VP, and none did poorly on the MC. None of these 11 did poorly on both the VP and MC. Nine percent of total sample who did poorly on the VP performed within the norm on the VMI. Thirteen percent who performed poorly on the MC performed within the norm on the VMI. Using the VMI published norms, 14 children (20% of total sample) who did poorly on the VP performed within the norm on the VMI. Forty-eight percent who did poorly on MC performed within the norm on the VMI. Findings supported Kulp and Sortor's conclusions that each area should be individually evaluated during visual-perceptual assessment of children regardless of performance on the VMI.
Martin, Simon S; Albrecht, Moritz H; Wichmann, Julian L; Hüsers, Kristina; Scholtz, Jan-Erik; Booz, Christian; Bodelle, Boris; Bauer, Ralf W; Metzger, Sarah C; Vogl, Thomas J; Lehnert, Thomas
2017-02-01
To evaluate objective and subjective image quality of a noise-optimized virtual monoenergetic imaging (VMI+) reconstruction technique in dual-energy computed tomography (DECT) angiography prior to transcatheter aortic valve replacement (TAVR). Datasets of 47 patients (35 men; 64.1 ± 10.9 years) who underwent DECT angiography of heart and vascular access prior to TAVR were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monoenergetic (VMI) algorithms in 10-keV intervals from 40-100 keV. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of 564 arterial segments were evaluated. Subjective analysis was rated by three blinded observers using a Likert scale. Mean SNR and CNR were highest in 40 keV VMI+ series (SNR, 27.8 ± 13.0; CNR, 26.3 ± 12.7), significantly (all p < 0.001) superior to all VMI series, which showed highest values at 70 keV (SNR, 18.5 ± 7.6; CNR, 16.0 ± 7.4), as well as linearly-blended F_0.5 series (SNR, 16.8 ± 7.3; CNR, 13.6 ± 6.9). Highest subjective image quality scores were observed for 40, 50, and 60 keV VMI+ reconstructions (all p > 0.05), significantly superior to all VMI and standard linearly-blended images (all p < 0.01). Low-keV VMI+ reconstructions significantly increase CNR and SNR compared to VMI and standard linear-blending image reconstruction and improve subjective image quality in preprocedural DECT angiography in the context of TAVR planning. • VMI+ combines increased contrast with reduced image noise. • VMI+ shows substantially less image noise than traditional VMI. • 40-keV reconstructions show highest SNR/CNR of the aortic and iliofemoral access route. • Observers overall prefer 60 keV VMI+ images. • VMI+ DECT imaging helps improve image quality for TAVR planning.
Visual-Motor Integration in Children With Mild Intellectual Disability: A Meta-Analysis.
Memisevic, Haris; Djordjevic, Mirjana
2018-01-01
Visual-motor integration (VMI) skills, defined as the coordination of fine motor and visual perceptual abilities, are a very good indicator of a child's overall level of functioning. Research has clearly established that children with intellectual disability (ID) have deficits in VMI skills. This article presents a meta-analytic review of 10 research studies involving 652 children with mild ID for which a VMI skills assessment was also available. We measured the standardized mean difference (Hedges' g) between scores on VMI tests of these children with mild ID and either typically developing children's VMI test scores in these studies or normative mean values on VMI tests used by the studies. While mild ID is defined in part by intelligence scores that are two to three standard deviations below those of typically developing children, the standardized mean difference of VMI differences between typically developing children and children with mild ID in this meta-analysis was 1.75 (95% CI [1.11, 2.38]). Thus, the intellectual and adaptive skill deficits of children with mild ID may be greater (perhaps especially due to their abstract and conceptual reasoning deficits) than their relative VMI deficits. We discuss the possible meaning of this relative VMI strength among children with mild ID and suggest that their stronger VMI skills may be a target for intensive academic interventions as a means of attenuating problems in adaptive functioning.
Fang, Ying; Zhang, Ying
2017-01-01
Visual motor integration (VMI) is a vital ability in childhood development, which is associated with the performance of many functional skills. By using the Beery Developmental Test Package and Executive Function Tasks, the present study explored the VMI development and its factors (visual perception, motor coordination, and executive function) among 151 Chinese preschoolers from 4 to 6 years. Results indicated that the VMI skills of children increased quickly at 4 years and peaked at 5 years and decreased at around 5 to 6 years. Motor coordination and cognitive flexibility were related to the VMI development of children from 4 to 6 years. Visual perception was associated with the VMI development at early 4 years and inhibitory control was also associated with it among 4-year-old and the beginning of 5-year-old children. Working memory had no impact on the VMI. In conclusion, the development of VMI skills among children in preschool was not stable but changed dynamically in this study. Meanwhile the factors of the VMI worked in different age range for preschoolers. These findings may give some guidance to researchers or health professionals on improving children's VMI skills in their early childhood. PMID:29457030
Martin, Simon S; Wichmann, Julian L; Weyer, Hendrik; Albrecht, Moritz H; D'Angelo, Tommaso; Leithner, Doris; Lenga, Lukas; Booz, Christian; Scholtz, Jan-Erik; Bodelle, Boris; Vogl, Thomas J; Hammerstingl, Renate
2017-10-01
The aim of this study was to investigate the impact of noise-optimized virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with cutaneous malignant melanoma at thoracoabdominal dual-energy computed tomography (DECT). Seventy-six patients (48 men; 66.6±13.8years) with metastatic cutaneous malignant melanoma underwent DECT of the thorax and abdomen. Images were post-processed with standard linear blending (M_0.6), traditional virtual monoenergetic (VMI), and VMI+ technique. VMI and VMI+ images were reconstructed in 10-keV intervals from 40 to 100keV. Attenuation measurements were performed in cutaneous melanoma lesions, as well as in regional lymph node, subcutaneous and in-transit metastases to calculate objective signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. Five-point scales were used to evaluate overall image quality and lesion delineation by three radiologists with different levels of experience. Objective indices SNR and CNR were highest at 40-keV VMI+ series (5.6±2.6 and 12.4±3.4), significantly superior to all other reconstructions (all P<0.001). Qualitative image parameters showed highest values for 50-keV and 60-keV VMI+ reconstructions (median 5, respectively; P≤0.019) regarding overall image quality. Moreover, qualitative assessment of lesion delineation peaked in 40-keV VMI+ (median 5) and 50-keV VMI+ (median 4; P=0.055), significantly superior to all other reconstructions (all P<0.001). Low-keV noise-optimized VMI+ reconstructions substantially increase quantitative and qualitative image parameters, as well as subjective lesion delineation compared to standard image reconstruction and traditional VMI in patients with cutaneous malignant melanoma at thoracoabdominal DECT. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual-motor integration performance in children with severe specific language impairment.
Nicola, K; Watter, P
2016-09-01
This study investigated (1) the visual-motor integration (VMI) performance of children with severe specific language impairment (SLI), and any effect of age, gender, socio-economic status and concomitant speech impairment; and (2) the relationship between language and VMI performance. It is hypothesized that children with severe SLI would present with VMI problems irrespective of gender and socio-economic status; however, VMI deficits will be more pronounced in younger children and those with concomitant speech impairment. Furthermore, it is hypothesized that there will be a relationship between VMI and language performance, particularly in receptive scores. Children enrolled between 2000 and 2008 in a school dedicated to children with severe speech-language impairments were included, if they met the criteria for severe SLI with or without concomitant speech impairment which was verified by a government organization. Results from all initial standardized language and VMI assessments found during a retrospective review of chart files were included. The final study group included 100 children (males = 76), from 4 to 14 years of age with mean language scores at least 2SD below the mean. For VMI performance, 52% of the children scored below -1SD, with 25% of the total group scoring more than 1.5SD below the mean. Age, gender and the addition of a speech impairment did not impact on VMI performance; however, children living in disadvantaged suburbs scored significantly better than children residing in advantaged suburbs. Receptive language scores of the Clinical Evaluation of Language Fundamentals was the only score associated with and able to predict VMI performance. A small subgroup of children with severe SLI will also have poor VMI skills. The best predictor of poor VMI is receptive language scores on the Clinical Evaluation of Language Fundamentals. Children with poor receptive language performance may benefit from VMI assessment and multidisciplinary management. © 2016 John Wiley & Sons Ltd.
Lim, C Y; Tan, P C; Koh, C; Koh, E; Guo, H; Yusoff, N D; See, C Q; Tan, T
2015-03-01
Visual-motor integration (VMI) is important in children's development because it is associated with the performance of many functional skills. Deficits in VMI have been linked to difficulties in academic performance and functional tasks. Clinical assessment experience of occupational therapists in Singapore suggested that there is a potential difference between the VMI performance of Singaporean and American children. Cross-cultural studies also implied that culture has an influence on a child's VMI performance, as it shapes the activities that a child participates in. The purpose of this study was to (1) explore if there was a difference between the VMI performance of Singaporean and American preschoolers, and (2) determine if there were ethnic differences in the VMI performance of Singaporean preschoolers. The Beery-VMI, which was standardized in America, is commonly used by occupational therapists in Singapore to assess the VMI ability of children. We administered the Beery-VMI (fifth edition) full form test (excluding the supplemental tests) to 385 preschoolers (mean age = 63.3 months) from randomly selected schools in Singapore. We compared the scores of Singaporean preschoolers with those of the American standardization norms using the one-sample t-test. Scores of different ethnic groups among the Singapore population were also compared using a one-way anova, followed by the Bonferroni post-hoc test. Singaporean preschoolers and the standardization sample of American children performed significantly differently in all age groups (P < 0.05). Among the Singapore population, the scores were also significantly different (P < 0.05) between the (i) Chinese and Malay and (ii) Chinese and Indians ethnic groups. Preschoolers from different cultural and ethnic groups had different VMI performance. Certain cultural beliefs and practices may affect VMI performance. Clinicians should exercise caution when using an assessment in communities and cultures outside the ones on which it was standardized. © 2014 John Wiley & Sons Ltd.
Singh, Digar; Lee, Choong H
2018-01-01
Notwithstanding its mitosporic nature, an improbable morpho-transformation state i. e., sclerotial development (SD), is vaguely known in Aspergillus oryzae . Nevertheless an intriguing phenomenon governing mold's development and stress response, the effects of exogenous factors engendering SD, especially the volatile organic compounds (VOCs) mediated interactions (VMI) pervasive in microbial niches have largely remained unexplored. Herein, we examined the effects of intra-species VMI on SD in A. oryzae RIB 40, followed by comprehensive analyses of associated growth rates, pH alterations, biochemical phenotypes, and exometabolomes. We cultivated A. oryzae RIB 40 (S1 VMI : KACC 44967) opposite a non-SD partner strain, A. oryzae (S2: KCCM 60345), conditioning VMI in a specially designed "twin plate assembly." Notably, SD in S1 VMI was delayed relative to its non-conditioned control (S1) cultivated without partner strain (S2) in twin plate. Selectively evaluating A. oryzae RIB 40 (S1 VMI vs. S1) for altered phenotypes concomitant to SD, we observed a marked disparity for corresponding growth rates (S1 VMI < S1) 7days , media pH (S1 VMI > S1) 7days , and biochemical characteristics viz ., protease (S1 VMI > S1) 7days , amylase (S1 VMI > nS1) 3-7 days , and antioxidants (S1 VMI > S1) 7days levels. The partial least squares-discriminant analysis (PLS-DA) of gas chromatography-time of flight-mass spectrometry (GC-TOF-MS) datasets for primary metabolites exhibited a clustered pattern (PLS1, 22.04%; PLS2, 11.36%), with 7 days incubated S1 VMI extracts showed higher abundance of amino acids, sugars, and sugar alcohols with lower organic acids and fatty acids levels, relative to S1. Intriguingly, the higher amino acid and sugar alcohol levels were positively correlated with antioxidant activity, likely impeding SD in S1 VMI . Further, the PLS-DA (PLS1, 18.11%; PLS2, 15.02%) based on liquid chromatography-mass spectrometry (LC-MS) datasets exhibited a notable disparity for post-SD (9-11 days) sample extracts with higher oxylipins and 13-desoxypaxilline levels in S1 VMI relative to S1, intertwining Aspergillus morphogenesis and secondary metabolism. The analysis of VOCs for the 7 days incubated samples displayed considerably higher accumulation of C-8 compounds in the headspace of twin-plate experimental sets (S1 VMI :S2) compared to those in non-conditioned controls (S1 and S2-without respective partner strains), potentially triggering altered morpho-transformation and concurring biochemical as well as metabolic states in molds.
Singh, Digar; Lee, Choong H.
2018-01-01
Notwithstanding its mitosporic nature, an improbable morpho-transformation state i. e., sclerotial development (SD), is vaguely known in Aspergillus oryzae. Nevertheless an intriguing phenomenon governing mold's development and stress response, the effects of exogenous factors engendering SD, especially the volatile organic compounds (VOCs) mediated interactions (VMI) pervasive in microbial niches have largely remained unexplored. Herein, we examined the effects of intra-species VMI on SD in A. oryzae RIB 40, followed by comprehensive analyses of associated growth rates, pH alterations, biochemical phenotypes, and exometabolomes. We cultivated A. oryzae RIB 40 (S1VMI: KACC 44967) opposite a non-SD partner strain, A. oryzae (S2: KCCM 60345), conditioning VMI in a specially designed “twin plate assembly.” Notably, SD in S1VMI was delayed relative to its non-conditioned control (S1) cultivated without partner strain (S2) in twin plate. Selectively evaluating A. oryzae RIB 40 (S1VMI vs. S1) for altered phenotypes concomitant to SD, we observed a marked disparity for corresponding growth rates (S1VMI < S1)7days, media pH (S1VMI > S1)7days, and biochemical characteristics viz., protease (S1VMI > S1)7days, amylase (S1VMI > nS1)3–7days, and antioxidants (S1VMI > S1)7days levels. The partial least squares—discriminant analysis (PLS-DA) of gas chromatography—time of flight—mass spectrometry (GC-TOF-MS) datasets for primary metabolites exhibited a clustered pattern (PLS1, 22.04%; PLS2, 11.36%), with 7 days incubated S1VMI extracts showed higher abundance of amino acids, sugars, and sugar alcohols with lower organic acids and fatty acids levels, relative to S1. Intriguingly, the higher amino acid and sugar alcohol levels were positively correlated with antioxidant activity, likely impeding SD in S1VMI. Further, the PLS-DA (PLS1, 18.11%; PLS2, 15.02%) based on liquid chromatography—mass spectrometry (LC-MS) datasets exhibited a notable disparity for post-SD (9–11 days) sample extracts with higher oxylipins and 13-desoxypaxilline levels in S1VMI relative to S1, intertwining Aspergillus morphogenesis and secondary metabolism. The analysis of VOCs for the 7 days incubated samples displayed considerably higher accumulation of C-8 compounds in the headspace of twin-plate experimental sets (S1VMI:S2) compared to those in non-conditioned controls (S1 and S2—without respective partner strains), potentially triggering altered morpho-transformation and concurring biochemical as well as metabolic states in molds. PMID:29670599
Lenga, L; Czwikla, R; Wichmann, J L; Leithner, D; Albrecht, M H; D'Angelo, T; Arendt, C T; Booz, C; Hammerstingl, R; Vogl, T J; Martin, S S
2018-06-05
To investigate the impact of noise-optimised virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with malignant lymphoma at dual-energy computed tomography (DECT) examinations of the abdomen. Thirty-five consecutive patients (mean age, 53.8±18.6 years; range, 21-82 years) with histologically proven malignant lymphoma of the abdomen were included retrospectively. Images were post-processed with standard linear blending (M_0.6), traditional VMI, and VMI+ technique at energy levels ranging from 40 to 100 keV in 10 keV increments. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were objectively measured in lymphoma lesions. Image quality, lesion delineation, and image noise were rated subjectively by three blinded observers using five-point Likert scales. Quantitative image quality parameters peaked at 40-keV VMI+ (SNR, 15.77±7.74; CNR, 18.27±8.04) with significant differences compared to standard linearly blended M_0.6 (SNR, 7.96±3.26; CNR, 13.55±3.47) and all traditional VMI series (p<0.001). Qualitative image quality assessment revealed significantly superior ratings for image quality at 60-keV VMI+ (median, 5) in comparison with all other image series (p<0.001). Assessment of lesion delineation showed the highest rating scores for 40-keV VMI+ series (median, 5), while lowest subjective image noise was found for 100-keV VMI+ reconstructions (median, 5). Low-keV VMI+ reconstructions led to improved image quality and lesion delineation of malignant lymphoma lesions compared to standard image reconstruction and traditional VMI at abdominal DECT examinations. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Functional Role of Internal and External Visual Imagery: Preliminary Evidences from Pilates
Montuori, Simone; Sorrentino, Pierpaolo; Belloni, Lidia; Sorrentino, Giuseppe
2018-01-01
The present study investigates whether a functional difference between the visualization of a sequence of movements in the perspective of the first- (internal VMI-I) or third- (external VMI-E) person exists, which might be relevant to promote learning. By using a mental chronometry experimental paradigm, we have compared the time or execution, imagination in the VMI-I perspective, and imagination in the VMI-E perspective of two kinds of Pilates exercises. The analysis was carried out in individuals with different levels of competence (expert, novice, and no-practice individuals). Our results showed that in the Expert group, in the VMI-I perspective, the imagination time was similar to the execution time, while in the VMI-E perspective, the imagination time was significantly lower than the execution time. An opposite pattern was found in the Novice group, in which the time of imagination was similar to that of execution only in the VMI-E perspective, while in the VMI-I perspective, the time of imagination was significantly lower than the time of execution. In the control group, the times of both modalities of imagination were significantly lower than the execution time for each exercise. The present data suggest that, while the VMI-I serves to train an already internalised gesture, the VMI-E perspective could be useful to learn, and then improve, the recently acquired sequence of movements. Moreover, visual imagery is not useful for individuals that lack a specific motor experience. The present data offer new insights in the application of mental training techniques, especially in field of sports. However, further investigations are needed to better understand the functional role of internal and external visual imagery. PMID:29849565
Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph
2016-02-01
The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of F_0.5, 50-keV VMI+, and 60-keV VMI data sets were assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly (all P < 0.001) higher than in the corresponding VMI data sets (SNR: 8.7 ± 4.1 and 10.8 ± 5.0; CNR: 8.0 ± 4.0 and 9.6 ± 4.9) and F_0.5 series (SNR: 10.7 ± 4.4; CNR: 8.3 ± 4.1). Subjective assessment of attenuation was highest in the 40- and 50-keV VMI and VMI+ image series (range, 4.84-4.91), superior to F_0.5 (4.07; P < 0.001). Corresponding subjective noise assessment was superior for 50-keV VMI+ (4.71; all P < 0.001) compared with VMI (2.60) and F_0.5 (4.11). Sensitivity and specificity for detection of 50% or greater stenoses were highest in VMI+ reconstructions (92% and 95%, respectively), significantly higher compared with standard F_0.5 (87% and 90%; both P ≤ 0.02). Image reconstruction using low-kiloelectron volt VMI+ improves image quality and diagnostic accuracy compared with traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.
Viral MicroRNAs Identified in Human Dental Pulp.
Zhong, Sheng; Naqvi, Afsar; Bair, Eric; Nares, Salvador; Khan, Asma A
2017-01-01
MicroRNAs (miRs) are a family of noncoding RNAs that regulate gene expression. They are ubiquitous among multicellular eukaryotes and are also encoded by some viruses. Upon infection, viral miRs (vmiRs) can potentially target gene expression in the host and alter the immune response. Although prior studies have reported viral infections in human pulp, the role of vmiRs in pulpal disease is yet to be explored. The purpose of this study was to examine the expression of vmiRs in normal and diseased pulps and to identify potential target genes. Total RNA was extracted and quantified from normal and inflamed human pulps (N = 28). Expression profiles of vmiRs were then interrogated using miRNA microarrays (V3) and the miRNA Complete Labeling and Hyb Kit (Agilent Technologies, Santa Clara, CA). To identify vmiRs that were differentially expressed, we applied a permutation test. Of the 12 vmiRs detected in the pulp, 4 vmiRs (including those from herpesvirus and human cytomegalovirus) were differentially expressed in inflamed pulp compared with normal pulp (P < .05). Using bioinformatics, we identified potential target genes for the differentially expressed vmiRs. They included key mediators involved in the detection of microbial ligands, chemotaxis, proteolysis, cytokines, and signal transduction molecules. These data suggest that miRs may play a role in interspecies regulation of pulpal health and disease. Further research is needed to elucidate the mechanisms by which vmiRs can potentially modulate the host response in pulpal disease. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Boison, Joe; Lee, Stephen; Gedir, Ron
2009-01-01
A liquid chromatographic-mass spectrometric (LC-MS) method was developed and validated for the determination and confirmation of virginiamycin (VMY) M1 residues in porcine liver, kidney, and muscle tissues at concentrations > or =2 ng/g. Porcine liver, kidney, or muscle tissue is homogenized with methanol-acetonitrile. After centrifugation, the supernatant is diluted with phosphate buffer and cleaned up on a C18 solid-phase extraction cartridge. VMY in the eluate is partitioned into chloroform and the aqueous upper layer is removed by aspiration. After evaporating the chloroform in the residual mixture to dryness, the dried extract is reconstituted in mobile phase and VMY is quantified by LC-MS. Any samples eliciting quantifiable levels of VMY M1 (i.e., at concentrations > or =2 ng/g) are subjected to confirmatory analysis by LC-MSIMS. VMY S1, a minor component of the VMY complex, is monitored but not quantified or confirmed.
NASA Astrophysics Data System (ADS)
Xu, Tongyi; Liang, Ming; Li, Chuan; Yang, Shuai
2015-10-01
A two-terminal mass (TTM) based vibration absorber with variable moment of inertia (VMI) for passive vehicle suspension is proposed. The VMI of the system is achieved by the motion of sliders embedded in a hydraulic driven flywheel. The moment of inertia increases in reaction to strong vertical vehicle oscillations and decreases for weak vertical oscillations. The hydraulic mechanism of the system converts the relative linear motion between the two terminals of the suspension into rotating motion of the flywheel. In the case of stronger vehicle vertical oscillation, the sliders inside the flywheel move away from the center of the flywheel because of the centrifugal force, hence yielding higher moment of inertia. The opposite is true in the case of weaker vehicle oscillation. As such, the moment of inertia adjusts itself adaptively in response to the road conditions. The performance of the proposed TTM-VMI absorber has been analyzed via dynamics modeling and simulation and further examined by experiments. In comparison to its counterpart with constant moment of inertia, the proposed VMI system offers faster response, better road handling and safety, improved ride comfort, and reduced suspension deflection except in the case of sinusoidal excitations.
A multi-plate velocity-map imaging design for high-resolution photoelectron spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kregel, Steven J.; Thurston, Glen K.; Zhou, Jia
A velocity map imaging (VMI) setup consisting of multiple electrodes with three adjustable voltage parameters, designed for slow electron velocity map imaging applications, is presented. The motivations for this design are discussed in terms of parameters that influence the VMI resolution and functionality. Particularly, this VMI has two tunable potentials used to adjust for optimal focus, yielding good VMI focus across a relatively large energy range. It also allows for larger interaction volumes without significant sacrifice to the resolution via a smaller electric gradient at the interaction region. All the electrodes in this VMI have the same dimensions for practicalitymore » and flexibility, allowing for relatively easy modifications to suit different experimental needs. We have coupled this VMI to a cryogenic ion trap mass spectrometer that has a flexible source design. The performance is demonstrated with the photoelectron spectra of S- and CS 2 -. The latter has a long vibrational progression in the ground state, and the temperature dependence of the vibronic features is probed by changing the temperature of the ion trap.« less
A multi-plate velocity-map imaging design for high-resolution photoelectron spectroscopy
Kregel, Steven J.; Thurston, Glen K.; Zhou, Jia; ...
2017-09-01
A velocity map imaging (VMI) setup consisting of multiple electrodes with three adjustable voltage parameters, designed for slow electron velocity map imaging applications, is presented. The motivations for this design are discussed in terms of parameters that influence the VMI resolution and functionality. Particularly, this VMI has two tunable potentials used to adjust for optimal focus, yielding good VMI focus across a relatively large energy range. It also allows for larger interaction volumes without significant sacrifice to the resolution via a smaller electric gradient at the interaction region. All the electrodes in this VMI have the same dimensions for practicalitymore » and flexibility, allowing for relatively easy modifications to suit different experimental needs. We have coupled this VMI to a cryogenic ion trap mass spectrometer that has a flexible source design. The performance is demonstrated with the photoelectron spectra of S- and CS 2 -. The latter has a long vibrational progression in the ground state, and the temperature dependence of the vibronic features is probed by changing the temperature of the ion trap.« less
Coutinho, Franzina; Bosisio, Marie-Elaine; Brown, Emma; Rishikof, Stephanie; Skaf, Elise; Zhang, Xiaoting; Perlman, Cynthia; Kelly, Shannon; Freedin, Erin; Dahan-Oliel, Noemi
2017-05-01
The aim of this randomized controlled trial was to assess the effectiveness of interventions using iPad applications compared to traditional occupational therapy on visual-motor integration (VMI) in school-aged children with poor VMI skills. Twenty children aged 4y0m to 7y11m with poor VMI skills were randomly assigned to the experimental group (interventions using iPad apps targeting VMI skills) or control group (traditional occupational therapy intervention sessions targeting VMI skills). The intervention phase consisted of two 40-min sessions per week, over a period of 10 weeks. Participants were required to attend a minimum of 8 and a maximum of 12 sessions. The subjects were tested using the Beery-VMI and the visual-motor subscale of the M-FUN, at baseline and follow-up. Results from a 2-way mixed design ANOVA yielded significant results for the main effect of time for the M-FUN total raw score, as well as in the subscales Amazing Mazes, Hidden Forks, Go Fishing and VM Behavior. However, gains did not differ between intervention types over time. No significant results were found for the Beery-VMI. This study supports the need for further research into the use of iPads for the development of VMI skills in the pediatric population. Implications for Rehabilitation This is the first study to look at the use of iPads with school-aged children with poor visual-motor skills. There is limited literature related to the use of iPads in pediatric occupational therapy, while they are increasingly being used in practice. When compared to the traditional occupational therapy interventions, participants in the iPad intervention appeared to be more interested, engaged and motivated to participate in the therapy sessions. Using iPad apps as an adjunct to therapy in intervention could be effective in improving VMI skills over time.
A Motor-Skills Programme to Enhance Visual Motor Integration of Selected Pre-School Learners
ERIC Educational Resources Information Center
Africa, Eileen K.; van Deventer, Karel J.
2017-01-01
Pre-schoolers are in a window period for motor skill development. Visual-motor integration (VMI) is the foundation for academic and sport skills. Therefore, it must develop before formal schooling. This study attempted to improve VMI skills. VMI skills were measured with the "Beery-Buktenica developmental test of visual-motor integration 6th…
Ringer, Lymor; Sirajuddin, Paul; Heckler, Mary; Ghosh, Anup; Suprynowicz, Frank; Yenugonda, Venkata M; Brown, Milton L; Toretsky, Jeffrey A; Uren, Aykut; Lee, YiChien; MacDonald, Tobey J; Rodriguez, Olga; Glazer, Robert I; Schlegel, Richard
2011-01-01
Medulloblastoma is the most prevalent of childhood brain malignancies, constituting 25% of childhood brain tumors. Craniospinal radiotherapy is a standard of care, followed by a 12 mo regimen of multi-agent chemotherapy. For children less than 3 y of age, irradiation is avoided due to its destructive effects on the developing nervous system. Long-term prognosis is worst for these youngest children and more effective treatment strategies with a better therapeutic index are needed. VMY-1-103, a novel dansylated analog of purvalanol B, was previously shown to inhibit cell cycle progression and proliferation in prostate and breast cancer cells more effectively than purvalanol B. In the current study, we have identified new mechanisms of action by which VMY-1-103 affected cellular proliferation in medulloblastoma cells. VMY-1-103, but not purvalanol B, significantly decreased the proportion of cells in s phase and increased the proportion of cells in G2/M. VMY-1-103 increased the sub G1 fraction of apoptotic cells, induced paRp and caspase-3 cleavage and increased the levels of the Death Receptors DR4 and DR5, Bax and Bad while decreasing the number of viable cells, all supporting apoptosis as a mechanism of cell death. p21CIp1/WaF1 levels were greatly suppressed. Importantly, we found that while both VMY and flavopiridol inhibited intracellular CDK1 catalytic activity, VMY-1-103 was unique in its ability to severely disrupt the mitotic spindle apparatus, significantly delaying metaphase and disrupting mitosis. Our data suggest that VMY-1-103 possesses unique antiproliferative capabilities and that this compound may form the basis of a new candidate drug to treat medulloblastoma. PMID:21885916
Ji, Xiuling; Zhang, Chunjing; Fang, Yuan; Zhang, Qi; Lin, Lianbing; Tang, Bing; Wei, Yunlin
2015-02-01
As a unique ecological system with low temperature and low nutrient levels, glaciers are considered a "living fossil" for the research of evolution. In this work, a lytic cold-active bacteriophage designated VMY22 against Bacillus cereus MYB41-22 was isolated from Mingyong Glacier in China, and its characteristics were studied. Electron microscopy revealed that VMY22 has an icosahedral head (59.2 nm in length, 31.9 nm in width) and a tail (43.2 nm in length). Bacteriophage VMY22 was classified as a Podoviridae with an approximate genome size of 18 to 20 kb. A one-step growth curve revealed that the latent and the burst periods were 70 and 70 min, respectively, with an average burst size of 78 bacteriophage particles per infected cell. The pH and thermal stability of bacteriophage VMY22 were also investigated. The maximum stability of the bacteriophage was observed to be at pH 8.0 and it was comparatively stable at pH 5.0-9.0. As VMY22 is a cold-active bacteriophage with low production temperature, its characterization and the relationship between MYB41-22 and Bacillus cereus bacteriophage deserve further study.
Balsamo, Lyn M; Sint, Kyaw J; Neglia, Joseph P; Brouwers, Pim; Kadan-Lottick, Nina S
2016-04-01
Assess the association between fine motor (FM) and visual-motor integration (VMI) skills and academic achievement in pediatric acute lymphoblastic leukemia (ALL) survivors. In this 28-site cross-sectional study of 256 children in first remission, a mean of 8.9 ± 2.2 years after treatment for standard-risk precursor-B ALL, validated measures of FM, VMI, reading, math, and intelligence were administered at mean follow-up age of 12.8 ± 2.5 years. VMI was significantly associated with written math calculation ability (p < .0069) after adjusting for intelligence (p < .0001). VMI was more strongly associated with math in those with lower intelligence (p = .0141). Word decoding was also significantly associated with VMI but with no effect modification by intelligence. FM skills were not associated with either reading or math achievement. These findings suggest that VMI is associated with aspects of math and reading achievement in leukemia survivors. These skills may be amenable to intervention. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Tse, Linda F. L.; Siu, Andrew M. H.; Li-Tsang, Cecilia W. P.
2017-01-01
Visual-motor integration (VMI) is the ability to coordinate visual perception and motor skills. Although Chinese children have superior performance in VMI than U.S. norms, there is limited information regarding the performance of its basic composition of VMI in regard to visual and motor aspects. This study aimed to examine the differences in…
A mathematical model for the virus medical imaging technique
NASA Astrophysics Data System (ADS)
Fioranelli, Massimo; Sepehri, Alireza
In this paper, we introduce a mathematical model for the virus medical imaging (VMI). In this method, first, by proposing a mathematical model, we show that there are two types of viruses that each of them produce one type of signal. Some of these signals can be received by males and others by females. Then, we will show that in the VMI technique, viruses can communicate with cells, interior to human’s body via two ways. (1) Viruses can form a wire that passes the skin and reaches to a special cell. (2) Viruses can communicate with viruses interior to body in the wireless form and send some signals for controlling evolutions of cells interior to human’s body.
Moskowitz, Beverly; Paoletti, Andrew; Brusilovskiy, Eugene; Zylstra, Sheryl Eckberg; Murray, Tammy
2015-01-01
We determined whether a widely used assessment of visual–motor skills, the Beery–Buktenica Developmental Test of Visual–Motor Integration (VMI), is appropriate for use as an outcome measure for handwriting interventions. A two-group pretest–posttest design was used with 207 kindergarten, first-grade, and second-grade students. Two well-established handwriting measures and the VMI were administered pre- and postintervention. The intervention group participated in the Size Matters Handwriting Program for 40 sessions, and the control group received standard instruction. Paired and independent-samples t tests were used to analyze group differences. The intervention group demonstrated significant improvements on the handwriting measures, with change scores having mostly large effect sizes. We found no significant difference in change scores on the VMI, t(202) = 1.19, p = .23. Results of this study suggest that the VMI may not detect changes in handwriting related to occupational therapy intervention. PMID:26114468
Pfeiffer, Beth; Moskowitz, Beverly; Paoletti, Andrew; Brusilovskiy, Eugene; Zylstra, Sheryl Eckberg; Murray, Tammy
2015-01-01
We determined whether a widely used assessment of visual-motor skills, the Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI), is appropriate for use as an outcome measure for handwriting interventions. A two-group pretest-posttest design was used with 207 kindergarten, first-grade, and second-grade students. Two well-established handwriting measures and the VMI were administered pre- and postintervention. The intervention group participated in the Size Matters Handwriting Program for 40 sessions, and the control group received standard instruction. Paired and independent-samples t tests were used to analyze group differences. The intervention group demonstrated significant improvements on the handwriting measures, with change scores having mostly large effect sizes. We found no significant difference in change scores on the VMI, t(202)=1.19, p=.23. Results of this study suggest that the VMI may not detect changes in handwriting related to occupational therapy intervention. Copyright © 2015 by the American Occupational Therapy Association, Inc.
ERIC Educational Resources Information Center
Nye, Barbara A.
Data from a statewide screening of Tennessee Head Start children on the Developmental Test of Visual-Motor Integration (VMI) are analyzed in this report for two purposes: to determine whether sex, race, and residence have a significant influence on visual motor development as measured by the VMI, and to develop VMI norms for the Tennessee Head…
Inter-Rater and Test-Retest Reliability of the Beery VMI in Schoolchildren
Harvey, Erin M.; Leonard-Green, Tina K.; Mohan, Kathleen M.; Kulp, Marjean Taylor; Davis, Amy L.; Miller, Joseph M.; Twelker, J. Daniel; Campus, Irene; Dennis, Leslie K.
2017-01-01
Purpose To assess inter-rater and test-retest reliability of the 6th Edition Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI) and test-retest reliability of the VMI Visual Perception Supplemental Test (VMIp) in school-age children. Methods Subjects were 163 Native American 3rd – 8th grade students with no significant refractive error (astigmatism < 1.00 D, myopia: < 0.75 D, hyperopia: < 2.50 D, anisometropia < 1.50 D) or ocular abnormalities. The VMI and VMIp were administered twice, on separate days. All VMI tests were scored by two trained scorers and a subset of 50 tests were also scored by an experienced scorer. Scorers strictly applied objective scoring criteria. Analyses included inter-rater and test-retest assessments of bias, 95% limits of agreement, and intraclass correlation analysis. Results Trained scorers had no significant scoring bias compared to the experienced scorer. One of the two trained scorers tended to provide higher scores than the other (mean difference in standardized scores = 1.54). Inter-rater correlations were strong (0.75 to 0.88). VMI and VMIp test-retest comparisons indicated no significant bias (subjects did not tend to score better on retest). Test-retest correlations were moderate (0.54 to 0.58). The 95% LOAs for the VMI were −24.14 to 24.67 (scorer 1) and −26.06 to 26.58 (scorer 2) and the 95% LOAs for the VMIp were −27.11 to 27.34. Conclusions The 95% LOA for test-retest differences will be useful for determining if the VMI and VMIp have sufficient sensitivity for detecting change with treatment in both clinical and research settings. Further research on test-retest reliability reporting 95% LOAs for children across different age ranges are recommended, particularly if the test is to be used to detect changes due to intervention or treatment. PMID:28422801
ERIC Educational Resources Information Center
Emam, Mahmoud Mohamed; Kazem, Ali Mahdi
2016-01-01
Visual motor integration (VMI) is the ability of the eyes and hands to work together in smooth, efficient patterns. In Oman, there are few effective methods to assess VMI skills in children in inclusive settings. The current study investigated the performance of preschool and early school years responders and non-responders on a VMI test. The full…
Yenugonda, Venkata Mahidhar; Ghosh, Anup; Divito, Kyle; Trabosh, Valerie; Patel, Yesha; Brophy, Amanda; Grindrod, Scott; Lisanti, Michael P; Rosenthal, Dean; Brown, Milton L; Avantaggiati, Maria Laura; Rodriguez, Olga
2010-01-01
The 2,6,9-trisubstituted purine group of cyclin dependent kinase inhibitors have the potential to be clinically relevant inhibitors of cancer cell proliferation. We have recently designed and synthesized a novel dansylated analog of purvalanol B, termed VMY-1-103, that inhibited cell cycle progression in breast cancer cell lines more effectively than did purvalanol B and allowed for uptake analyses by fluorescence microscopy. ErbB-2 plays an important role in the regulation of signal transduction cascades in a number of epithelial tumors, including prostate cancer (PCa). Our previous studies demonstrated that transgenic expression of activated ErbB-2 in the mouse prostate initiated PCa and either the overexpression of ErbB-2 or the addition of the ErbB-2/ErbB-3 ligand, heregulin (HRG), induced cell cycle progression in the androgen-responsive prostate cancer cell line, LNCaP. In the present study, we tested the efficacy of VMY-1-103 in inhibiting HRG-induced cell proliferation in LNCaP prostate cancer cells. At concentrations as low as 1 µM, VMY-1-103 increased both the proportion of cells in G1 and p21CIP1 protein levels. At higher concentrations (5 µM or 10 µM), VMY-1-103 induced apoptosis via decreased mitochondrial membrane polarity and induction of p53 phosphorylation, caspase-3 activity and PARP cleavage. Treatment with 10 µM Purvalanol B failed to either influence proliferation or induce apoptosis. Our results demonstrate that VMY-1-103 was more effective in inducing apoptosis in PCa cells than its parent compound, purvalanol B, and support the testing of VMY-1-103 as a potential small molecule inhibitor of prostate cancer in vivo. PMID:20574155
Ohira, Shingo; Kanayama, Naoyuki; Wada, Kentaro; Karino, Tsukasa; Nitta, Yuya; Ueda, Yoshihiro; Miyazaki, Masayoshi; Koizumi, Masahiko; Teshima, Teruki
2018-04-02
The objective of this study was to assess the accuracy of the quantitative measurements obtained using dual-energy computed tomography with metal artifact reduction software (MARS). Dual-energy computed tomography scans (fast kV-switching) are performed on a phantom, by varying the number of metal rods (Ti and Pb) and reference iodine materials. Objective and subjective image analyses are performed on retroreconstructed virtual monochromatic images (VMIs) (VMI at 70 keV). The maximum artifact indices for VMI-Ti and VMI-Pb (5 metal rods) with MARS (without MARS) were 17.4 (166.7) and 34.6 (810.6), respectively; MARS significantly improved the mean subjective 5-point score (P < 0.05). The maximum differences between the measured Hounsfield unit and theoretical values for 5 mg/mL iodine and 2-mm core rods were -42.2% and -68.5%, for VMI-Ti and VMI-Pb (5 metal rods), respectively, and the corresponding differences in the iodine concentration were -64.7% and -73.0%, respectively. Metal artifact reduction software improved the objective and subjective image quality; however, the quantitative values were underestimated.
Doney, Robyn; Lucas, Barbara R; Watkins, Rochelle E; Tsang, Tracey W; Sauer, Kay; Howat, Peter; Latimer, Jane; Fitzpatrick, James P; Oscar, June; Carter, Maureen; Elliott, Elizabeth J
2016-08-01
Visual-motor integration (VMI) skills are essential for successful academic performance, but to date no studies have assessed these skills in a population-based cohort of Australian Aboriginal children who, like many children in other remote, disadvantaged communities, consistently underperform academically. Furthermore, many children in remote areas of Australia have prenatal alcohol exposure (PAE) and Fetal Alcohol Spectrum Disorder (FASD), which are often associated with VMI deficits. VMI, visual perception, and fine motor coordination were assessed using The Beery-Buktenica Developmental Test of Visual-Motor Integration, including its associated subtests of Visual Perception and Fine Motor Coordination, in a cohort of predominantly Australian Aboriginal children (7.5-9.6 years, n=108) in remote Western Australia to explore whether PAE adversely affected test performance. Cohort results were reported, and comparisons made between children i) without PAE; ii) with PAE (no FASD); and iii) FASD. The prevalence of moderate (≤16th percentile) and severe (≤2nd percentile) impairment was established. Mean VMI scores were 'below average' (M=87.8±9.6), and visual perception scores were 'average' (M=97.6±12.5), with no differences between groups. Few children had severe VMI impairment (1.9%), but moderate impairment rates were high (47.2%). Children with FASD had significantly lower fine motor coordination scores and higher moderate impairment rates (M=87.9±12.5; 66.7%) than children without PAE (M=95.1±10.7; 23.3%) and PAE (no FASD) (M=96.1±10.9; 15.4%). Aboriginal children living in remote Western Australia have poor VMI skills regardless of PAE or FASD. Children with FASD additionally had fine motor coordination problems. VMI and fine motor coordination should be assessed in children with PAE, and included in FASD diagnostic assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Band head spin assignment of superdeformed bands in 133Pr using two-parameter formulae
NASA Astrophysics Data System (ADS)
Sharma, Honey; Mittal, H. M.
2018-03-01
The two-parameter formulae viz. the power index formula, the nuclear softness formula and the VMI model are adopted to accredit the band head spin (I0) of four superdeformed rotational bands in 133Pr. The technique of least square fitting is used to accredit the band head spin for four superdeformed rotational bands in 133Pr. The root mean deviation among the computed transition energies and well-known experimental transition energies are attained by extracting the model parameters from the two-parameter formulae. The determined transition energies are in excellent agreement with the experimental transition energies, whenever exact spins are accredited. The power index formula coincides well with the experimental data and provides minimum root mean deviation. So, the power index formula is more efficient tool than the nuclear softness formula and the VMI model. The deviation of dynamic moment of inertia J(2) against the rotational frequency is also examined.
Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.
2018-01-01
Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370
Padilla, Nelly; Forsman, Lea; Broström, Lina; Hellgren, Kerstin; Åden, Ulrika
2018-01-01
Objectives This exploratory study aimed to investigate associations between neonatal brain volumes and visual–motor integration (VMI) and fine motor skills in children born extremely preterm (EPT) when they reached 6½ years of age. Setting Prospective population-based cohort study in Stockholm, Sweden, during 3 years. Participants All children born before gestational age, 27 weeks, during 2004–2007 in Stockholm, without major morbidities and impairments, and who underwent MRI at term-equivalent age. Main outcome measures Brain volumes were calculated using morphometric analyses in regions known to be involved in VMI and fine motor functions. VMI was assessed with The Beery-Buktenica Developmental Test of Visual–Motor Integration—sixth edition and fine motor skills were assessed with the manual dexterity subtest from the Movement Assessment Battery for Children—second edition, at 6½ years. Associations between the brain volumes and VMI and fine motor skills were evaluated using partial correlation, adjusted for total cerebral parenchyma and sex. Results Out of 107 children born at gestational age <27 weeks, 83 were assessed at 6½ years and 66/83 were without major brain lesions or cerebral palsy and included in the analyses. A representative subsample underwent morphometric analyses: automatic segmentation (n=34) and atlas-based segmentation (n=26). The precentral gyrus was associated with both VMI (r=0.54, P=0.007) and fine motor skills (r=0.54, P=0.01). Associations were also seen between fine motor skills and the volume of the cerebellum (r=0.42, P=0.02), brainstem (r=0.47, P=0.008) and grey matter (r=−0.38, P=0.04). Conclusions Neonatal brain volumes in areas known to be involved in VMI and fine motor skills were associated with scores for these two functions when children born EPT without major brain lesions or cerebral palsy were evaluated at 6½ years of age. Establishing clear associations between early brain volume alterations and later VMI and/or fine motor skills could make early interventions possible. PMID:29455171
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in TSVMBC, a mathematical model is formulated. Based on the optimal sales quantity can be obtained and the optimal sales price that will determine the optimal channel profit and contract price between the vendor and buyer. All this parameters depends upon the understanding of the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) is proposed for this problem. Solutions obtained from PSO is compared with the best known results reported in literature.
The integrated model for solving the single-period deterministic inventory routing problem
NASA Astrophysics Data System (ADS)
Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik
2016-08-01
This paper discusses the problem of efficiently managing inventory and routing problems in a two-level supply chain system. Vendor Managed Inventory (VMI) policy is an integrating decisions between a supplier and his customers. We assumed that the demand at each customer is stationary and the warehouse is implementing a VMI. The objective of this paper is to minimize the inventory and the transportation costs of the customers for a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for the solutions of the SP-DIRP problem.
All Male State-Funded Military Academies: Anachronism or Necessary Anomaly?
ERIC Educational Resources Information Center
Russo, Charles J.; Scollay, Susan J.
1993-01-01
The United States Court of Appeals for the Fourth District, although stopping short of ordering the Virginia Military Institute (VMI) to admit women, ordered VMI to implement a program which comports with the requirements of equal protection. Offers an analysis of the Fourth Circuit's ruling, a discussion of important educational questions, and a…
Imai, Yuko; Itsuki, Kyohei; Okamura, Yasushi; Inoue, Ryuji; Mori, Masayuki X
2012-01-01
Activation of transient receptor potential (TRP) canonical TRPC3/C6/C7 channels by diacylglycerol (DAG) upon stimulation of phospholipase C (PLC)-coupled receptors results in the breakdown of phosphoinositides (PIPs). The critical importance of PIPs to various ion-transporting molecules is well documented, but their function in relation to TRPC3/C6/C7 channels remains controversial. By using an ectopic voltage-sensing PIP phosphatase (DrVSP), we found that dephosphorylation of PIPs robustly inhibits currents induced by carbachol (CCh), 1-oleolyl-2-acetyl-sn-glycerol (OAG) or RHC80267 in TRPC3, TRPC6 and TRPC7 channels, though the strength of the DrVSP-mediated inhibition (VMI) varied among the channels with a rank order of C7 > C6 > C3. Pharmacological and molecular interventions suggest that depletion of phosphatidylinositol 4,5-bisphosphate (PI(4,5)P2) is most likely the critical event for VMI in all three channels. When the PLC catalytic signal was vigorously activated through overexpression of the muscarinic type-I receptor (M1R), the inactivation of macroscopic TRPC currents was greatly accelerated in the same rank order as the VMI, and VMI of these currents was attenuated or lost. VMI was also rarely detected in vasopressin-induced TRPC6-like currents in A7r5 vascular smooth muscle cells, indicating that the inactivation by PI(4,5)P2 depletion underlies the physiological condition. Simultaneous fluorescence resonance energy transfer (FRET)-based measurement of PI(4,5)P2 levels and TRPC6 currents confirmed that VMI magnitude reflects the degree of PI(4,5)P2 depletion. These results demonstrate that TRPC3/C6/C7 channels are differentially regulated by depletion of PI(4,5)P2, and that the bimodal signal produced by PLC activation controls these channels in a self-limiting manner. PMID:22183723
Bolk, Jenny; Padilla, Nelly; Forsman, Lea; Broström, Lina; Hellgren, Kerstin; Åden, Ulrika
2018-02-17
This exploratory study aimed to investigate associations between neonatal brain volumes and visual-motor integration (VMI) and fine motor skills in children born extremely preterm (EPT) when they reached 6½ years of age. Prospective population-based cohort study in Stockholm, Sweden, during 3 years. All children born before gestational age, 27 weeks, during 2004-2007 in Stockholm, without major morbidities and impairments, and who underwent MRI at term-equivalent age. Brain volumes were calculated using morphometric analyses in regions known to be involved in VMI and fine motor functions. VMI was assessed with The Beery-Buktenica Developmental Test of Visual-Motor Integration-sixth edition and fine motor skills were assessed with the manual dexterity subtest from the Movement Assessment Battery for Children-second edition, at 6½ years. Associations between the brain volumes and VMI and fine motor skills were evaluated using partial correlation, adjusted for total cerebral parenchyma and sex. Out of 107 children born at gestational age <27 weeks, 83 were assessed at 6½ years and 66/83 were without major brain lesions or cerebral palsy and included in the analyses. A representative subsample underwent morphometric analyses: automatic segmentation (n=34) and atlas-based segmentation (n=26). The precentral gyrus was associated with both VMI (r=0.54, P=0.007) and fine motor skills (r=0.54, P=0.01). Associations were also seen between fine motor skills and the volume of the cerebellum (r=0.42, P=0.02), brainstem (r=0.47, P=0.008) and grey matter (r=-0.38, P=0.04). Neonatal brain volumes in areas known to be involved in VMI and fine motor skills were associated with scores for these two functions when children born EPT without major brain lesions or cerebral palsy were evaluated at 6½ years of age. Establishing clear associations between early brain volume alterations and later VMI and/or fine motor skills could make early interventions possible. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
VMI-VI and BG-II KOPPITZ-2 for Youth with HFASDs and Typical Youth
ERIC Educational Resources Information Center
McDonald, Christin A.; Volker, Martin A.; Lopata, Christopher; Toomey, Jennifer A.; Thomeer, Marcus L.; Lee, Gloria K.; Lipinski, Alanna M.; Dua, Elissa H.; Schiavo, Audrey M.; Bain, Fabienne; Nelson, Andrew T.
2014-01-01
The visual-motor skills of 90 youth with high-functioning autism spectrum disorders (HFASDs) and 51 typically developing (TD) youth were assessed using the Beery-Buktenica Developmental Test of Visual-Motor Integration, Sixth Edition (VMI-VI) and Koppitz Developmental Scoring System for the Bender-Gestalt Test-Second Edition (KOPPITZ-2).…
ERIC Educational Resources Information Center
Schooler, Douglas L.; Anderson, Robert L.
1979-01-01
Analyzes preschoolers' scores on the Developmental Test of Visual Motor Integration (VMI), the Slosson Intelligence Test (SIT), and the ABC Inventory (ABCI). Separate ANOVAs reveal no race effect on the VMI. Race differences favoring Whites are found for SIT and ABCI. There were no effects for sex on any measure. (Author)
Soft X-ray spectroscopy of nanoparticles by velocity map imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostko, O.; Xu, B.; Jacobs, M. I.
Velocity map imaging (VMI), a technique traditionally used to study chemical dynamics in the gas phase, is applied to study X-ray photoemission from aerosol nanoparticles. Soft X-rays from the Advanced Light Source synchrotron, probe a beam of nanoparticles, and the resulting photoelectrons are velocity mapped to obtain their kinetic energy distributions. A new design of the VMI spectrometer is described. The spectrometer is benchmarked by measuring vacuum ultraviolet photoemission from gas phase xenon and squalene nanoparticles followed by measurements using soft X-rays. It is demonstrated that the photoelectron distribution from X-ray irradiated squalene nanoparticles is dominated by secondary electrons. Bymore » scanning the photon energies and measuring the intensities of these secondary electrons, a near edge X-ray absorption fine structure (NEXAFS) spectrum is obtained. The NEXAFS technique is used to obtain spectra of aqueous nanoparticles at the oxygen K edge. By varying the position of the aqueous nanoparticle beam relative to the incident X-ray beam, evidence is presented such that the VMI technique allows for NEXAFS spectroscopy of water in different physical states. Finally, we discuss the possibility of applying VMI methods to probe liquids and solids via X-ray spectroscopy.« less
Final report on APMP.T-K7.1 key comparison of water triple point cells, bilateral NMIJ-VMI
NASA Astrophysics Data System (ADS)
Yamazawa, Kazuaki; Nakano, Tohru; Thanh Binh, Pham
2018-01-01
APMP.T-K7.1, was held from July 2014 to May 2015 to compare the national realizations of the water triple point between NMIJ (Japan) and VMI (Vietnam). To reach the objective, VMI sent a transfer cell to NMIJ and stated a value for the temperature difference of the transfer cell, relative to the corresponding national standard, representing 273.16 K. This report presents the results of the TPW comparison, gives detailed information about the measurements made at the NMIJ and at the VMI, and aims to link the results of APMP.T-K7.1 to APMP.T-K7 and CCT-K7. The results of this key comparison are also represented in the form of degrees of equivalence for the purposes of the MRA. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Soft X-ray spectroscopy of nanoparticles by velocity map imaging
Kostko, O.; Xu, B.; Jacobs, M. I.; ...
2017-05-05
Velocity map imaging (VMI), a technique traditionally used to study chemical dynamics in the gas phase, is applied to study X-ray photoemission from aerosol nanoparticles. Soft X-rays from the Advanced Light Source synchrotron, probe a beam of nanoparticles, and the resulting photoelectrons are velocity mapped to obtain their kinetic energy distributions. A new design of the VMI spectrometer is described. The spectrometer is benchmarked by measuring vacuum ultraviolet photoemission from gas phase xenon and squalene nanoparticles followed by measurements using soft X-rays. It is demonstrated that the photoelectron distribution from X-ray irradiated squalene nanoparticles is dominated by secondary electrons. Bymore » scanning the photon energies and measuring the intensities of these secondary electrons, a near edge X-ray absorption fine structure (NEXAFS) spectrum is obtained. The NEXAFS technique is used to obtain spectra of aqueous nanoparticles at the oxygen K edge. By varying the position of the aqueous nanoparticle beam relative to the incident X-ray beam, evidence is presented such that the VMI technique allows for NEXAFS spectroscopy of water in different physical states. Finally, we discuss the possibility of applying VMI methods to probe liquids and solids via X-ray spectroscopy.« less
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.; ...
2014-12-11
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and Velocity Map Imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image (VMI) are well reproduced. Previously, it has been assumed that the isotropicmore » component of the VMI arises from long time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both low and high energy portions of the TKER spectrum.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and Velocity Map Imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image (VMI) are well reproduced. Previously, it has been assumed that the isotropicmore » component of the VMI arises from long time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both low and high energy portions of the TKER spectrum.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shivaram, Niranjan; Champenois, Elio G.; Cryan, James P.
We demonstrate a technique in velocity map imaging (VMI) that allows spatial gating of the laser focal overlap region in time resolved pump-probe experiments. This significantly enhances signal-to-noise ratio by eliminating background signal arising outside the region of spatial overlap of pump and probe beams. This enhancement is achieved by tilting the laser beams with respect to the surface of the VMI electrodes which creates a gradient in flight time for particles born at different points along the beam. By suitably pulsing our microchannel plate detector, we can select particles born only where the laser beams overlap. Furthermore, this spatialmore » gating in velocity map imaging can benefit nearly all photo-ion pump-probe VMI experiments especially when extreme-ultraviolet light or X-rays are involved which produce large background signals on their own.« less
Predicting Handwriting Legibility in Taiwanese Elementary School Children.
Lee, Tzu-I; Howe, Tsu-Hsin; Chen, Hao-Ling; Wang, Tien-Ni
This study investigates handwriting characteristics and potential predictors of handwriting legibility among typically developing elementary school children in Taiwan. Predictors of handwriting legibility included visual-motor integration (VMI), visual perception (VP), eye-hand coordination (EHC), and biomechanical characteristics of handwriting. A total of 118 children were recruited from an elementary school in Taipei, Taiwan. A computerized program then assessed their handwriting legibility. The biomechanics of handwriting were assessed using a digitizing writing tablet. The children's VMI, VP, and EHC were assessed using the Beery-Buktenica Developmental Test of Visual-Motor Integration. Results indicated that predictive factors of handwriting legibility varied in different age groups. VMI predicted handwriting legibility for first-grade students, and EHC and stroke force predicted handwriting legibility for second-grade students. Kinematic factors such as stroke velocity were the only predictor for children in fifth and sixth grades. Copyright © 2016 by the American Occupational Therapy Association, Inc.
Shivaram, Niranjan; Champenois, Elio G.; Cryan, James P.; ...
2016-12-19
We demonstrate a technique in velocity map imaging (VMI) that allows spatial gating of the laser focal overlap region in time resolved pump-probe experiments. This significantly enhances signal-to-noise ratio by eliminating background signal arising outside the region of spatial overlap of pump and probe beams. This enhancement is achieved by tilting the laser beams with respect to the surface of the VMI electrodes which creates a gradient in flight time for particles born at different points along the beam. By suitably pulsing our microchannel plate detector, we can select particles born only where the laser beams overlap. Furthermore, this spatialmore » gating in velocity map imaging can benefit nearly all photo-ion pump-probe VMI experiments especially when extreme-ultraviolet light or X-rays are involved which produce large background signals on their own.« less
Thomas, Alyssa R; Lacadie, Cheryl; Vohr, Betty; Ment, Laura R; Scheinost, Dustin
2017-01-01
Adolescents born preterm (PT) with no evidence of neonatal brain injury are at risk of deficits in visual memory and fine motor skills that diminish academic performance. The association between these deficits and white matter microstructure is relatively unexplored. We studied 190 PTs with no brain injury and 92 term controls at age 16 years. The Rey-Osterrieth Complex Figure Test (ROCF), the Beery visual-motor integration (VMI), and the Grooved Pegboard Test (GPT) were collected for all participants, while a subset (40 PTs and 40 terms) underwent diffusion-weighted magnetic resonance imaging. PTs performed more poorly than terms on ROCF, VMI, and GPT (all P < 0.01). Mediation analysis showed fine motor skill (GPT score) significantly mediates group difference in ROCF and VMI (all P < 0.001). PTs showed a negative correlation (P < 0.05, corrected) between fractional anisotropy (FA) in the bilateral middle cerebellar peduncles and GPT score, with higher FA correlating to lower (faster task completion) GPT scores, and between FA in the right superior cerebellar peduncle and ROCF scores. PTs also had a positive correlation (P < 0.05, corrected) between VMI and left middle cerebellar peduncle FA. Novel strategies to target fine motor skills and the cerebellum may help PTs reach their full academic potential. © The Author 2017. Published by Oxford University Press.
A novel approach for inventory problem in the pharmaceutical supply chain.
Candan, Gökçe; Yazgan, Harun Reşit
2016-02-24
In pharmaceutical enterprises, keeping up with global market conditions is possible with properly selected supply chain management policies. Generally; demand-driven classical supply chain model is used in the pharmaceutical industry. In this study, a new mathematical model is developed to solve an inventory problem in the pharmaceutical supply chain. Unlike the studies in literature, the "shelf life and product transition times" constraints are considered, simultaneously, first time in the pharmaceutical production inventory problem. The problem is formulated as a mixed-integer linear programming (MILP) model with a hybrid time representation. The objective is to maximize total net profit. Effectiveness of the proposed model is illustrated considering a classical and a vendor managed inventory (VMI) supply chain on an experimental study. To show the effectiveness of the model, an experimental study is performed; which contains 2 different supply chain policy (Classical and VMI), 24 and 30 months planning horizon, 10 and 15 different cephalosporin products. Finally the mathematical model is compared to another model in literature and the results show that proposed model is superior. This study suggest a novel approach for solving pharmaceutical inventory problem. The developed model is maximizing total net profit while determining optimal production plan under shelf life and product transition constraints in the pharmaceutical industry. And we believe that the proposed model is much more closed to real life unlike the other studies in literature.
Women at VMI and the Citadel: History Reenacted.
ERIC Educational Resources Information Center
Goree, Cathryn T.
1997-01-01
Presents a historical process model for the full integration of women into a male institution based on historical studies of several institutions. Draws analogies to current decisions at Virginia Military Institute and the Citadel, including predictions about ways in which the presence of women will affect the student life of these institutions.…
Nagayama, Yasunori; Nakaura, Takeshi; Oda, Seitaro; Utsunomiya, Daisuke; Funama, Yoshinori; Iyama, Yuji; Taguchi, Narumi; Namimoto, Tomohiro; Yuki, Hideaki; Kidoh, Masafumi; Hirata, Kenichiro; Nakagawa, Masataka; Yamashita, Yasuyuki
2018-04-01
To evaluate the image quality and lesion conspicuity of virtual-monochromatic-imaging (VMI) with dual-layer DECT (DL-DECT) for reduced-iodine-load multiphasic-hepatic CT. Forty-five adults with renal dysfunction who had undergone hepatic DL-DECT with 300-mgI/kg were included. VMI (40-70-keV, DL-DECT-VMI) was generated at each enhancement phase. As controls, 45 matched patients undergoing standard 120-kVp protocol (120-kVp, 600-mgI/kg, and iterative reconstruction) were included. We compared the size-specific dose estimate (SSDE), image noise, CT attenuation, and contrast-to-noise ratio (CNR) between protocols. Two radiologists scored the image quality and lesion conspicuity. SSDE was significantly lower in DL-DECT group (p < 0.01). Image noise of DL-DECT-VMI was almost constant at each keV (differences of ≤15%) and equivalent to or lower than of 120-kVp. As the energy decreased, CT attenuation and CNR gradually increased; the values of 55-60 keV images were almost equivalent to those of standard 120-kVp. The highest scores for overall quality and lesion conspicuity were assigned at 40-keV followed by 45 to 55-keV, all of which were similar to or better than of 120-kVp. For multiphasic-hepatic CT with 50% iodine-load, DL-DECT-VMI at 40- to 55-keV provides equivalent or better image quality and lesion conspicuity without increasing radiation dose compared with standard 120-kVp protocol. • 40-55-keV yields optimal image quality for half-iodine-load multiphasic-hepatic CT with DL-DECT. • DL-DECT protocol decreases radiation exposure compared with 120-kVp scans with iterative reconstruction. • 40-keV images maximise conspicuity of hepatocellular carcinoma especially at hepatic-arterial phase.
Visuomotor Integration and Inhibitory Control Compensate for Each Other in School Readiness
ERIC Educational Resources Information Center
Cameron, Claire E.; Brock, Laura L.; Hatfield, Bridget E.; Cottone, Elizabeth A.; Rubinstein, Elise; LoCasale-Crouch, Jennifer; Grissmer, David W.
2015-01-01
Visuomotor integration (VMI), or the ability to copy designs, and 2 measures of executive function were examined in a predominantly low-income, typically developing sample of children (n = 467, mean age 4.2 years) from 5 U.S. states. In regression models controlling for age and demographic variables, we tested the interaction between visuomotor…
Aspects of birth history and outcome in diplegics attending specialised educational facilities.
Bischof, Faith; Rothberg, Alan; Ratcliffe, Ingrid
2012-03-21
We aimed to study functional mobility and visual performance in spastic diplegic children and adolescents attending specialised schools. Spastic diplegia (SD) was confirmed by clinical examination. Birth and related history were added to explore relationships between SD, birth weight (BW) and duration of pregnancy. Place of birth, BW, gestational age (GA) and length of hospital stay were obtained by means of parental recall. Outcome measures included the functional mobility scale (FMS) and Beery tests of visuomotor integration (VMI) and visual perception (VIS). Forty participants were included (age 7 years 5 months - 19 years 6 months). Term and preterm births were almost equally represented. Functional mobility assessments showed that 20 were walking independently in school and community settings and the remainder used walking aids or wheelchairs. There were no significant correlations between BW or GA and outcomes (FMS, VIS-Z scores or VMI-Z scores) and Z scores were low. VIS scores correlated significantly with chronological age (p=0.024). There were also significant correlations between VIS and VMI scores and school grade appropriateness (p=0.004;p=0.027 respectively). Both term and preterm births were represented, and outcomes were similar regardless of GA. VIS and VMI were affected in both groups. Half of the group used assistive mobility devices and three-fifths were delayed in terms of their educational level. These problems require specialised teaching strategies, appropriate resources and a school environment that caters for mobility limitations.
Thai Elephant-Assisted Therapy Programme in Children with Down Syndrome.
Satiansukpong, Nuntanee; Pongsaksri, Maethisa; Sasat, Daranee
2016-06-01
The objectives of this study were to examine the effects of the Thai Elephant-Assisted Therapy Programme for children with Down syndrome (DS) (TETP-D) on balance, postural control and visual motor integration (VMI). A quasi-experimental design with blind control was used. Sixteen children with DS from grades 1 to 6, in a Thailand, public school were recruited for this study. The participants were divided voluntarily into two groups: control and experimental. These both groups received regular school activities, but the experimental group had added treatment, which consisted of TETP-D twice a week for 2 months. The balance subtest of the Bruininks-Oseretsky Test of Motor Proficiency 2, the postural control record form and Beery VMI were applied as outcome measure 1 week before and after the TETP-D. The results showed no significant difference in balance or postural control. However, a significant difference of VMI was shown between the two groups (z = 13.5, p = .04). Children with DS benefited from the TETP-D as it improved their VMI. The TETP-D could improve balance and postural control if provided within a suitable frequency and duration. Further research is needed to test this hypothesis. The limitations of this study are the significant differences in some aspects of the groups at pre-test such as gender and supine flexion of postural control. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Li, Yiming; Lee, Sean; Stephens, Joni; Mateo, Luis R; Zhang, Yun Po; DeVizio, William
2012-02-01
To investigate whether the long-term use (6 months) of an arginine-calcium carbonate-MFP toothpaste would affect calculus formation and/or gingivitis when compared to a calcium carbonate-MFP toothpaste. This was a double-blind clinical study. Eligible adult subjects (120) entered a 2-month pre-test phase of the study. After receiving an evaluation of oral tissue and a dental prophylaxis, the subjects were provided with a regular fluoride toothpaste, a soft-bristled adult toothbrush with instructions to brush their teeth for 1-minute twice daily (morning and evening) for 2 months. The subjects were then examined for baseline calculus using the Volpe-Manhold Calculus Index (VMI) and gingivitis using the Löe-Silness Gingival Index (GI), along with an oral tissue examination. Qualifying subjects were randomized to two treatment groups: (1) Colgate Sensitive Pro-Relief toothpaste containing 8.0% arginine, 1450 ppm MFP and calcium carbonate (Test group), or (2) Colgate Cavity Protection toothpaste containing 1450 ppm MFP and calcium carbonate (Control group). Subjects were stratified by the VMI score and gender. After a dental prophylaxis (VMI=0), the subjects entered a 6-month test phase. Each received the assigned toothpaste and a soft-bristled adult toothbrush for home use with instructions of brushing teeth for 1 minute twice daily (morning and evening). The examinations of VMI, Löe-Silness GI and oral tissues were conducted after 3 and 6 months. Prior to each study visit, subjects refrained from brushing their teeth as well as eating and drinking for 4 hours. 99 subjects complied with the study protocol and completed the 6-month test phase. No within-treatment comparison was performed for the VMI because it was brought down to zero after the prophylaxis at the baseline of the test phase. For the Löe-Silness GI, subjects of the Test group exhibited a significant difference from baseline at the 3- and 6-month examinations. The 3-month Löe-Silness GI of the Control group was significantly different from that of the baseline; however, its 6-month Löe-Silness GI was not statistically significantly different from the baseline values. After 3 and 6 months, there were no significant differences between the Test and Control groups with respect to the mean VMI scores; there were no statistically significant differences between the two groups with respect to the Löe-Silness GI results after 3 and 6 months of product use.
Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.
Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce
2017-10-01
Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.
Jessup, Ashley B; Grimley, Mary Beth; Meyer, Echo; Passmore, Gregory P; Belger, Ayşenil; Hoffman, William H; Çalıkoğlu, Ali S
2015-09-01
To evaluate the effects of diabetic ketoacidosis (DKA) on neurocognitive functions in children and adolescents presenting with new-onset type 1 diabetes. Newly diagnosed patients were divided into two groups: those with DKA and those without DKA (non-DKA). Following metabolic stabilization, the patients took a mini-mental status exam prior to undergoing a baseline battery of cognitive tests that evaluated visual and verbal cognitive tasks. Follow-up testing was performed 8-12 weeks after diagnosis. Patients completed an IQ test at follow-up. There was no statistical difference between the DKA and non-DKA groups neither in alertness at baseline testing nor in an IQ test at follow-up. The DKA group had significantly lower baseline scores than the non-DKA group for the visual cognitive tasks of design recognition, design memory and the composite visual memory index (VMI). At follow-up, Design Recognition remained statistically lower in the DKA group, but the design memory and the VMI tasks returned to statistical parity between the two groups. No significant differences were found in verbal cognitive tasks at baseline or follow-up between the two groups. Direct correlations were present for the admission CO2 and the visual cognitive tasks of VMI, design memory and design recognition. Direct correlations were also present for admission pH and VMI, design memory and picture memory. Pediatric patients presenting with newly diagnosed type 1 diabetes and severe but uncomplicated DKA showed a definite trend for lower cognitive functioning when compared to the age-matched patients without DKA.
Inventory Control System by Using Vendor Managed Inventory (VMI)
NASA Astrophysics Data System (ADS)
Sabila, Alzena Dona; Mustafid; Suryono
2018-02-01
The inventory control system has a strategic role for the business in managing inventory operations. Management of conventional inventory creates problems in the stock of goods that often runs into vacancies and excess goods at the retail level. This study aims to build inventory control system that can maintain the stability of goods availability at the retail level. The implementation of Vendor Managed Inventory (VMI) method on inventory control system provides transparency of sales data and inventory of goods at retailer level to supplier. Inventory control is performed by calculating safety stock and reorder point of goods based on sales data received by the system. Rule-based reasoning is provided on the system to facilitate the monitoring of inventory status information, thereby helping the process of inventory updates appropriately. Utilization of SMS technology is also considered as a medium of collecting sales data in real-time due to the ease of use. The results of this study indicate that inventory control using VMI ensures the availability of goods ± 70% and can reduce the accumulation of goods ± 30% at the retail level.
NASA Astrophysics Data System (ADS)
Liedberg, Hans
2012-01-01
The Comité Consulatif de Thermométrie (CCT) has organized several key comparisons to compare realizations of the ITS-90 in different National Metrology Institutes. To keep the organization, time scale and data processing of such a comparison manageable, the number of participants in a CCT key comparison (CCT KC) is limited to a few laboratories in each major economic region. Subsequent regional key comparisons are linked to the applicable CCT KC by two or more linking laboratories. For the temperature range from 83.8058 K (triple point of Ar) to 933.473 K (freezing point of Al), a key comparison, CCT-K3, was carried out from 1997 to 2001 among representative laboratories in North America, Europe and Asia. Following CCT-K3, the Asia-Pacific Metrology Program Key Comparison 3 (APMP.T-K3) was organized for National Metrology Institutes in the Asia/Pacific region. NMIA (Australia) and KRISS (South Korea) provided the link between CCT-K3 and APMP.T-K3. APMP.T-K3, which took place from February 2000 to June 2003, covered the temperature range from -38.8344 °C (triple point of Hg) to 419.527 °C (freezing point of Zn), using a standard platinum resistance thermometer (SPRT) as the artefact. In June 2007 the Vietnam Metrology Institute (VMI) requested a bilateral comparison to link their SPRT calibration capabilities to APMP.T-K3, and in October 2007 the National Metrology Institute of South Africa (NMISA) agreed to provide the link to APMP.T-K3. Like APMP.T-K3, the comparison was restricted to the Hg to Zn temperature range to reduce the chance of drift in the SPRT artefact. The comparison was carried out in a participant-pilot-participant topology (with NMISA as the pilot and VMI as the participant). VMI's results in the comparison were linked to the Average Reference Values of CCT-K3 via NMISA's results in APMP.T-K3. The resistance ratios measured by VMI and NMISA at Zn, Sn, Ga and Hg fixed points agree within their combined uncertainties, and VMI's results also agree with the CCT-K3 reference values at these fixed points. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Shah, Reshma P.; Spruyt, Karen; Kragie, Brigette C.; Greeley, Siri Atma W.; Msall, Michael E.
2012-01-01
OBJECTIVE To assess performance on an age-standardized neuromotor coordination task among sulfonylurea-treated KCNJ11-related neonatal diabetic patients. RESEARCH DESIGN AND METHODS Nineteen children carrying KCNJ11 mutations associated with isolated diabetes (R201H; n = 8), diabetes with neurodevelopmental impairment (V59M or V59A [V59M/A]; n = 8), or diabetes not consistently associated with neurodevelopmental disability (Y330C, E322K, or R201C; n = 3) were studied using the age-standardized Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI). RESULTS Although R201H subjects tested in the normal range (median standard score = 107), children with V59M/A mutations had significantly lower than expected VMI standard scores (median = 49). The scores for all three groups were significantly different from each other (P = 0.0017). The age of sulfonylurea initiation was inversely correlated with VMI scores in the V59M/A group (P < 0.05). CONCLUSIONS Neurodevelopmental disability in KCNJ11-related diabetes includes visuomotor problems that may be ameliorated by early sulfonylurea treatment. Comprehensive longitudinal assessment on larger samples will be imperative. PMID:22855734
Visual perceptual and handwriting skills in children with Developmental Coordination Disorder.
Prunty, Mellissa; Barnett, Anna L; Wilmut, Kate; Plumb, Mandy
2016-10-01
Children with Developmental Coordination Disorder demonstrate a lack of automaticity in handwriting as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether correlations exist between measures of visual perception and visual motor integration with measures of the handwriting product and process in children with DCD. The performance of twenty-eight 8-14year-old children who met the DSM-5 criteria for DCD was compared with 28 typically developing (TD) age and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were conducted between the visual perceptual measures and handwriting measures and the sensitivity and specificity examined. The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Clinicians should execute caution in using visual perceptual measures to inform them about handwriting skill in children with DCD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
May, Matthias Stefan; Bruegel, Joscha; Brand, Michael; Wiesmueller, Marco; Krauss, Bernhard; Allmendinger, Thomas; Uder, Michael; Wuest, Wolfgang
2017-09-01
The aim of this study was to intra-individually compare the image quality obtained by dual-source, dual-energy (DSDE) computed tomography (CT) examinations and different virtual monoenergetic reconstructions to a low single-energy (SE) scan. Third-generation DSDE-CT was performed in 49 patients with histologically proven malignant disease of the head and neck region. Weighted average images (WAIs) and virtual monoenergetic images (VMIs) for low (40 and 60 keV) and high (120 and 190 keV) energies were reconstructed. A second scan aligned to the jaw, covering the oral cavity, was performed for every patient to reduce artifacts caused by dental hardware using a SE-CT protocol with 70-kV tube voltages and matching radiation dose settings. Objective image quality was evaluated by calculating contrast-to-noise ratios. Subjective image quality was evaluated by experienced radiologists. Highest contrast-to-noise ratios for vessel and tumor attenuation were obtained in 40-keV VMI (all P < 0.05). Comparable objective results were found in 60-keV VMI, WAI, and the 70-kV SE examinations. Overall subjective image quality was also highest for 40-keV, but differences to 60-keV VMI, WAI, and 70-kV SE were nonsignificant (all P > 0.05). High kiloelectron volt VMIs reduce metal artifacts with only limited diagnostic impact because of insufficiency in case of severe dental hardware. CTDIvol did not differ significantly between both examination protocols (DSDE: 18.6 mGy; 70-kV SE: 19.4 mGy; P = 0.10). High overall image quality for tumor delineation in head and neck imaging were obtained with 40-keV VMI. However, 70-kV SE examinations are an alternative and modified projections aligned to the jaw are recommended in case of severe artifacts caused by dental hardware.
Esposito, Maria; Ruberto, Maria; Gimigliano, Francesca; Marotta, Rosa; Gallai, Beatrice; Parisi, Lucia; Lavano, Serena Marianna; Roccella, Michele; Carotenuto, Marco
2013-01-01
Background Migraine without aura (MoA) is a painful syndrome, particularly in childhood; it is often accompanied by severe impairments, including emotional dysfunction, absenteeism from school, and poor academic performance, as well as issues relating to poor cognitive function, sleep habits, and motor coordination. Materials and methods The study population consisted of 71 patients affected by MoA (32 females, 39 males) (mean age: 9.13±1.94 years); the control group consisted of 93 normally developing children (44 females, 49 males) (mean age: 8.97±2.03 years) recruited in the Campania school region. The entire population underwent a clinical evaluation to assess total intelligence quotient level, visual-motor integration (VMI) skills, and motor coordination performance, the later using the Movement Assessment Battery for Children (M-ABC). Children underwent training using the Wii-balance board and Nintendo Wii Fit Plus™ software (Nintendo Co, Ltd, Kyoto, Japan); training lasted for 12 weeks and consisted of three 30-minute sessions per week at their home. Results The two starting populations (MoA and controls) were not significantly different for age (P=0.899) and sex (P=0.611). M-ABC and VMI performances at baseline (T0) were significantly different in dexterity, balance, and total score for M-ABC (P<0.001) and visual (P=0.003) and motor (P<0.001) tasks for VMI. After 3 months of Wii training (T1), MoA children showed a significant improvement in M-ABC global performance (P<0.001), M-ABC dexterity (P<0.001), M-ABC balance (P<0.001), and VMI motor task (P<0.001). Conclusion Our study reported the positive effects of the Nintendo Wii Fit Plus™ system as a rehabilitative device for the visuomotor and balance skills impairments among children affected by MoA, even if further research and longer follow-up are needed. PMID:24453490
Esposito, Maria; Ruberto, Maria; Gimigliano, Francesca; Marotta, Rosa; Gallai, Beatrice; Parisi, Lucia; Lavano, Serena Marianna; Roccella, Michele; Carotenuto, Marco
2013-01-01
Migraine without aura (MoA) is a painful syndrome, particularly in childhood; it is often accompanied by severe impairments, including emotional dysfunction, absenteeism from school, and poor academic performance, as well as issues relating to poor cognitive function, sleep habits, and motor coordination. The study population consisted of 71 patients affected by MoA (32 females, 39 males) (mean age: 9.13±1.94 years); the control group consisted of 93 normally developing children (44 females, 49 males) (mean age: 8.97±2.03 years) recruited in the Campania school region. The entire population underwent a clinical evaluation to assess total intelligence quotient level, visual-motor integration (VMI) skills, and motor coordination performance, the later using the Movement Assessment Battery for Children (M-ABC). Children underwent training using the Wii-balance board and Nintendo Wii Fit Plus™ software (Nintendo Co, Ltd, Kyoto, Japan); training lasted for 12 weeks and consisted of three 30-minute sessions per week at their home. The two starting populations (MoA and controls) were not significantly different for age (P=0.899) and sex (P=0.611). M-ABC and VMI performances at baseline (T0) were significantly different in dexterity, balance, and total score for M-ABC (P<0.001) and visual (P=0.003) and motor (P<0.001) tasks for VMI. After 3 months of Wii training (T1), MoA children showed a significant improvement in M-ABC global performance (P<0.001), M-ABC dexterity (P<0.001), M-ABC balance (P<0.001), and VMI motor task (P<0.001). Our study reported the positive effects of the Nintendo Wii Fit Plus™ system as a rehabilitative device for the visuomotor and balance skills impairments among children affected by MoA, even if further research and longer follow-up are needed.
Pienaar, A E; Barhorst, R; Twisk, J W R
2014-05-01
Perceptual-motor skills contribute to a variety of basic learning skills associated with normal academic success. This study aimed to determine the relationship between academic performance and perceptual-motor skills in first grade South African learners and whether low SES (socio-economic status) school type plays a role in such a relationship. This cross-sectional study of the baseline measurements of the NW-CHILD longitudinal study included a stratified random sample of first grade learners (n = 812; 418 boys and 394 boys), with a mean age of 6.78 years ± 0.49 living in the North West Province (NW) of South Africa. The Beery-Buktenica Developmental Test of Visual-Motor Integration-4 (VMI) was used to assess visual-motor integration, visual perception and hand control while the Bruininks Oseretsky Test of Motor Proficiency, short form (BOT2-SF) assessed overall motor proficiency. Academic performance in math, reading and writing was assessed with the Mastery of Basic Learning Areas Questionnaire. Linear mixed models analysis was performed with spss to determine possible differences between the different VMI and BOT2-SF standard scores in different math, reading and writing mastery categories ranging from no mastery to outstanding mastery. A multinomial multilevel logistic regression analysis was performed to assess the relationship between a clustered score of academic performance and the different determinants. A strong relationship was established between academic performance and VMI, visual perception, hand control and motor proficiency with a significant relationship between a clustered academic performance score, visual-motor integration and visual perception. A negative association was established between low SES school types on academic performance, with a common perceptual motor foundation shared by all basic learning areas. Visual-motor integration, visual perception, hand control and motor proficiency are closely related to basic academic skills required in the first formal school year, especially among learners in low SES type schools. © 2013 John Wiley & Sons Ltd.
Optimizing national immunization program supply chain management in Thailand: an economic analysis.
Riewpaiboon, A; Sooksriwong, C; Chaiyakunapruk, N; Tharmaphornpilas, P; Techathawat, S; Rookkapan, K; Rasdjarmrearnsook, A; Suraratdecha, C
2015-07-01
This study aimed to conduct an economic analysis of the transition of the conventional vaccine supply and logistics systems to the vendor managed inventory (VMI) system in Thailand. Cost analysis of health care program. An ingredients based approach was used to design the survey and collect data for an economic analysis of the immunization supply and logistics systems covering procurement, storage and distribution of vaccines from the central level to the lowest level of vaccine administration facility. Costs were presented in 2010 US dollar. The total cost of the vaccination program including cost of vaccine procured and logistics under the conventional system was US$0.60 per packed volume procured (cm(3)) and US$1.35 per dose procured compared to US$0.66 per packed volume procured (cm(3)) and US$1.43 per dose procured under the VMI system. However, the findings revealed that the transition to the VMI system and outsourcing of the supply chain system reduced the cost of immunization program at US$6.6 million per year because of reduction of un-opened vaccine wastage. The findings demonstrated that the new supply chain system would result in efficiency improvement and potential savings to the immunization program compared to the conventional system. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Farhan, Hesso; Reiterer, Veronika; Kriz, Alexander; Hauri, Hans-Peter; Pavelka, Margit; Sitte, Harald H.; Freissmuth, Michael
2015-01-01
Summary The C-terminus of GABA transporter 1 (GAT1, SLC6A1) is required for trafficking of the protein through the secretory pathway to reach its final destination, i.e. the rim of the synaptic specialization. We identified a motif of three hydrophobic residues (569VMI571) that was required for export of GAT1 from the ER-Golgi intermediate compartment (ERGIC). This conclusion was based on the following observations: (i) GAT1-SSS, the mutant in which 569VMI571 was replaced by serine residues, was exported from the ER in a COPII-dependent manner but accumulated in punctate structures and failed to reach the Golgi; (ii) under appropriate conditions (imposing a block at 15°C, disruption of COPI), these structures also contained ERGIC53; (iii) the punctae were part of a dynamic compartment, because it was accessible to a second anterograde cargo [the temperature-sensitive variant of vesicular stomatitis virus G protein (VSV-G)] and because GAT1-SSS could be retrieved from the punctate structures by addition of a KKxx-based retrieval motif, which supported retrograde transport to the ER. To the best of our knowledge, the VMI-motif of GAT1 provides the first example of a cargo-based motif that specifies export from the ERGIC. PMID:18285449
Velocity map imaging using an in-vacuum pixel detector.
Gademann, Georg; Huismans, Ymkje; Gijsbertsen, Arjan; Jungmann, Julia; Visschers, Jan; Vrakking, Marc J J
2009-10-01
The use of a new type in-vacuum pixel detector in velocity map imaging (VMI) is introduced. The Medipix2 and Timepix semiconductor pixel detectors (256 x 256 square pixels, 55 x 55 microm2) are well suited for charged particle detection. They offer high resolution, low noise, and high quantum efficiency. The Medipix2 chip allows double energy discrimination by offering a low and a high energy threshold. The Timepix detector allows to record the incidence time of a particle with a temporal resolution of 10 ns and a dynamic range of 160 micros. Results of the first time application of the Medipix2 detector to VMI are presented, investigating the quantum efficiency as well as the possibility to operate at increased background pressure in the vacuum chamber.
Memisevic, Haris; Sinanovic, Osman
2013-12-01
The goal of this study was to assess the relationship between visual-motor integration and executive functions, and in particular, the extent to which executive functions can predict visual-motor integration skills in children with intellectual disability. The sample consisted of 90 children (54 boys, 36 girls; M age = 11.3 yr., SD = 2.7, range 7-15) with intellectual disabilities of various etiologies. The measure of executive functions were 8 subscales of the Behavioral Rating Inventory of Executive Function (BRIEF) consisting of Inhibition, Shifting, Emotional Control, Initiating, Working memory, Planning, Organization of material, and Monitoring. Visual-motor integration was measured with the Acadia test of visual-motor integration (VMI). Regression analysis revealed that BRIEF subscales explained 38% of the variance in VMI scores. Of all the BRIEF subscales, only two were statistically significant predictors of visual-motor integration: Working memory and Monitoring. Possible implications of this finding are further elaborated.
Anti-calculus activity of a toothpaste with microgranules.
Chesters, R K; O'Mullane, D M; Finnerty, A; Huntington, E; Jones, P R
1998-09-01
The objective of the trial was to determine the efficacy of the proven anticalculus active system (zinc citrate trihydrate [ZCT] and triclosan), when the ZCT is delivered from microgranules incorporated in a silica-based toothpaste containing 1450 ppm F as sodium fluoride. A monadic, single-blind, two phase design clinical trial was used to compare the effect of the test and a negative control fluoridated toothpaste on the formation of supragingival calculus. Male and female calculus-forming volunteers, aged 18 or over, were recruited for the study following a 2-week screening phase. All subjects were given a scale and polish of their eight lower anterior teeth at the start of both the pre-test and test phases. Subjects were supplied with a silica-based 1450 F ppm fluoridated toothpaste with no anti-calculus active for use during an 8-week pre-test phase. Calculus was assessed at the end of the pre-test and test phases using the Volpe-Manhold index (VMI). Subjects were stratified according to their pre-test VMI score (8-10, 10.5-12, > 12) and gender and then allocated at random to test or negative control toothpaste groups. Subjects with < 8 mm of calculus were excluded from further participation. The outcome variable was the mean VMI score for the test and negative control groups. The test toothpaste caused a statistically significant 30% reduction in calculus compared with the control paste after a 13-week use. No adverse events were reported during the study. The incorporation of the ZCT in microgranules did not adversely affect the anticalculus activity of the new formulation.
Motor performance in children with Noonan syndrome.
Croonen, Ellen A; Essink, Marlou; van der Burgt, Ineke; Draaisma, Jos M; Noordam, Cees; Nijhuis-van der Sanden, Maria W G
2017-09-01
Although problems with motor performance in daily life are frequently mentioned in Noonan syndrome, the motor performance profile has never been systematically investigated. The aim of this study was to examine whether a specific profile in motor performance in children with Noonan syndrome was seen using valid norm-referenced tests. The study assessed motor performance in 19 children with Noonan syndrome (12 females, mean age 9 years 4 months, range 6 years 1 month to 11 years and 11 months, SDS 1 year and 11 months). More than 60% of the parents of the children reported pain, decreased muscle strength, reduced endurance, and/or clumsiness in daily functioning. The mean standard scores on the Visual Motor Integration (VMI) test and Movement Assessment Battery for Children 2, Dutch version (MABC-2-NL) items differed significantly from the reference scores. Grip strength, muscle force, and 6 min Walking Test (6 MWT) walking distance were significantly lower, and the presence of generalized hypermobility was significantly higher. All MABC-2-NL scores (except manual dexterity) correlated significantly with almost all muscle strength tests, VMI total score, and VMI visual perception score. The 6 MWT was only significantly correlated to grip strength. This is the first study that confirms that motor performance, strength, and endurance are significantly impaired in children with Noonan syndrome. Decreased functional motor performance seems to be related to decreased visual perception and reduced muscle strength. Research on causal relationships and the effectiveness of interventions is needed. Physical and/or occupational therapy guidance should be considered to enhance participation in daily life. © 2017 Wiley Periodicals, Inc.
Making the invisible visible: improving conspicuity of noncalcified gallstones using dual-energy CT.
Uyeda, Jennifer W; Richardson, Ian J; Sodickson, Aaron D
2017-12-01
To determine whether virtual monochromatic imaging (VMI) increases detectability of noncalcified gallstones on dual-energy CT (DECT) compared with conventional CT imaging. This retrospective IRB-approved, HIPAA-compliant study included consecutive patients who underwent DECT of the abdomen in the Emergency Department during a 30-month period (July 1, 2013-December 31, 2015), with a comparison US or MR within 1-year. 51 patients (36F, 15M; mean age 52 years) fulfilled the inclusion criteria. All DECT were acquired on a dual-source 128 × 2 slice scanner using either 80/Sn140 or 100/Sn140 kVp pairs. Source images at high and low kVp were used for DE post-processing with VMI. Within 3 mm reconstructed images, regions of interest of 0.5 cm 2 were placed on noncalcified gallstones and bile to record hounsfield units (HU) at VMI energy levels ranging between 40 and 190 keV. Noncalcified gallstones uniformly demonstrated lowest HU at 40 keV and increase at higher keV; the HU of bile varied at higher keV. Few of the noncalcified stones are visible at 70 keV (simulating a conventional 120 kVp scan), with measured contrast (bile-stone HU difference) <10 HU in 78%, 10-20 HU in 20%, and >20 HU in 2%. Contrast was maximal at 40 keV, where 100% demonstrated >20 HU difference from surrounding bile, 75% >44 HU difference, and 50% >60 HU difference. A paired t test demonstrated a significant difference (p < 0.0001) between this stone-bile contrast at 40 vs. 70 keV and 70 vs. 190 keV. Low keV virtual monochromatic imaging increased conspicuity of noncalcified gallstones, improving their detectability.
Femtosecond Photoelectron Imaging of Dissociating and Autoionizing States in Oxygen
NASA Astrophysics Data System (ADS)
Plunkett, Alexander; Sandhu, Arvinder
2017-04-01
Time-resolved photoelectron spectra from molecular oxygen have been recorded with high energy and time resolution using a velocity map imaging (VMI) spectrometer. High harmonics were used to prepare neutral Rydberg states converging to the c4Σu- ionic state. These states display both autoionization and predissociation. A femtosecond laser pulse centered at 780 nm was used to probe the system, ionizing both the excited molecular states and the predissociated neutral atomic fragments. Electrons were collected in the 0-3 eV range using a VMI spectrometer and their spectra were reconstructed using a Fast Onion-peeling algorithm. By looking at IR modification to the electron spectrum, new features are observed which could originate from long-range columbic interactions or previously unobserved molecular decay channels. Ongoing studies extend this technique to other systems exhibiting non-adiabatic dynamics. This work was supported by the U. S. Army Research Laboratory and the U. S. Army Research Office under Grant No. W911NF-14-1-0383.
Single-shot velocity-map imaging of attosecond light-field control at kilohertz rate.
Süssmann, F; Zherebtsov, S; Plenge, J; Johnson, Nora G; Kübel, M; Sayler, A M; Mondes, V; Graf, C; Rühl, E; Paulus, G G; Schmischke, D; Swrschek, P; Kling, M F
2011-09-01
High-speed, single-shot velocity-map imaging (VMI) is combined with carrier-envelope phase (CEP) tagging by a single-shot stereographic above-threshold ionization (ATI) phase-meter. The experimental setup provides a versatile tool for angle-resolved studies of the attosecond control of electrons in atoms, molecules, and nanostructures. Single-shot VMI at kHz repetition rate is realized with a highly sensitive megapixel complementary metal-oxide semiconductor camera omitting the need for additional image intensifiers. The developed camera software allows for efficient background suppression and the storage of up to 1024 events for each image in real time. The approach is demonstrated by measuring the CEP-dependence of the electron emission from ATI of Xe in strong (≈10(13) W/cm(2)) near single-cycle (4 fs) laser fields. Efficient background signal suppression with the system is illustrated for the electron emission from SiO(2) nanospheres. © 2011 American Institute of Physics
U.S. Seeks Reversal to Let VMI Stay All Male.
ERIC Educational Resources Information Center
Jaschik, Scott
1995-01-01
The Clinton administration has asked the Supreme Court to force Virginia Military Institute, currently all male, to admit women rather than have the state create a similar leadership program for women at another institution. The case parallels litigation in South Carolina involving the Citadel. (MSE)
Evaluation of the LWVD Luminosity for Use in the Spectral-Based Volume Sensor Algorithms
2010-04-29
VMI Vibro-Meter, Inc. VS Volume Sensor VSCS Volume Sensor Communications Specification VSDS Volume Sensor Detection Suite VSNP Volume Sensor Nodal Panel...using the VSCS communications protocol. Appendix A gives a complete listing of the SBVS EVENT parameters and the EVENT algorithm descriptions. See
The Association between Graphomotor Tests and Participation of Typically Developing Young Children
ERIC Educational Resources Information Center
Rosenberg, Limor
2015-01-01
This study aimed to explore the association between graphomotor tests--VMI, ROCF, SWT--and the measures of a child's participation. Seventy-five typically developing children aged 4 to 9 years were individually evaluated using the graphomotor tests and their parents completed a participation questionnaire. After controlling for child's age, the…
Longitudinal evaluation of fine motor skills in children with leukemia.
Hockenberry, Marilyn; Krull, Kevin; Moore, Ki; Gregurich, Mary Ann; Casey, Marissa E; Kaemingk, Kris
2007-08-01
Improved survival for children with acute lymphocytic leukemia (ALL) has allowed investigators to focus on the adverse or side effects of treatment and to develop interventions that promote cure while decreasing the long-term effects of therapy. Although much attention has been given to the significant neurocognitive sequelae that can occur after ALL therapy, limited investigation is found addressing fine motor function in these children and motor function that may contribute to neurocognitive deficits in ALL survivors. Fine motor and sensory-perceptual performances were examined in 82 children with ALL within 6-months of diagnosis and annually for 2 years (year 1 and year 2, respectively) during therapy. Purdue Pegboard assessments indicated significant slowing of fine motor speed and dexterity for the dominant hand, nondominant hand, and both hands simultaneously for children in this study. Mean Visual-Motor Integration (VMI) scores for children with low-risk and high-risk ALL decreased from the first evaluation to year 1 and again at year 2. Mean VMI scores for children with standard risk ALL increased from the first evaluation to year 1 and then decreased at year 2. Significant positive correlations were found between the Purdue and the VMI at both year 1 and year 2, suggesting that the Pegboard performance consistently predicts the later decline in visual-motor integration. Significant correlations were found between the Purdue Pegboard at baseline and the Performance IQ during year 1, though less consistently during year 2. A similar pattern was also observed between the baseline Pegboard performance and performance on the Coding and Symbol Search subtests during year 1 and year 2. In this study, children with ALL experienced significant and persistent visual-motor problems throughout therapy. These problems continued during the first and second years of treatment. These basic processing skills are necessary to the development of higher-level cognitive abilities, including nonverbal intelligence and academic achievement, particularly in arithmetic and written language.
Farrell, S; Barker, M L; Gerlach, R W; Putt, M S; Milleman, J L
2009-01-01
This randomized controlled clinical trial was conducted to evaluate whether daily use of a hydrogen peroxide/ pyrophosphate-containing antitartar whitening strip might safely yield clinical reductions in post-prophylaxis calculus accumulation. A three-month, randomized controlled trial was conducted to compare calculus accumulation with a daily 6% hydrogen peroxide/pyrophosphate strip versus regular brushing. After an eight-week run-in phase to identify calculus formers, a prophylaxis was administered, and 77 subjects were randomly assigned to daily strip or brushing only groups. All subjects received an anticavity dentifrice (Crest Cavity Protection) and manual brush for use throughout the three-month study; for subjects assigned to the experimental group, strip application was once daily for five minutes on the facial and lingual surfaces of the mandibular teeth. Efficacy was measured as mm calculus (VMI) before prophylaxis and after six and 12 weeks of treatment, while safety was assessed from examination and interview. Subjects ranged in age from 21-87 years, with groups balanced (p > 0.26) on pertinent demographic and behavioral parameters, and pre-prophylaxis calculus baseline mean scores (16.0 mm). At Week 6, calculus accumulation was lower in the strip group, with adjusted mean (SE) lingual VMI of 12.0 (0.87) for the strip group and 17.0 (0.88) for the brushing control. At Week 12, calculus accumulation was lower in the strip group, with adjusted mean (SE) lingual VMI of 14.3 (0.85) for the strip group and 17.2 (0.86) for the brushing control. Treatments differed significantly (p < 0.02) on calculus accumulation at both time points. A total of three subjects (8%) in the strip group and two subjects (5%) in the brushing control had mild oral irritation or tooth sensitivity during treatment; no one discontinued early due to an adverse event. Daily use of hydrogen peroxide whitening strips with pyrophosphate reduced calculus formation by up to 29% versus regular brushing, without meaningful adverse events.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
... Corporate Park, 5675 N. Blackstock Rd., Spartanburg; Site 5 (118 acres)--Key Logistics, 101 Michelin Dr... Site 13 (318 acres)--VMI Logistics Park, Victor Hill Rd., Greer. Because the ASF only pertains to... Center, Brookshire Rd. and SC Hwy. 101, Greer; Site 3 (116 acres total)--Highway 290 Commerce Park, 201...
ERIC Educational Resources Information Center
University City School District, MO.
The development and content of the Early Education Screening Test Battery are described elsewhere (TM 000 184). This report provides norms for the Gross Motor Test (GMO), Visual-Motor Integration (VMI), four scales of the Illinois Test of Psycholinguistic Abilities (ITPA), Peabody Picture Vocabulary Test (PPVT), and the Behavior Rating Scale…
Visual, Motor, and Visual-Motor Integration Difficulties in Students with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Oliver, Kimberly
2013-01-01
Autism spectrum disorders (ASDs) affect 1 in every 88 U.S. children. ASDs have been described as neurological and developmental disorders impacting visual, motor, and visual-motor integration (VMI) abilities that affect academic achievement (CDC, 2010). Forty-five participants (22 ASD and 23 Typically Developing [TD]) 8 to 14 years old completed…
Variable Mixed Orbital Character in the Photoelectron Angular Distribution of NO_{2}
NASA Astrophysics Data System (ADS)
Laws, Benjamin A.; Cavanagh, Steven J.; Lewis, Brenton R.; Gibson, Stephen T.
2017-06-01
NO_{2} a key component of photochemical smog and an important species in the Earth's atmosphere, is an example of a molecule which exhibits significant mixed orbital character in the HOMO. In photoelectron experiments the geometric properties of the parent anion orbital are reflected in the photoelectron angular distribution (PAD), an area of research that has benefited largely from the ability of velocity-map imaging (VMI) to simultaneously record both the energetic and angular information, with 100% collection efficiency. Photoelectron spectra of NO_{2}^{-}, taken over a range of wavelengths (355nm-520nm) with the ANU's VMI spectrometer, reveal an anomalous jump in the anisotropy parameter near threshold. Consequently, the orbital behavior of NO_{2}^{-} appears to be quite different near threshold compared to detachment at higher photon energies. This surprising effect is due to the Wigner Threshold law, which causes p orbital character to dominate the photodetachment cross-section near threshold, before the mixed s/d orbital character becomes significant at higher electron kinetic energies. By extending recent work on binary character models to form a more general expression, the variable mixed orbital character of NO_{2}^{-} is able to be described. This study provides the first multi-wavelength NO_{2} anisotropy data, which is shown to be in decent agreement with much earlier zero-core model predictions of the anisotropy parameter. K. J. Reed, A. H. Zimmerman, H. C. Andersen, and J. I. Brauman, J. Chem. Phys. 64, 1368, (1976). doi:10.1063/1.432404 D. Khuseynov, C. C. Blackstone, L. M. Culberson, and A. Sanov, J. Chem. Phys. 141, 124312, (2014). doi:10.1063/1.4896241 W. B. Clodius, R. M. Stehman, and S. B. Woo, Phys. Rev. A. 28, 760, (1983). doi:10.1103/PhysRevA.28.760 Research supported by the Australian Research Council Discovery Project Grant DP160102585
Photodetachment Studies Of Atomic Negative Ions Through Velocity-Map Imaging Spectroscopy
NASA Astrophysics Data System (ADS)
Chartkunchand, Kiattichart
The technique of velocity-map imaging (VMI) spectroscopy as been adapted to a keV-level negative ion beamline for studies of photon-negative ion collisions. The design and operation of the VMI spectrometer takes into consideration the use of continuous, fast-moving (5 keV to 10 keV) ion beams, as well as a continuous wave (CW) laser as the source of photons. The VMI spectrometer has been used in photodetachment studies of the Group 14 negative ions Ge--, Sn--, and Pb-- at a photon wavelength of 532 nm. Measurements of the photoelectron angular distributions and asymmetry parameters for Ge-- and Sn-- were benchmarked against those measured previously [W. W. Williams, D. L. Carpenter, A. M. Covington, and J. S. Thompson, Phys. Rev. A 59, 4368 (1999), V. T. Davis, J. Ashokkumar, and J. S. Thompson, Phys. Rev. A 65, 024702 (2002)], while fine-structure-resolved asymmetry parameters for Pb-- were measured for the first time. Definitive evidence of a "forbidden" 4S 3/2→1D2 transition was observed in both the Ge-- and Sn-- photoelectron kinetic energy spectra. This transition is explained in terms of the inadequacy of the single-configuration description for the 1D2 excited state in the corresponding neutral. Near-threshold photodetachment studies of S-- were carried out in order to measure the spectral dependence of the photoelectron angular distribution. The resulting asymmetry parameters were measured at several photon wavelengths in the range of 575 nm (2.156 eV photon energy) to 615 nm (2.016 eV photon energy). Comparison of the measurements to a qualitative model of p-electron photodetachment [D. Hanstorp, C. Bengtsson, and D. J. Larson, Phys. Rev. A 40, 670 (1989)] were made. Deviations of the measured asymmetry parameters from the Hanstorp model near photodetachment thresholds suggests a reduced degree of suppression of d partial-waves than predicted by models. Measurement of the electron affinity of terbium was performed along with a determination of the structure of Tb--. The energy scale for the Tb-- photoelectron kinetic energy spectrum was calibrated to the photoelectron kinetic energy spectrum of Cs-- , whose electron affinity is well-known [T. A. Patterson, H. Hotop, A. Kasdan, D. W. Norcross, and W. C. Lineberger, Phys. Rev. Lett. 32 , 189 (1974)]. Comparison to a previous experimental measurement of the electron affinity of terbium [S. S. Duvvuri, Ph. D. dissertation, University of Nevada, Reno (2006)] and to theoretical calculations of the electron affinity [S. M. O'Malley and D. R. Beck, Phys. Rev. A 79, 012511 (2009)] were made. In contrast to the [Xe]4f106 s2 5I8 ground state configuration proposed in the experimental study and the [Xe]4f 85d6s26p 9G7 ground state configuration proposed in the theoretical study, the present study suggests a Tb-- ground state of [Xe]4f96s 26p 7I3 and an electron affinity of 0.13 +/- 0.07 eV for terbium.
ERIC Educational Resources Information Center
Sutton, Griffin P.; Barchard, Kimberly A.; Bello, Danielle T.; Thaler, Nicholas S.; Ringdahl, Erik; Mayfield, Joan; Allen, Daniel N.
2011-01-01
Evaluation of visuoconstructional abilities is a common part of clinical neuropsychological assessment, and the Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI; K. E. Beery & N. A. Beery, 2004) is often used for this purpose. However, few studies have examined its psychometric properties when used to assess children and…
Handwriting capacity in children newly diagnosed with Attention Deficit Hyperactivity Disorder.
Brossard-Racine, Marie; Majnemer, Annette; Shevell, Michael; Snider, Laurie; Bélanger, Stacey Ageranioti
2011-01-01
Preliminary evidence suggests that children with Attention Deficit Hyperactivity Disorder (ADHD) may exhibit handwriting difficulties. However, the exact nature of these difficulties and the extent to which they may relate to motor or behavioural difficulties remains unclear. The aim of this study was to describe handwriting capacity in children newly diagnosed with ADHD and identify predictors of performance. Forty medication-naïve children with ADHD (mean age 8.1 years) were evaluated with the Evaluation Tool of Children's Handwriting-Manuscript, the Movement Assessment Battery for Children (M-ABC), the Developmental Test of Visual Motor Integration (VMI) and the Conner Global Index. An important subset (85.0%) exhibited manual dexterity difficulties. Handwriting performance was extremely variable in terms of speed and legibility. VMI was the most important predictor of legibility. Upper extremity coordination, as measured by the M-ABC ball skills subtest, was also a good predictor of word legibility. Poor handwriting legibility and slow writing speed were common in children newly diagnosed with ADHD and were associated with motor abilities. Future studies are needed to determine whether interventions, including stimulant medications, can improve handwriting performance and related motor functioning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Lahav, Orit; Apter, Alan; Ratzon, Navah Z
2013-01-01
This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low socioeconomic backgrounds, scoring below the 25th percentile on a measure of visual-motor integration (VMI) were recruited and randomly divided into two parallel intervention groups. One intervention group received directive visual-motor intervention (DVMI), while the second intervention group received a non-directive supportive intervention (NDSI). Tests were administered to evaluate visual-motor integration skills outcome. Children with higher baseline measures of psychological adjustment and self-esteem responded better in NDSI while children with lower baseline performance on psychological adjustment and self-esteem responded better in DVMI. This study suggests that children from low socioeconomic backgrounds with low VMI performance scores will benefit more from intervention programs if clinicians choose the type of intervention according to baseline psychological adjustment and self-esteem measures. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tkáč, Ondřej; Saha, Ashim K.; Loreau, Jérôme; Ma, Qianli; Dagdigian, Paul J.; Parker, David H.; van der Avoird, Ad; Orr-Ewing, Andrew J.
2015-12-01
Differential cross sections (DCSs) are reported for rotationally inelastic scattering of ND3 with H2, measured using a crossed molecular beam apparatus with velocity map imaging (VMI). ND3 molecules were quantum-state selected in the ground electronic and vibrational levels and, optionally, in the j±k = 11- rotation-inversion level prior to collisions. Inelastic scattering of state-selected ND3 with H2 was measured at the mean collision energy of 580 cm-1 by resonance-enhanced multiphoton ionisation spectroscopy and VMI of ND3 in selected single final j'±k' levels. Comparison of experimental DCSs with close-coupling quantum-mechanical scattering calculations serves as a test of a recently reported ab initio potential energy surface. Calculated integral cross sections reveal the propensities for scattering into various final j'±k' levels of ND3 and differences between scattering by ortho and para H2. Integral and differential cross sections are also computed at a mean collision energy of 430 cm-1 and compared to our recent results for inelastic scattering of state-selected ND3 with He.
Novel method of finding extreme edges in a convex set of N-dimension vectors
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
2001-11-01
As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um mapped to Vm, m=1 to M} where um is an N- dimension analog (pattern) vector, Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Ymi, m=1 to M} (where Ymithere existsVmiUm and Vmi=+1 or -1, is the i-th bit of VR-m).)(i=1 to P and there are P sets included here.) Is POSITIVELY, LINEARLY, INDEPENDENT or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors form a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of ND analog vectors.
Temple, V; Drummond, C; Valiquette, S; Jozsvai, E
2010-06-01
Video conferencing (VC) technology has great potential to increase accessibility to healthcare services for those living in rural or underserved communities. Previous studies have had some success in validating a small number of psychological tests for VC administration; however, VC has not been investigated for use with persons with intellectual disabilities (ID). A comparison of test results for two well known and widely used assessment instruments was undertaken to establish if scores for VC administration would differ significantly from in-person assessments. Nineteen individuals with ID aged 23-63 were assessed once in-person and once over VC using the Wechsler Abbreviated Scale of Intelligence (WASI) and the Beery-Buktenica Test of Visual-Motor Integration (VMI). Highly similar results were found for test scores. Full-scale IQ on the WASI and standard scores for the VMI were found to be very stable across the two administration conditions, with a mean difference of less than one IQ point/standard score. Video conferencing administration does not appear to alter test results significantly for overall score on a brief intelligence test or a test of visual-motor integration.
Introspections on the Semantic Gap
2015-04-14
cloud comput - ing. Zhang received an MS in computer science from Stony Brook University. Contact him at dozhang@ cs.stonybrook.edu. Donald E. Porter...designated by other documentation. ... 2 March/April 2015 Copublished by the IEEE Computer and Reliability Societies 1540-7993/15/$31.00 © 2015 IEEE IEEE S...pauses the VM, and the VMI tool introspects the process descriptor list. In contrast, an asynchronous mechanism would intro - spect memory
Bo, Jin; Colbert, Alison; Lee, Chi-Mei; Schaffert, Jeffrey; Oswald, Kaitlin; Neill, Rebecca
2014-09-01
Children with Developmental Coordination Disorder (DCD) often experience difficulties in handwriting. The current study examined the relationships between three motor assessments and the spatial and temporal consistency of handwriting. Twelve children with probable DCD and 29 children from 7 to 12 years who were typically developing wrote the lowercase letters "e" and "l" in cursive and printed forms repetitively on a digitizing tablet. Three behavioral assessments, including the Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI), the Minnesota Handwriting Assessment (MHA) and the Movement Assessment Battery for Children (MABC), were administered. Children with probable DCD had low scores on the VMI, MABC and MHA and showed high temporal, not spatial, variability in the letter-writing task. Their MABC scores related to temporal consistency in all handwriting conditions, and the Legibility scores in their MHA correlated with temporal consistency in cursive "e" and printed "l". It appears that children with probable DCD have prominent difficulties on the temporal aspect of handwriting. While the MHA is a good product-oriented assessment for measuring handwriting deficits, the MABC shows promise as a good assessment for capturing the temporal process of handwriting in children with DCD. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mashuri, Chamdan; Suryono; Suseno, Jatmiko Endro
2018-02-01
This research was conducted by prediction of safety stock using Fuzzy Time Series (FTS) and technology of Radio Frequency Identification (RFID) for stock control at Vendor Managed Inventory (VMI). Well-controlled stock influenced company revenue and minimized cost. It discussed about information system of safety stock prediction developed through programming language of PHP. Input data consisted of demand got from automatic, online and real time acquisition using technology of RFID, then, sent to server and stored at online database. Furthermore, data of acquisition result was predicted by using algorithm of FTS applying universe of discourse defining and fuzzy sets determination. Fuzzy set result was continued to division process of universe of discourse in order to be to final step. Prediction result was displayed at information system dashboard developed. By using 60 data from demand data, prediction score was 450.331 and safety stock was 135.535. Prediction result was done by error deviation validation using Mean Square Percent Error of 15%. It proved that FTS was good enough in predicting demand and safety stock for stock control. For deeper analysis, researchers used data of demand and universe of discourse U varying at FTS to get various result based on test data used.
Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Han, Fang; Scott, Stephen L
Cloud Computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM) providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This papermore » presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM s default method.« less
Limitations of the Neurological Evolutional Exam (ENE) as a motor assessment for first graders.
Caçola, Priscila M; Bobbio, Tatiana G; Arias, Amabile V; Gonçalves, Vanda G; Gabbard, Carl
2010-01-01
many clinicians and researchers in Brazil consider the Neurological Developmental Exam (NDE), a valid and reliable assessment for Brazilian school-aged children. However, since its inception, several tests have emerged that, according to some researchers, provide more in-depth evaluation of motor ability and go beyond the detection of general motor status (soft neurological signs). to highlight the limitations of the NDE as a motor skill assessment for first graders. thirty-five children were compared on seven selected items of the NDE, seven of the Bruininks-Oseretsky Test (BOT), and seven of the Visual-Motor Integration test (VMI). Participants received a "pass" or "fail" score for each item, as prescribed by the respective test manual. chi-square and ANOVA results indicated that the vast majority of children (74%) passed the NDE items, whereas values for the other tests were 29% (BOT) and 20% (VMI). Analysis of specific categories (e.g. visual, fine, and gross motor coordination) revealed a similar outcome. our data suggest that while the NDE may be a valid and reliable test for the detection of general motor status, its use as a diagnostic/remedial tool for identifying motor ability is questionable. One of our recommendations is the consideration of a revised NDE in light of the current needs of clinicians and researchers.
NASA Astrophysics Data System (ADS)
Pathak, Shashank; Robatjazi, Seyyed Javad; Wright Lee, Pearson; Raju Pandiri, Kanaka; Rolles, Daniel; Rudenko, Artem
2017-04-01
J.R. Macdonald Laboratory, Department of Physics, Kansas State University, Manhattan KS, USA We report on the development of a versatile experimental setup for XUV-IR pump-probe experiments using a 10 kHz high-harmonic generation (HHG) source and two different charged-particle momentum imaging spectrometers. The HHG source, based on a commercial KM Labs eXtreme Ultraviolet Ultrafast Source, is capable of delivering XUV radiation of less than 30 fs pulse duration in the photon energy range of 17 eV to 100 eV. It can be coupled either to a conventional velocity map imaging (VMI) setup with an atomic, molecular, or nanoparticle target; or to a novel double-sided VMI spectrometer equipped with two delay-line detectors for coincidence studies. An overview of the setup and results of first pump-probe experiments including studies of two-color double ionization of Xe and time-resolved dynamics of photoionized CO2 molecule will be presented. This project is supported in part by National Science Foundation (NSF-EPSCOR) Award No. IIA-1430493 and in part by the Chemical science, Geosciences, and Bio-Science division, Office of Basic Energy Science, Office of science, U.S. Department of Energy. K.
He, Tao; Anastasia, Mary Kay; Zsiska, Marianne; Farmer, Teresa; Schneiderman, Eva; Milleman, Jeffery L
2017-12-01
To evaluate the effect of a novel stannous fluoride dentifrice with zinc citrate on calculus inhibition using both in vitro and clinical models. Each investigation tested a novel stabilized 0.454% stannous fluoride dentifrice with zinc citrate as an anticalculus agent (Crest® Pro-Health™ smooth formula) compared to a negative control fluoride dentifrice. The in vitro study used the modified Plaque Growth and Mineralization Model (mPGM). Plaque biofilms were prepared and mineralized by alternate immersion of glass rods in human saliva and artificial mineralization solution. Treatments of 25% w/w dentifrice/water slurries were carried out for 60 seconds daily for 6 days, between saliva and mineralization solution immersions. Plaque calcium levels were determined by digestion and inductively coupled plasma optical emission spectroscopy. Student's t-test (p < 0.05) was used for statistical analysis. The clinical study was a parallel group, double-blind, randomized, and controlled trial. Following a dental prophylaxis, subjects entered a two-month run-in phase. At the end, they received a Volpe-Manhold Index (V-MI) calculus examination. Eighty (80) qualified subjects who had formed at least 9 mm of calculus on the linguals of the mandibular anterior teeth were re-prophied and randomly assigned to either the stannous fluoride dentifrice or the negative control. Subjects brushed twice daily, unsupervised, during the three-month test period, returning at Weeks 6 and 12 for safety and V-MI examinations. Statistical analyses were via ANCOVA. In vitro mPGM: The stabilized stannous fluoride dentifrice showed 20% less in vitro tartar formation, measured as calcium accumulation normalized by biofilm mass, versus the negative control (106.95 versus 133.04 µg Ca/mg biofilm, respectively, p < 0.05). Clinical Trial: Seventy-eight (78) subjects completed with fully evaluable data. The stannous fluoride dentifrice group had 15.1% less adjusted mean calculus at Week 6 compared to the negative control group (p = 0.05) and 21.7% less calculus at Week 12 (p < 0.01). Both dentifrices were well-tolerated. The stannous fluoride dentifrice produced significant anticalculus benefits in vitro and in a clinical trial compared to a negative control.
Puthanakit, Thanyawee; Ananworanich, Jintanat; Vonthanak, Saphonn; Kosalaraksa, Pope; Hansudewechakul, Rawiwan; van der Lugt, Jasper; Kerr, Stephen J.; Kanjanavanit, Suparat; Ngampiyaskul, Chaiwat; Wongsawat, Jurai; Luesomboon, Wicharn; Vibol, Ung; Pruksakaew, Kanchana; Suwarnlerk, Tulathip; Apornpong, Tanakorn; Ratanadilok, Kattiya; Paul, Robert; Mofenson, Lynne M.; Fox, Lawrence; Valcour, Victor; Brouwers, Pim; Ruxrungtham, Kiat
2013-01-01
Background We previously reported similar AIDS-free survival at 3 years in children who were >1 year old initiating antiretroviral therapy (ART) and randomized to early vs. deferred ART in the PREDICT Study. We now report neurodevelopmental outcomes. Methods 284 HIV-infected Thai and Cambodian children aged 1–12 years with CD4 counts between 15–24% and no AIDS-defining illness were randomized to initiate ART at enrollment (“early”, n=139) or when CD4 count became <15% or a CDC C event developed (“deferred”, n=145). All underwent age-appropriate neurodevelopment testing including Beery Visual Motor Integration (VMI), Purdue Pegboard, Color Trails and Child Behavioral Checklist (CBCL). Thai children (n=170) also completed Wechsler Intelligence Scale (IQ) and Stanford Binet Memory test. We compared week 144 measures by randomized group and to HIV-uninfected children (n=319). Results At week 144, the median age was 9 years and 69 (48%) of the deferred arm children had initiated ART. The early arm had a higher CD4 (33% vs. 24%, p<0.001) and a greater percentage of children with viral suppression (91% vs. 40%, p<0.001). Neurodevelopmental scores did not differ by arm and there were no differences in changes between arms across repeated assessments in time-varying multivariate models. HIV-infected children performed worse than uninfected children on IQ, Beery VMI, Binet memory and CBCL Conclusions In HIV-infected children surviving beyond one year of age without ART, neurodevelopmental outcomes were similar with ART initiation at CD4 15–24% vs. < 15%; but both groups performed worse than HIV-uninfected children. The window of opportunity for a positive effect of ART initiation on neurodevelopment may remain in infancy. PMID:23263176
Fairbrother, K J; Kowolik, M J; Curzon, M E; Müller, I; McKeown, S; Hill, C M; Hannigan, C; Bartizek, R D; White, D J
1997-01-01
Three triclosan-containing "multi-benefit" dentifrices were compared for clinical efficacy in reducing supragingival calculus formation following a dental prophylaxis. A total of 544 subjects completed a double-blind parallel-group clinical study using the Volpe-Manhold Index (VMI) to record severity and occurrence of supragingival calculus. The study design included a pre-test period where the calculus formation rate was measured in subjects brushing with a placebo dentifrice. Following a prophylaxis, subjects were stratified for age, gender and VMI scores and assigned to one of four treatments: 1) a dentifrice containing 5.0% soluble pyrophosphate/0.145% fluoride as NaF/silica abrasive/0.28% triclosan (hereafter PPi/TCS-comparable to Crest Complete dentifrice, Procter & Gamble, UK); 2) a commercial dentifrice containing 2.0% Gantrez acid copolymer/ 0.145% fluoride as NaF/silica abrasive/0.30% triclosan (hereafter Gan/TCS-Colgate Total dentifrice, Colgate-Palmolive Company, UK); 3) a commercial dentifrice containing 0.5% zinc citrate trihydrate/0.15% fluoride as sodium monofluorophosphate/silica abrasive/0.20% triclosan (hereafter Zn/TCS-Mentadent P dentifrice, Unilever, UK); and 4) a control dentifrice comprised of 0.145% fluoride as NaF/silica abrasive (hereafter Control). Subjects were instructed to use their assigned dentifrice at least twice per day and to brush as they do normally. Supragingival calculus formation was assesed at two and four months using site-specific and whole-mouth VMI indices for both calculus severity and occurrence. Following four months of use, the PPi/TCS dentifrice provided statistically significant reductions in calculus severity (22-23%) and occurrence (15%) as compared with the Control dentifrice. The Zn/TCS dentifrice also provided significant reductions in calculus severity (17-19%) and occurrence (12-13%) as compared with the Control. The Gan/TCS produced no statistically significant reductions in calculus formation (occurrence or severity) compared with the Control. The PPi/TCS dentifrice provided statistically significant reductions in calculus severity (15-21%) and occurrence (12-16%) as compared with the Gan/TCS dentifrice. These results support the clinical effectiveness of PPi/TCS and Zn/TCS dentifrices for the reduction of supragingival dental calculus formation following a dental prophylaxis.
Role of p53 in cdk Inhibitor VMY-1-103-induced Apoptosis in Prostate Cancer
2013-11-01
DAOY medulloblastoma cells, which have a p53 mutation (6). In order to examine if this holds true in prostate cancer cell lines, I stably transfected...disrupts chromosome organization and delays metaphase progression in medulloblastoma cells. Cancer Biol Ther. 2011 Nov 1;12(9):818-26 Other...1-103 is a novel CDK inhibitor that disrupts chromosome organization and delays metaphase progression in medulloblastoma cells. Cancer Biol Ther
Role of p53 in cdk Inhibitor VMY-1-103-Induced Apoptosis in Prostate Cancer
2012-09-01
trioxide inhibits human cancer cell growth and tumor development in mice by blocking Hedgehog /GLI pathway. J Clin Invest. 2011 Jan 4;121(1):148- 60...subclassified the tumors based on gene expression patterns and chromosomal abnormalities.4-6 Dysregulation of Hedgehog (Hh) signaling, defined as the c3...Eberhart CG. Hedgehog signaling promotes medulloblastoma survival via Bc/II. Am J Pathol 2007; 170:347-55; PMID:17200206; DOI:10.2353/ajpath
NASA Astrophysics Data System (ADS)
Qi, Wenke; Jiang, Pan; Lin, Dan; Chi, Xiaoping; Cheng, Min; Du, Yikui; Zhu, Qihe
2018-01-01
A mini time-sliced ion velocity map imaging photofragment translational spectrometer using low voltage acceleration has been constructed. The innovation of this apparatus adopts a relative low voltage (30-150 V) to substitute the traditional high voltage (650-4000 V) to accelerate and focus the fragment ions. The overall length of the flight path is merely 12 cm. There are many advantages for this instrument, such as compact structure, less interference, and easy to operate and control. Low voltage acceleration gives a longer turn-around time to the photofragment ions forming a thicker Newton sphere, which provides sufficient time for slicing. Ion trajectory simulation has been performed for determining the structure dimensions and the operating voltages. The photodissociation and multiphoton ionization of O2 at 224.999 nm is used to calibrate the ion images and examine the overall performance of the new spectrometer. The velocity resolution (Δν/ν) of this spectrometer from O2 photodissociation is about 0.8%, which is better than most previous results using high acceleration voltage. For the case of CF3I dissociation at 277.38 nm, many CF3 vibrational states have been resolved, and the anisotropy parameter has been measured. The application of low voltage acceleration has shown its advantages on the ion velocity map imaging (VMI) apparatus. The miniaturization of the VMI instruments can be realized on the premise of high resolution.
Ehlers, Justis P; Han, Jaehong; Petkovsek, Daniel; Kaiser, Peter K; Singh, Rishi P; Srivastava, Sunil K
2015-11-01
To assess retinal architectural alterations that occur following membrane peeling procedures and the impact of peel technique on these alterations utilizing intraoperative optical coherence tomography (iOCT). This is a subanalysis of the prospective PIONEER iOCT study of eyes undergoing a membrane peeling for a vitreomacular interface (VMI) disorder. Intraoperative scanning was performed with a microscope-mounted OCT system. Macroarchitectural alterations (e.g., full-thickness retinal elevations) and microarchitectural alterations (e.g., relative layer thickness alterations) were analyzed. Video/iOCT correlation was performed to identify instrument-tissue manipulations resulting in macroarchitectural alterations. One hundred sixty-three eyes were included in the macroarchitectural analysis. Instrumentation utilized for membrane peeling included forceps alone for 73 eyes (45%), combined diamond-dusted membrane scraper (DDMS) and forceps for 87 eyes (53%), and other techniques in three eyes (2%). Focal retinal elevations were identified in 45 of 163 eyes (28%). Video/iOCT correlation identified 69% of alterations involved forceps compared to 26% due to DDMS. Sixteen percent of retinal alterations persisted 1 month following surgery. The microarchitectural analysis included 134 eyes. Immediately following membrane peeling, there was a significant increase in the ellipsoid zone to retinal pigment epithelium height (+20%, P < 0.00001) and the cone outer segment tips to retinal pigment epithelium height (+18%, P < 0.00001). Significant subclinical retinal architectural changes occur during membrane peeling for VMI conditions. Differences in surgical instruments may impact these architectural alterations.
Turner, Benjamin; Kennedy, Areti; Kendall, Melissa; Muenchberger, Heidi
2014-01-01
To examine the effectiveness of a targeted training approach to foster and support a peer-professional workforce in the delivery of a community rehabilitation program for adults with acquired brain injury (ABI) and their families. A prospective longitudinal design was used to evaluate the effectiveness of a targeted two-day training forum for peer (n = 25) and professional (n = 15) leaders of the Skills to Enable People and Communities Program. Leaders completed a set of questionnaires (General Self-Efficacy Scale - GSES, Rosenberg Self-Esteem Scale, Volunteer Motivation Inventory - VMI and Community Involvement Scale - CIS) both prior to and immediately following the forum. Data analysis entailed paired sample t-test to explore changes in scores over time, and independent sample t-tests for comparisons between the two participant groups. The results indicated a significant increase in scores over time for the GSES (p = 0.047). Improvements in leaders' volunteer motivations and community involvement were also observed between the two time intervals. The between group comparisons highlighted that the peer leader group scored significantly higher than the professional leader group on the CIS and several domains of the VMI at both time intervals. The study provides an enhanced understanding of the utility of innovative workforce solutions for community rehabilitation after ABI; and further highlights the benefits of targeted training approaches to support the development of such workforce configurations.
Kleber, C J; Putt, M S; Milleman, J L; Harris, M
1998-01-01
This clinical study compared the effect of a dental floss containing 0.25 mg tetrasodium pyrophosphate per cm and a placebo floss on supragingival calculus formation using a 6-week, partial-mouth toothshield model. The six lower anterior teeth were scaled and polished before each 2-week period (i.e., pre-trial, washout, trial). During both the pre-trial and trial periods, subjects brushed twice daily with a non-tartar control dentifrice, while a toothshield protected the six test teeth from brushing. After rinsing with water and removing the shield, they flossed the test teeth. All subjects used placebo floss during the pre-trial period in order to determine the baseline Volpe-Manhold Index (VMI) calculus formation scores, which were used to balance groups for the trial period. During the trial period, one group used the placebo floss, while the second group used the pyrophosphate floss. The final results demonstrated that the pyrophosphate floss significantly inhibited calculus formation between teeth (mesial-distal scores) by 21%, and on labial surfaces by 37% relative to the placebo floss.
Samango-Sprouse, Carole; Lawson, Patrick; Sprouse, Courtney; Stapleton, Emily; Sadeghin, Teresa; Gropman, Andrea
2016-05-01
Kleefstra syndrome (KS) is a rare neurogenetic disorder most commonly caused by deletion in the 9q34.3 chromosomal region and is associated with intellectual disabilities, severe speech delay, and motor planning deficits. To our knowledge, this is the first patient (PQ, a 6-year-old female) with a 9q34.3 deletion who has near normal intelligence, and developmental dyspraxia with childhood apraxia of speech (CAS). At 6, the Wechsler Preschool and Primary Intelligence testing (WPPSI-III) revealed a Verbal IQ of 81 and Performance IQ of 79. The Beery Buktenica Test of Visual Motor Integration, 5th Edition (VMI) indicated severe visual motor deficits: VMI = 51; Visual Perception = 48; Motor Coordination < 45. On the Receptive One Word Picture Vocabulary Test-R (ROWPVT-R), she had standard scores of 96 and 99 in contrast to an Expressive One Word Picture Vocabulary-R (EOWPVT-R) standard scores of 73 and 82, revealing a discrepancy in vocabulary domains on both evaluations. Preschool Language Scale-4 (PLS-4) on PQ's first evaluation reveals a significant difference between auditory comprehension and expressive communication with standard scores of 78 and 57, respectively, further supporting the presence of CAS. This patient's near normal intelligence expands the phenotypic profile as well as the prognosis associated with KS. The identification of CAS in this patient provides a novel explanation for the previously reported speech delay and expressive language disorder. Further research is warranted on the impact of CAS on intelligence and behavioral outcome in KS. Therapeutic and prognostic implications are discussed. © 2016 Wiley Periodicals, Inc.
Influence of long-range Coulomb interaction in velocity map imaging.
Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C
2017-07-07
The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.
NASA Astrophysics Data System (ADS)
Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia
2017-09-01
Photon-counting computed tomography (PCCT) uses a photon counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by assessing the accuracy of derived images sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and of CT number accuracy in virtual monoenergetic images (VMI). Vials containing iodine with five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different size patients. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on known iodine concentration and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate CT number of VMIs (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of EID-based dual-source, dual-energy scanners.
Axford, Caitlin; Joosten, Annette V; Harris, Courtenay
2018-04-01
Children are reported to spend less time engaged in outdoor activity and object-related play than in the past. The increased use and mobility of technology, and the ease of use of tablet devices are some of the factors that have contributed to these changes. Concern has been raised that the use of such screen and surface devices in very young children is reducing their fine motor skill development. We examined the effectiveness of iPad applications that required specific motor skills designed to improve fine motor skills. We conducted a two-group non-randomised controlled trial with two pre-primary classrooms (53 children; 5-6 years) in an Australian co-educational school, using a pre- and post-test design. The effectiveness of 30 minutes daily use of specific iPad applications for 9 weeks was compared with a control class. Children completed the Beery Developmental Test of Visual Motor Integration (VMI) and observation checklist, the Shore Handwriting Screen, and self-care items from the Hawaii Early Learning Profile. On post testing, the experimental group made a statistically and clinically significant improvement on the VMI motor coordination standard scores with a moderate clinical effect size (P < 0.001; d = 0.67). Children's occupational performance in daily tasks also improved. Preliminary evidence was gained for using the iPad, with these motor skill-specific applications as an intervention in occupational therapy practice and as part of at home or school play. © 2018 Occupational Therapy Australia.
NASA Astrophysics Data System (ADS)
Othman, Yahia Abdelrahman
Demand for New Mexico's limited water resources coupled with periodic drought has increased the need to schedule irrigation of pecan orchards based on tree water status. The overall goal of this research was to develop advanced tree water status sensing techniques to optimize irrigation scheduling of pecan orchards. To achieve this goal, I conducted three studies in the La Mancha and Leyendecker orchards, both mature pecan orchards located in the Mesilla Valley, New Mexico. In the first study, I screened leaf-level physiological changes that occurred during cyclic irrigation to determine parameters that best represented changes in plant moisture status. Then, I linked plant physiological changes to remotely-sensed surface reflectance data derived from Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+). In the second study, I assessed the impact of water deficits that developed during the flood irrigation dry-down cycles on photosynthesis (A) and gas exchange and established preliminary water deficit thresholds of midday stem water potential (Psi smd) critical to A and gas exchange of pecans. In a third study, I investigated whether hyperspectral data obtained from a handheld spectroradiometer and multispectral remotely-sensed data derived from Landsat 7 ETM+ and Landsat 8 Operational Land Imager (OLI) could detect moisture status in pecans during cyclic flood irrigations. I conducted the first study simultaneously in both orchards. Leaf-level physiological responses and remotely-sensed surface reflectance data were collected from trees that were either well watered or in water deficit. Midday stem water potential was the best leaf-level physiological response to detect moisture status in pecans. Multiple linear regression between Psismd and vegetation indices revealed a significant relationship (R 2 = 0.54) in both orchards. Accordingly, I concluded that remotely-sensed multispectral data form Landsat TMETM+ holds promise for detecting the moisture status of pecans. I conducted the second study simultaneously on the same mature pecan orchards that were used in the first study. Photosynthesis and gas exchange were assessed at Psismd of -0.4 to -2.0 MPa. This study established preliminary values of Psismd that significantly impacted A and gas exchange of field-grown pecans. I recommended that pecan orchards be maintained at Psismd that ranged between -0.80 to -0.90 MPa to prevent significant reductions in A and gas exchange. Broken-line analysis revealed that A remained relatively constant when Psismd was above -0.65 MPa. Conversely, there was linear positive relationship between Psi smd and A when Psismd was less than -0.65 MPa. In the third study, again conducted on both orchards, leaf-level physiological measurements and remotely-sensed data were taken at Psismd levels of -0.40 to -0.85 MPa, -0.95 to -1.45 MPa , and -1.5 to -2.0 MPa. Hyperspectral reflectance indices (from handheld spectroradiometer) detected moisture status in pecan trees better than multispectral reflectance indices (from Landsat ETM+OLI). Vegetation moisture index-I (VMI-I) and vegetation moisture index-II (VMI-II) significantly correlated with Psismd (VMI-I, 0.88 > r > 0.87; VMI-II, -0.68 > r > -0.65). Vegetation moisture index-I Boxplot analysis did not clearly separate moderate water status (-0.95 to -1.45 MPa) at La Mancha, but did so at Leyendecker. However, multispectral reflectance indices had a limited capacity to precisely detect the moderate water status at both orchards (the time when A declined by 15 - 40 %). Given that Psi smd of-0.90 to -1.45 MPa is a critical range for irrigating pecans, I concluded that vegetation indices derived only from hyperspectral reflectance data could be used to detect plant physiological responses that are related to plant water status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhou; Chang, Yih Chung; Gao, Hong
2014-06-21
We present a generally applicable experimental method for the direct measurement of nascent spin-orbit state distributions of atomic photofragments based on the detection of vacuum ultraviolet (VUV)-excited autoionizing-Rydberg (VUV-EAR) states. The incorporation of this VUV-EAR method in the application of the newly established VUV-VUV laser velocity-map-imaging-photoion (VMI-PI) apparatus has made possible the branching ratio measurement for correlated spin-orbit state resolved product channels, CO(ã{sup 3}Π; v) + O({sup 3}P{sub 0,1,2}) and CO(Χ{sup ~1}Σ{sup +}; v) + O({sup 3}P{sub 0,1,2}), formed by VUV photoexcitation of CO{sub 2} to the 4s(1{sub 0}{sup 1}) Rydberg state at 97,955.7 cm{sup −1}. The total kinetic energy releasemore » (TKER) spectra obtained from the O{sup +} VMI-PI images of O({sup 3}P{sub 0,1,2}) reveal the formation of correlated CO(ã{sup 3}Π; v = 0–2) with well-resolved v = 0–2 vibrational bands. This observation shows that the dissociation of CO{sub 2} to form the spin-allowed CO(ã{sup 3}Π; v = 0–2) + O({sup 3}P{sub 0,1,2}) channel has no potential energy barrier. The TKER spectra for the spin-forbidden CO(Χ{sup ~1}Σ{sup +}; v) + O({sup 3}P{sub 0,1,2}) channel were found to exhibit broad profiles, indicative of the formation of a broad range of rovibrational states of CO(Χ{sup ~1}Σ{sup +}) with significant vibrational populations for v = 18–26. While the VMI-PI images for the CO(ã{sup 3}Π; v = 0–2) + O({sup 3}P{sub 0,1,2}) channel are anisotropic, indicating that the predissociation of CO{sub 2} 4s(1{sub 0}{sup 1}) occurs via a near linear configuration in a time scale shorter than the rotational period, the angular distributions for the CO(Χ{sup ~1}Σ{sup +}; v) + O({sup 3}P{sub 0,1,2}) channel are close to isotropic, revealing a slower predissociation process, which possibly occurs on a triplet surface via an intersystem crossing mechanism.« less
Neurodevelopmental outcomes in HIV-exposed-uninfected children versus those not exposed to HIV
Kerr, Stephen J.; Puthanakit, Thanyawee; Vibol, Ung; Aurpibul, Linda; Vonthanak, Sophan; Kosalaraksa, Pope; Kanjanavanit, Suparat; Hansudewechakul, Rawiwan; Wongsawat, Jurai; Luesomboon, Wicharn; Ratanadilok, Kattiya; Prasitsuebsai, Wasana; Pruksakaew, Kanchana; van der Lugt, Jasper; Paul, Robert; Ananworanich, Jintanat; Valcour, Victor
2014-01-01
Human immunodeficiency virus (HIV)-negative children born to HIV-infected mothers may exhibit differences in neurodevelopment (ND) compared to age- and gender-matched controls whose lives have not been affected by HIV. This could occur due to exposure to HIV and antiretroviral agents in utero and perinatally, or differences in the environment in which they grow up. This study assessed neurodevelopmental outcomes in HIV-exposed uninfected (HEU) and HIV-unexposed uninfected (HUU) children enrolled as controls in a multicenter ND study from Thailand and Cambodia. One hundred sixty HEU and 167 HUU children completed a neurodevelopmental assessment using the Beery Visual Motor Integration (VMI) test, Color Trails, Perdue Pegboard, and Child Behavior Checklist (CBCL). Thai children (n = 202) also completed the Wechsler Intelligence Scale (IQ) and Stanford-Binet II memory tests. In analyses adjusted for caregiver education, parent as caregiver, household income, age, and ethnicity, statistically significant lower scores were seen on verbal IQ (VIQ), full-scale IQ (FSIQ), and Binet Bead Memory among HEU compared to HUU. The mean (95% CI) differences were −6.13 (−10.3 to −1.96), p = 0.004; −4.57 (−8.80 to −0.35), p = 0.03; and −3.72 (−6.57 to −0.88), p = 0.01 for VIQ, FSIQ, and Binet Bead Memory, respectively. We observed no significant differences in performance IQ, other Binet memory domains, Color Trail, Perdue Pegboard, Beery VMI, or CBCL test scores. We conclude that HEU children evidence reductions in some neurodevelopmental outcomes compared to HUU; however, these differences are small and it remains unclear to what extent they have immediate and long-term clinical significance. PMID:24878112
Neurodevelopmental outcomes in HIV-exposed-uninfected children versus those not exposed to HIV.
Kerr, Stephen J; Puthanakit, Thanyawee; Vibol, Ung; Aurpibul, Linda; Vonthanak, Sophan; Kosalaraksa, Pope; Kanjanavanit, Suparat; Hansudewechakul, Rawiwan; Wongsawat, Jurai; Luesomboon, Wicharn; Ratanadilok, Kattiya; Prasitsuebsai, Wasana; Pruksakaew, Kanchana; van der Lugt, Jasper; Paul, Robert; Ananworanich, Jintanat; Valcour, Victor
2014-01-01
Human immunodeficiency virus (HIV)-negative children born to HIV-infected mothers may exhibit differences in neurodevelopment (ND) compared to age- and gender-matched controls whose lives have not been affected by HIV. This could occur due to exposure to HIV and antiretroviral agents in utero and perinatally, or differences in the environment in which they grow up. This study assessed neurodevelopmental outcomes in HIV-exposed uninfected (HEU) and HIV-unexposed uninfected (HUU) children enrolled as controls in a multicenter ND study from Thailand and Cambodia. One hundred sixty HEU and 167 HUU children completed a neurodevelopmental assessment using the Beery Visual Motor Integration (VMI) test, Color Trails, Perdue Pegboard, and Child Behavior Checklist (CBCL). Thai children (n = 202) also completed the Wechsler Intelligence Scale (IQ) and Stanford-Binet II memory tests. In analyses adjusted for caregiver education, parent as caregiver, household income, age, and ethnicity, statistically significant lower scores were seen on verbal IQ (VIQ), full-scale IQ (FSIQ), and Binet Bead Memory among HEU compared to HUU. The mean (95% CI) differences were -6.13 (-10.3 to -1.96), p = 0.004; -4.57 (-8.80 to -0.35), p = 0.03; and -3.72 (-6.57 to -0.88), p = 0.01 for VIQ, FSIQ, and Binet Bead Memory, respectively. We observed no significant differences in performance IQ, other Binet memory domains, Color Trail, Perdue Pegboard, Beery VMI, or CBCL test scores. We conclude that HEU children evidence reductions in some neurodevelopmental outcomes compared to HUU; however, these differences are small and it remains unclear to what extent they have immediate and long-term clinical significance.
Motor, cognitive, and behavioural disorders in children born very preterm.
Foulder-Hughes, L A; Cooke, R W I
2003-02-01
Children born preterm have been shown to exhibit poor motor function and behaviour that is associated with school failure in the presence of average intelligence. A geographically determined cohort of two-hundred and eighty preterm children (151 males, 129 females) born before 32 weeks' gestation and attending mainstream schools were examined at 7 to 8 years of age together with 210 (112 males, 98 females) age- and sex-matched control participants were tested for motor, cognitive, and behavioural problems. Tests applied were the Movement Assessment Battery for Children (MABC), Clinical Observations of Motor and Postural Skills (COMPS), Developmental Test of Visual-Motor Integration (VMI), Wechsler Intelligence Scale for Children, and Connors' Teacher Rating Scale for attention-deficit-hyperactivity disorder (ADHD). Control children scored significantly better than the preterm group on all motor, cognitive, and behavioural measures. The lowest birthweight and most preterm individuals tended to score the lowest. Motor impairment was diagnosed in 86 (30.7%) of the preterm group and 14 (6.7%) of the control children using the MABC; 97 (42.7%) and 18 (10.2%) using the COMPS; and 68 (24.3%) and 17 (8.1%) respectively using the VMI. Each test of motor function identified different children with disability, although 23 preterm children were identified as having motor disability by all three tests. Preterm children were more likely to have signs of inattention and impulsivity and have a diagnosis of ADHD. Minor motor disabilities persist in survivors of preterm birth despite improvements in care and are not confined to the smallest or most preterm infants. They may exist independently of cognitive and behavioural deficits, although they often co-exist. The condition is heterogeneous and may require more than one test to identify all children with potential learning problems.
Vector Meson Production at Hera
NASA Astrophysics Data System (ADS)
Szuba, Dorota
The diffractive production of vector mesons ep→eVMY, with VM=ρ0, ω, ϕ, J/ψ, ψ‧ or ϒ and with Y being either the scattered proton or a low mass hadronic system, has been extensively investigated at HERA. HERA offers a unique opportunity to study the dependences of diffractive processes on different scales: the mass of the vector meson, mVM, the centre-of-mass energy of the γp system, W, the photon virtuality, Q2 and the four-momentum transfer squared at the proton vertex, |t|. Strong interactions can be investigated in the transition from the hard to the soft regime, where the confinement of quarks and gluons occurs.
Application of Acoustic Signal Processing Techniques to Seismic Data.
1977-06-30
Sr(U So 9RU FEND LOA SUNIT STA PLC3 LOA I STA OLC4 F A FORRE-C ;3f F’RARD I REC9; T9 3ET PAST T-4E E24F LIZE 2 0’ LC3 r ZE 0 PLC4 t~zs 0 FEND STZ 3AKSCN...0)0.4 E 4.1 33 Ŕ. 37.0 12-46-22.9 1?-34-07 .1 M VMiHM ,P1W LJ 31 wEST ER"’ %EDITEPRANFAN AqPA 301 ALBANIA P62 1220 F2 6P 2 2? 243990 al.0 N 20.4 E
NASA Astrophysics Data System (ADS)
Elias, Nurainaa; Mat Yahya, Nafrizuan
2018-04-01
Chin stands aid is a device designed to reduce fatigue on the chin during the Visual Mechanical Inspection (VMI) task for operators in TT Electronic Sdn Bhd, Kuantan, Malaysia. It is also used to reduce cycle time and also improve employee well-being in terms of comfort. In this project, a 3D model of chin stands aid with an ergonomics approach is created using SOLIDWORKS software. Two different concepts were designed and the best one is chosen based on the Pugh concept selection method, concept screening and also concept scoring. After the selection of concepts is done, a prototype of chin stands aid will be developed and a simulation of the prototype is performed. The simulation has been executed by using Workbench ANSYS software as a tool. Stress analysis, deformation analysis, and fatigue analysis have been done to know the strength and lifespan of the product. The prototype also has been tested to know the functionality and also comfortability for the user to use the chin stands aid.
Novel applications of X-ray photoelectron spectroscopy on unsupported nanoparticles
NASA Astrophysics Data System (ADS)
Kostko, Oleg; Xu, Bo; Jacobs, Michael I.; Ahmed, Musahid
X-ray photoelectron spectroscopy (XPS) is a powerful technique for chemical analysis of surfaces. We will present novel results of XPS on unsupported, gas-phase nanoparticles using a velocity-map imaging (VMI) spectrometer. This technique allows for probes of both the surfaces of nanoparticles via XPS as well as their interiors via near edge X-ray absorption fine structure (NEXAFS) spectroscopy. A recent application of this technique has confirmed that arginine's guanidinium group exists in a protonated state even in strongly basic solution. Moreover, the core-level photoelectron spectroscopy can provide information on the effective attenuation length (EAL) of low kinetic energy electrons. This contradictory value is important for determining the probing depth of XPS and in photolithography. A new method for determining EALs will be presented.
Lin, Hsi-Hsien; Chang, Ming-Chau; Wang, Shih-Tien; Liu, Chien-Lin; Chou, Po-Hsin
2018-06-01
Polymethylmethacrylate (PMMA) augmentation is a common method to increase pullout strength fixed for osteoporotic spines. However, few papers evaluated whether these pedicle screws migrated with time and functional outcome in these geriatrics following PMMA-augmented pedicle screw fixation. From March 2006 to September 2008, consecutive 64 patients were retrospectively enrolled. VAS and ODI were used to evaluate functional outcomes. Kyphotic angle at instrumented levels and horizontal and vertical distances (HD and VD) between screw tip and anterior and upper cortexes were evaluated. To avoid bias, we used horizontal and vertical migration index (HMI and VMI) to re-evaluate screw positions with normalization by the mean of superior and inferior endplates or anterior and posterior vertebral body height, respectively. Forty-six patients with 282 PMMA-augmented screws were analyzed with mean follow-up of 95 months. Nine patients were further excluded due to bed-ridden at latest follow-up. Twenty-six females and 11 males with mean T score of - 2.7 (range, - 2.6 to - 4.1) and mean age for operation of 77.6 ± 4.3 years (range, 65 to 86). The serial HD and kyphotic angle statistically progressed with time. The serial VD did not statistically change with time (p = 0.23), and neither HMI nor VMI (p = 0.772 and 0.631). Pre-operative DEXA results did not correlate with kyphotic angle. Most patients (80.4%) maintained similar functional outcomes at latest follow-up. The incidence of screws loosening was 2.7% of patients and 1.4% of screws, respectively. The overall incidences of systemic post-operative co-morbidities were 24.3% with overall 20.2 days for hospitalization. Most patients (80%) remained similar functional outcomes at latest follow-up in spite of kyphosis progression. The incidence of implant failure was not high, but the post-operative systemic co-morbidities were higher, which has to be informed before index surgery.
Velocity map imaging using an in-vacuum pixel detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gademann, Georg; Huismans, Ymkje; Gijsbertsen, Arjan
The use of a new type in-vacuum pixel detector in velocity map imaging (VMI) is introduced. The Medipix2 and Timepix semiconductor pixel detectors (256x256 square pixels, 55x55 {mu}m{sup 2}) are well suited for charged particle detection. They offer high resolution, low noise, and high quantum efficiency. The Medipix2 chip allows double energy discrimination by offering a low and a high energy threshold. The Timepix detector allows to record the incidence time of a particle with a temporal resolution of 10 ns and a dynamic range of 160 {mu}s. Results of the first time application of the Medipix2 detector to VMImore » are presented, investigating the quantum efficiency as well as the possibility to operate at increased background pressure in the vacuum chamber.« less
Hop, Kevin D.; Strassman, Andrew C.; Nordman, Carl; Pyne, Milo; White, Rickie; Jakusz, Joseph; Hoy, Erin E.; Dieck, Jennifer
2016-01-01
The National Park Service (NPS) Vegetation Mapping Inventory (VMI) Program is an effort to classify, describe, and map existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VMI Program is managed by the NPS I&M Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey Upper Midwest Environmental Sciences Center, NatureServe, NPS Gulf Coast Network, and NPS Natchez Trace Parkway (NATR; also referred to as Parkway) have completed vegetation classification and mapping of NATR for the NPS VMI Program.Mappers, ecologists, and botanists collaborated to affirm vegetation types within the U.S. National Vegetation Classification (USNVC) of NATR and to determine how best to map them by using aerial imagery. Analyses of data from 589 vegetation plots had been used to describe an initial 99 USNVC associations in the Parkway; this classification work was completed prior to beginning this NATR vegetation mapping project. Data were collected during this project from another eight quick plots to support new vegetation types not previously identified at the Parkway. Data from 120 verification sites were collected to test the field key to vegetation associations and the application of vegetation associations to a sample set of map polygons. Furthermore, data from 900 accuracy assessment (AA) sites were collected (of which 894 were used to test accuracy of the vegetation map layer). The collective of all these datasets resulted in affirming 122 USNVC associations at NATR.To map the vegetation and open water of NATR, 63 map classes were developed. including the following: 54 map classes represent natural (including ruderal) vegetation types in the USNVC, 5 map classes represent cultural (agricultural and developed) vegetation types in the USNVC, 3 map classes represent nonvegetation open-water bodies (non-USNVC), and 1 map class represents landscapes that had received tornado damage a few months prior to the time of aerial imagery collection. Features were interpreted from viewing 4-band digital aerial imagery by means of digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems. (The aerial imagery was collected during mid-October 2011 for the northern reach of the Parkway and mid-November 2011 for the southern reach of the Parkway to capture peak leaf-phenology of trees.) The interpreted data were digitally and spatially referenced, thus making the spatial-database layers usable in geographic information systems. Polygon units were mapped to either a 0.5 hectare (ha) or 0.25 ha minimum mapping unit, depending on vegetation type or scenario.A geodatabase containing various feature-class layers and tables present the locations of USNVC vegetation types (vegetation map), vegetation plot samples, verification sites, AA sites, project boundary extent, and aerial image centers. The feature-class layer and related tables for the vegetation map provide 13,529 polygons of detailed attribute data covering 21,655.5 ha, with an average polygon size of 1.6 ha; the vegetation map coincides closely with the administrative boundary for NATR.Summary reports generated from the vegetation map layer of the map classes representing USNVC natural (including ruderal) vegetation types apply to 12,648 polygons (93.5% of polygons) and cover 18,542.7 ha (85.6%) of the map extent for NATR. The map layer indicates the Parkway to be 70.5% forest and woodland (15,258.7 ha), 0.3% shrubland (63.0 ha), and 14.9% herbaceous cover (3,221.0 ha). Map classes representing USNVC cultural types apply to 678 polygons (5.0% of polygons) and cover 2,413.9 ha (11.1%) of the map extent.
Hop, Kevin D.; Strassman, Andrew C.; Hall, Mark; Menard, Shannon; Largay, Ery; Sattler, Stephanie; Hoy, Erin E.; Ruhser, Janis; Hlavacek, Enrika; Dieck, Jennifer
2017-01-01
The National Park Service (NPS) Vegetation Mapping Inventory (VMI) Program classifies, describes, and maps existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VMI Program is managed by the NPS I&M Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey Upper Midwest Environmental Sciences Center, NatureServe, NPS Northeast Temperate Network, and NPS Appalachian National Scenic Trail (APPA) have completed vegetation classification and mapping of APPA for the NPS VMI Program.Mappers, ecologists, and botanists collaborated to affirm vegetation types within the U.S. National Vegetation Classification (USNVC) of APPA and to determine how best to map the vegetation types by using aerial imagery. Analyses of data from 1,618 vegetation plots were used to describe USNVC associations of APPA. Data from 289 verification sites were collected to test the field key to vegetation associations and the application of vegetation associations to a sample set of map polygons. Data from 269 validation sites were collected to assess vegetation mapping prior to submitting the vegetation map for accuracy assessment (AA). Data from 3,265 AA sites were collected, of which 3,204 were used to test accuracy of the vegetation map layer. The collective of these datasets affirmed 280 USNVC associations for the APPA vegetation mapping project.To map the vegetation and land cover of APPA, 169 map classes were developed. The 169 map classes consist of 150 that represent natural (including ruderal) vegetation types in the USNVC, 11 that represent cultural (agricultural and developed) vegetation types in the USNVC, 5 that represent natural landscapes with catastrophic disturbance or some other modification to natural vegetation preventing accurate classification in the USNVC, and 3 that represent nonvegetated water (non-USNVC). Features were interpreted from viewing 4-band digital aerial imagery using digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems (GIS). (Digital aerial imagery was collected each fall during 2009–11 to capture leaf-phenology change of hardwood trees across the latitudinal range of APPA.) The interpreted data were digitally and spatially referenced, thus making the spatial-database layers usable in GIS. Polygon units were mapped to either a 0.5-hectare (ha) or 0.25-ha minimum mapping unit, depending on vegetation type or scenario; however, polygon units were mapped to 0.1 ha for alpine vegetation.A geodatabase containing various feature-class layers and tables provide locations and support data to USNVC vegetation types (vegetation map layer), vegetation plots, verification sites, validation sites, AA sites, project boundary extent and zones, and aerial image centers and flight lines. The feature-class layer and related tables of the vegetation map layer provide 30,395 polygons of detailed attribute data covering 110,919.7 ha, with an average polygon size of 3.6 ha; the vegetation map coincides closely with the administrative boundary for APPA.Summary reports generated from the vegetation map layer of the map classes representing USNVC natural (including ruderal) vegetation types apply to 28,242 polygons (92.9% of polygons) and cover 106,413.0 ha (95.9%) of the map extent for APPA. The map layer indicates APPA to be 92.4% forest and woodland (102,480.8 ha), 1.7% shrubland (1866.3 ha), and 1.8% herbaceous cover (2,065.9 ha). Map classes representing park-special vegetation (undefined in the USNVC) apply to 58 polygons (0.2% of polygons) and cover 404.3 ha (0.4%) of the map extent. Map classes representing USNVC cultural types apply to 1,777 polygons (5.8% of polygons) and cover 2,516.3 ha (2.3%) of the map extent. Map classes representing nonvegetated water (non-USNVC) apply to 332 polygons (1.1% of polygons) and cover 1,586.2 ha (1.4%) of the map extent.
Circular dichroism in photoelectron images from aligned nitric oxide molecules
Sen, Ananya; Pratt, S. T.; Reid, K. L.
2017-05-03
We have used velocity map photoelectron imaging to study circular dichroism of the photoelectron angular distributions (PADs) of nitric oxide following two-color resonanceenhanced two-photon ionization via selected rotational levels of the A 2Σ +, v' = 0 state. By using a circularly polarized pump beam and a counter-propagating, circularly polarized probe beam, cylindrical symmetry is preserved in the ionization process, and the images can be reconstructed using standard algorithms. The VMI set up enables individual ion rotational states to be resolved with excellent collection efficiency, rendering the measurements considerably simpler to perform than previous measurements conducted with a conventional photoelectronmore » spectrometer. The results demonstrate that circular dichroism is observed even when cylindrical symmetry is maintained, and serve as a reminder that dichroism is a general feature of the multiphoton ionization of atoms and molecules. Furthermore, the observed PADs are in good agreement with calculations based on parameters extracted from previous experimental results obtained by using a time-offlight electron spectrometer.« less
Circular dichroism in photoelectron images from aligned nitric oxide molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ananya; Pratt, S. T.; Reid, K. L.
We have used velocity map photoelectron imaging to study circular dichroism of the photoelectron angular distributions (PADs) of nitric oxide following two-color resonanceenhanced two-photon ionization via selected rotational levels of the A 2Σ +, v' = 0 state. By using a circularly polarized pump beam and a counter-propagating, circularly polarized probe beam, cylindrical symmetry is preserved in the ionization process, and the images can be reconstructed using standard algorithms. The VMI set up enables individual ion rotational states to be resolved with excellent collection efficiency, rendering the measurements considerably simpler to perform than previous measurements conducted with a conventional photoelectronmore » spectrometer. The results demonstrate that circular dichroism is observed even when cylindrical symmetry is maintained, and serve as a reminder that dichroism is a general feature of the multiphoton ionization of atoms and molecules. Furthermore, the observed PADs are in good agreement with calculations based on parameters extracted from previous experimental results obtained by using a time-offlight electron spectrometer.« less
Automatically measuring the effect of strategy drawing features on pupils' handwriting and gender
NASA Astrophysics Data System (ADS)
Tabatabaey-Mashadi, Narges; Sudirman, Rubita; Guest, Richard M.; Khalid, Puspa Inayat
2013-12-01
Children's dynamic drawing strategies have been recently recognized as indicators of handwriting ability. However the influence of each feature in predicting handwriting is unknown due to lack of a measuring system. An automated measuring algorithm suitable for psychological assessment and non-subjective scoring is presented here. Using the weight vector and classification rate of a machine learning algorithm, an overall feature's effect is calculated which is comparable in different groupings. In this study thirteen previously detected drawing strategy features are measured for their influence on handwriting and gender. Features are extracted from drawing a triangle, Beery VMI and Bender Gestalt tangent patterns. Samples are related to 203 pupils (77 below average writers, and 101 female). The results show that the number of strokes in drawing the triangle pattern plays a major role in both groupings; however Left Tendency flag feature is affected by children's handwriting about 2.5 times greater than their gender. Experiments indicate that different forms of a feature sometimes show different influences.
Schmetz, Emilie; Rousselle, Laurence; Ballaz, Cécile; Detraux, Jean-Jacques; Barisnikov, Koviljka
2017-06-20
This study aims to examine the different levels of visual perceptual object recognition (early, intermediate, and late) defined in Humphreys and Riddoch's model as well as basic visual spatial processing in children using a new test battery (BEVPS). It focuses on the age sensitivity, internal coherence, theoretical validity, and convergent validity of this battery. French-speaking, typically developing children (n = 179; 5 to 14 years) were assessed using 15 new computerized subtests. After selecting the most age-sensitive tasks though ceiling effect and correlation analyses, an exploratory factorial analysis was run with the 12 remaining subtests to examine the BEVPS' theoretical validity. Three separate factors were identified for the assessment of the stimuli's basic features (F1, four subtests), view-dependent and -independent object representations (F2, six subtests), and basic visual spatial processing (F3, two subtests). Convergent validity analyses revealed positive correlations between F1 and F2 and the Beery-VMI visual perception subtest, while no such correlations were found for F3. Children's performances progressed until the age of 9-10 years in F1 and in view-independent representations (F2), and until 11-12 years in view-dependent representations (F2). However, no progression with age was observed in F3. Moreover, the selected subtests, present good-to-excellent internal consistency, which indicates that they provide reliable measures for the assessment of visual perceptual processing abilities in children.
NASA Astrophysics Data System (ADS)
Sutradhar, S.; Samanta, B. R.; Samanta, A. K.; Reisler, H.
2017-07-01
The 205-230 nm photodissociation of vibrationally excited CO2 at temperatures up to 1800 K was studied using Resonance Enhanced Multiphoton Ionization (REMPI) and time-sliced Velocity Map Imaging (VMI). CO2 molecules seeded in He were heated in an SiC tube attached to a pulsed valve and supersonically expanded to create a molecular beam of rotationally cooled but vibrationally hot CO2. Photodissociation was observed from vibrationally excited CO2 with internal energies up to about 20 000 cm-1, and CO(X1Σ+), O(3P), and O(1D) products were detected by REMPI. The large enhancement in the absorption cross section with increasing CO2 vibrational excitation made this investigation feasible. The internal energies of heated CO2 molecules that absorbed 230 nm radiation were estimated from the kinetic energy release (KER) distributions of CO(X1Σ+) products in v″ = 0. At 230 nm, CO2 needs to have at least 4000 cm-1 of rovibrational energy to absorb the UV radiation and produce CO(X1Σ+) + O(3P). CO2 internal energies in excess of 16 000 cm-1 were confirmed by observing O(1D) products. It is likely that initial absorption from levels with high bending excitation accesses both the A1B2 and B1A2 states, explaining the nearly isotropic angular distributions of the products. CO(X1Σ+) product internal energies were estimated from REMPI spectroscopy, and the KER distributions of the CO(X1Σ+), O(3P), and O(1D) products were obtained by VMI. The CO product internal energy distributions change with increasing CO2 temperature, suggesting that more than one dynamical pathway is involved when the internal energy of CO2 (and the corresponding available energy) increases. The KER distributions of O(1D) and O(3P) show broad internal energy distributions in the CO(X1Σ+) cofragment, extending up to the maximum allowed by energy but peaking at low KER values. Although not all the observations can be explained at this time, with the aid of available theoretical studies of CO2 VUV photodissociation and O + CO recombination, it is proposed that following UV absorption, the two lowest lying triplet states, a3B2 and b3A2, and the ground electronic state are involved in the dynamical pathways that lead to product formation.
Training Compliance Control Yields Improvements in Drawing as a Function of Beery Scores
Snapp-Childs, Winona; Flatters, Ian; Fath, Aaron; Mon-Williams, Mark; Bingham, Geoffrey P.
2014-01-01
Many children have difficulty producing movements well enough to improve in sensori-motor learning. Previously, we developed a training method that supports active movement generation to allow improvement at a 3D tracing task requiring good compliance control. Here, we tested 7–8 year old children from several 2nd grade classrooms to determine whether 3D tracing performance could be predicted using the Beery VMI. We also examined whether 3D tracing training lead to improvements in drawing. Baseline testing included Beery, a drawing task on a tablet computer, and 3D tracing. We found that baseline performance in 3D tracing and drawing co-varied with the visual perception (VP) component of the Beery. Differences in 3D tracing between children scoring low versus high on the Beery VP replicated differences previously found between children with and without motor impairments, as did post-training performance that eliminated these differences. Drawing improved as a result of training in the 3D tracing task. The training method improved drawing and reduced differences predicted by Beery scores. PMID:24651280
Comparative study of lung functions in women working in different fibre industries.
Khanam, F; Islam, N; Hai, M A
2008-07-01
A cross sectional work has been done on Bangladeshi females, working in different fibre industries, to study the effect of exposure to fibre dust on pulmonary functions. The ventilatory capacities were measured by VMI ventilometer in 653 apparently healthy women (160, 162 and 167 were jute, textile and garment industry workers, respectively). For the controls 164 females were recruited who never worked in any fibre industry. The observed FVC, FEV1 and PEFR were lower in all groups of fibre industry workers than those of the control. Among the industry workers, the jute mill workers had the lowest ventilatory capacities and garment industry workers had the highest values. The jute and textile mill workers had also significantly lower FEV1 and PEFR than those of garment industry workers. The FEV1 and PEFR were significantly lower in jute mill workers than those of textile ill workers. The low ventilatory capacities were almost proportionate with the length of service of the workers. Thus, the present study indicates that the fibre dust, on regular exposure for longer duration, may limit the lung functions.
Devlin, Angela M.; Chau, Cecil M. Y.; Matheson, Julie; McCarthy, Deanna; Yurko-Mauro, Karin; Innis, Sheila M.; Grunau, Ruth E.
2017-01-01
Little is known about arachidonic acid (ARA) and docosahexaenoic acid (DHA) requirements in toddlers. A longitudinal, double blind, controlled trial in toddlers (n = 133) age 13.4 ± 0.9 months (mean ± standard deviation), randomized to receive a DHA (200 mg/day) and ARA (200 mg/day) supplement (supplement) or a corn oil supplement (control) until age 24 months determined effects on neurodevelopment. We found no effect of the supplement on the Bayley Scales of Infant and Toddler Development 3rd Edition (Bayley-III) cognitive and language composites and Beery–Buktenica Developmental Test of Visual–Motor Integration (Beery VMI) at age 24 months. Supplemented toddlers had higher RBC phosphatidylcholine (PC), phosphatidylethanolamine (PE), and plasma DHA and ARA compared to placebo toddlers at age 24 months. A positive relationship between RBC PE ARA and Bayley III Cognitive composite (4.55 (0.21–9.00), B (95% CI), p = 0.045) in supplemented boys, but not in control boys, was observed in models adjusted for baseline fatty acid, maternal non-verbal intelligence, and BMI z-score at age 24 months. A similar positive relationship between RBC PE ARA and Bayley III Language composite was observed for supplemented boys (11.52 (5.10–17.94), p < 0.001) and girls (11.19 (4.69–17.68), p = 0.001). These findings suggest that increasing the ARA status in toddlers is associated with better neurodevelopment at age 24 months. PMID:28878181
Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D
2018-06-22
Velocity distributions of neutral water molecules evaporated after collision induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules as compared to the width for the evaporation of a single molecule results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing to model the entire velocity distribution.
Lee, Yumi; Song, Sang Hwa; Cheong, Taesu
2018-01-01
In this paper, we examine a real-world case related to the consumer product supply chain to analyze the value of supply chain coordination under the condition of moral hazard. Because of the characteristics of a buyback contract scheme employed in the supply chain, the supplier company's sales department encourages retailers to order more inventory to meet their sales target, whereas retailers pay less attention to their inventory level and leftovers at the end of the season. This condition induces moral hazard problems in the operation of the supply chain, as suppliers suffer from huge returns of leftover inventory. This, in turn, is related to the obsolescence of returned inventory, even with penalty terms in the contract for the return of any leftovers. In this study, we show under the current buyback-based supply chain operation, the inventory levels of both the supplier and retailers exceed customer demand and develop vendor-managed inventory (VMI) system with base stock policy to remove any mismatch of supply and demand. A comparison of both systems shows that through the proper coordination of supply chain operations, both suppliers and retailers can gain additional benefits while providing proper services to end customers.
Alignment of the hydrogen molecule under intense laser fields
Lopez, Gary V.; Fournier, Martin; Jankunas, Justin; ...
2017-06-01
Alignment, dissociation and ionization of H 2 molecules in the ground or the electronically excited E,F state of the H 2 molecule are studied and contrasted using the Velocity Mapping Imaging (VMI) technique. Photoelectron images from nonresonant 7-, 8- and 9-photon radiation ionization of H 2 show that the intense laser fields create ponderomotive shifts in the potential energy surfaces and distort the velocity of the emitted electrons that are produced from ionization. Photofragment images of H+ due to the dissociation mechanism that follows the 2-photon excitation into the (E,F; v = 0, J = 0, 1) electronic state showmore » a strong dependence on laser intensity, which is attributed to the high polarizability of the H 2 (E,F) state. For transitions from the J = 0 state, particularly, we observe marked structure in the angular distribution, which we explain as the interference between the prepared J = 0 and Stark-mixed J = 2 rovibrational states of H 2, as the laser intensity increases. Quantification of these effects allows us to extract the molecular polarizability of the H 2 (E,F) state, and yields a value of 103 ± 37 A.U.« less
Electron Anisotropy as a Signature of Mode Specific Isomerization in Vinylidene
NASA Astrophysics Data System (ADS)
Gibson, Stephen T.; Laws, Benjamin A.; Mabbs, Richard; Neumark, Daniel; Lineberger, Carl; Field, Robert W.
2016-06-01
he nature of the isomerization process that turns vinylidene into acetylene has been awaiting advances in experimental methods, to better define fractionation widths beyond those available in the seminal 1989 photoelectron spectrum measurement. This has proven a challenge. The technique of velocity-map imaging (VMI) is one avenue of approach. Images of electrons photodetached from vinylidene negative-ions, at various wavelengths, 1064 nm shown, provide more detail, including unassigned structure, but only an incremental improvement in the instrument line width. Intriguingly, the VMIs demonstrate a mode dependent variation in the electron anisotropy. Most notable in the figure, the inner-ring transition clusters are discontinuously, more isotropic. Electron anisotropy may provide an alternative key to examine the character of vinylidene transitions, mediating the necessity for an extreme resolution measurement. Vibrational dependent anisotropy has previously been observed in diatomic photoelectron spectra, associated with the coupling of electronic and nuclear motions. Research supported by the Australian Research Council Discovery Project Grant DP160102585. K. M. Ervin, J. Ho, and W. C. Lineberger, J. Chem. Phys. 91, 5974 (1989). doi:10.1063/1.457415 M. van Duzor et al. J. Chem. Phys. 133, 174311 (2010). doi:10.1063/1.3493349
Alignment of the hydrogen molecule under intense laser fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Gary V.; Fournier, Martin; Jankunas, Justin
Alignment, dissociation and ionization of H 2 molecules in the ground or the electronically excited E,F state of the H 2 molecule are studied and contrasted using the Velocity Mapping Imaging (VMI) technique. Photoelectron images from nonresonant 7-, 8- and 9-photon radiation ionization of H 2 show that the intense laser fields create ponderomotive shifts in the potential energy surfaces and distort the velocity of the emitted electrons that are produced from ionization. Photofragment images of H+ due to the dissociation mechanism that follows the 2-photon excitation into the (E,F; v = 0, J = 0, 1) electronic state showmore » a strong dependence on laser intensity, which is attributed to the high polarizability of the H 2 (E,F) state. For transitions from the J = 0 state, particularly, we observe marked structure in the angular distribution, which we explain as the interference between the prepared J = 0 and Stark-mixed J = 2 rovibrational states of H 2, as the laser intensity increases. Quantification of these effects allows us to extract the molecular polarizability of the H 2 (E,F) state, and yields a value of 103 ± 37 A.U.« less
A Cooperative Approach to Virtual Machine Based Fault Injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton III, Thomas J; Engelmann, Christian; Vallee, Geoffroy R
Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiement. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target con- text is increased due to the separation between host and guest (VM).more » We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target s context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof- of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.« less
Wu, Xia; Tan, Kai; Tang, Zichao; Lu, Xin
2014-03-14
We have combined photoelectron velocity-map imaging (VMI) spectroscopy and theoretical calculations to elucidate the geometry and energy properties of Aux(-)(Solv)n clusters with x = 1, 2; n = 1, 2; and Solv = H2O and CH3OH. Besides the blue-shifted vertical electron detachment energies (VDEs) of the complexes Au1,2(-)(Solv)n with the increase of the solvation number (n), we independently probed two distinct Au(-)(CH3OH)2 isomers, which combined with MP2/aug-cc-pVTZ(pp) calculations represent a competition between O···H-O hydrogen bonds (HBs) and Au···H-O nonconventional hydrogen bonds (NHBs). Complementary calculations provide the total binding energies of the low-energy isomers. Moreover, the relationship between the total binding energies and total VDEshift is discussed. We found that the Au1,2(-) anions exhibit halide-analogous behavior in microsolvation. These findings also demonstrate that photoelectron velocity map imaging spectroscopy with the aid of the ab initio calculations is an effective tool for investigating weak-interaction complexes.
Lee, Yumi; Song, Sang Hwa
2018-01-01
In this paper, we examine a real-world case related to the consumer product supply chain to analyze the value of supply chain coordination under the condition of moral hazard. Because of the characteristics of a buyback contract scheme employed in the supply chain, the supplier company’s sales department encourages retailers to order more inventory to meet their sales target, whereas retailers pay less attention to their inventory level and leftovers at the end of the season. This condition induces moral hazard problems in the operation of the supply chain, as suppliers suffer from huge returns of leftover inventory. This, in turn, is related to the obsolescence of returned inventory, even with penalty terms in the contract for the return of any leftovers. In this study, we show under the current buyback-based supply chain operation, the inventory levels of both the supplier and retailers exceed customer demand and develop vendor-managed inventory (VMI) system with base stock policy to remove any mismatch of supply and demand. A comparison of both systems shows that through the proper coordination of supply chain operations, both suppliers and retailers can gain additional benefits while providing proper services to end customers. PMID:29547625
Howe, Tsu-Hsin; Chen, Hao-Ling; Lee, Candy Chieh; Chen, Ying-Dar; Wang, Tien-Ni
2017-10-01
Visual perceptual motor skills have been proposed as underlying courses of handwriting difficulties. However, there is no evaluation tool currently available to assess these skills comprehensively and to serve as a sensitive measure. The purpose of this study was to validate the Computerized Perceptual Motor Skills Assessment (CPMSA), a newly developed evaluation tool for children in early elementary grades. Its test-retest reliability, concurrent validity, discriminant validity, and responsiveness were examined in 43 typically developing children and 26 children with handwriting difficulty. The CPMSA demonstrated excellent reliability across all subtests with intra-class correlation coefficients (ICCs)≥0.80. Significant moderate correlations between the domains of the CPMSA and corresponding gold standards including Beery VMI, the TVPS-3, and the eye-hand coordination subtest of the DTVP-2 demonstrated good concurrent validity. In addition, the CPMSA showed evidence of discriminant validity in samples of children with and without handwriting difficulty. This article provides evidence in support of the CPMSA. The CPMSA is a reliable, valid, and promising measure of visual perceptual motor skills for children in early elementary grades. Directions for future study and improvements to the assessment are discussed. Copyright © 2017. Published by Elsevier Ltd.
Fast track surgery: a clinical audit.
Carter, Jonathan; Szabo, Rebecca; Sim, Wee Wee; Pather, Selvan; Philp, Shannon; Nattress, Kath; Cotterell, Stephen; Patel, Pinki; Dalrymple, Chris
2010-04-01
Fast track surgery is a concept that utilises a variety of techniques to reduce the surgical stress response, allowing a shortened length of stay, improved outcomes and decreased time to full recovery. To evaluate a peri-operative Fast Track Surgical Protocol (FTSP) in patients referred for abdominal surgery. All patients undergoing a laparotomy over a 12-month period were entered prospectively on a clinical database. Data were retrospectively analysed. Over the study period, 72 patients underwent a laparotomy. Average patient age was 54 years and average weight and BMI were 67.2 kg and 26 respectively. Sixty three (88%) patients had a vertical midline incision (VMI). There were no intraoperative blood transfusions. The median length of stay (LOS) was 3.0 days. Thirty eight patients (53%) were discharged on or before post op day 3, seven (10%) of whom were discharged on postoperative day 2. On stepwise regression analysis, the following were found to be independently associated with reduced LOS: able to tolerate early enteral nutrition, good performance status, use of COX inhibitor and transverse incision. In comparison with colleagues at the SGOG not undertaking FTS for their patients, the authors' LOS was lower and the RANZCOG modified Quality Indicators (QI's) did not demonstrate excess morbidity. Patients undergoing fast track surgery can be discharged from hospital with a reduced LOS, without an increased readmission rate and with comparative outcomes to non-fast tracked patients.
NASA Astrophysics Data System (ADS)
Bacellar, C.; Ziemkiewicz, M. P.; Leone, S. R.; Neumark, D. M.; Gessner, O.
2015-05-01
Superfluid helium nanodroplets provide a unique cryogenic matrix for high resolution spectroscopy and ultracold chemistry applications. With increasing photon energy and, in particular, in the increasingly important Extreme Ultraviolet (EUV) regime, the droplets become optically dense and, therefore, participate in the EUV-induced dynamics. Energy- and charge-transfer mechanisms between the host droplets and dopant atoms, however, are poorly understood. Static energy domain measurements of helium droplets doped with noble gas atoms (Xe, Kr) indicate that Penning ionization due to energy transfer from the excited droplet to dopant atoms may be a significant relaxation channel. We have set up a femtosecond time-resolved photoelectron imaging experiment to probe these dynamics directly in the time-domain. Droplets containing 104 to 106 helium atoms and a small percentage (<10-4) of dopant atoms (Xe, Kr, Ne) are excited to the 1s2p Rydberg band by 21.6 eV photons produced by high harmonic generation (HHG). Transiently populated states are probed by 1.6 eV photons, generating time-dependent photoelectron kinetic energy distributions, which are monitored by velocity map imaging (VMI). The results will provide new information about the dynamic timescales and the different relaxation channels, giving access to a more complete physical picture of solvent-solute interactions in the superfluid environment. Prospects and challenges of the novel experiment as well as preliminary experimental results will be discussed.
Contributions of Executive Function and Spatial Skills to Preschool Mathematics Achievement
Verdine, Brian N.; Irwin, Casey M.; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn
2014-01-01
Early mathematics achievement is highly predictive of later mathematics performance. Here we investigate the influence of executive function (EF) and spatial skills, two generalizable skills often overlooked in mathematics curricula, on mathematics performance in preschoolers. Children (N = 44) of varying socio-economic status (SES) levels were assessed at age three on a new assessment of spatial skill (Test of Spatial Assembly, TOSA) and a vocabulary measure (the PPVT-4). The same children were tested at age four on the Beery Test of Visual-Motor Integration (VMI), as well as measures of EF, and mathematics. The TOSA was created specifically as an assessment for 3-year-olds, allowing the investigation of links between spatial, EF, and mathematical skills earlier than previously possible. Results of a hierarchical regression indicate that EF and spatial skills predict 70% of the variance in mathematics performance without an explicit math test, EF is an important predictor of math performance as prior research suggested, and spatial skills uniquely predict 27% of the variance in mathematics skills. Additional research is needed to understand if EF is truly malleable and whether EF and spatial skills may be leveraged to support early mathematics skills, especially for lower-SES children who are already falling behind in these skill areas by ages 3 and 4. These findings indicate that both skills are part of an important foundation for mathematics performance and may represent pathways for improving school readiness for mathematics. PMID:24874186
Roscioli, Joseph R; Nesbitt, David J
2011-01-01
The dynamics of HCI scattering from a room-temperature -CH3 terminated self-assembled monolayer (SAM) is probed via state-resolved spectroscopy coupled to a velocity-map imaging (VMI) apparatus. The resulting velocity maps provide new insight into the HCl scattering trajectories, revealing for the first time correlations between internal and translational degrees of freedom. Velocity maps at low J are dominated by signatures of both the incident beam (17.3(3) kcal mol(-1)) and a room-temperature trapping-desorption component (TD). At high J, however, the maps contain a large, continuous feature associated primarily with impulsive scattering (IS). Trajectories resulting from these strongly inelastic interactions are readily isolated in the map, and provide a new glimpse into purely impulsive scattering dynamics. Specifically, within the purely-IS HCI region of the velocity maps, the rotational distribution is found to be remarkably Boltzmann like, but with a temperature (472 K) significantly higher than the SAM surface (300 K). By way of contrast, the translational degree of freedom of the impulsively-scattered flux is clearly non-Boltzmann in character, with a strong propensity for in-plane scattering in the forward direction, and yet still exhibiting out-of-plane velocity distributions reasonably well characterized by a temperature of 690 K. These first data establish the prospects for a new class of experimental tools aimed at exploring energy transfer and reactive scattering events on SAMs, liquid, and metal interfaces with quantum state resolved information on correlated internal and translational distributions.
Contributions of executive function and spatial skills to preschool mathematics achievement.
Verdine, Brian N; Irwin, Casey M; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn
2014-10-01
Early mathematics achievement is highly predictive of later mathematics performance. Here we investigated the influence of executive function (EF) and spatial skills, two generalizable skills often overlooked in mathematics curricula, on mathematics performance in preschoolers. Children (N=44) of varying socioeconomic status (SES) levels were assessed at 3 years of age on a new assessment of spatial skill (Test of Spatial Assembly, TOSA) and a vocabulary measure (Peabody Picture Vocabulary Test, PPVT). The same children were tested at 4 years of age on the Beery Test of Visual-Motor Integration (VMI) as well as on measures of EF and mathematics. The TOSA was created specifically as an assessment for 3-year-olds, allowing the investigation of links among spatial, EF, and mathematical skills earlier than previously possible. Results of a hierarchical regression indicate that EF and spatial skills predict 70% of the variance in mathematics performance without an explicit math test, EF is an important predictor of math performance as prior research suggested, and spatial skills uniquely predict 27% of the variance in mathematics skills. Additional research is needed to understand whether EF is truly malleable and whether EF and spatial skills may be leveraged to support early mathematics skills, especially for lower SES children who are already falling behind in these skill areas by 3 and 4 years of age. These findings indicate that both skills are part of an important foundation for mathematics performance and may represent pathways for improving school readiness for mathematics. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V. Litvinyuk, and Itzik Ben-Itzhak
Our principal goal was the experimental demonstration of Laser-Induced Electron Diffraction (LIED). Key steps along the development of this experimental technique have been accomplished and reported in the publications listed in this brief report. We started with measuring 3D electron momenta spectra in aligned nitrogen and oxygen molecules. Chakra Maharjan (Ph.D. student of Lew Cocke) was a lead researcher on this project. Although Chakra succeeded in obtaining those spectra, we were scooped by the publication of identical results in Science by the NRC Ottawa group. Our results were never published as a refereed article, but became a part of Chakra'smore » Ph.D. dissertation. That Science paper was the first experimental demonstration of Laser-Induced Electron Diffraction (LIED). Chakra also worked on wavelength dependence of 3D ATI spectra of atoms and molecules using tunable OPA pulses. Another Ph.D. student, Maia Magrakvelidze (her GRA was funded by the grant), started working on COLTRIMS experiments using OPA pulses (1800 nm wavelength). After some initial experiments it became apparent that COLTRIMS did not yield sufficient count rates of electrons in the high-energy part of the spectrum to see diffraction signatures with acceptable statistics (unfavorable scaling of the electron yield with laser wavelength was partly to blame). Nevertheless, Maia managed to use COLTRIMS and OPA to measure the angular dependence of the tunneling ionization rate in D{sub 2} molecules. Following the initial trial experiments, the decision was made to switch from COLTRIMS to VMI in order to increase the count rates by a factor of {approx}100, which may have given us a chance to see LIED. Research Associate Dr. Sankar De (his salary was funded by the grant), in collaboration with Matthias Kling's group (then at MPQ Garching), proceeded to design a special multi-electrode VMI spectrometer for capturing high-energy ATI electrons and to install it in place of COLTRIMS inside our experimental chamber. That apparatus was later used for the first demonstration of field-free orientation in CO using two-color laser pulses as well as for a series of other experiments, such as pump-probe studies of molecular dynamics with few-cycle laser pulses, control of electron localization in dissociating hydrogen molecules using two-color laser pulses, and ATI spectra of Xe ionized by two-color laser pulses. In parallel, Dipanwita Ray (Ph.D. student of Lew Cocke) worked on measuring angle-resolved ATI spectra of noble gases using a stereo-ATI phasemeter as a TOF electron spectrometer. She observed the angular diffraction structures in 3D ATI spectra of Ar, Kr and Xe, which were interpreted in terms of the Quantitative Rescattering theory newly developed by C.D. Lin. We also attempted to use a much more powerful OPA (five times more energy per pulse than the one we had at JRML) available at the Advanced Laser Light Source (ALLS) in Montreal to observe LIED. Two visits to ALLS by the PI, Igor Litvinyuk, and one visit by the PI's Ph.D. student (Irina Bocharova) were funded by the grant. Though we failed to observe LIED (the repetition rate of the ALLS OPA was too low at only 100 Hz), this international collaboration resulted in several publications on other related subjects, such as the wavelength dependence of laser Coulomb explosion of hydrogen, the wavelength dependence of non-sequential double ionization of neon and argon, the demonstration of charge-resonance enhanced ionization in CO{sub 2}, and the study of non-elastic scattering processes in H{sub 2}. Theoretical efforts to account for the hydrogen Coulomb explosion experiment resulted in another paper by Maia Magrakvelidze as lead author. Although for various reasons we failed to achieve our main goal of observing LIED, we salute the recent success in this endeavor by Lou DiMauro's group (with theoretical support from our KSU colleague C.D. Lin) published in Nature, which validates our approach.« less
Quantum State-Resolved Reactive and Inelastic Scattering at Gas-Liquid and Gas-Solid Interfaces
NASA Astrophysics Data System (ADS)
Grütter, Monika; Nelson, Daniel J.; Nesbitt, David J.
2012-06-01
Quantum state-resolved reactive and inelastic scattering at gas-liquid and gas-solid interfaces has become a research field of considerable interest in recent years. The collision and reaction dynamics of internally cold gas beams from liquid or solid surfaces is governed by two main processes, impulsive scattering (IS), where the incident particles scatter in a few-collisions environment from the surface, and trapping-desorption (TD), where full equilibration to the surface temperature (T{TD}≈ T{s}) occurs prior to the particles' return to the gas phase. Impulsive scattering events, on the other hand, result in significant rotational, and to a lesser extent vibrational, excitation of the scattered molecules, which can be well-described by a Boltzmann-distribution at a temperature (T{IS}>>T{s}). The quantum-state resolved detection used here allows the disentanglement of the rotational, vibrational, and translational degrees of freedom of the scattered molecules. The two examples discussed are (i) reactive scattering of monoatomic fluorine from room-temperature ionic liquids (RTILs) and (ii) inelastic scattering of benzene from a heated (˜500 K) gold surface. In the former experiment, rovibrational states of the nascent HF beam are detected using direct infrared absorption spectroscopy, and in the latter, a resonace-enhanced multi-photon-ionization (REMPI) scheme is employed in combination with a velocity-map imaging (VMI) device, which allows the detection of different vibrational states of benzene excited during the scattering process. M. E. Saecker, S. T. Govoni, D. V. Kowalski, M. E. King and G. M. Nathanson Science 252, 1421, 1991. A. M. Zolot, W. W. Harper, B. G. Perkins, P. J. Dagdigian and D. J. Nesbitt J. Chem. Phys 125, 021101, 2006. J. R. Roscioli and D. J. Nesbitt Faraday Disc. 150, 471, 2011.
Swift, Andrew J.; Wild, Jim M.; Nagle, Scott K.; Roldán-Alzate, Alejandro; François, Christopher J.; Fain, Sean; Johnson, Kevin; Capener, Dave; van Beek, Edwin J. R.; Kiely, David G.; Wang, Kang; Schiebler, Mark L.
2014-01-01
Pulmonary hypertension (PH) is a condition of varied aetiology, commonly associated with a poor clinical outcome. Patients are categorised on the basis of pathophysiological, clinical, radiological and therapeutic similarities. Pulmonary arterial hypertension (PAH) is often diagnosed late in its disease course with outcome dependent on aetiology, disease severity and response to treatment. Recent advances in quantitative MR imaging allow for a better initial characterization and measurement of the morphologic and flow related changes that accompany the response of the heart-lung axis to prolonged elevation of pulmonary arterial pressure and resistance and provide a reproducible, comprehensive and non-invasive means of assessing the course of the disease and response to treatment. Typical features of pulmonary arterial hypertension (PAH) occur primarily as a result of increased pulmonary vascular resistance and resultant increased RV afterload. Several MRI derived diagnostic markers have emerged, such as ventricular mass index (VMI), interventricular septal configuration and average pulmonary artery velocity having reported diagnostic accuracy similar to Doppler echocardiography. Furthermore, prognostic markers have been identified with independent predictive value for identification of treatment failure. Such markers include: large right ventricular end-diastolic volume index (RVEDVI), low left ventricular end diastolic volume index (LVEDVI), low right ventricular ejection fraction (RVEF) and relative area change of the pulmonary trunk. MRI is ideally suited to longitudinal follow-up of patients with PAH due to its non-invasive nature, high reproducibility and has the advantage over other biomarkers in PAH due to its sensitivity to change in morphological, functional and flow related parameters. Further study the role of MR imaging as a biomarker in the clinical environment is warranted. PMID:24552882
Training children aged 5-10 years in manual compliance control to improve drawing and handwriting.
Bingham, Geoffrey P; Snapp-Childs, Winona
2018-04-12
A large proportion of school-aged children exhibit poor drawing and handwriting. This prevalence limits the availability of therapy. We developed an automated method for training improved manual compliance control and relatedly, prospective control of a stylus. The approach included a difficult training task, while providing parametrically modifiable support that enables the children to perform successfully while developing good compliance control. The task was to use a stylus to push a bead along a 3D wire path. Support was provided by making the wire magnetically attractive to the stylus. Support was progressively reduced as 3D tracing performance improved. We report studies that (1) compared performance of Typically Developing (TD) children and children with Developmental Coordination Disorder (DCD), (2) tested training with active versus passive movement, (3) tested progressively reduced versus constant or no support during training, (4) tested children of different ages, (5) tested the transfer of training to a drawing task, (6) tested the specificity of training in respect to the size, shape and dimensionality of figures, and (7) investigated the relevance of the training task to the Beery VMI, an inventory used to diagnose DCD. The findings were as follows. (1) Pre-training performance of TD and DCD children was the same and good with high support but distinct and poor with low support. Support yielded good self-efficacy that motivated training. Post training performance with no support was improved and the same for TD and DCD children. (2) Actively controlled movements were required for improved performance. (3) Progressively reduced support was required for good performance during and after training. (4) Age differences in performance during pre-training were eliminated post-training. (5) Improvements transferred to drawing. (6) There was no evidence of specificity of training in transfer. (7) Disparate Beery scores were reflected in pre-training but not post-training performance. We conclude that the method improves manual compliance control, and more generally, prospective control of movements used in drawing performance. Copyright © 2018. Published by Elsevier B.V.
Yavelberg, Loren; Zaharieva, Dessi; Cinar, Ali; Riddell, Michael C; Jamnik, Veronica
2018-05-01
The increasing popularity of wearable technology necessitates the evaluation of their accuracy to differentiate physical activity (PA) intensities. These devices may play an integral role in customizing PA interventions for primary prevention and secondary management of chronic diseases. For example, in persons with type 1 diabetes (T1D), PA greatly affects glucose concentrations depending on the intensity, mode (ie, aerobic, anaerobic, mixed), and duration. This variability in glucose responses underscores the importance of implementing dependable wearable technology in emerging avenues such as artificial pancreas systems. Participants completed three 40-minute, dynamic non-steady-state exercise sessions, while outfitted with multiple research (Fitmate, Metria, Bioharness) and consumer (Garmin, Fitbit) grade wearables. The data were extracted according to the devices' maximum sensitivity (eg, breath by breath, beat to beat, or minute time stamps) and averaged into minute-by-minute data. The variables of interest, heart rate (HR), breathing frequency, and energy expenditure (EE), were compared to validated criterion measures. Compared to deriving EE by laboratory indirect calorimetry standard, the Metria activity patch overestimates EE during light-to-moderate PA intensities (L-MI) and moderate-to-vigorous PA intensities (M-VI) (mean ± SD) (0.28 ± 1.62 kilocalories· minute -1 , P < .001, 0.64 ± 1.65 kilocalories· minute -1 , P < .001, respectively). The Metria underestimates EE during vigorous-to-maximal PA intensity (V-MI) (-1.78 ± 2.77 kilocalories · minute -1 , P < .001). Similarly, compared to Polar HR monitor, the Bioharness underestimates HR at L-MI (-1 ± 8 bpm, P < .001) and M-VI (5 ± 11 bpm, P < .001), respectively. A significant difference in EE was observed for the Garmin device, compared to the Fitmate ( P < .001) during continuous L-MI activity. Overall, our study demonstrates that current research-grade wearable technologies operate within a ~10% error for both HR and EE during a wide range of dynamic exercise intensities. This level of accuracy for emerging research-grade instruments is considered both clinically and practically acceptable for research-based or consumer use. In conclusion, research-grade wearable technology that uses EE kilocalories · minute -1 and HR reliably differentiates PA intensities.
SUPPLEMENTARY COMPARISON: Final report on Supplementary Comparison APMP.M.H-S1
NASA Astrophysics Data System (ADS)
Kongkavitool, Rugkanawan; Hattori, Koichiro; Sanh, Vo; Yen, Lim Gin
2007-01-01
This report presents the results of supplementary comparison APMP.M.H-S1 among four national metrology institutes (NIMT, NMIJ/AIST, VMI and SPRING). The comparison was carried out during October 2004 to January 2005 in order to determine the capability of the primary Rockwell hardness standard, including standard conditions, of each participant, to confirm the accuracy of Rockwell hardness scale C measurement declared by the participant, which includes the effect of each participant's primary indenter and determine the degrees of equivalence of hardness scale measurement in the range 20 HRC to 60 HRC. Furthermore, the comparison was carried out a by common indenter, which was provided by the pilot institute, in order to determine the measurement capability of the participant's primary machine without the influence of the indenter, as a study of scientific purpose. The pilot institute was the National Institute of Metrology (Thailand), NIMT. There were two sets of artifacts for the comparison. Each set was composed of nine hardness blocks: 20 HRC, 25 HRC, 30 HRC, 35 HRC, 40 HRC, 45 HRC, 50 HRC, 55 HRC, 60 HRC. The verification of the participant's primary Rockwell hardness machine was carried out according to ISO6508-3 before making the measurement. The pilot institute made measurements at the beginning and the end of the comparison in order to monitor the stability of the artifacts. The degree of equivalence of each national primary hardness standard was expressed quantitatively by two terms, the deviation from KCRV and the uncertainty of this deviation at a 95% level of confidence. The En parameter was calculated to express the equivalence between the measurements of participants as well. The degree of equivalence between pairs of participating institutes was expressed by the difference of their deviations from the key comparison reference value and the uncertainty of this difference at the 95% level of confidence. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the APMP, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
NASA Astrophysics Data System (ADS)
Norranim, Uthai; Nguyen, Mong Kim; Ballico, Mark J.
2007-01-01
Industrial thermometers such as industrial platinum resistance thermometers (iprts) and liquid-in-glass thermometers (LIGTs) are widely used in industry. Because Key Comparisons are limited to direct realizations of ITS-90, and not all APMP NMIs have participated in them, the national metrology institutes (NMIs) of Thailand and Australia (NIMT and NIMA) organized an APMP supplementary comparison to support the approval of CMCs (calibration and measurement capabilities) for these laboratories. The comparison, performed in 2003, covered the range from -40.0 °C to 250.0 °C, using IPRTs (Hart Scientific 5626-12-S), total immersion (ASTM 62C, 120C) and partial immersion (ASTM 40C) LIGTs. Ten NMIs from the APMP: KIM-LIPI (Indonesia), ITDI (Philippines), MSL (New Zealand), NBSM (Nepal), NMIA (Australia), NIMT (Thailand), SCL (Hong Kong), SIRIM (Malaysia), SPRING (Singapore) and VMI (Vietnam) were divided into two loops to shorten the circulation time, and these were linked by the two pilot laboratories. This report describes details of the artifacts, the circulation schedule, the measurement procedures, the results submitted by participants, uncertainties and the analysis of the results. Reference values calculated using simple mean, median and weighted mean were consistent with each other, but as the Birge criterion was satisfied, the weighted mean with its lower uncertainty was adopted. The artifacts were found to be stable over the comparison and the results of the loop linking labs consistent, allowing an uncertainty of 2 mK to 4 mK to be achieved for the IPRT reference value and 10 mK to 20 mK for the LIGT reference values. These uncertainties allowed the comparison data to be used to adequately test the uncertainties of all the participant laboratories, and hence to directly support their CMC claims. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the APMP, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Final report of the APMP water flow key comparison: APMP.M.FF-K1
NASA Astrophysics Data System (ADS)
Lee, Kwang-Bock; Chun, Sejong; Terao, Yoshiya; Thai, Nguyen Hong; Tsair Yang, Cheng; Tao, Meng; Gutkin, Mikhail B.
2011-01-01
The key comparison, APMP.M.FF-K1, was undertaken by APMP/TCFF, the Technical Committee for Fluid Flow (TCFF) under the Asia Pacific Metrology Program (APMP). One objective of the key comparison was to demonstrate the degree of equivalence among six participating laboratories (KRISS, NMIJ, VMI, CMS, NIM and VNIIM) in water flow rate metrology by comparing the results with the key comparison reference value (KCRV) determined from the CCM.FF-K1 key comparison. The other objective of this key comparison was to provide supporting evidence for the calibration and measurement capabilities (CMCs), which had been declared by the participating laboratories during this key comparison. The Transfer Standard Package (TSP) was a Coriolis mass flowmeter, which had been used in the CCM.FF-K1 key comparison. Because the K-factors in the APMP.M.FF-K1 key comparison were slightly lower than the K-factors of the CCM.FF-K1 key comparison due to long-term drifts of the TSP, a correction value D was introduced. The value of D was given by a weighted sum between two link laboratories (NMIJ and KRISS), which participated in both the CCM.FF-K1 and the APMP.M.FF-K1 key comparisons. By this correction, the K-factors were laid between 12.004 and 12.017 at either low (Re = 254 000) or high (Re = 561 000) flow rates. Most of the calibration data were within expected uncertainty bounds. However, some data showed undulations, which gave large fluctuations of the metering factor at Re = 561 000. Calculation of degrees of equivalence showed that all the participating laboratories had deviations between -0.009 and 0.007 pulses/kg from the CCM.FF-K1 KCRV at either the low or the high flow rates. In case of En calculation, all the participating laboratories showed values less than 1, indicating that the corrected K-factors of all the laboratories were equivalent with the KCRV at both Re = 254 000 and 561 000. When the corrected K-factors from two participating laboratories were compared, all the numbers of equivalence showed values less than 1, indicating equivalence. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Dissociation energy and dynamics of water clusters
NASA Astrophysics Data System (ADS)
Ch'ng, Lee Chiat
The state-to-state vibrational predissociation (VP) dynamics of water clusters were studied following excitation of a vibrational mode of each cluster. Velocity-map imaging (VMI) and resonance-enhanced multiphoton ionization (REMPI) were used to determine pair-correlated center-of-mass translational energy distributions. Product energy distributions and dissociation energies were determined. Following vibrational excitation of the HCl stretch fundamental of the HCl-H2O dimer, HCl fragments were detected by 2 + 1 REMPI via the f 3□2(nu' = 0) ← X 1Sigma+(nu'' = 0) and V1Sigma + (nu' = 11 and 12) ← X1Sigma+ (nu'' = 0) transitions. REMPI spectra clearly show HCl from dissociation produced in the ground vibrational state with J'' up to 11. The fragments' center-of-mass translational energy distributions were determined from images of selected rotational states of HCl and were converted to rotational state distributions of the water cofragment. All the distributions could be fit well when using a dimer dissociation energy of bond dissociation energy D0 = 1334 +/- 10 cm--1. The rotational distributions in the water cofragment pair-correlated with specific rotational states of HCl appear nonstatistical when compared to predictions of the statistical phase space theory. A detailed analysis of pair-correlated state distributions was complicated by the large number of water rotational states available, but the data show that the water rotational populations increase with decreasing translational energy. H2O fragments of this dimer were detected by 2 + 1 REMPI via the C˜1B1(000) ← X˜1A1(000) transition. REMPI clearly shows that H2O from dissociation is produced in the ground vibrational state. The fragment's center-of-mass translational energy distributions were determined from images of selected rotational states of H2O and were converted to rotational state distributions of the HCl cofragment. The distributions gave D0 = 1334 +/- 10 cm --1 and show a clear preference for rotational levels in the HCl fragment that minimize translational energy release. The usefulness of 2 + 1 REMPI detection of water fragment is discussed. The hydrogen bonding in water is dominated by pair-wise dimer interactions, and the predissociation of the water dimer following vibrational excitation is reported. The measured D0 values of (H 2O)2 and (D2O)2, 1105 and 1244 +/- 10 cm--1, respectively, are in excellent agreement with the calculated values of 1103 and 1244 +/- 5 cm--1. Pair-correlated water fragment rovibrational state distributions following vibrational predissociation of (H2O)2 and (D2O) 2 were obtained upon excitation of the hydrogen bonded OH and OD stretch fundamentals, respectively. Quasiclassical trajectory calculations, using an accurate full-dimensional potential energy surface, are in accord with and help to elucidate experiment. Experiment and theory find predominant excitation of the fragment bending mode upon hydrogen bond breaking. A minor channel is also observed in which both fragments are in the ground vibrational state and are highly rotationally excited. The theoretical calculations reveal equal probability of bending excitation in the donor and acceptor subunits, which is a result of interchange of donor and acceptor roles. The rotational distributions associated with the major channel, in which one water fragment has one quantum of bend, and the minor channel with both water fragments in the ground vibrational state are calculated, and are in agreement with experiment. (Abstract shortened by UMI.)
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiatio Environment; 2) Effects of Space Environments on Systems; 3) Space Radiatio Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developmemts: Solar Proton Model; 18) Comparison of ESP, JPL91, KIng/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of model of near infrared spectra qualitative analysis were studied. Method of separate modeling can significantly improve the stability and adaptability of model; but its ability of improving adaptability of model is limited. Method of joint modeling can not only improve the adaptability of the model, but also the stability of model, at the same time, compared to separate modeling, the method can shorten the modeling time, reduce the modeling workload; extend the term of validity of model, and improve the modeling efficiency. The experiment of model adaptability shows that, the correct recognition rate of separate modeling method is relatively low, which can not meet the requirements of application, and joint modeling method can reach the correct recognition rate of 90%, and significantly enhances the recognition effect. The experiment of model stability shows that, the identification results of model by joint modeling are better than the model by separate modeling, and has good application value.
1992-12-01
suspect :mat, -n2 extent predict:.on cas jas ccsiziveiv crrei:=e amonc e v:arious models, :he fandom *.;aik, learn ha r ur e, i;<ea- variable and Bemis...Functions, Production Rate Adjustment Model, Learning Curve Model. Random Walk Model. Bemis Model. Evaluating Model Bias, Cost Prediction Bias. Cost...of four cost progress models--a random walk model, the tradiuonai learning curve model, a production rate model Ifixed-variable model). and a model
Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.
NASA Technical Reports Server (NTRS)
Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.
1995-01-01
This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and FLUENT equilibrium model. As of now, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues remain, such as near-wall closure, with all classes of models.
ERIC Educational Resources Information Center
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects
ERIC Educational Resources Information Center
Easley, J. A., Jr.
1977-01-01
The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper mainly studies the multi-person parallel modeling method based on the integrated model persistence storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. And multi-person parallel modeling refers to the need for multi-person collaboration, the role of separation, and even real-time remote synchronization modeling.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particularly understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
ERIC Educational Resources Information Center
Thelen, Mark H.; And Others
1977-01-01
Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide as state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These different models vary in complexity with software structure. Also, we will describe how these complexities were overcome through using this approach and results of model benchmarks within LIS.
Literature review of models on tire-pavement interaction noise
NASA Astrophysics Data System (ADS)
Li, Tan; Burdisso, Ricardo; Sandu, Corina
2018-04-01
Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniquesmore » affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.« less
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using think aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built in the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper Illustrating a Model- Game -Model Paradigm for Using Human Wargames in Analysis Paul K. Davis RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model- game -model or what might be better called model-exercise-model in some cases) for...to involve stakehold- ers in model development from the outset. The model- game -model paradigm was illustrated in an application to crisis planning
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the top most levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of database solution. For this reason, value oriented business process modeling which utilizes ER diagram to express the modeling entities and relationships between them are used. However, ER diagrams form the logical level of database schema. To extend possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER-diagrams, and derives conceptual model utilizing ORM modeling approach. Conceptual model extends possibilities for value modeling to other business modeling approaches.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Drager, Andreas; ...
2015-10-17
In this study, genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scalemore » metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.« less
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions the language includes also formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either the historical observations, e.g. the persistence model, or the instantaneous observations of the sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of 4 models often used in literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply the least-square regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model to propose a 'general persistence model'. By comparing its performance against the 4 aforementioned models, it is found that the accuracy of the general persistence model outperforms the other 4 models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this linkmore » makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is a an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.« less
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict themore » reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.« less
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Airborne Wireless Communication Modeling and Analysis with MATLAB
2014-03-27
research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to...predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...7 2.7. Propagation Modeling : Statistical Models ............................................................8 2.8. Antenna Modeling
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.
Jenness, Samuel M; Goodreau, Steven M; Morris, Martina
2018-04-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel , designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel , designed to facilitate the exploration of novel research questions for advanced modelers.
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks
Jenness, Samuel M.; Goodreau, Steven M.; Morris, Martina
2018-01-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers. PMID:29731699
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, the Sparlart-Allmaras model is not grid dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon models are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-w/k-epsilon and algorithmic improvements to reduce CPU time of the two-equation model.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
Abstract BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. PMID:29106614
NASA Astrophysics Data System (ADS)
Justi, Rosária S.; Gilbert, John K.
2002-04-01
In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.
Aspinall, Richard
2004-08-01
This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for rating among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed based on hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. ?? 2008 Elsevier Ltd. All rights reserved.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
2006-03-01
models, the thesis applies a biological model, the Lotka - Volterra predator- prey model, to a highly suggestive case study, that of the Irish Republican...Model, Irish Republican Army, Sinn Féin, Lotka - Volterra Predator Prey Model, Recruitment, British Army 16. PRICE CODE 17. SECURITY CLASSIFICATION OF...weaknesses of sociological and biological models, the thesis applies a biological model, the Lotka - Volterra predator-prey model, to a highly suggestive
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The natures of turbulence are randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strength, weakness, limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop, specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared wery well with the experimental data, and performed better than the Thomas model near the walls.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of the dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT and 3-D models were formed after the calculation and reconstruction of software. Then, computerized composite models (RP models) were produced by rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with SPSS17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences(P<0.05) among the 3 models, but the dental arch widths were on the contrary. Measurements on 3-D models were significantly smaller than those on other two models(P<0.05). Compared with 3-D models, RP models had more numbers which were not significantly different from those on plaster models(P>0.05). The regression coefficient among three models were significantly different(P<0.01), ranging from 0.8 to 0.9. But between RP and plaster models was bigger than that between 3-D and plaster models. There is high consistency within 3 models, while some differences were accepted in clinic. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA) a new model construction environment. As the user constructs a model using SIGMA the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a. demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: ...
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Evolution of computational models in BioModels Database and the Physiome Model Repository.
Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
2018-04-12
A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for a distributed hydrological model in the application. There are some challenges including the model's spatial resolution effect, model performance and accuracy and so on. To cope with the challenge of the model's spatial resolution effect, different model resolution including 1000m*1000m, 600m*600m, 500m*500m, 400m*400m, 200m*200m were used to build the distributed hydrological model—Liuxihe model respectively. The purpose is to find which one is the best resolution for Liuxihe model in Large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data digital elevation model (DEM), soil type and land use type are downloaded from the website freely. The model parameters are optimized by using an improved Particle Swarm Optimization(PSO) algorithm; And parameter optimization could reduce the parameter uncertainty that exists for physically deriving model parameters. The different model resolution (200m*200m—1000m*1000m ) are proposed for modeling the Liujiang River basin flood with the Liuxihe model in this study. The best model's spatial resolution effect for flood simulation and forecasting is 200m*200m.And with the model's spatial resolution reduction, the model performance and accuracy also become worse and worse. When the model resolution is 1000m*1000m, the flood simulation and forecasting result is the worst, also the river channel divided based on this resolution is differs from the actual one. To keep the model with an acceptable performance, minimum model spatial resolution is needed. The suggested threshold model spatial resolution for modeling the Liujiang River basin flood is a 500m*500m grid cell, but the model spatial resolution with a 200m*200m grid cell is recommended in this study to keep the model at a best performance.
Computational Models for Calcium-Mediated Astrocyte Functions.
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Computational Models for Calcium-Mediated Astrocyte Functions
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes. PMID:29670517
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the ground for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of models. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. ?? 2008 Elsevier Ltd. All rights reserved.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.
Energy modeling. Volume 2: Inventory and details of state energy models
NASA Astrophysics Data System (ADS)
Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.
1981-05-01
An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
[A review on research of land surface water and heat fluxes].
Sun, Rui; Liu, Changming
2003-03-01
Many field experiments were done, and soil-vegetation-atmosphere transfer(SVAT) models were stablished to estimate land surface heat fluxes. In this paper, the processes of experimental research on land surface water and heat fluxes are reviewed, and three kinds of SVAT model(single layer model, two layer model and multi-layer model) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and energy balance equation, different models such as simplified model, single layer model, extra resistance model, crop water stress index model and two source resistance model are developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972 is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970more » produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF mode, such models will not always provide a reliable basis for transportation planning and investment decisions.« less
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Understanding and Predicting Urban Propagation Losses
2009-09-01
6. Extended Hata Model ..........................22 7. Modified Hata Model ..........................22 8. Walfisch – Ikegami Model...39 4. COST (Extended) Hata Model ...................40 5. Modified Hata Model ..........................41 6. Walfisch- Ikegami Model...47 1. Scenario One – Walfisch- Ikegami Model ........51 2. Scenario Two – Modified Hata Model ...........52 3. Scenario Three – Urban Hata
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
A framework for sharing and integrating remote sensing and GIS models based on Web service.
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e,more » MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.« less
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Comparison of dark energy models after Planck 2015
NASA Astrophysics Data System (ADS)
Xu, Yue-Yao; Zhang, Xin
2016-11-01
We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevalliear-Polarski-Linder model can still fit the current observations well, but from an economically feasible perspective, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
Weibull regression model is one of the most popular forms of parametric regression model that it provides estimate of baseline hazard function, as well as coefficients for covariates. Because of technical difficulties, Weibull regression model is seldom used in medical literature as compared to the semi-parametric proportional hazard model. To make clinical investigators familiar with Weibull regression model, this article introduces some basic knowledge on Weibull regression model and then illustrates how to fit the model with R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinical relevant statistics such as hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variable. The eha package provides an alternative method to model Weibull regression model. The check.dist() function helps to assess goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualization of Weibull regression model after model development is interesting that it provides another way to report your findings. PMID:28149846
Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.
2013-12-01
Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu hat was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during summer maize growing season in 2013-2015 were monitored with eddy covariance system, and the applicability of two operational models (FAO-PM model and KP-PM model) based on the Penman-Monteith model were analyzed. Firstly, the key parameters in the two models were calibrated with the measured data in 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and KP-PM model was compared to the observed ET, respectively. Finally, the coefficients in the KP-PM model were further revised with the coefficients calculated according to the different growth stages, and the performance of the revised KP-PM model was also evaluated. These statistical parameters indicated that the calculated daily ET for 2015 by the FAO-PM model was closer to the observed ET than that by the KP-PM model. The daily ET calculated from the revised KP-PM model for daily ET was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so the calibration was necessary before using the models to predict the ET. The above results could provide some guidelines on predicting ET with the two models.
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is a movement of air on small scale in the atmosphere that caused by instabilities of pressure and temperature distribution. Turbulence model is integrated into flight mechanical model as an atmospheric disturbance. Common turbulence model used in flight mechanical model are Dryden and Von Karman model. In this minor research, only Dryden continuous turbulence model were made. Dryden continuous turbulence model has been implemented, it refers to the military specification MIL-HDBK-1797. The model was implemented into Matlab Simulink. The model will be integrated with flight mechanical model to observe response of the aircraft when it is flight through turbulence field. The turbulence model is characterized by multiplying the filter which are generated from power spectral density with band-limited Gaussian white noise input. In order to ensure that the model provide a good result, model verification has been done by comparing the implemented model with the similar model that is provided in aerospace blockset. The result shows that there are some difference for 2 linear velocities (vg and wg), and 3 angular rate (pg, qg and rg). The difference is instantly caused by different determination of turbulence scale length which is used in aerospace blockset. With the adjustment of turbulence length in the implemented model, both model result the similar output.
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.
Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R
2016-07-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
NASA Technical Reports Server (NTRS)
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.;
2016-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users.The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theurich, Gerhard; DeLuca, C.; Campbell, T.
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC)more » Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.« less
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the Area Under the Receiver Operating Characteristic (AUROC), which showed the highest value, belonged to boosted regression trees (0.975) and the lowest value was recorded for generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, nevertheless, variability among the prediction of individual models was considerable. Therefore, to reduce uncertainty, creating more generalizable, more stable, and less sensitive models, ensemble forecasting approaches and in particular the EMmedian is recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work is to explore several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) Rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) Calculate model probability using GLUE method, (3) Evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) Evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and fuzzy technique for order performance, which is to identify the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to other methods, as they consider not only the fit between observed and simulated data and the number of parameter, but also uncertainty in model parameters. Considering these factors can prevent from occurring over-complexity and over-parameterization, when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model for the aim of reducing computation burden, is the key of corresponding researches. However, previous researches are generally based on a stand-alone surrogate model, and rarely make efforts to improve the approximation accuracy of the surrogate model to the simulation model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build ensemble surrogate (ES) model, and conducted a comparative research to select a better ES modeling pattern for the SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model is assembling RBFANN model, SVR model, and Kriging model using set pair weights according their performance, and the other is assembling several Kriging (the best surrogate modeling method of three) models built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed the residuals of the outputs between the best ES model and simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of simulation-optimization process and maintaining high computation accuracy simultaneously. Copyright © 2017 Elsevier B.V. All rights reserved.
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of its usages and users. In response to a growing need by the user community, NSSDC began to develop web-interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the Modelweb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 to 10,000 accesses for either system; in February 2002 13,872 accesses were recorded to the Modelsweb and 7092 accesses to the models archive.
NASA Astrophysics Data System (ADS)
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a certain arrangement of spatial and temporal dynamics consisting of stores, fluxes and transformation functions, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio‐economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user‐friendliness, and excellent user‐support. Forty‐one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user‐support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user‐friendly forms, increasing user‐support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Hedenstierna, Sofia; Halldin, Peter
2008-04-15
A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...
2018-04-06
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
Replicating Health Economic Models: Firm Foundations or a House of Cards?
Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee
2017-11-01
Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
NASA Astrophysics Data System (ADS)
Oursland, Mark David
This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied the (a) linear model to a variety of situations, (b) linear model to two-rate situations with a constant rate, (c) quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use equation to solve the problem. The researcher recommends the following practice for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties about squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling. The uses of computer microworlds such as Interactive Physics in conjunction with cooperative groups are a viable method of modeling instruction.
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Models in Science Education: Applications of Models in Learning and Teaching Science
ERIC Educational Resources Information Center
Ornek, Funda
2008-01-01
In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the correct
approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons-summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a combination report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian-trapped radiation models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP. LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Analysis of terahertz dielectric properties of pork tissue
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Xie, Qiaoling; Sun, Ping
2017-10-01
Seeing that about 70% component of fresh biological tissues is water, many scientists try to use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are Debye model, Double Debye model and Cole-Cole model. This work aims to determine a suitable model by comparing three models above with experimental data. These models are applied to fresh pork tissue. By means of least square method, the parameters of different models are fitted with the experimental data. Comparing different models on both dielectric function, the Cole-Cole model is verified the best to describe the experiments of pork tissue. The correction factor α of the Cole-Cole model is an important modification for biological tissues. So Cole-Cole model is supposed to be a priority selection to describe the dielectric properties for biological tissues in the terahertz range.
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model into mathematical model. Voice of customer data in QFD was collected using questionnaire and the questionnaire was developed based on Kano’s model. Then the operational research methodology was applied to build the objective function and constraints in the mathematical model. The relationship between voice of customer and engineering characteristics was modelled using linier regression model. Output of the mathematical model would be detail of engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. Result of this model is 62% .The major contribution of this research is to implement the existing mathematical model to integrate QFD and Kano’s Model in the case study of shoe cabinet.
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-06-01
The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as it happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers
NASA Astrophysics Data System (ADS)
Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.
2017-12-01
Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Harrington
2004-10-25
The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through themore » Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada and Number of Waste Packages Hit by Igneous Intrusion''. This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.« less
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity for estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, in a Bayesian joint probability with error models are presented to investigate the seasonal dependency of prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and account for no seasonal variations. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies some distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posterior (MAP) estimation of a hierarchical error model. Three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the result of the hierarchical error model shows that most of model parameters are not seasonal variant except for error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu
2016-11-18
The suitability of four popular empirical and semi-empirical stomatal conductance models (Jarvis model, Ball-Berry model, Leuning model and Medlyn model) was evaluated based on para-llel observation data of leaf stomatal conductance, leaf net photosynthetic rate and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in agro-pastoral ecotone in North China. It was found that there was a significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate for potato, whereas the linear relationship appeared weaker for oil sunflower. The results of model evaluation showed that Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by Leuning model and Medlyn model, while Jarvis model was the last in the performance rating. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456 and 0.0794 mol·m -2 ·s -1 , the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9% and 64.3%, and R-squared (R 2 ) was 0.96, 0.61, 0.91 and 0.88 between simulated and observed leaf stomatal conductance of potato for Ball-Berry model, Leuning model, Medlyn model and Jarvis model, respectively. For leaf stomatal conductance of oil sunflower, Jarvis model performed slightly better than Leuning model, Ball-Berry model and Medlyn model. RMSE was 0.2221, 0.2534, 0.2547 and 0.2758 mol·m -2 ·s -1 , NRMSE was 40.3%, 46.0%, 46.2% and 50.1%, and R 2 was 0.38, 0.22, 0.23 and 0.20 between simulated and observed leaf stomatal conductance of oil sunflower for Jarvis model, Leuning model, Ball-Berry model and Medlyn model, respectively. The path analysis was conducted to identify effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by vapour pressure saturation deficit for both potato and oil sunflower. The model evaluation suggested that the stomatal conductance models for oil sunflower are to be improved in further research.
Evaluation of chiller modeling approaches and their usability for fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models, and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first-principles and the third is empirical in nature. The first-principles models are themore » Gordon and Ng Universal Chiller model (2nd generation), and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles. The DOE-2 chiller model as implemented in CoolTools{trademark} was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors become obstructions to scientists wanting to couple, extend - or simply run - existing models. For scientists whose main focus may not be computer science these barriers become even larger and become significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Modeling Interface (BMI). It contains: Tools necessary for coupling models of disparate time and space scales (including grid mappers) Time-steppers that coordinate the sequencing of coupled models Exchange of data between BMI-enabled models Wrappers that automatically load BMI-enabled models into the PyMT framework Utilities that support open-source interfaces (UGRID, SGRID,CSDMS Standard Names, etc.) A collection of community-submitted models, written in a variety of programminglanguages, from a variety of process domains - but all usable from within the Python programming language A plug-in framework for adding additional BMI-enabled models to the framework In this presentation we intoduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will be first tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1) :161-177.
Downscaling GISS ModelE Boreal Summer Climate over Africa
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew
2015-01-01
The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June- September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, it eliminates the ModelE double ITCZ over the Atlantic and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
An online model composition tool for system biology models
2013-01-01
Background There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advance SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) Model Simulation Interface that generates a visual plot of the simulation according to user’s input, (2) iModel Tool as a platform for users to upload their own models to compose, and (3) SimCom Tool that provides a side by side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions Model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners. And, for more advanced purposes, users will able to access and employ models of the BioModels Database as well. PMID:24006914
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. Actually, a possible middle ground between detailed and simplified models may be parsimonious models that represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on data available for correct model application. When there is inadequate data, it is mandatory to focus on a simple river water quality model rather than detailed ones. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH(4), and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input or parameters was done based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be over determined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
The number of risk prediction models has been increasingly developed, for estimating about breast cancer in individual women. However, those model performances are questionable. We therefore have conducted a study with the aim to systematically review previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance were extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. Gail and Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observe ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition the concordance statistic itself is insensitive to measure the improvement of discrimination. Therefore, the new method such as net reclassification index should be considered to evaluate the improvement of the performance of a new develop model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the referencemore » biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).« less
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the heavy precipitation processes will be presented.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle
2011-06-01
Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); IBD outpatient office for physician consultations (Model B); general outpatient office for nurse consultations (Model C); both, Model B and Model C (Model D); and IBD Unit (Model E) when the hospital has a Comprehensive Care Unit for IBD with telephone helpline, computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is the Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method of a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was due to efficient server model screening. The average backbone accuracy of selected server models was similar to that of top 30% server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in the backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly- Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure or model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) to compare the power of the both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhoods, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models inmore » which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.« less
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which ismore » empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.« less
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modelling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principal.
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and a deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable for daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA - type model. We produced one-stepahead forecasts from the fitted models and again provided comparisons of the model's performance.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can signicantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averagingmore » methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that converting precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.« less
Comparison of childbirth care models in public hospitals, Brazil.
Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos
2014-04-01
To compare collaborative and traditional childbirth care models. Cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers neither with the use of spinal or epidural analgesia. The results suggest that collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both watermore » tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.« less
Modeling approaches in avian conservation and the role of field biologists
Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.
2006-01-01
This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
NASA Astrophysics Data System (ADS)
Rossman, Nathan R.; Zlotnik, Vitaly A.
2013-09-01
Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39 %), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55 %), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7 %), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative concentration pathway (RCP). In this study, we analysed model uncertainty for the Regional Hydrological Models, Global Hydrological Models, Malaria models and Crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty are originating from the GCM and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes on low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for Malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use ensemble modeling approach for climate change impact studies throughout the whole modelling chain.
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling for transistor are also discussed. The proposed DC model is based on the Fager model since it compensates between the number of model's parameters and accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages. Also, the objective is to extend the improved DC model to account for soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used in order to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is modelled using multi-bias s-parameter measurements. The way that the complete model is performed is by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and local minimum search (multivariable Newton's method) for parasitic elements extraction. Finally, the results of DC modelling and multi-bias s-parameters modelling are presented, and three-device modelling recommendations are discussed.
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using a finite element (FE) software abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at various interfaces in the models to account for interfacial delaminations. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate the superiority of performance depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases while an opposite trend is observed for model I. For axisymmetric case, model II shows greater strength and stiffness compared to model I. The collagen-mineral arrangement of bone at nanoscale forms a basic building block of bone. Thus, knowledge of its mechanical properties is of high scientific and clinical interests.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
ERIC Educational Resources Information Center
Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.
2011-01-01
The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, and existing knowledge-based scientific software design tool.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace-Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone.
A decision support model for investment on P2P lending platform
Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace—Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone. PMID:28877234
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depends on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem, Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models. variable-convergence models. or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zones (TAZs) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects) are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not consider the observed spatial effects. Across the models with spatial spillover effects, the dual-state models especially zero-inflated negative binomial model offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod
CSR Model Implementation from School Stakeholder Perspectives
ERIC Educational Resources Information Center
Herrmann, Suzannah
2006-01-01
Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10 3 to 10 6 K/s) at finalmore » temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).« less
[Bone remodeling and modeling/mini-modeling.
Hasegawa, Tomoka; Amizuka, Norio
Modeling, adapting structures to loading by changing bone size and shapes, often takes place in bone of the fetal and developmental stages, while bone remodeling-replacement of old bone into new bone-is predominant in the adult stage. Modeling can be divided into macro-modeling(macroscopic modeling)and mini-modeling(microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces will become active form of osteoblasts, and then, deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol(a vitamin D3 analog)and teriparatide(human PTH[1-34])could show mini-modeling based bone formation. Histologically, mature, active form of osteoblasts are localized on the new bone induced by mini-modeling, however, only a few cell layer of preosteoblasts are formed over the newly-formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, histological characteristics of bone remodeling and modeling including mini-modeling will be introduced.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Kharkov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Kharkov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Kharkov modeling: what are the capabilities and limitations of Kharkov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Kharkov modeling techniques? These questions and others will be addressed in this tutorial.
The cerebro-cerebellum: Could it be loci of forward models?
Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji
2016-03-01
It is widely accepted that the cerebellum acquires and maintain internal models for motor control. An internal model simulates mapping between a set of causes and effects. There are two candidates of cerebellar internal models, forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence that suggests the existence in the cerebro-cerebellum of a forward model for limb movement. We will also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.
Mechanical model development of rolling bearing-rotor systems: A review
NASA Astrophysics Data System (ADS)
Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng
2018-03-01
The rolling bearing rotor (RBR) system is the kernel of many rotating machines, which affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review on RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development of RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model are summarized. Then, the coupled modelling between bearing models and various rotor models including De Laval/Jeffcott rotor, rigid rotor, transfer matrix method (TMM) models and FE models are presented. Finally, the paper discusses the key challenges of previous works and provides new insights into understanding of RBR systems for their advanced future engineering applications.
NASA Astrophysics Data System (ADS)
Gouvea, Julia; Passmore, Cynthia
2017-03-01
The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity,more » and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.« less
Modeling of near-wall turbulence
NASA Technical Reports Server (NTRS)
Shih, T. H.; Mansour, N. N.
1990-01-01
An improved k-epsilon model and a second order closure model is presented for low Reynolds number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having correct asymptotic near wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second order closure model, the existing models are modified for the Reynolds stress equations to have proper near wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second order closure model, perform well in predicting the behavior of the near wall turbulence. Significant improvements over previous models are obtained.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes besides verifying internal validity comparison with other models (external validity) and ideally validation of its predictive properties. The existing uncertainty with any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than without this additional information.
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating it with linear cues retrieved from single imagery. A cue integration between two datasets is facilitated by creating new topological features connecting between the initial model and image lines, with which new model hypotheses (variances to the initial model) are produced. We adopt Minimum Description Length (MDL) principle for competing the model candidates and selecting the optimal model by considering the balanced trade-off between the model closeness and the model complexity. Our preliminary results, combined with the Vaihingen data provided by ISPRS WGIII/4 demonstrate the image-driven modelling cues can compensate the limitations posed by LiDAR data in rooftop modelling.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
An Immuno-epidemiological Model of Paratuberculosis
NASA Astrophysics Data System (ADS)
Martcheva, M.
2011-11-01
The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions is examined.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the mo'del: a FORTRAN-Callable library, a stand-alone model, and a Web-based model.
The NASA Marshall engineering thermosphere model
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MFSC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MFSC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MFSC/J70 model are discussed.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is evaluated, consisting of: wind speed model, mechanical and electrical model of generator and tower oscillation model. Model of the whole system is linearized around of a nominal point. By using the linear model with uncertainties is synthesized a uncertain model. By using the uncertain model is developed a H∞ controller, which provide mode of stabilizing the rotor frequency and damping the tower oscillations. Finally is simulated work of nonlinear system and H∞ controller.
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) Students create models and use simulations ; and 2) Researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring - data is crucial to create effective model. Pittsburgh Science of Learning Center: Resources for modeling, authoring, experimentation. Repository of data and theory. Examples of advanced modeling efforts: SimStudent learns rule-based model. Help-seeking model: Tutors metacognition. Scooter uses machine learning detectors of student engagement.
Modeling for Battery Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in last few years as the understanding of underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomist. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness, due to the approximations used in developed model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and thus not suited well for online prognostic applications. In addition both multi-physics and atomist models are computationally expensive hence are even less suited to online application An electrochemistry-based model of Li-ion batteries has been developed, that captures crucial electrochemical processes, captures effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are done that allow to retain computational efficiency for online implementation of the model. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development, providing model validation results and EOD prediction results is being presented.
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with regression forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models), and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance ( 50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conforms well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated distributions and non-normal distributions such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types, performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
A toy terrestrial carbon flow model
NASA Technical Reports Server (NTRS)
Parton, William J.; Running, Steven W.; Walker, Brian
1992-01-01
A generalized carbon flow model for the major terrestrial ecosystems of the world is reported. The model is a simplification of the Century model and the Forest-Biogeochemical model. Topics covered include plant production, decomposition and nutrient cycling, biomes, the utility of the carbon flow model for predicting carbon dynamics under global change, and possible applications to state-and-transition models and environmentally driven global vegetation models.
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Reportmore » is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.« less
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S.
2016-01-01
Need to Assess the Skill of Ecosystem Models Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. Northeast US Atlantis Marine Ecosystem Model We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. Skill Assessment Is Both Possible and Advisable We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment). PMID:26731540
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
NASA Astrophysics Data System (ADS)
Duane, G. S.; Selten, F.
2016-12-01
Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to in-situ detect the oil yield of oil shale, based on portable near infrared spectroscopy analytical technology, with 66 rock core samples from No. 2 well drilling of Fuyu oil shale base in Jilin, the modeling and analyzing methods for in-situ detection were researched. By the developed portable spectrometer, 3 data formats (reflectance, absorbance and K-M function) spectra were acquired. With 4 different modeling data optimization methods: principal component-mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection and their combina- tions: PCA-MD + UVE and UVE + PCA-MD, 2 modeling methods: partial least square (PLS) and back propagation artificial neural network (BPANN), and the same data pre-processing, the modeling and analyzing experiment were performed to determine the optimum analysis model and method. The results show that the data format, modeling data optimization method and modeling method all affect the analysis precision of model. Results show that whether or not using the optimization method, reflectance or K-M function is the proper spectrum format of the modeling database for two modeling methods. Using two different modeling methods and four different data optimization methods, the model precisions of the same modeling database are different. For PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of database using K-M function spectrum data format. For BPANN modeling method, UVE, UVE + PCA-MD and PCA- MD + UVE data optimization methods can improve the modeling precision of database using any of the 3 spectrum data formats. In addition to using the reflectance spectra and PCA-MD data optimization method, modeling precision by BPANN method is better than that by PLS method. And modeling with reflectance spectra, UVE optimization method and BPANN modeling method, the model gets the highest analysis precision, its correlation coefficient (Rp) is 0.92, and its standard error of prediction (SEP) is 0.69%.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate for different sources of uncertainty, multimodal analysis such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate theoretical and practical challenges of multimodal analysis, we use an example about microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomena, which is the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by the following three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid Savannah that is characterized by pulsed precipitation. These five model evolve step-wise such that the first model includes none of these three mechanism, while the fifth model includes the three mechanisms. The basic component of Bayesian multimodal analysis is the estimation of marginal likelihood to rank the candidate models based on their overall likelihood with respect to observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between these five candidate models. The second part discusses some theoretical and practical challenges, which are mainly the effect of likelihood function selection and the marginal likelihood estimation methods on both model ranking and Bayesian model averaging. The study shows that making valid inference from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S
2016-01-01
Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment).
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another weight relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding in which the maximum information entropy probability model of prediction error variances plays an important role and it is Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness in the updating of the structural model especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seem in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results especially when the number of measurement is small.
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
National Centers for Environmental Prediction
Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post / VISION | About EMC EMC > Mesoscale Modeling > MODELS Home Mission Models R & D Collaborators Cyclone Tracks & Verification Implementation Info FAQ Disclaimer More Info MESOSCALE MODELING SREF
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
ERIC Educational Resources Information Center
King, Gillian; Currie, Melissa; Smith, Linda; Servais, Michelle; McDougall, Janette
2008-01-01
A framework of operating models for interdisciplinary research programs in clinical service organizations is presented, consisting of a "clinician-researcher" skill development model, a program evaluation model, a researcher-led knowledge generation model, and a knowledge conduit model. Together, these models comprise a tailored, collaborative…
Modelling Students' Visualisation of Chemical Reaction
ERIC Educational Resources Information Center
Cheng, Maurice M. W.; Gilbert, John K.
2017-01-01
This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Planning Major Curricular Change.
ERIC Educational Resources Information Center
Kirkland, Travis P.
Decision-making and change models can take many forms. One researcher (Nordvall, 1982) has suggested five conceptual models for introducing change: a political model; a rational decision-making model; a social interaction decision model; the problem-solving method; and an adaptive/linkage model which is an amalgam of each of the other models.…
UNITED STATES METEOROLOGICAL DATA - DAILY AND HOURLY FILES TO SUPPORT PREDICTIVE EXPOSURE MODELING
ORD numerical models for pesticide exposure include a model of spray drift (AgDisp), a cropland pesticide persistence model (PRZM), a surface water exposure model (EXAMS), and a model of fish bioaccumulation (BASS). A unified climatological database for these models has been asse...
2009-12-01
Business Process Modeling BPMN Business Process Modeling Notation SoA Service-oriented Architecture UML Unified Modeling Language CSP...system developers. Supporting technologies include Business Process Modeling Notation ( BPMN ), Unified Modeling Language (UML), model-driven architecture
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxelsmore » on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.« less
NASA Astrophysics Data System (ADS)
Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.
2014-12-01
The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and characterization of forcing to enable feedback studies An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) An assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models 2) Characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models 3) Characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model 4) Assessment of climate model response to precisely-characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircrafts in broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so the ASE models obtained through Kriging interpolation match the model in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues causes by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) through the use of scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data of the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and beginning of flowering for two tree species-silver birch Betula pendula and bird cherry Padus racemosa-in Latvia. Model stability is estimated performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using sigmoidal development rate and temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of the flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
A toolbox and record for scientific models
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.
Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A
2017-08-01
Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests to reconsider the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used modified area under the curve (AUC) statistic and akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values, except for Maxent and better AIC values overall. HEMI created a model with only ten parameters while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of diabetes type 2 was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease statesmore » in the system dynamics model, while estimating impacts of an elderliness factor, obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent s social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model which required numerous time series inputs to make its predictions. In all three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translate complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing main effects of complex local agent-agent interactions.« less
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
Documenting Models for Interoperability and Reusability (proceedings)
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Documenting Models for Interoperability and Reusability
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. Such physical and chemical phenomena are taking place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping output and input between models and an iterative scheme are developed to establish communications between models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution on local phenomena and minimizing the model assumptions.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexitymore » (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.« less
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is for emergency response which requires route planning inside and outside of a building. Route planning requires detailed information related to indoor and outdoor environment. Indoor navigation network models including Geometric Network Model (GNM), Navigable Space Model, sub-division model and regular-grid model lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of indoor navigation network is based on surveying control points and it is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for the buildings which do not have proper 2D/3D geometrical models or they lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantical space.
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of memristor as the newest fundamental circuit elements, studies on memristor modeling have been evolved. To date, the developed models were based on the linear model, linear ionic drift model using different window functions, tunnelling barrier model and hyperbolic-sine function based model. Although using hyperbolic-sine function model could predict the memristor electrical properties, the model was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. On the one hand, the addition of window function cannot provide an improved fitting. By multiplying the Yakopcic’s state variable model to Chang’s model on the other hand resulted in the closer agreement with the TiO2 thin film experimental data. The percentage error was approximately 2.15%.
Resident Role Modeling: "It Just Happens".
Sternszus, Robert; Macdonald, Mary Ellen; Steinert, Yvonne
2016-03-01
Role modeling by staff physicians is a significant component of the clinical teaching of students and residents. However, the importance of resident role modeling has only recently emerged, and residents' understanding of themselves as role models has yet to be explored. This study sought to understand residents' perceptions of themselves as role models, describe how residents learn about role modeling, and identify ways to improve resident role modeling. Fourteen semistructured interviews were conducted with residents in internal medicine, general surgery, and pediatrics at the McGill University Faculty of Medicine between April and September 2013. Interviews were audio-recorded and subsequently transcribed for analysis; iterative analysis followed principles of qualitative description. Four primary themes were identified through data analysis: residents perceived role modeling as the demonstration of "good" behaviors in the clinical context; residents believed that learning from their role modeling "just happens" as long as learners are "watching"; residents did not equate role modeling with being a role model; and residents learned about role modeling from watching their positive and negative role models. While residents were aware that students and junior colleagues learned from their modeling, they were often not aware of role modeling as it was occurring; they also believed that learning from role modeling "just happens" and did not always see themselves as role models. Helping residents view effective role modeling as a deliberate process rather than something that "just happens" may improve clinical teaching across the continuum of medical education.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with linear AR model. Unlike previous studies that typically consider the threshold model specifications by using internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables specification produce more accurate forecasts, indicating that specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models were considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar
2016-02-15
Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
Experiments in concept modeling for radiographic image reports.
Bell, D S; Pattison-Gordon, E; Greenes, R A
1994-01-01
OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agents-based and field-based modelling environments use different data models to represent the state of information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models in which class instances grouping properties are stored in collections are less efficient (execute slower) than models in which collections of properties are grouped. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based data modelling. This removes the barrier for writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves less model parameters, than the conventional characterized source model, which itself is a simplified expression of actual earthquake source. In the proposed model, the spacio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and it is assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. Finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent is included in the model, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an example of characterized source models. Although the pseudo point-source model involves much less model parameters than the super-asperity model, the errors associated with the former model were comparable to those for the latter model for velocity waveforms and envelopes. Furthermore, the errors associated with the former model were much smaller than those for the latter model for Fourier spectra. These evidences indicate the usefulness of the pseudo point-source model. Comparison of the observed (black) and synthetic (red) Fourier spectra. The spectra are the composition of two horizontal components and smoothed with a Parzen window with a band width of 0.05 Hz.
Model and Interoperability using Meta Data Annotations
NASA Astrophysics Data System (ADS)
David, O.
2011-12-01
Software frameworks and architectures are in need for meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format, consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings, however the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach for meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programing language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interfaces (API) calls is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To support the benefit of annotations for a modeler, studies were conducted to evaluate the effectiveness of an annotation based framework approach with other modeling frameworks and libraries, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor effect in modeling the large density BRDF measured data with five-parameter semi-empirical model, a refined statistical model of BRDF which is suitable for multi-class space target material modeling were proposed. The refined model improved the Torrance-Sparrow model while having the modeling advantages of five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, can approximate the intensity of the Fresnel reflectance phenomenon and the attenuation of the reflected light's brightness with the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. The genetic algorithm was used to invert the parameters of 11 different samples in the space target commonly used materials, and the fitting errors of all materials were below 6%, which were much lower than those of five-parameter model. The effect of the refined model is verified by comparing the fitting results of the three samples at different incident zenith angles in 0° azimuth angle. Finally, the three-dimensional modeling visualizations of these samples in the upper hemisphere space was given, in which the strength of the optical scattering of different materials could be clearly shown. It proved the good describing ability of the refined model at the material characterization as well.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to mispecification of that model when there is substantial missingness in the outcome variable.
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop MME, exploring criteria for evaluating the degree of relatedness of models in a MME, studying the effect of number of models in the ensemble, development of a statistical model of model sampling, creation of a repository for MME results, studies of possible differential weighting of models in an ensemble, creation of single model ensembles based on sampling from the uncertainty distribution of parameter values or inputs specifically oriented toward uncertainty estimation, the creation of super ensembles that sample more than one source of uncertainty, the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty and finally further investigation of the use of the multi-model mean or median as a predictor.
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
Alcan, Toros; Ceylanoğlu, Cenk; Baysal, Bekir
2009-01-01
To investigate the effects of different storage periods of alginate impressions on digital model accuracy. A total of 105 impressions were taken from a master model with three different brands of alginates and were poured into stone models in five different storage periods. In all, 21 stone models were poured and immediately were scanned, and 21 digital models were prepared. The remaining 84 impressions were poured after 1, 2, 3, and 4 days, respectively. Five linear measurements were made by three researchers on the master model, the stone models, and the digital models. Time-dependent deformation of alginate impressions at different storage periods and the accuracy of traditional stone models and digital models were evaluated separately. Both the stone models and the digital models were highly correlated with the master model. Significant deformities in the alginate impressions were noted at different storage periods of 1 to 4 days. Alginate impressions of different brands also showed significant differences between each other on the first, third, and fourth days. Digital orthodontic models are as reliable as traditional stone models and probably will become the standard for orthodontic clinical use. Storing alginate impressions in sealed plastic bags for up to 4 days caused statistically significant deformation of alginate impressions, but the magnitude of these deformations did not appear to be clinically relevant and had no adverse effect on digital modeling.
Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu
2011-04-01
A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established by inducing UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Smica-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2) and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). Validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, average relative error of model 2 (4.25%) was lower than model 1 (5.00%), which indicated that the predictability of model 2 was better than that of model 1. Chemometrics-assisted spectrophotometry method did not need chemical reagents and digestion which were required in the conventional methods, and the testing time of the new method was significantly shorter than the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary-layer but changes gradually to the high Reynolds number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Lauder model for free shear layers. The second version of the model is called Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary-layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy-viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linear polarized radiation; and in similarity with e.g. the facet model depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy and also on metallic bead blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3
Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram
2017-01-01
Summary Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with element of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can chose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so chose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on an personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personnal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) Simulation and (b) lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the level. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
Comment on ``Glassy Potts model: A disordered Potts model without a ferromagnetic phase''
NASA Astrophysics Data System (ADS)
Carlucci, Domenico M.
1999-10-01
We report the equivalence of the ``glassy Potts model,'' recently introduced by Marinari et al. and the ``chiral Potts model'' investigated by Nishimori and Stephen. Both models do not exhibit any spontaneous magnetization at low temperature, differently from the ordinary glass Potts model. The phase transition of the glassy Potts model is easily interpreted as the spin-glass transition of the ordinary random Potts model.
NASA Astrophysics Data System (ADS)
Cannizzo, John K.
2017-01-01
We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.
A novel microfluidic model can mimic organ-specific metastasis of circulating tumor cells.
Kong, Jing; Luo, Yong; Jin, Dong; An, Fan; Zhang, Wenyuan; Liu, Lilu; Li, Jiao; Fang, Shimeng; Li, Xiaojie; Yang, Xuesong; Lin, Bingcheng; Liu, Tingjiao
2016-11-29
A biomimetic microsystem might compensate costly and time-consuming animal metastatic models. Herein we developed a biomimetic microfluidic model to study cancer metastasis. Primary cells isolated from different organs were cultured on the microlfuidic model to represent individual organs. Breast and salivary gland cancer cells were driven to flow over primary cell culture chambers, mimicking dynamic adhesion of circulating tumor cells (CTCs) to endothelium in vivo. These flowing artificial CTCs showed different metastatic potentials to lung on the microfluidic model. The traditional nude mouse model of lung metastasis was performed to investigate the physiological similarity of the microfluidic model to animal models. It was found that the metastatic potential of different cancer cells assessed by the microfluidic model was in agreement with that assessed by the nude mouse model. Furthermore, it was demonstrated that the metastatic inhibitor AMD3100 inhibited lung metastasis effectively in both the microfluidic model and the nude mouse model. Then the microfluidic model was used to mimick liver and bone metastasis of CTCs and confirm the potential for research of multiple-organ metastasis. Thus, the metastasis of CTCs to different organs was reconstituted on the microfluidic model. It may expand the capabilities of traditional cell culture models, providing a low-cost, time-saving, and rapid alternative to animal models.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by the complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, i.e., Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., SHIP (high) model, SHIP (middle) model, SHIP (low) model, Philip model and Parlange model) were compared based on numerical experiments and soil column experiments. In numerical experiments, SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash and Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, infiltration rate fluctuated in a range because of non-uniform initial water content. SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. According to the robustness of solutions and the coverage of fluctuation range of infiltration rate, SHIP model can be integrated into hydrologic models to simulate SHR infiltration process and benefit the flood forecast.
Mutant mice: experimental organisms as materialised models in biomedicine.
Huber, Lara; Keuck, Lara K
2013-09-01
Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been neglected pretty much to date. However, whole plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on particle settling velocity distribution is proposed which is then applied for the development of an improved primary settler model as well as for its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), showing to be better than the first one and statistically comparable to the second one, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet induced by primary settling is investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole plant modelling.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions are changed continuously and these phenomenons generate errors on flood forecasting models and will lead to get unrealistic results. Therefore, to overcome these difficulties, a concept called model updating is proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of flood forecasting process, errors propagated from the rainfall-runoff model are enumerated as the main source of uncertainty in the forecasting model. Hence, to dominate the exciting errors, several methods have been proposed by researchers to update the rainfall-runoff models such as parameter updating, model state updating, and correction on input data. The current study focuses on investigations about the ability of rainfall-runoff model parameters to cope with three types of existing errors, timing, shape and volume as the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate its parameters for its use in model updating to cope with the stated errors. Investigation about ten events proves that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered if only in hypothetical sense in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as predictive, as opposed to evaluative, modeling approach.
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees responses but as all interviewees were represented in the analysis no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
ASTP ranging system mathematical model
NASA Technical Reports Server (NTRS)
Ellis, M. R.; Robinson, L. H.
1973-01-01
A mathematical model is presented of the VHF ranging system to analyze the performance of the Apollo-Soyuz test project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early late gate model development, unlock probability development, system error model development, probability of acquisition and model development, and math model validation testing.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-01-01
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara dessert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.
Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob
2017-02-08
Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Keating; W.Statham
2004-02-12
The purpose of this model report is to provide documentation of the conceptual and mathematical model (ASHPLUME) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. The ASHPLUME conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The ASHPLUME mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the groundmore » surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report will improve and clarify the previous documentation of the ASHPLUME mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model.« less
Model-Based Reasoning in Upper-division Lab Courses
NASA Astrophysics Data System (ADS)
Lewandowski, Heather
2015-05-01
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples from AMO physics include everything from the Bohr model of the hydrogen atom to the Bose-Hubbard model of interacting bosons in a lattice. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such the modeling of the measurement tools, modeling ``black boxes,'' and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
2013-01-01
Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models incorporate now sophisticated parameterization and validation techniques, economic analyses and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemics models rely on parameters from previous modelling studies, models are rarely validated using observed data and are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results, and transparency in their assessment and comparison. PMID:23651557
Model Selection in Systems Biology Depends on Experimental Design
Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.
2014-01-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
Model selection in systems biology depends on experimental design.
Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H
2014-06-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian with error model method gives significantly more accurate prediction along with reasonable credible intervals.
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country which is rich in natural resources and one of it is a gold. Gold has already become an important national commodity. This study is conducted to determine a model that can be well fitted with the gold production in Malaysia from the year 1995-2010. Five nonlinear models are presented in this study which are Logistic model, Gompertz, Richard, Weibull and Chapman-Richard model. These model are used to fit the cumulative gold production in Malaysia. The best model is then selected based on the model performance. The performance of the fitted model is measured by sum squares error, root mean squares error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study has found that a Weibull model is shown to have significantly outperform compare to the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, Weibull model gives the lowest readings at all types of measurement error. We can concluded that the future gold production in Malaysia can be predicted according to the Weibull model and this could be important findings for Malaysia to plan their economic activities.
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome.
O'Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances - in terms of model complexity, model evaluation, and model structure - can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from 'yet another model' to doing better science with models.
A Two-Zone Multigrid Model for SI Engine Combustion Simulation Using Detailed Chemistry
Ge, Hai-Wen; Juneja, Harmit; Shi, Yu; ...
2010-01-01
An efficient multigrid (MG) model was implemented for spark-ignited (SI) engine combustion modeling using detailed chemistry. The model is designed to be coupled with a level-set-G-equation model for flame propagation (GAMUT combustion model) for highly efficient engine simulation. The model was explored for a gasoline direct-injection SI engine with knocking combustion. The numerical results using the MG model were compared with the results of the original GAMUT combustion model. A simpler one-zone MG model was found to be unable to reproduce the results of the original GAMUT model. However, a two-zone MG model, which treats the burned and unburned regionsmore » separately, was found to provide much better accuracy and efficiency than the one-zone MG model. Without loss in accuracy, an order of magnitude speedup was achieved in terms of CPU and wall times. To reproduce the results of the original GAMUT combustion model, either a low searching level or a procedure to exclude high-temperature computational cells from the grouping should be applied to the unburned region, which was found to be more sensitive to the combustion model details.« less
Statistical considerations on prognostic models for glioma
Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.
2016-01-01
Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Nguyen, Daniel; Dao, Tung; Trinh, Khanh
2013-01-01
This paper presents a coupled vortex-lattice flight dynamic model with an aeroelastic finite-element model to predict dynamic characteristics of a flexible wing transport aircraft. The aircraft model is based on NASA Generic Transport Model (GTM) with representative mass and stiffness properties to achieve a wing tip deflection about twice that of a conventional transport aircraft (10% versus 5%). This flexible wing transport aircraft is referred to as an Elastically Shaped Aircraft Concept (ESAC) which is equipped with a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for active wing shaping control for drag reduction. A vortex-lattice aerodynamic model of the ESAC is developed and is coupled with an aeroelastic finite-element model via an automated geometry modeler. This coupled model is used to compute static and dynamic aeroelastic solutions. The deflection information from the finite-element model and the vortex-lattice model is used to compute unsteady contributions to the aerodynamic force and moment coefficients. A coupled aeroelastic-longitudinal flight dynamic model is developed by coupling the finite-element model with the rigid-body flight dynamic model of the GTM.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in these MM II exercises between 1996 and 1999. Twelve of the models were two- dimensional zonal-mean models while seventeen were three-dimensional models. This was an international effort as seven were from outside the United States. Six transport experiments and five chemistry experiments were designed for various models. Models participating in the transport experiments performed simulations of chemically inert tracers providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace cases including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences in the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results as well as between results from different models, A sizable effort went into preparation of the database of the observations. This included a new climatology for ozone. The report should help in evaluating the results from various predictive models for assessing humankind perturbations of the stratosphere.
A Logical Account of Diagnosis with Multiple Theories
NASA Technical Reports Server (NTRS)
Pandurang, P.; Lum, Henry Jr. (Technical Monitor)
1994-01-01
Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define models relations such as choices which result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
A real success in numerical modeling of dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. The success in numerical modeling is based on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and move then to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true and erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model versus analytical and asymptotic solutions, simple 2D and 3D model examples. Develop benchmark analysis of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore the testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you can solve an improperly-posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as less as possible tuning model variables. Already two tuning variables give enough possibilities to constrain your model well enough with respect to observations. The data fitting sometimes is quite attractive and can take you far from a principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables are greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never put the aim to reach a great accuracy: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should be a numerical model? A model which images any detail of the reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists, who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics such a way to simulate basic geophysical processes and phenomena. Does a particular model have a predictive power? Each numerical model has a predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution to the numerical model is an approximate solution to the equations, which have been chosen in believe that they describe dynamic processes of the Earth. Hence a numerical model predicts dynamics of the Earth as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. The inverse modeling can allow to test geodynamic models forward in time using restored (from present-day observations) initial conditions instead of unknown conditions.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features one–loop Zee model with Z{sub 4} symmetry. The second model is the two–loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.
USDA-ARS?s Scientific Manuscript database
To improve climate change impact estimates, multi-model ensembles (MMEs) have been suggested. MMEs enable quantifying model uncertainty, and their medians are more accurate than that of any single model when compared with observations. However, multi-model ensembles are costly to execute, so model i...
A Comparative Analysis on Models of Higher Education Massification
ERIC Educational Resources Information Center
Pan, Maoyuan; Luo, Dan
2008-01-01
Four financial models of massification of higher education are discussed in this essay. They are American model, Western European model, Southeast Asian and Latin American model and the transition countries model. The comparison of the four models comes to the conclusion that taking advantage of nongovernmental funding is fundamental to dealing…
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
Frank R., III Thompson
2009-01-01
Habitat models are widely used in bird conservation planning to assess current habitat or populations and to evaluate management alternatives. These models include species-habitat matrix or database models, habitat suitability models, and statistical models that predict abundance. While extremely useful, these approaches have some limitations.
ERIC Educational Resources Information Center
Cheng, Meng-Fei; Lin, Jang-Long
2015-01-01
Understanding the nature of models and engaging in modeling practice have been emphasized in science education. However, few studies discuss the relationships between students' views of scientific models and their ability to develop those models. Hence, this study explores the relationship between students' views of scientific models and their…
Integrated research in constitutive modelling at elevated temperatures, part 2
NASA Technical Reports Server (NTRS)
Haisler, W. E.; Allen, D. H.
1986-01-01
Four current viscoplastic models are compared experimentally with Inconel 718 at 1100 F. A series of tests were performed to create a sufficient data base from which to evaluate material constants. The models used include Bodner's anisotropic model; Krieg, Swearengen, and Rhode's model; Schmidt and Miller's model; and Walker's exponential model.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
NASA Astrophysics Data System (ADS)
Jang, S.; Moon, Y.; Na, H.
2012-12-01
We have made a comparison of CME-associated shock arrival times at the earth based on the WSA-ENLIL model with three cone models using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as their associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine CME cone parameters (radial velocity, angular width and source location), which are used for input parameters of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possibilities on relatively large errors of the WSA-ENLIL cone model, which may be caused by CME-CME interaction, background solar wind speed, and/or CME density enhancement.
Modeling of the radiation belt megnetosphere in decisional timeframes
Koller, Josef; Reeves, Geoffrey D; Friedel, Reiner H.W.
2013-04-23
Systems and methods for calculating L* in the magnetosphere with essentially the same accuracy as with a physics based model at many times the speed by developing a surrogate trained to be a surrogate for the physics-based model. The trained model can then beneficially process input data falling within the training range of the surrogate model. The surrogate model can be a feedforward neural network and the physics-based model can be the TSK03 model. Operatively, the surrogate model can use parameters on which the physics-based model was based, and/or spatial data for the location where L* is to be calculated. Surrogate models should be provided for each of a plurality of pitch angles. Accordingly, a surrogate model having a closed drift shell can be used from the plurality of models. The feedforward neural network can have a plurality of input-layer units, there being at least one input-layer unit for each physics-based model parameter, a plurality of hidden layer units and at least one output unit for the value of L*.
Cowell, Rosemary A; Bussey, Timothy J; Saksida, Lisa M
2012-11-01
We describe how computational models can be useful to cognitive and behavioral neuroscience, and discuss some guidelines for deciding whether a model is useful. We emphasize that because instantiating a cognitive theory as a computational model requires specification of an explicit mechanism for the function in question, it often produces clear and novel behavioral predictions to guide empirical research. However, computational modeling in cognitive and behavioral neuroscience remains somewhat rare, perhaps because of misconceptions concerning the use of computational models (in particular, connectionist models) in these fields. We highlight some common misconceptions, each of which relates to an aspect of computational models: the problem space of the model, the level of biological organization at which the model is formulated, and the importance (or not) of biological plausibility, parsimony, and model parameters. Careful consideration of these aspects of a model by empiricists, along with careful delineation of them by modelers, may facilitate communication between the two disciplines and promote the use of computational models for guiding cognitive and behavioral experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ting, Eric; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents a static aeroelastic model and longitudinal trim model for the analysis of a flexible wing transport aircraft. The static aeroelastic model is built using a structural model based on finite-element modeling and coupled to an aerodynamic model that uses vortex-lattice solution. An automatic geometry generation tool is used to close the loop between the structural and aerodynamic models. The aeroelastic model is extended for the development of a three degree-of-freedom longitudinal trim model for an aircraft with flexible wings. The resulting flexible aircraft longitudinal trim model is used to simultaneously compute the static aeroelastic shape for the aircraft model and the longitudinal state inputs to maintain an aircraft trim state. The framework is applied to an aircraft model based on the NASA Generic Transport Model (GTM) with wing structures allowed to flexibly deformed referred to as the Elastically Shaped Aircraft Concept (ESAC). The ESAC wing mass and stiffness properties are based on a baseline "stiff" values representative of current generation transport aircraft.
Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara
2016-05-09
A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.