Sample records for nns computing facility

  1. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  2. Novice nurse productivity following workplace bullying.

    PubMed

    Berry, Peggy A; Gillespie, Gordon L; Gates, Donna; Schafer, John

    2012-03-01

    To determine the prevalence and effects of workplace bullying (WPB) on the work productivity of novice nurses (NNs). Internet-based descriptive cross-sectional survey design. One hundred ninety-seven NNs (91.4% female, 8.6% male) in practice less than 2 years completed the Healthcare Productivity Survey, Negative Acts Questionnaire, and a demographic survey. The majority (72.6%, n = 147) of NNs reported a WPB event within the previous month, with 57.9% (n = 114) the direct targets and another 14.7% (n = 29) witnesses of WPB behaviors. Using a weighted Negative Acts Questionnaire score, 21.3% (n = 43) of NNs were bullied daily over a 6-month period. When asked if bullied over the past 6 months, approximately 44.7% (n = 88) of NNs reported repeated, targeted WPB, with 55.3% (n = 109) reporting no WPB. WPB acts were primarily perpetrated by more experienced nursing colleagues (63%, n = 126). Further, the work productivity regression model was significant, and NN productivity was negatively affected by workplace bullying (r = -.322, p = .045). WPB continues in the healthcare environment and negatively affects bullied NNs' productivity by affecting cognitive demands and the ability to handle or manage their workload. Healthcare facilities should continue to measure WPB in the work environment after policy implementation as well as eliminate negative behaviors through root-cause analysis to correct environmental factors associated with WPB. © 2012 Sigma Theta Tau International.

  3. Individualized treatment of craniovertebral junction malformation guided by intraoperative computed tomography.

    PubMed

    Li, Lianfeng; Wang, Peng; Chen, LiFeng; Ma, Xiaodong; Bu, Bo; Yu, Xinguang

    2012-04-01

    This study was designed to report our preliminary experience of intraoperative computed tomography (iCT) using a mobile scanner with an integrated neuronavigation system (NNS). The objective of this study was to assess the feasibility and potential utility of iCT with integrated NNS in the individualized treatment of craniovertebral junction malformation (CVJM). The surgical management of congenital craniovertebral anomalies is complex due to the relative difficulty in accessing the region, the critical relationships of neurovascular structures, and the intricate biomechanical issues involved. We report our first 19 complex CVJM cases, comprising 11 male and 8 female patients treated from January 2009 to June 2009 (mean age, 33.9 y; age range, 13 to 58 y). A sliding gantry 40-slice CT scanner was installed in a preexisting operating room. Image data were transferred directly from the scanner into the NNS using an automated registration system. We applied this technology to transoral odontoidectomy in 17 patients. Moreover, with the additional help of iCT integrated with the NNS, odontoidectomy through a posterior midline approach and transoral atlantal lateral mass resection were, for the first time, performed for the treatment of complex CVJM. The NNS was found to correlate well with the intraoperative findings, and recalibration was uneventful in all cases, with an accuracy of 1.6 mm (range, 1.2 to 2.0 mm). All patients were clinically evaluated by Nurick grade criteria, and neurological deficits were monitored 3 months after surgery. Fifteen patients (79%) improved by at least 1 Nurick grade, whereas the grade did not change in 4 patients (21%). iCT scanning with an integrated NNS was both feasible and beneficial for the surgical management of complex CVJM. In this unusual patient population, the technique seemed valuable in negotiating complex anatomy and achieving a safe and predictable decompression.

  4. Role of neural networks for avionics

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher L.; DeYong, Mark R.; Eskridge, Thomas C.

    1995-08-01

    Neural network (NN) architectures provide a thousand-fold speed-up in computational power per watt along with the flexibility to learn and adapt so as to reduce software life-cycle costs. Thus NNs are poised to play a key supporting role in meeting the avionics upgrade challenge for affordable, improved mission capability, especially near the hardware where flexible and powerful smart processing is needed. This paper summarizes the trends for air combat and the resulting avionics needs. A paradigm for information fusion and response management is then described, from which viewpoint the role for NNs as a complementary technology in meeting these avionics challenges is explained, along with the key obstacles for NNs.

  5. Feasibility of MHD submarine propulsion. Phase II, MHD propulsion: Testing in a two Tesla test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and are compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  6. Feasibility of MHD submarine propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doss, E.D.; Sikes, W.C.

    1992-09-01

    This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and are compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.

  7. Design of a monitor and simulation terminal (master) for space station telerobotics and telescience

    NASA Technical Reports Server (NTRS)

    Lopez, L.; Konkel, C.; Harmon, P.; King, S.

    1989-01-01

    Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.

  8. Corrective Feedback via Instant Messenger Learning Activities in NS-NNS and NNS-NNS Dyads

    ERIC Educational Resources Information Center

    Sotillo, Susana

    2005-01-01

    This exploratory study examines corrective feedback in native speaker-nonnative speaker (NS-NNS) and NNS-NNS dyads while participants were engaged in communicative and problem-solving activities via "Yahoo! Instant Messenger" (YIM). As "negotiation of meaning" studies of the 1990s have shown, linguistic items which learners negotiate in…

  9. Text-Based Negotiated Interaction of NNS-NNS and NNS-NS Dyads on Facebook

    ERIC Educational Resources Information Center

    Liu, Sarah Hsueh-Jui

    2017-01-01

    This study sought to determine the difference in text-based negotiated interaction between non-native speakers of English (NNS-NNS) and between non-native and natives (NNS-NS) in terms of the frequency of negotiated instances, successfully resolved instances, and interactional strategy use when the dyads collaborated on Facebook. It involved 10…

  10. Characterization of Non-Nutritive Sweetener Intake in Rural Southwest Virginian Adults Living in a Health-Disparate Region.

    PubMed

    Hedrick, Valisa E; Passaro, Erin M; Davy, Brenda M; You, Wen; Zoellner, Jamie M

    2017-07-14

    Few data assessing non-nutritive sweetener (NNS) intake are available, especially within rural, health-disparate populations, where obesity and related co-morbidities are prevalent. The objective of this study is to characterize NNS intake for this population and examine the variance in demographics, cardio-metabolic outcomes, and dietary intake between NNS consumers and non-consumers. A cross-sectional sample ( n = 301) of Virginian adults from a randomized controlled trial (data collected from 2012 to 2014) targeting sugar-sweetened beverage (SSB) intake completed three 24-h dietary recalls, and demographics and cardio-metabolic measures were assessed. The frequency, types, and sources of NNS consumption were identified. Thirty-three percent of participants reported consuming NNS ( n = 100). Sucralose was the largest contributor of mean daily NNS intake by weight (mg), followed by aspartame, acesulfame potassium, and saccharin. NNS in tabletop sweeteners, diet tea, and diet soda were the top contributors to absolute NNS intake. The most frequently consumed NNS sources were diet sodas, juice drinks, and tabletop sweeteners. Although mean body mass index (BMI) was greater for NNS consumers, they demonstrated significantly lower food, beverage, and SSB caloric intake and energy density, and higher overall dietary quality. It remains unclear whether NNS use plays a role in exacerbating weight gain. NNS consumers in this sample may have switched from drinking predominantly SSB to drinking some NNS beverages in an effort to cope with weight gain. Future studies should explore motivations for NNS use across a variety of weight and health categories.

  11. Characterization of Non-Nutritive Sweetener Intake in Rural Southwest Virginian Adults Living in a Health-Disparate Region

    PubMed Central

    Passaro, Erin M.; Davy, Brenda M.; You, Wen; Zoellner, Jamie M.

    2017-01-01

    Few data assessing non-nutritive sweetener (NNS) intake are available, especially within rural, health-disparate populations, where obesity and related co-morbidities are prevalent. The objective of this study is to characterize NNS intake for this population and examine the variance in demographics, cardio-metabolic outcomes, and dietary intake between NNS consumers and non-consumers. A cross-sectional sample (n = 301) of Virginian adults from a randomized controlled trial (data collected from 2012 to 2014) targeting sugar-sweetened beverage (SSB) intake completed three 24-h dietary recalls, and demographics and cardio-metabolic measures were assessed. The frequency, types, and sources of NNS consumption were identified. Thirty-three percent of participants reported consuming NNS (n = 100). Sucralose was the largest contributor of mean daily NNS intake by weight (mg), followed by aspartame, acesulfame potassium, and saccharin. NNS in tabletop sweeteners, diet tea, and diet soda were the top contributors to absolute NNS intake. The most frequently consumed NNS sources were diet sodas, juice drinks, and tabletop sweeteners. Although mean body mass index (BMI) was greater for NNS consumers, they demonstrated significantly lower food, beverage, and SSB caloric intake and energy density, and higher overall dietary quality. It remains unclear whether NNS use plays a role in exacerbating weight gain. NNS consumers in this sample may have switched from drinking predominantly SSB to drinking some NNS beverages in an effort to cope with weight gain. Future studies should explore motivations for NNS use across a variety of weight and health categories. PMID:28708096

  12. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN Compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate steps between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the user. The program has a memory requirement of about 900K. The standard distribution medium for this package is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. F77NNS was developed in 1989.
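
    The back-propagation training that F77NNS implements can be illustrated with a short, self-contained sketch. The NumPy example below only illustrates the same idea (one hidden layer, sigmoid units, gradient descent on squared error); it is not the F77NNS code, and the layer sizes, learning rate, and XOR input/output pairs are arbitrary choices for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        # Toy input/output pairs (XOR), standing in for the I/O pair files F77NNS reads.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)

        # One hidden layer: 2 inputs -> 8 hidden -> 1 output (sizes are arbitrary here).
        W1 = rng.normal(scale=0.5, size=(2, 8))
        W2 = rng.normal(scale=0.5, size=(8, 1))
        lr = 0.5

        for epoch in range(10000):
            H = sigmoid(X @ W1)            # forward pass, hidden layer
            Y = sigmoid(H @ W2)            # forward pass, output layer
            dY = (Y - T) * Y * (1 - Y)     # back-propagate the squared-error gradient
            dH = (dY @ W2.T) * H * (1 - H)
            W2 -= lr * H.T @ dY            # gradient-descent weight updates
            W1 -= lr * X.T @ dH

        print(np.round(Y, 2))              # should approach the XOR targets [0, 1, 1, 0]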

  13. A facile one-pot oxidation-assisted dealloying protocol to massively synthesize monolithic core-shell architectured nanoporous copper@cuprous oxide nanonetworks for photodegradation of methyl orange

    PubMed Central

    Liu, Wenbo; Chen, Long; Dong, Xin; Yan, Jiazhen; Li, Ning; Shi, Sanqiang; Zhang, Shichao

    2016-01-01

    In this report, a facile and effective one-pot oxidation-assisted dealloying protocol has been developed to massively synthesize monolithic core-shell architectured nanoporous copper@cuprous oxide nanonetworks (C-S NPC@Cu2O NNs) by chemical dealloying of melt-spun Al 37 at.% Cu alloy in an oxygen-rich alkaline solution at room temperature, which possesses superior photocatalytic activity towards photodegradation of methyl orange (MO). The experimental results show that the as-prepared nanocomposite exhibits an open, bicontinuous interpenetrating ligament-pore structure with length scales of 20 ± 5 nm, in which the ligaments comprising Cu and Cu2O are typical of core-shell architecture with uniform shell thickness of ca. 3.5 nm. The photodegradation experiments of C-S NPC@Cu2O NNs show their superior photocatalytic activities for the MO degradation under visible light irradiation with degradation rate as high as 6.67 mg min−1 gcat−1, which is a diffusion-controlled kinetic process in essence in light of the good linear correlation between photodegradation ratio and square root of irradiation time. The excellent photocatalytic activity can be ascribed to the synergistic effects between unique core-shell architecture and 3D nanoporous network with high specific surface area and fast mass transfer channel, indicating that the C-S NPC@Cu2O NNs will be a promising candidate for photocatalysts of MO degradation. PMID:27830720

  14. A facile one-pot oxidation-assisted dealloying protocol to massively synthesize monolithic core-shell architectured nanoporous copper@cuprous oxide nanonetworks for photodegradation of methyl orange

    NASA Astrophysics Data System (ADS)

    Liu, Wenbo; Chen, Long; Dong, Xin; Yan, Jiazhen; Li, Ning; Shi, Sanqiang; Zhang, Shichao

    2016-11-01

    In this report, a facile and effective one-pot oxidation-assisted dealloying protocol has been developed to massively synthesize monolithic core-shell architectured nanoporous copper@cuprous oxide nanonetworks (C-S NPC@Cu2O NNs) by chemical dealloying of melt-spun Al 37 at.% Cu alloy in an oxygen-rich alkaline solution at room temperature, which possesses superior photocatalytic activity towards photodegradation of methyl orange (MO). The experimental results show that the as-prepared nanocomposite exhibits an open, bicontinuous interpenetrating ligament-pore structure with length scales of 20 ± 5 nm, in which the ligaments comprising Cu and Cu2O are typical of core-shell architecture with uniform shell thickness of ca. 3.5 nm. The photodegradation experiments of C-S NPC@Cu2O NNs show their superior photocatalytic activities for the MO degradation under visible light irradiation with degradation rate as high as 6.67 mg min-1 gcat-1, which is a diffusion-controlled kinetic process in essence in light of the good linear correlation between photodegradation ratio and square root of irradiation time. The excellent photocatalytic activity can be ascribed to the synergistic effects between unique core-shell architecture and 3D nanoporous network with high specific surface area and fast mass transfer channel, indicating that the C-S NPC@Cu2O NNs will be a promising candidate for photocatalysts of MO degradation.
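
    The diffusion-controlled claim rests on the reported linear relation between the photodegradation ratio and the square root of irradiation time. A minimal sketch of that check is shown below; the time points and ratios are hypothetical placeholders, not data from the paper.

        import numpy as np

        # Hypothetical irradiation times (min) and MO degradation ratios -- placeholders only.
        t = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
        ratio = np.array([0.18, 0.26, 0.37, 0.52, 0.63])

        sqrt_t = np.sqrt(t)
        slope, intercept = np.polyfit(sqrt_t, ratio, 1)
        r = np.corrcoef(sqrt_t, ratio)[0, 1]

        # A correlation close to 1 supports diffusion-controlled (parabolic) kinetics.
        print(f"ratio ~= {slope:.3f} * sqrt(t) + {intercept:.3f}, Pearson r = {r:.3f}")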

  15. Hierarchical chestnut-like MnCo2O4 nanoneedles grown on nickel foam as binder-free electrode for high energy density asymmetric supercapacitors

    NASA Astrophysics Data System (ADS)

    Hui, Kwun Nam; Hui, Kwan San; Tang, Zikang; Jadhav, V. V.; Xia, Qi Xun

    2016-10-01

    Hierarchical chestnut-like manganese cobalt oxide (MnCo2O4) nanoneedles (NNs) are successfully grown on nickel foam using a facile and cost-effective hydrothermal method. A high-resolution TEM image further verifies that the chestnut-like MnCo2O4 structure is assembled from numerous 1D MnCo2O4 nanoneedles, which are formed by numerous interconnected MnCo2O4 nanoparticles with a grain diameter of ∼10 nm. The MnCo2O4 electrode exhibits a high specific capacitance of 1535 F g-1 at 1 A g-1 and good rate capability (950 F g-1 at 10 A g-1) in a 6 M KOH electrolyte. An asymmetric supercapacitor is fabricated using MnCo2O4 NNs on Ni foam (MnCo2O4 NNs/NF) as the positive electrode and graphene/NF as the negative electrode. The device shows an operation voltage of 1.5 V and delivers a high energy density of ∼60.4 Wh kg-1 at a power density of ∼375 W kg-1. Moreover, the device exhibits an excellent cycling stability of 94.3% capacitance retention after 12000 cycles at 30 A g-1. This work demonstrates that hierarchical chestnut-like MnCo2O4 NNs could be a promising electrode for high-performance energy storage devices.
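
    The quoted energy and power densities can be related through the textbook supercapacitor expressions E = ½ C_cell V² and t = E/P. The sketch below back-calculates the implied cell capacitance and discharge time from the numbers in the abstract; it is a rough consistency check under those standard relations, not a reproduction of the authors' measurements.

        # Consistency check using E = 0.5 * C * V^2 (J) and 1 Wh = 3600 J.
        V = 1.5          # device operating voltage, V
        E_wh_kg = 60.4   # reported energy density, Wh/kg
        P_w_kg = 375.0   # reported power density, W/kg

        C_f_per_g = E_wh_kg * 3600.0 / (0.5 * V ** 2) / 1000.0  # implied cell capacitance, F/g
        t_discharge_s = E_wh_kg / P_w_kg * 3600.0               # implied full-discharge time, s

        print(f"implied cell capacitance ~ {C_f_per_g:.0f} F/g")
        print(f"implied discharge time at 375 W/kg ~ {t_discharge_s:.0f} s")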

  16. Early-Life Exposure to Non-Nutritive Sweeteners and the Developmental Origins of Childhood Obesity: Global Evidence from Human and Rodent Studies.

    PubMed

    Archibald, Alyssa J; Dolinsky, Vernon W; Azad, Meghan B

    2018-02-10

    Non-nutritive sweeteners (NNS) are increasingly consumed by children and pregnant women around the world, yet their long-term health impact is unclear. Here, we review an emerging body of evidence suggesting that early-life exposure to NNS may adversely affect body composition and cardio-metabolic health. Some observational studies suggest that children consuming NNS are at increased risk for obesity-related outcomes; however, others find no association or provide evidence of confounding. Fewer studies have examined prenatal NNS exposure, with mixed results from different analytical approaches. There is a paucity of RCTs evaluating NNS in children, yielding inconsistent results that can be difficult to interpret due to study design limitations (e.g., choice of comparator, multifaceted interventions). The majority of this research has been conducted in high-income countries. Some rodent studies demonstrate adverse metabolic effects from NNS, but most have used extreme doses that are not relevant to humans, and few have distinguished prenatal from postnatal exposure. Most studies focus on synthetic NNS in beverages, with few examining plant-derived NNS or NNS in foods. Overall, there is limited and inconsistent evidence regarding the impact of early-life NNS exposure on the developmental programming of obesity and cardio-metabolic health. Further research and mechanistic studies are needed to elucidate these effects and inform dietary recommendations for expectant mothers and children worldwide.

  17. Effects of the Non-Nutritive Sweeteners on Glucose Metabolism and Appetite Regulating Hormones: Systematic Review of Observational Prospective Studies and Clinical Trials

    PubMed Central

    Romo-Romo, Alonso; Aguilar-Salinas, Carlos A.; Brito-Córdova, Griselda X.; Gómez Díaz, Rita A.; Vilchis Valentín, David

    2016-01-01

    Background: The effects of non-nutritive sweeteners (NNS) on glucose metabolism and appetite regulating hormones are not clear. There is an ongoing debate concerning NNS use and deleterious changes in metabolism. Objectives: The aim of this review is to analyze the available scientific evidence regarding the effects of NNS on glucose metabolism and appetite regulating hormones. Data Sources and Study Eligibility Criteria: We identified human observational studies evaluating the relation between NNS consumption and obesity, diabetes, and metabolic syndrome, in addition to clinical trials evaluating the effects of NNS on glucose metabolism and appetite regulating hormones. Results: Fourteen observational studies evaluating the association between NNS consumption and the development of metabolic diseases and twenty-eight clinical trials studying the effects of NNS on metabolism were included. Finally, two meta-analyses evaluating the association between the consumption of NNS-containing beverages and the development of type 2 diabetes were identified. Conclusions: Some observational studies suggest an association between NNS consumption and the development of metabolic diseases; however, adiposity is a confounder frequently found in observational studies. The effects of NNS on glucose metabolism are not clear. The results of the identified clinical trials are contradictory and are not comparable because of the major differences between them. Studies evaluating specific NNS, with an adequate sample size, a homogeneous study group, identification of significant comorbidities, an appropriate control group, an appropriate exposure time, and adjustment for confounding variables such as adiposity are needed. PMID:27537496

  18. Exploring Intercultural Interactions in Multicultural Contexts: Proposal and Research Suggestions.

    ERIC Educational Resources Information Center

    Yeh, Jung-huel Becky

    A discussion examines the importance of communication between non-native speakers (NNS/NNS), reviews relevant theories and issues in intercultural interactions and NNS/NNS interactions, and explores methodological issues in interpreting linguistic and interactional data. The intent is to explore features of communication between NNSs from…

  19. Nonnutritive Sweeteners in Breast Milk.

    PubMed

    Sylvetsky, Allison C; Gardner, Alexandra L; Bauman, Viviana; Blau, Jenny E; Garraffo, H Martin; Walter, Peter J; Rother, Kristina I

    2015-01-01

    Nonnutritive sweeteners (NNS), including saccharin, sucralose, aspartame, and acesulfame-potassium, are commonly consumed in the general population, and all except for saccharin are considered safe for use during pregnancy and lactation. Sucralose (Splenda) currently holds the majority of the NNS market share and is often combined with acesulfame-potassium in a wide variety of foods and beverages. To date, saccharin is the only NNS reported to be found in human breast milk after maternal consumption, while there is no apparent information on the other NNS. Breast milk samples were collected from 20 lactating volunteers, irrespective of their habitual NNS intake. Saccharin, sucralose, and acesulfame-potassium were present in 65% of participants' milk samples, whereas aspartame was not detected. These data indicate that NNS are frequently ingested by nursing infants, and thus prospective clinical studies are necessary to determine whether early NNS exposure via breast milk may have clinical implications.

  20. NONNUTRITIVE SWEETENERS IN BREAST MILK

    PubMed Central

    Sylvetsky, Allison C.; Gardner, Alexandra L.; Bauman, Viviana; Blau, Jenny E.; Garraffo, H. Martin; Walter, Peter J.; Rother, Kristina I.

    2017-01-01

    Nonnutritive sweeteners (NNS), including saccharin, sucralose, aspartame, and acesulfame-potassium, are commonly consumed in the general population, and all except for saccharin are considered safe for use during pregnancy and lactation. Sucralose (Splenda) currently holds the majority of the NNS market share and is often combined with acesulfame-potassium in a wide variety of foods and beverages. To date, saccharin is the only NNS reported to be found in human breast milk after maternal consumption, while there is no apparent information on the other NNS. Breast milk samples were collected from 20 lactating volunteers, irrespective of their habitual NNS intake. Saccharin, sucralose, and acesulfame-potassium were present in 65% of participants’ milk samples, whereas aspartame was not detected. These data indicate that NNS are frequently ingested by nursing infants, and thus prospective clinical studies are necessary to determine whether early NNS exposure via breast milk may have clinical implications. PMID:26267522

  1. The effects of water and non-nutritive sweetened beverages on weight loss and weight maintenance: A randomized clinical trial.

    PubMed

    Peters, John C; Beck, Jimikaye; Cardel, Michelle; Wyatt, Holly R; Foster, Gary D; Pan, Zhaoxing; Wojtanowski, Alexis C; Vander Veur, Stephanie S; Herring, Sharon J; Brill, Carrie; Hill, James O

    2016-02-01

    To evaluate the effects of water versus beverages sweetened with non-nutritive sweeteners (NNS) on body weight in subjects enrolled in a year-long behavioral weight loss treatment program. The study used a randomized equivalence design with NNS or water beverages as the main factor in a trial among 303 weight-stable people with overweight and obesity. All participants took part in a weight loss program and were assigned to consume 24 ounces (710 ml) of water or NNS beverages daily for 1 year. NNS and water treatments were non-equivalent, with NNS treatment showing greater weight loss at the end of 1 year. At 1 year, subjects receiving water had maintained a 2.45 ± 5.59 kg weight loss, while those receiving NNS beverages maintained a loss of 6.21 ± 7.65 kg (P < 0.001 for the difference). Water and NNS beverages were not equivalent for weight loss and maintenance during a 1-year behavioral treatment program. NNS beverages were superior for weight loss and weight maintenance in a population consisting of regular users of NNS beverages who either maintained or discontinued consumption of these beverages and consumed water during a structured weight loss program. These results suggest that NNS beverages can be an effective tool for weight loss and maintenance within the context of a weight management program. © 2015 The Authors, Obesity published by Wiley Periodicals, Inc. on behalf of The Obesity Society (TOS).
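
    The reported between-group difference can be roughly re-checked from the published means and standard deviations with a two-sample t-test. The sketch below assumes approximately equal arm sizes of about 151 and 152 (the abstract gives only the total of 303), so the exact split is an assumption.

        from scipy import stats

        # Published summary statistics (kg of weight loss maintained at 1 year).
        # Group sizes are assumed to be roughly equal halves of the 303 participants.
        t_stat, p_value = stats.ttest_ind_from_stats(
            mean1=2.45, std1=5.59, nobs1=151,   # water arm
            mean2=6.21, std2=7.65, nobs2=152,   # NNS-beverage arm
            equal_var=False,                    # Welch's t-test
        )
        print(f"t = {t_stat:.2f}, p = {p_value:.1e}")  # consistent with the reported P < 0.001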

  2. Non-Nutritive Sweeteners in the Packaged Food Supply-An Assessment across 4 Countries.

    PubMed

    Dunford, Elizabeth K; Taillie, Lindsey Smith; Miles, Donna R; Eyles, Helen; Tolentino-Mayo, Lizbeth; Ng, Shu Wen

    2018-02-24

    Increased interest among consumers in the reduction of dietary sugar intake has led to the wider availability of food products containing non-nutritive sweeteners (NNS). However, the extent to which NNS are currently used by manufacturers to sweeten processed food and beverage products, and how NNS may be displacing added sugars as a sweetener, is unknown. The current study utilized branded food composition databases from Australia, Mexico, New Zealand and the US to determine the percentage of processed food and beverage products for which there are nutrition data containing NNS and to compare total sugar density (g per 100 mL for beverages and g per 100 g for foods) between products with and without NNS. Ordinary least squares regression at the country-product level was performed to examine associations between the presence of NNS and total sugar. Across all countries, 5% of products contained at least one NNS, with the highest prevalence among beverages (22%). Mexico had the highest percentage of products with NNS (11%), as compared to the United States (US) (4%), New Zealand (1%), and Australia (<1%). The presence of NNS was associated with lower mean total sugar density among beverages (range across countries: 7.5 to 8.7 g per 100 mL) and among foods (23.2 to 25.5 g per 100 g). Products with both added sugar ingredients and NNS had a lower overall mean total sugar density when compared to products containing only added sugar ingredients. Due to the paucity of data on sales and market shares across these countries, our results do not reflect the extent to which consumers purchase NNS-containing products. Continued monitoring of NNS in the food supply, extension of work from these data, and inclusion of market shares of products will be important as more countries introduce policies to reduce sugar.

  3. Non-Nutritive Sweeteners in the Packaged Food Supply—An Assessment across 4 Countries

    PubMed Central

    Taillie, Lindsey Smith; Eyles, Helen

    2018-01-01

    Increased interest among consumers in the reduction of dietary sugar intake has led to the wider availability of food products containing non-nutritive sweeteners (NNS). However, the extent to which NNS are currently used by manufacturers to sweeten processed food and beverage products, and how NNS may be displacing added sugars as a sweetener, is unknown. The current study utilized branded food composition databases from Australia, Mexico, New Zealand and the US to determine the percentage of processed food and beverage products for which there are nutrition data containing NNS and to compare total sugar density (g per 100 mL for beverages and g per 100 g for foods) between products with and without NNS. Ordinary least squares regression at the country-product level was performed to examine associations between the presence of NNS and total sugar. Across all countries, 5% of products contained at least one NNS, with the highest prevalence among beverages (22%). Mexico had the highest percentage of products with NNS (11%), as compared to the United States (US) (4%), New Zealand (1%), and Australia (<1%). The presence of NNS was associated with lower mean total sugar density among beverages (range across countries: 7.5 to 8.7 g per 100 mL) and among foods (23.2 to 25.5 g per 100 g). Products with both added sugar ingredients and NNS had a lower overall mean total sugar density when compared to products containing only added sugar ingredients. Due to the paucity of data on sales and market shares across these countries, our results do not reflect the extent to which consumers purchase NNS-containing products. Continued monitoring of NNS in the food supply, extension of work from these data, and inclusion of market shares of products will be important as more countries introduce policies to reduce sugar. PMID:29495259
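
    The country-product-level analysis is ordinary least squares of total sugar density on an indicator for NNS presence. A minimal sketch of that model form is shown below; the tiny DataFrame is a hypothetical placeholder standing in for the branded food composition databases, and statsmodels' formula API is used only for illustration.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical placeholder rows (g total sugar per 100 g, NNS presence, country).
        df = pd.DataFrame({
            "sugar":   [32.0, 21.5, 28.0, 24.0, 35.5, 22.0, 30.0, 26.5],
            "has_nns": [0, 1, 0, 1, 0, 1, 0, 1],
            "country": ["AU", "AU", "MX", "MX", "NZ", "NZ", "US", "US"],
        })

        # OLS of sugar density on NNS presence, with country fixed effects.
        model = smf.ols("sugar ~ has_nns + C(country)", data=df).fit()
        print(model.params["has_nns"])  # expected to be negative: NNS products carry less total sugar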

  4. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from the traditional von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
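
    The core idea of the deep adaptive network, removing most connections and storing the survivors as binary integers, can be illustrated independently of the hardware. The NumPy sketch below shows magnitude pruning plus sign binarization with one shared scale; it is not the DANoC algorithm itself, and the layer size and pruning threshold are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        W = rng.normal(size=(256, 256)).astype(np.float32)  # dense layer weights (arbitrary size)

        # 1) Adaptively remove the majority of connections (keep the top 1% by magnitude).
        keep = np.abs(W) >= np.quantile(np.abs(W), 0.99)
        # 2) Represent the reserved connections as binary integers (signs) plus one shared scale.
        scale = np.abs(W[keep]).mean()
        W_sparse_binary = np.where(keep, np.sign(W), 0).astype(np.int8)

        dense_bytes = W.nbytes
        sparse_bytes = keep.sum() * (1 + 4)  # rough cost: 1-byte sign + 4-byte index per kept weight
        print(f"kept {keep.mean():.1%} of weights, shared scale {scale:.3f}, "
              f"approx. memory {sparse_bytes / dense_bytes:.1%} of dense")
        # An approximate layer output would then be (x @ W_sparse_binary) * scale.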

  5. [Prevalence of non-nutritive sweeteners consumption in a population of patients with diabetes in Mexico].

    PubMed

    Romo-Romo, Alonso; Almeda-Valdés, Paloma; Brito-Córdova, Griselda X; Gómez-Pérez, Francisco J

    2017-01-01

    To estimate the prevalence of non-nutritive sweeteners (NNS) consumption in a sample of patients with diabetes. We applied two questionnaires, one of food frequency adapted to products containing NNS and the other of beliefs related to NNS. The prevalence of NNS consumption was determined and correlated with the body mass index, energy and sugar consumption, waist circumference, glycated hemoglobin, triglycerides, diabetes type, education and socioeconomic status. The prevalence of NNS consumption was 96%; the consumption was greater in men and in patients with type 1 diabetes. A negative correlation was found between the consumption and age and a positive correlation with glycated hemoglobin and education. The prevalence of NNS consumption is high due to the great availability of products in the market.

  6. Intake of non-nutritive sweeteners is associated with an unhealthy lifestyle: a cross-sectional study in subjects with morbid obesity.

    PubMed

    Winther, Robert; Aasbrenn, Martin; Farup, Per G

    2017-01-01

    Subjects with morbid obesity commonly use Non-Nutritive Sweeteners (NNS), but the health-related effects of NNS have been questioned. The objectives of this study were to explore the associations between the use of NNS and health and lifestyle in subjects with morbid obesity. This cross-sectional study included subjects with morbid obesity (BMI ≥ 40 kg/m2 or ≥ 35 kg/m2 with obesity-related comorbidity). Information about demographics, physical and mental health, and dietary habits was collected, and a blood screen was taken. One unit of NNS was defined as 100 ml of beverages with NNS or 2 tablets/units of NNS for coffee or tea. The associations between the intake of NNS and the health-related variables were analyzed with ordinal regression analyses adjusted for age, gender and BMI. One hundred subjects (women/men 83/17; mean age 44.3 years (SD 8.5)) were included. Median intake of NNS was 3.3 units (range 0-43). Intake of NNS was not associated with BMI (p = 0.64). The intake of NNS was associated with reduced heavy physical activity (p = 0.011), fatigue (p < 0.001), diarrhea (p = 0.009) and reduced well-being (p = 0.046); with increased intake of total energy (p = 0.003), fat (p = 0.013), carbohydrates (p = 0.002), sugar (p = 0.003) and salt (p = 0.001); and with reduced intake of the vitamins A (p = 0.001), C (p = 0.002) and D (p = 0.016). The use of NNS-containing beverages was associated with an unhealthy lifestyle, reduced physical and mental health and unfavourable dietary habits with increased energy intake including sugar, and reduced intake of some vitamins.
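
    The associations reported here come from ordinal regression adjusted for age, gender, and BMI. The sketch below shows that model form using statsmodels' OrderedModel (available from statsmodels 0.12) on hypothetical placeholder data; the variable names, values, and simulated effect are illustrative, not the study data.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(2)
        n = 100
        # Hypothetical placeholder data for illustration only.
        df = pd.DataFrame({
            "nns_units": rng.gamma(2.0, 2.0, n),   # daily NNS units (100 ml beverage or 2 tablets)
            "age":       rng.normal(44, 8, n),
            "female":    rng.integers(0, 2, n),
            "bmi":       rng.normal(42, 4, n),
        })
        # Simulated ordered outcome (e.g., fatigue scored 0-3), loosely tied to NNS intake.
        codes = pd.cut(df["nns_units"] + rng.normal(0, 2, n), bins=4, labels=False)
        df["fatigue"] = pd.Categorical.from_codes(codes, categories=[0, 1, 2, 3], ordered=True)

        model = OrderedModel(df["fatigue"], df[["nns_units", "age", "female", "bmi"]], distr="logit")
        res = model.fit(method="bfgs", disp=False)
        print(res.params["nns_units"])  # positive coefficient: more NNS units, higher fatigue score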

  7. Redox properties of the nitronyl nitroxide antioxidants studied via their reactions with nitroxyl and ferrocyanide.

    PubMed

    Bobko, A A; Khramtsov, V V

    2015-01-01

    Nitronyl nitroxides (NNs) are paramagnetic probes capable of scavenging physiologically relevant reactive oxygen (ROS) and nitrogen (RNS) species, namely superoxide, nitric oxide (NO), and nitroxyl (HNO). NNs are increasingly considered as potent antioxidants and potential therapeutic agents. Understanding the redox chemistry of the NNs is important for their use as antioxidants and as paramagnetic probes for the discriminative detection of NO and HNO by electron paramagnetic resonance (EPR) spectroscopy. Here we investigated the redox properties of the two most commonly used NNs, including determination of the equilibrium and rate constants of their reduction by HNO and ferrocyanide, and the reduction potential of the couple NN/hydroxylamine of nitronyl nitroxide (hNN). The rate constants of the reaction of the NNs with HNO were found to be equal to (1-2) × 10⁴ M⁻¹ s⁻¹, close to the rate constants of scavenging of superoxide and NO by NNs. The reduction potentials of the NNs and iminonitroxides (INs, the product of the NN reaction with NO) were calculated based on their reaction constants with ferrocyanide. The obtained values of the reduction potential for NN/hNN (E'0 ≈ 285 mV) and IN/hIN (E' ≈ 495 mV) are close to the corresponding values for vitamin C and vitamin E, respectively. The "balanced" scavenging rates of the NNs towards superoxide, NO, and HNO, and their low reduction potential, being thermodynamically close to the bottom of the pecking order of oxidizing radicals, might be important factors contributing to their antioxidant activity.
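
    Deriving a reduction potential from a reaction constant with ferrocyanide amounts to applying the Nernst relation for a one-electron exchange, E(NN/hNN) = E(Fe(CN)6 3-/4-) + (RT/F) ln K. The sketch below shows that arithmetic with a hypothetical equilibrium constant and the textbook standard potential for the ferri/ferrocyanide couple; the paper's measured constants and formal potentials will differ.

        import numpy as np

        R, T, F = 8.314, 298.15, 96485.0  # gas constant J/(mol K), temperature K, Faraday constant C/mol
        E_ferri_ferro = 0.36              # V vs NHE, textbook standard value (formal potential varies)
        K_eq = 5e-3                       # hypothetical K for NN + ferrocyanide <=> reduced NN + ferricyanide

        # One-electron exchange: E(NN/hNN) = E(Fe(CN)6 3-/4-) + (RT/F) * ln(K_eq)
        E_nn = E_ferri_ferro + (R * T / F) * np.log(K_eq)
        print(f"E(NN/hNN) ~ {E_nn * 1000:.0f} mV vs NHE")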

  8. Glycemic impact of non-nutritive sweeteners: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Nichol, Alexander D; Holle, Maxwell J; An, Ruopeng

    2018-05-15

    Nonnutritive sweeteners (NNSs) are zero- or low-calorie alternatives to nutritive sweeteners, such as table sugars. A systematic review and meta-analysis of randomized controlled trials was conducted to quantitatively synthesize existing scientific evidence on the glycemic impact of NNSs. PubMed and Web of Science databases were searched. Two authors screened the titles and abstracts of candidate publications. The third author was consulted to resolve discrepancies. Twenty-nine randomized controlled trials, with a total of 741 participants, were included and their quality assessed. NNSs under examination included aspartame, saccharin, steviosides, and sucralose. The review followed the PRISMA guidelines. Meta-analysis was performed to estimate and track the trajectory of blood glucose concentrations over time after NNS consumption, and to test differential effects by type of NNS and participants' age, weight, and disease status. In comparison with the baseline, NNS consumption was not found to increase blood glucose level, and its concentration gradually declined over the course of observation following NNS consumption. The glycemic impact of NNS consumption did not differ by type of NNS but to some extent varied by participants' age, body weight, and diabetic status. NNS consumption was not found to elevate blood glucose level. Future studies are warranted to assess the health implications of frequent and chronic NNS consumption and elucidate the underlying biological mechanisms.
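
    The basic pooling step behind a meta-analysis of this kind is an inverse-variance weighted average of per-trial effects. The sketch below shows a fixed-effect version on hypothetical numbers; the review's actual analysis tracked glucose concentrations over time and tested subgroup differences, which this does not attempt.

        import numpy as np

        # Hypothetical per-trial mean changes in blood glucose after NNS (mmol/L) and standard errors.
        effects = np.array([-0.05, 0.02, -0.10, 0.04, -0.01])
        se = np.array([0.06, 0.05, 0.08, 0.07, 0.04])

        w = 1.0 / se ** 2                           # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)    # fixed-effect pooled estimate
        pooled_se = np.sqrt(1.0 / np.sum(w))
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
        print(f"pooled effect {pooled:.3f} mmol/L (95% CI {lo:.3f} to {hi:.3f})")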

  9. Effects of a nonnutritive sweetener on body adiposity and energy metabolism in mice with diet-induced obesity.

    PubMed

    Mitsutomi, Kimihiko; Masaki, Takayuki; Shimasaki, Takanobu; Gotoh, Koro; Chiba, Seiichi; Kakuma, Tetsuya; Shibata, Hirotaka

    2014-01-01

    Nonnutritive sweeteners (NNSs) have been studied in terms of their potential roles in type 2 diabetes, obesity, and related metabolic disorders. Several studies have suggested that NNSs have several specific effects on metabolism such as reduced postprandial hyperglycemia and insulin resistance. However, the detailed effects of NNSs on body adiposity and energy metabolism have not been fully elucidated. We investigated the effects of an NNS on energy metabolism in mice with diet-induced obesity (DIO). DIO mice were divided into NNS-administered (4% NNS in drinking water), sucrose-administered (33% sucrose in drinking water), and control (normal water) groups. After supplementation for 4 weeks, metabolic parameters, including uncoupling protein (UCP) levels and energy expenditure, were assessed. Sucrose supplementation increased hyperglycemia, body adiposity, and body weight compared to the NNS-administered and control groups (P<0.05 for each). In addition, NNS supplementation decreased hyperglycemia compared to the sucrose-administered group (P<0.05). Interestingly, NNS supplementation increased body adiposity, which was accompanied by hyperinsulinemia, compared to controls (P<0.05 for each). NNS also increased leptin levels in white adipose tissue and triglyceride levels in tissues compared to controls (P<0.05 for each). Notably, compared to controls, NNS supplementation decreased the UCP1 level in brown adipose tissue and decreased O2 consumption in the dark phase. NNSs may be good sugar substitutes for people with hyperglycemia, but appear to influence energy metabolism in DIO mice. © 2013.

  10. Global Synchronization of Multiple Recurrent Neural Networks With Time Delays via Impulsive Interactions.

    PubMed

    Yang, Shaofu; Guo, Zhenyuan; Wang, Jun

    2017-07-01

    In this paper, new results on the global synchronization of multiple recurrent neural networks (NNs) with time delays via impulsive interactions are presented. Impulsive interaction means that a number of NNs communicate with each other at impulse instants only, while they are independent at the remaining time. The communication topology among NNs is not required to be always connected and can switch ON and OFF at different impulse instants. By using the concept of sequential connectivity and the properties of stochastic matrices, a set of sufficient conditions depending on time delays is derived to ascertain global synchronization of multiple continuous-time recurrent NNs. In addition, a counterpart on the global synchronization of multiple discrete-time NNs is also discussed. Finally, two examples are presented to illustrate the results.
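
    The mechanism studied here, independent NN dynamics between impulses and a stochastic-matrix mixing of states at impulse instants, can be illustrated with a small discrete-time simulation. The sketch below is only illustrative: the network dynamics, ring topology, delay, and impulse interval are arbitrary choices, not the delay-dependent conditions derived in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        num_nets, dim, dt, steps = 4, 3, 0.01, 3000
        delay_steps = 20        # discretized time delay
        impulse_every = 100     # networks interact only every 100 steps (impulse instants)

        A = -np.eye(dim)                              # stable self-feedback
        B = 0.3 * rng.standard_normal((dim, dim))     # delayed connection weights (shared by all NNs)

        # Row-stochastic mixing matrix applied only at impulse instants (ring topology).
        P = (0.5 * np.eye(num_nets)
             + 0.25 * np.roll(np.eye(num_nets), 1, axis=0)
             + 0.25 * np.roll(np.eye(num_nets), -1, axis=0))

        hist = rng.standard_normal((num_nets, delay_steps + 1, dim))  # state history per network
        for k in range(steps):
            x, x_delayed = hist[:, -1], hist[:, 0]
            x_new = x + dt * (x @ A.T + np.tanh(x_delayed) @ B.T)     # delayed recurrent dynamics
            if (k + 1) % impulse_every == 0:
                x_new = P @ x_new    # impulsive interaction: convex mixing of the networks' states
            hist = np.concatenate([hist[:, 1:], x_new[:, None, :]], axis=1)

        spread = np.max(np.std(hist[:, -1], axis=0))
        print(f"max cross-network state spread after {steps} steps: {spread:.2e}")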

  11. Nonnutritive sweetener consumption in humans: effects on appetite and food intake and their putative mechanisms

    PubMed Central

    Mattes, Richard D; Popkin, Barry M

    2009-01-01

    Nonnutritive sweeteners (NNS) are ecologically novel chemosensory signaling compounds that influence ingestive processes and behavior. Only about 15% of the US population aged >2 y ingest NNS, but the incidence is increasing. These sweeteners have the potential to moderate sugar and energy intakes while maintaining diet palatability, but their use has increased in concert with BMI in the population. This association may be coincidental or causal, and either mode of directionality is plausible. A critical review of the literature suggests that the addition of NNS to non-energy-yielding products may heighten appetite, but this is not observed under the more common condition in which NNS is ingested in conjunction with other energy sources. Substitution of NNS for a nutritive sweetener generally elicits incomplete energy compensation, but evidence of long-term efficacy for weight management is not available. The addition of NNS to diets poses no benefit for weight loss or reduced weight gain without energy restriction. There are long-standing and recent concerns that inclusion of NNS in the diet promotes energy intake and contributes to obesity. Most of the purported mechanisms by which this occurs are not supported by the available evidence, although some warrant further consideration. Resolution of this important issue will require long-term randomized controlled trials. PMID:19056571

  12. Developing Sociolinguistic Competence through Intercultural Online Exchange

    ERIC Educational Resources Information Center

    Ritchie, Mathy

    2011-01-01

    The main goal of this study was to investigate whether computer-mediated communication (CMC) intercultural exchange offers the conditions necessary for the development of the sociolinguistic competence of second language learners. Non-native speakers (NNS) of French in British Columbia interacted through CMC with native speakers (NS) of French in…

  13. Intraoperative computed tomography with integrated navigation system in a multidisciplinary operating suite.

    PubMed

    Uhl, Eberhard; Zausinger, Stefan; Morhard, Dominik; Heigl, Thomas; Scheder, Benjamin; Rachinger, Walter; Schichor, Christian; Tonn, Jörg-Christian

    2009-05-01

    We report our preliminary experience in a prospective series of patients with regard to feasibility, work flow, and image quality using a multislice computed tomographic (CT) scanner combined with a frameless neuronavigation system (NNS). A sliding gantry 40-slice CT scanner was installed in a preexisting operating room. The scanner was connected to a frameless infrared-based NNS. Image data were transferred directly from the scanner into the navigation system. This allowed updating of the NNS during surgery by automated image registration based on the position of the gantry. Intraoperative CT angiography was possible. The patient was positioned on a radiolucent operating table that fits within the bore of the gantry. During image acquisition, the gantry moved over the patient. This table allowed all positions and movements like any normal operating table without compromising the positioning of the patient. For cranial surgery, a carbon-made radiolucent head clamp was fixed to the table. Experience with the first 230 patients confirms the feasibility of intraoperative CT scanning (136 patients with intracranial pathology, 94 patients with spinal lesions). After a specific work flow, interruption of surgery for intraoperative scanning can be limited to 10 to 15 minutes in cranial surgery and to 9 minutes in spinal surgery. Intraoperative imaging changed the course of surgery in 16 of the 230 cases, either because control CT scans showed a suboptimal screw position (17 of 307 screws, with 9 in 7 patients requiring correction) or because tumor resection was insufficient (9 cases). Intraoperative CT angiography has been performed in 7 cases so far with good image quality to determine residual flow in an aneurysm. Image quality was excellent in spinal and cranial base surgery. The system can be installed in a preexisting operating environment without the need for special surgical instruments. It increases the safety of the patient and the surgeon without necessitating a change in the existing surgical protocol and work flow. Imaging and updating of the NNS can be performed at any time during surgery with very limited time and modification of the surgical setup. Multidisciplinary use increases utilization of the system and thus improves the cost-efficiency relationship.
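
    Automated registration of this kind boils down to composing a known scanner-to-table rigid transform (derived from the gantry position) with the tracked table-to-navigation pose, so that any point in the image maps into navigation coordinates. The sketch below applies assumed 4x4 homogeneous transforms to a point; the matrix values are placeholders, not the system's calibration.

        import numpy as np

        def rigid_transform(rot_deg_z: float, translation_mm) -> np.ndarray:
            """Build a 4x4 homogeneous transform: rotation about z plus a translation."""
            a = np.deg2rad(rot_deg_z)
            T = np.eye(4)
            T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
            T[:3, 3] = translation_mm
            return T

        # Placeholder calibrations: scanner->table (from gantry position) and table->navigation (tracked).
        scanner_to_table = rigid_transform(0.0, [0.0, 0.0, -350.0])   # gantry travel along the table axis
        table_to_nav = rigid_transform(12.0, [15.0, -4.0, 2.0])
        scanner_to_nav = table_to_nav @ scanner_to_table              # composed registration

        p_scanner = np.array([10.0, 25.0, 400.0, 1.0])                # a point in scanner coordinates (mm)
        p_nav = scanner_to_nav @ p_scanner
        print(np.round(p_nav[:3], 1))                                 # the same point in navigation coordinates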

  14. Secular changes in intakes of foods among New Zealand adults from 1997 to 2008/09.

    PubMed

    Smith, Claire; Gray, Andrew R; Mainvil, Louise A; Fleming, Elizabeth A; Parnell, Winsome R

    2015-12-01

    To examine changes in the food choices of New Zealand (NZ) adults between the 1997 National Nutrition Survey (NNS97) and the 2008/09 NZ Adult Nutrition Survey (2008/09 NZANS). The 2008/09 NZANS and the NNS97 were cross-sectional surveys of NZ adults (aged 15 years and over). Dietary intake data were collected using a computer-based 24 h diet recall. Logistic regression models were used to examine changes over time in the percentage reporting each food group, with survey year, sex and age group (19-30 years, 31-50 years, 51-70 years, ≥71 years) as the variables. NZ households. Adults aged 19 years and over (NNS97, n = 4339; 2008/09 NZANS, n = 3995). In the 2008/09 NZANS compared with NNS97, males and females were less likely to report consuming bread, potatoes, beef, vegetables, breakfast cereal, milk, cheese, butter, pies, biscuits, cakes and puddings, and sugar/confectionery (all P<0.001). In contrast, there was an increase in the percentage reporting rice and rice dishes (P<0.001), and among females a reported increase in snacks and snack bars (e.g., crisps, extruded snacks, muesli bars; P=0.007) and pasta and pasta dishes (P=0.017). Although food choices were associated with sex and age group, there were few differential changes between the surveys by sex or age group. For all age groups there was a shift in the percentage who reported consuming the traditional NZ foods, namely bread, beef, potatoes and vegetables, towards more rice and rice dishes. Declines in the consumption of butter, pies, biscuits, cakes and puddings are congruent with current dietary guidelines.
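
    The analysis is logistic regression of whether a food group was reported on survey year, sex, and age group. The sketch below shows that model form with statsmodels on simulated placeholder respondents; the effect built into the simulation is illustrative only, not the survey data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 500
        # Simulated placeholder respondents pooled across the two surveys.
        df = pd.DataFrame({
            "year2008": rng.integers(0, 2, n),    # 0 = NNS97, 1 = 2008/09 NZANS
            "female":   rng.integers(0, 2, n),
            "age_grp":  rng.choice(["19-30", "31-50", "51-70", "71+"], n),
        })
        # Build in lower odds of reporting bread in the later survey (illustrative effect only).
        logit_true = 0.8 - 0.5 * df["year2008"]
        df["reported_bread"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(int)

        model = smf.logit("reported_bread ~ year2008 + female + C(age_grp)", data=df).fit(disp=0)
        print(np.exp(model.params["year2008"]))   # odds ratio of reporting bread in 2008/09 vs 1997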

  15. Draft versus finished sequence data for DNA and protein diagnostic signature development

    PubMed Central

    Gardner, Shea N.; Lam, Marisa W.; Smith, Jason R.; Torres, Clinton L.; Slezak, Tom R.

    2005-01-01

    Sequencing pathogen genomes is costly, demanding careful allocation of limited sequencing resources. We built a computational Sequencing Analysis Pipeline (SAP) to guide decisions regarding the amount of genomic sequencing necessary to develop high-quality diagnostic DNA and protein signatures. SAP uses simulations to estimate the number of target genomes and close phylogenetic relatives (near neighbors or NNs) to sequence. We use SAP to assess whether draft data are sufficient or finished sequencing is required using Marburg and variola virus sequences. Simulations indicate that intermediate to high-quality draft with error rates of 10⁻³–10⁻⁵ (∼8× coverage) of target organisms is suitable for DNA signature prediction. Low-quality draft with error rates of ∼1% (3× to 6× coverage) of target isolates is inadequate for DNA signature prediction, although low-quality draft of NNs is sufficient, as long as the target genomes are of high quality. For protein signature prediction, sequencing errors in target genomes substantially reduce the detection of amino acid sequence conservation, even if the draft is of high quality. In summary, high-quality draft of target and low-quality draft of NNs appears to be a cost-effective investment for DNA signature prediction, but may lead to underestimation of predicted protein signatures. PMID:16243783
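
    The effect of draft quality on exact DNA signatures can be illustrated by simulating per-base substitution errors and asking whether a signature-length subsequence survives intact. The sketch below is a toy version of that idea, not the SAP pipeline; the genome length, signature length, error rates, and trial count are illustrative choices.

        import random

        random.seed(0)
        BASES = "ACGT"

        def add_errors(seq: str, error_rate: float) -> str:
            """Introduce random substitution errors at the given per-base rate."""
            return "".join(
                random.choice(BASES.replace(b, "")) if random.random() < error_rate else b
                for b in seq
            )

        genome = "".join(random.choice(BASES) for _ in range(5000))  # toy target genome
        signature = genome[1000:1040]                                # a 40-base "signature" region

        for error_rate in (1e-5, 1e-3, 1e-2):   # ~finished, intermediate draft, low-quality draft
            trials = 200
            hits = sum(signature in add_errors(genome, error_rate) for _ in range(trials))
            print(f"error rate {error_rate:g}: signature intact in {hits / trials:.0%} of simulated drafts")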

  16. WNN 92; Proceedings of the 3rd Workshop on Neural Networks: Academic/Industrial/NASA/Defense, Auburn Univ., AL, Feb. 10-12, 1992 and South Shore Harbour, TX, Nov. 4-6, 1992

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L. (Editor)

    1993-01-01

    The present conference discusses such neural networks (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle Arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.

  17. Pacifier Stiffness Alters the Dynamics of the Suck Central Pattern Generator.

    PubMed

    Zimmerman, Emily; Barlow, Steven M

    2008-06-01

    Variation in pacifier stiffness on non-nutritive suck (NNS) dynamics was examined among infants born prematurely with a history of respiratory distress syndrome. Three types of silicone pacifiers used in the NICU were tested for stiffness, revealing the Super Soothie™ nipple is 7 times stiffer than the Wee™ or Soothie™ pacifiers even though shape and displaced volume are identical. Suck dynamics among 20 preterm infants were subsequently sampled using the Soothie™ and Super Soothie™ pacifiers during follow-up at approximately 3 months of age. ANOVA revealed significant differences in NNS cycles/min, NNS amplitude, NNS cycles/burst, and NNS cycle periods as a function of pacifier stiffness. Infants modify the spatiotemporal output of their suck central pattern generator when presented with pacifiers with significantly different mechanical properties. Infants show a non-preference to suck due to high stiffness in the selected pacifier. Therefore, excessive pacifier stiffness may decrease ororhythmic patterning and impact feeding outcomes.

  18. Pacifier Stiffness Alters the Dynamics of the Suck Central Pattern Generator

    PubMed Central

    Zimmerman, Emily; Barlow, Steven M.

    2008-01-01

    Variation in pacifier stiffness on non-nutritive suck (NNS) dynamics was examined among infants born prematurely with a history of respiratory distress syndrome. Three types of silicone pacifiers used in the NICU were tested for stiffness, revealing the Super Soothie™ nipple is 7 times stiffer than the Wee™ or Soothie™ pacifiers even though shape and displaced volume are identical. Suck dynamics among 20 preterm infants were subsequently sampled using the Soothie™ and Super Soothie™ pacifiers during follow-up at approximately 3 months of age. ANOVA revealed significant differences in NNS cycles/min, NNS amplitude, NNS cycles/burst, and NNS cycle periods as a function of pacifier stiffness. Infants modify the spatiotemporal output of their suck central pattern generator when presented with pacifiers with significantly different mechanical properties. Infants show a non-preference to suck due to high stiffness in the selected pacifier. Therefore, excessive pacifier stiffness may decrease ororhythmic patterning and impact feeding outcomes. PMID:19492006

  19. Planning Tool for Strategic Evaluation of Facility Plans - 13570

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magoulas, Virginia; Cercy, Michael; Hall, Irin

    2013-07-01

    Savannah River National Laboratory (SRNL) has developed a strategic planning tool for the evaluation of the utilization of its unique resources for processing and research and development of nuclear materials. The Planning Tool is a strategic-level tool for assessing multiple missions that could be conducted utilizing the SRNL facilities and showcasing the plan. Traditional approaches using standard scheduling tools and laying out a strategy on paper tended to be labor intensive and offered either a limited or cluttered view for visualizing and communicating results. A tool that can assess the process throughput, duration, and utilization of the facility was needed. SRNL teamed with Newport News Shipbuilding (NNS), a division of Huntington Ingalls Industries, to create the next-generation Planning Tool. The goal of this collaboration was to create a simulation-based tool that allows for quick evaluation of strategies with respect to new or changing missions, and clearly communicates results to the decision makers. This tool has been built upon mature modeling and simulation software previously developed by NNS. The Planning Tool provides a forum for capturing dependencies, constraints, activity flows, and variable factors. It is also a platform for quickly evaluating multiple mission scenarios, dynamically adding/updating scenarios, generating multiple views for evaluating/communicating results, and understanding where there are areas of risks and opportunities with respect to capacity. The Planning Tool that has been developed is useful in that it presents a clear visual plan for the missions at the Savannah River Site (SRS). It not only assists in communicating the plans to SRS corporate management, but also allows the area stakeholders a visual look at the future plans for SRS. The design of this tool makes it easily deployable to other facility and mission planning endeavors. (authors)

  20. Frequency Modulation and Spatiotemporal Stability of the sCPG in Preterm Infants with RDS

    PubMed Central

    Barlow, Steven M.; Burch, Mimi; Venkatesan, Lalit; Harold, Meredith; Zimmerman, Emily

    2012-01-01

    The nonnutritive suck (NNS) is an observable and accessible motor behavior which is often used to make inference about brain development and pre-feeding skill in preterm and term infants. The purpose of this study was to model NNS burst compression pressure dynamics in the frequency and time domain among two groups of preterm infants: those with respiratory distress syndrome (RDS, N = 15) and 17 healthy controls. Digitized samples of NNS compression pressure waveforms recorded at a 1-week interval were collected 15 minutes prior to a scheduled feed. Regression analysis and ANOVA revealed that healthy preterm infants produced longer NNS bursts and the mean burst initiation cycle frequencies were higher when compared to the RDS group. Moreover, the initial 5 cycles of the NNS burst manifest a frequency-modulated (FM) segment, a significant feature of the suck central pattern generator (sCPG) that is differentially expressed in healthy and RDS infants. The NNS burst structure revealed significantly lower spatiotemporal index values for control versus RDS preterm infants during FM, and provides additional information on the microstructure of the sCPG which may be used to gauge the developmental status and progression of oromotor control systems among these fragile infants. PMID:22888359
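
    The spatiotemporal index referred to above is, in the motor control literature this work draws on, typically obtained by amplitude- and time-normalizing each pressure waveform and then summing the across-trial standard deviations at fixed points of normalized time. The sketch below illustrates that general computation on synthetic waveforms; the function name and toy data are assumptions for illustration only, not the authors' code or recordings.

    ```python
    import numpy as np

    def spatiotemporal_index(trials, n_points=1000):
        """Amplitude- and time-normalize each waveform, then sum the across-trial
        standard deviations at each normalized time point (lower = more stable)."""
        normalized = []
        for w in trials:
            w = np.asarray(w, dtype=float)
            w = (w - w.mean()) / w.std()                     # amplitude normalization
            t_old = np.linspace(0.0, 1.0, len(w))
            t_new = np.linspace(0.0, 1.0, n_points)
            normalized.append(np.interp(t_new, t_old, w))    # time normalization
        stack = np.vstack(normalized)                        # trials x n_points
        return np.sum(np.std(stack, axis=0))

    # Toy example: three noisy NNS-burst-like pressure waveforms
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 500)
    bursts = [np.sin(2 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(3)]
    print(round(spatiotemporal_index(bursts), 1))
    ```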

  1. Frequency-Modulated Orocutaneous Stimulation Promotes Non-nutritive Suck Development in Preterm Infants with Respiratory Distress Syndrome or Chronic Lung Disease

    PubMed Central

    Barlow, Steven M; Lee, Jaehoon; Wang, Jingyan; Oder, Austin; Hall, Sue; Knox, Kendi; Weatherstone, Kathleen; Thompson, Diane

    2013-01-01

    Background For the premature infant, extrauterine life is a pathological condition which greatly amplifies the challenges to the brain in establishing functional oromotor behaviors. The extent to which suck can be entrained using a synthetically patterned orocutaneous input to promote its development in preterm infants who manifest chronic lung disease is unknown. Objective To evaluate the effects of a frequency-modulated orocutaneous pulse train delivered through a pneumatically-charged pacifier capable of enhancing non-nutritive suck (NNS) activity in tube-fed premature infants. Methods A randomized trial to evaluate the efficacy of pneumatic orocutaneous stimulation 3x/day on NNS development and length of stay (LOS) in the NICU among 160 newborn infants distributed among 3 subpopulations, including healthy preterm infants (HI), respiratory distress syndrome (RDS), and chronic lung disease (CLD). Study infants received a regimen of orocutaneous pulse trains through a PULSED pressurized silicone pacifier or a SHAM control (blind pacifier) during gavage feeds for up to 10 days. Results Mixed modeling, adjusted for the infant’s gender, gestational age, postmenstrual age, and birth weight, was used to handle interdependency among repeated measures within subjects. A significant main effect for stimulation mode (SHAM pacifier vs PULSED orosensory) was found among preterm infants for NNS Bursts/minute (p=.003), NNS events/minute (p=.033), and for Total Oral Compressions/minute [NNS+nonNNS] (p=.016). Pairwise comparison of adjusted means using Bonferroni adjustment indicated RDS and CLD infants showed the most significant gains on these NNS performance indices. CLD infants in the treatment group showed significantly shorter LOS by an average of 2.5 days. Conclusion Frequency-modulated PULSED orocutaneous pulse train stimuli delivered through a silicone pacifier are effective in facilitating NNS burst development in tube-fed RDS and CLD preterm infants, with an added benefit of reduced LOS for CLD infants. PMID:24310444

  2. Frequency-modulated orocutaneous stimulation promotes non-nutritive suck development in preterm infants with respiratory distress syndrome or chronic lung disease.

    PubMed

    Barlow, S M; Lee, J; Wang, J; Oder, A; Hall, S; Knox, K; Weatherstone, K; Thompson, D

    2014-02-01

    For the premature infant, extrauterine life is a pathological condition that greatly amplifies the challenges to the brain in establishing functional oromotor behaviors. The extent to which suck can be entrained using a synthetically patterned orocutaneous input to promote its development in preterm infants who manifest chronic lung disease (CLD) is unknown. The objective of this study was to evaluate the effects of a frequency-modulated (FM) orocutaneous pulse train delivered through a pneumatically charged pacifier capable of enhancing non-nutritive suck (NNS) activity in tube-fed premature infants. A randomized trial was conducted to evaluate the efficacy of pneumatic orocutaneous stimulation 3 × per day on NNS development and length of stay (LOS) in the neonatal intensive care unit among 160 newborn infants distributed among three sub-populations: healthy preterm infants, respiratory distress syndrome (RDS) and CLD. Study infants received a regimen of orocutaneous pulse trains through a PULSED pressurized silicone pacifier or a SHAM control (blind pacifier) during gavage feeds for up to 10 days. Mixed modeling, adjusted for the infant's gender, gestational age, postmenstrual age and birth weight, was used to handle interdependency among repeated measures within subjects. A significant main effect for stimulation mode (SHAM pacifier vs PULSED orosensory) was found among preterm infants for NNS bursts per min (P=0.003), NNS events per min (P=0.033) and for total oral compressions per min (NNS+nonNNS) (P=0.016). Pairwise comparison of adjusted means using Bonferroni adjustment indicated RDS and CLD infants showed the most significant gains on these NNS performance indices. CLD infants in the treatment group showed significantly shorter LOS by an average of 2.5 days. FM PULSED orocutaneous pulse train stimuli delivered through a silicone pacifier are effective in facilitating NNS burst development in tube-fed RDS and CLD preterm infants, with an added benefit of reduced LOS for CLD infants.

  3. The Impact of Caloric and Non-Caloric Sweeteners on Food Intake and Brain Responses to Food: A Randomized Crossover Controlled Trial in Healthy Humans.

    PubMed

    Crézé, Camille; Candal, Laura; Cros, Jérémy; Knebel, Jean-François; Seyssel, Kevin; Stefanoni, Nathalie; Schneiter, Philippe; Murray, Micah M; Tappy, Luc; Toepel, Ulrike

    2018-05-15

    Whether non-nutritive sweetener (NNS) consumption impacts food intake behavior in humans is still unclear. Discrepant sensory and metabolic signals are proposed to mislead brain regulatory centers, in turn promoting maladaptive food choices favoring weight gain. We aimed to assess whether ingestion of sucrose- and NNS-sweetened drinks would differently alter brain responses to food viewing and food intake. Eighteen normal-weight men were studied in a fasted condition and after consumption of a standardized meal accompanied by either a NNS-sweetened (NNS), or a sucrose-sweetened (SUC) drink, or water (WAT). Their brain responses to visual food cues were assessed by means of electroencephalography (EEG) before and 45 min after meal ingestion. Four hours after meal ingestion, spontaneous food intake was monitored during an ad libitum buffet. With WAT, meal intake led to increased neural activity in the dorsal prefrontal cortex and the insula, areas linked to cognitive control and interoception. With SUC, neural activity in the insula increased as well, but decreased in temporal regions linked to food categorization, and remained unchanged in dorsal prefrontal areas. The latter modulations were associated with a significantly lower total energy intake at buffet (mean kcal ± SEM; 791 ± 62) as compared to WAT (942 ± 71) and NNS (917 ± 70). In contrast to WAT and SUC, NNS consumption did not impact activity in the insula, but led to increased neural activity in ventrolateral prefrontal regions linked to the inhibition of reward. Total energy intake at the buffet was not significantly different between WAT and NNS. Our findings highlight the differential impact of caloric and non-caloric sweeteners on subsequent brain responses to visual food cues and energy intake. These variations may reflect an initial stage of adaptation to taste-calorie uncoupling, and could be indicative of longer-term consequences of repeated NNS consumption on food intake behavior.

  4. Effects of one-night sleep deprivation on selective attention and isometric force in adolescent karate athletes.

    PubMed

    Ben Cheikh, Ridha; Latiri, Imed; Dogui, Mohamed; Ben Saad, Helmi

    2017-06-01

    Most of the available literature related to aspects of sleep deprivation is primarily focused on memory and learning, and studies regarding its effects on selective attention and/or physical performance are scarce. Moreover, the available literature covers the general population or people involved in team sports (e.g. volleyball), and only a few studies have been performed on athletes involved in combat sports (e.g. karate). The aim of the present study was to determine the effects of a total one-night sleep deprivation (1NSD) on activation and inhibition processes of selective attention and on maximal isometric force in karate athletes. Twelve young karate athletes (mean age 16.9±0.8 years) were enrolled. The protocol consisted of two successive sessions: a normal night's sleep (NNS) and a total 1NSD. After each night, athletes performed selective attention and muscle strength tests at the same three times (T) of day: T1 (8-9 a.m.), T2 (12 a.m.-1 p.m.) and T3 (4-5 p.m.), denoted T1-NNS to T3-NNS and T1-1NSD to T3-1NSD according to the session. Activation (simple [SRT] and choice reaction times [CRT]) and inhibition (negative priming) processes were evaluated using Superlab v. 4.5 software (Cedrus Corporation, San Pedro, CA, USA). Maximal force and maximal force time (MFT) of brachial biceps isometric contraction were evaluated (Ergo System®, Globus, Codognè, Italy). A non-parametric test was used to evaluate the session (NNS vs. 1NSD at the same time of day) and time effects. All athletes completed all tests after the NNS. Twelve, eleven and four athletes completed all tests at T1-1NSD, T2-1NSD and T3-1NSD, respectively. As for session effects, no statistically significant difference was found. As for time effects, a significant increase in SRT at T2-1NSD vs. T1-NNS (345±47 vs. 317±33 ms, respectively) and a significant increase in MFT at T2-1NSD vs. T1-NNS (2172±260 vs. 1885±292 ms, respectively) were observed, with no significant changes in CRT, negative priming reaction time or maximal force data. 1NSD affects both activation processes of selective attention and maximal isometric strength, two key skills in combat sports.

  5. Intraoperative computed tomography with an integrated navigation system in stabilization surgery for complex craniovertebral junction malformation.

    PubMed

    Yu, Xinguang; Li, Lianfeng; Wang, Peng; Yin, Yiheng; Bu, Bo; Zhou, Dingbiao

    2014-07-01

    This study was designed to report our preliminary experience with stabilization procedures for complex craniovertebral junction malformation (CVJM) using intraoperative computed tomography (iCT) with an integrated neuronavigation system (NNS). To evaluate the workflow, feasibility and clinical outcome of stabilization procedures using iCT image-guided navigation for complex CVJM. The stabilization procedures in CVJM are complex because of the area's intricate geometry and bony structures, its critical relationship to neurovascular structures and the intricate biomechanical issues involved. A sliding gantry 40-slice computed tomography scanner was installed in a preexisting operating room. The images were transferred directly from the scanner to the NNS using an automated registration system. On the basis of the analysis of intraoperative computed tomographic images, 23 cases (11 males, 12 females) with complicated CVJM underwent navigated stabilization procedures to allow more control over screw placement. The ages of these patients ranged from 19 to 52 years (mean: 33.5 y). We performed C1-C2 transarticular screw fixation in 6 patients to produce atlantoaxial arthrodesis with better reliability. Because of a high-riding transverse foramen on at least 1 side of the C2 vertebra and an anomalous vertebral artery position, 7 patients underwent C1 lateral mass and C2 pedicle screw fixation. Ten additional patients were treated with individualized occipitocervical fixation surgery because of hypoplasia of C1 or constraints due to C2 bone structure. In total, 108 screws were inserted into 23 patients using navigational assistance. The screws comprised 20 C1 lateral mass screws; 26 C2, 14 C3, and 4 C4 pedicle screws; 32 occipital screws; and 12 C1-C2 transarticular screws. There were no vascular or neural complications except for pedicle perforations, detected in 2 patients (2 of 108 screws, 1.9%), which were corrected intraoperatively without any persistent nerve or vessel damage. The overall accuracy of the image guidance system was 98.1%. The duration of interruption during the surgical process for the iCT was 8±1.5 minutes. All patients were clinically evaluated using Nurick grade criteria and for neurological deficits 3 months after surgery. Twenty-one patients (91.3%) improved by at least 1 Nurick grade, whereas the grade remained unchanged in 2 (8.7%) patients. Craniovertebral stability and solid bone fusion were achieved in all patients. NNS was found to correlate well with the intraoperative findings, and the recalibration was uneventful in all cases and had an accuracy of 1.8 mm (range, 0.6-2.2 mm). iCT scanning with integrated NNS was found to be both feasible and beneficial in the stabilization procedures for complex CVJM. In this unusual patient population, the technique seemed to be of value for negotiating complex anatomy and for achieving more control over screw placement.

  6. [Consumption of carbonated beverages with nonnutritive sweeteners in Latin American university students].

    PubMed

    Durán Agüero, Samuel; Record Cornwall, Jiniva; Encina Vega, Claudia; Salazar de Ariza, Julieta; Cordón Arrivillaga, Karla; Cereceda Bujaico, María del Pilar; Antezana Alzamora, Sonia; Espinoza Bernardo, Sissy

    2014-09-12

    Consumption of carbonated beverages with nonnutritive sweeteners (NNS) is increasingly common as a means of maintaining a healthy weight, but the effect of NNS on body weight is controversial. University students (n=1,229) of both sexes aged 18 to 26 participated, of whom 472 were from Chile, 300 from Panama, 253 from Guatemala and 204 from Peru. Each student completed a weekly food-frequency survey, supported by photographs of beverages with NNS from each country, to determine intake, and also underwent anthropometric measurements. Eighty percent of these students consumed carbonated beverages with NNS; none of them exceeded the acceptable daily intake for sucralose, acesulfame potassium or aspartame. Higher consumption in both men and women was observed in Chilean students (p. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  7. The role of healed N-vacancy defective BC2N sheet and nanotube by NO molecule in oxidation of NO and CO gas molecules

    NASA Astrophysics Data System (ADS)

    Nematollahi, Parisa; Esrafili, Mehdi D.; Neyts, Erik C.

    2018-06-01

    In this study, the healing of N-vacancy defective boron carbonitride nanosheets (NV-BC2NNS) and nanotubes (NV-BC2NNT) by NO molecules is studied by means of density functional theory calculations. Two different N-vacancies are considered in each of these structures, in which the vacancy site is surrounded by either three B atoms (NB) or by two B atoms and one C atom (NBC). Using the healed BC2NNS and BC2NNT as supports, two toxic gas molecules (NO and CO) can then be removed. Notably, the calculated energy barriers of both the healing and oxidation processes are significantly lower than those of graphene, carbon nanotubes or boron nitride nanostructures. Also, at the end of the oxidation process, pristine BC2NNS or BC2NNT is obtained without any additional defects, so this method can considerably purify the defective BC2NNS/BC2NNT. Moreover, thermochemistry calculations further confirm that the healing of NV-BC2NNS and NV-BC2NNT by NO is feasible at room temperature. This study could therefore be helpful both for purifying defective BC2NNS/BC2NNT and for removing toxic NO and CO gases.

  8. Developing Collaborative Autonomous Learning Abilities in Computer Mediated Language Learning: Attention to Meaning among Students in Wiki Space

    ERIC Educational Resources Information Center

    Kessler, Greg; Bikowski, Dawn

    2010-01-01

    This study reports on attention to meaning among 40 NNS pre-service EFL teachers as they collaboratively constructed a wiki in a 16-week online course. Focus is placed upon the nature of individual and group behavior when attending to meaning in a long-term wiki-based collaborative activity as well as the students' collaborative autonomous…

  9. Managing Discourse in Intercultural Business Email Interactions: A Case Study of a British and Italian Business Transaction

    ERIC Educational Resources Information Center

    Incelli, Ersilia

    2013-01-01

    This paper investigates native speaker (NS) and non-native speaker (NNS) interaction in the workplace in computer-mediated communication (CMC). Based on empirical data from a 10-month email exchange between a medium-sized British company and a small-sized Italian company, the general aim of this study is to explore the nature of the intercultural…

  10. Non-nutritive sweeteners: evidence for benefit vs. risk.

    PubMed

    Gardner, Christopher

    2014-02-01

    Intake of added sugars in the American diet is high and has been linked to weight gain and adverse effects on glycemic control and diabetes. Several national health organizations recommend decreasing added sugars intake. Among the many strategies to consider to achieve this reduction is substitution with non-nutritive sweeteners (NNS - artificial sweeteners and stevia). The purpose of this review is to critically examine existing evidence for this strategy. Short-term intervention studies suggest that NNS, when substituted for added sugars, may be useful in supporting energy intake reduction, and promoting glycemic control and weight management. However, the magnitude of effect in these studies has ranged from modest to null. Compensatory eating behaviors likely diminish, and in some cases negate, potential effects. Findings from longer-term observational studies that examine associations between NNS use and obesity or type 2 diabetes are potentially confounded by reverse causality. Existing data are insufficient to clearly support or refute the effectiveness of substitution with NNS as a means of reducing added sugar intake. It is important to not lose sight of the impact of incorporating NNS-containing beverages and foods on overall diet quality when assessing potential health benefits vs. risks.

  11. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string, and a multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
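
    As a rough illustration of the floating-point encoding and multi-cut-point crossover described in this abstract, the sketch below evolves the weights of a tiny NN with a simple GA on synthetic data. The chromosome layout, fitness function and hyperparameters are illustrative assumptions, not the EFNN implementation reported in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: fit a tiny 1-hidden-layer network whose weights are encoded
    # directly as a floating-point chromosome.
    X = rng.uniform(-1, 1, size=(40, 3))
    y = np.tanh(X @ np.array([0.8, -0.5, 0.3])) + 0.05 * rng.standard_normal(40)

    N_HIDDEN = 4
    N_GENES = 3 * N_HIDDEN + N_HIDDEN        # input->hidden plus hidden->output weights

    def decode_and_predict(chrom, X):
        W1 = chrom[:3 * N_HIDDEN].reshape(3, N_HIDDEN)
        w2 = chrom[3 * N_HIDDEN:]
        return np.tanh(X @ W1) @ w2

    def fitness(chrom):
        return np.mean((decode_and_predict(chrom, X) - y) ** 2)   # lower is better

    def multi_cut_crossover(p1, p2, n_cuts=3):
        """Pick n_cuts random cut points and alternate segments between parents."""
        cuts = sorted(rng.choice(np.arange(1, len(p1)), size=n_cuts, replace=False))
        child, take_from_p2, prev = p1.copy(), False, 0
        for c in list(cuts) + [len(p1)]:
            if take_from_p2:
                child[prev:c] = p2[prev:c]
            take_from_p2, prev = not take_from_p2, c
        return child

    pop = rng.normal(0, 1, size=(30, N_GENES))
    for gen in range(100):
        scores = np.array([fitness(c) for c in pop])
        elite = pop[np.argsort(scores)[:10]]                      # keep the best 10
        children = []
        while len(children) < len(pop) - len(elite):
            p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
            child = multi_cut_crossover(p1, p2)
            child += 0.05 * rng.standard_normal(N_GENES)          # mutation
            children.append(child)
        pop = np.vstack([elite, np.array(children)])

    print("best MSE:", round(min(fitness(c) for c in pop), 4))
    ```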

  12. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    PubMed Central

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string, and a multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism. PMID:23864830

  13. Effects of aspartame-, monk fruit-, stevia- and sucrose-sweetened beverages on postprandial glucose, insulin and energy intake.

    PubMed

    Tey, S L; Salleh, N B; Henry, J; Forde, C G

    2017-03-01

    Substituting sweeteners with non-nutritive sweeteners (NNS) may aid in glycaemic control and body weight management. Limited studies have investigated energy compensation, glycaemic and insulinaemic responses to artificial and natural NNS. This study compared the effects of consuming NNS (artificial versus natural) and sucrose (65 g) on energy intake, blood glucose and insulin responses. Thirty healthy male subjects took part in this randomised, crossover study with four treatments: aspartame-, monk fruit-, stevia- and sucrose-sweetened beverages. On each test day, participants were asked to consume a standardised breakfast in the morning, and they were provided with test beverage as a preload in mid-morning and ad libitum lunch was provided an hour after test beverage consumption. Blood glucose and insulin concentrations were measured every 15 min within the first hour of preload consumption and every 30 min for the subsequent 2 h. Participants left the study site 3 h after preload consumption and completed a food diary for the rest of the day. Ad libitum lunch intake was significantly higher for the NNS treatments compared with sucrose (P=0.010). The energy 'saved' from replacing sucrose with NNS was fully compensated for at subsequent meals; hence, no difference in total daily energy intake was found between the treatments (P=0.831). The sucrose-sweetened beverage led to large spikes in blood glucose and insulin responses within the first hour, whereas these responses were higher for all three NNS beverages following the test lunch. Thus, there were no differences in total area under the curve (AUC) for glucose (P=0.960) and insulin (P=0.216) over 3 h between the four test beverages. The consumption of calorie-free beverages sweetened with artificial and natural NNS have minimal influences on total daily energy intake, postprandial glucose and insulin compared with a sucrose-sweetened beverage.
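
    For readers unfamiliar with the outcome measures, the total AUC and incremental AUC (iAUC) for glucose are commonly computed with the trapezoidal rule over a sampling schedule like the one described above. The sketch below shows one such calculation on made-up values; the baseline-clipping convention for iAUC is an assumption, since the abstract does not state the exact rule used.

    ```python
    import numpy as np

    # Illustrative only: glucose sampled every 15 min for the first hour and every
    # 30 min thereafter (mmol/L); the values are invented, not study data.
    t = np.array([0, 15, 30, 45, 60, 90, 120, 150, 180])          # minutes
    glucose = np.array([4.8, 6.9, 7.8, 7.1, 6.2, 5.6, 5.1, 4.9, 4.8])

    total_auc = np.trapz(glucose, t)                               # total AUC (mmol/L*min)

    # Incremental AUC: area above the fasting baseline, ignoring dips below it
    # (one common convention).
    baseline = glucose[0]
    iauc = np.trapz(np.clip(glucose - baseline, 0, None), t)

    print(f"total AUC = {total_auc:.0f} mmol/L*min, iAUC = {iauc:.0f} mmol/L*min")
    ```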

  14. Psychophysical Evaluation of Sweetness Functions Across Multiple Sweeteners

    PubMed Central

    Low, Julia Y.Q.; McBride, Robert L.; Lacy, Kathleen E.

    2017-01-01

    Sweetness is one of the 5 prototypical tastes and is activated by sugars and non-nutritive sweeteners (NNS). The aim of this study was to investigate measures of sweet taste function [detection threshold (DT), recognition threshold (RT), and suprathreshold intensity ratings] across multiple sweeteners. Sixty participants, 18–52 years of age (mean age in years = 26, SD = ±7.8), were recruited to participate in the study. DT and RT were collected for caloric sweeteners (glucose, fructose, sucrose, erythritol) and NNS (sucralose, rebaudioside A). Sweetness intensity for all sweeteners was measured using a general Labeled Magnitude Scale. There were strong correlations between DT and RT of all 4 caloric sweeteners across people (r = 0.62–0.90, P < 0.001), and moderate correlations between DT and RT for both of the NNS (r = 0.39–0.48, P < 0.05); however, weaker correlations were observed between the DT or RT of the caloric sweeteners and NNS (r = 0.26–0.48, P < 0.05). The DT and RT of glucose and fructose were not correlated with the DT or RT of sucralose (P > 0.05). In contrast, there were strong correlations between the sweetness intensity ratings of all sweeteners (r = 0.70–0.96, P < 0.001). This suggests that caloric sweeteners and NNS access at least partially independent mechanisms with respect to DT and RT measures. At the suprathreshold level, however, the strong correlations between caloric sweeteners and NNS across weak, moderate, and strong intensities indicate a common sweet taste mechanism over the perceived intensity range. PMID:27765786

  15. morphogen: Translation into Morphologically Rich Languages with Synthetic Phrases

    DTIC Science & Technology

    2013-10-01

    specific translation phrases. These “synthetic phrases” augment the standard translation grammars and decoding proceeds normally with a standard... [The remainder of this record is residue of a table of morphological synthesis rules, e.g., the Hebrew suffix ים (masculine plural) conditioned on parent=NNS and after=NNS, and the prefix א (first person singular + future) conditioned on child(nsubj)=I and child(aux...]

  16. Invasibility of Mediterranean-Climate Rivers by Non-Native Fish: The Importance of Environmental Drivers and Human Pressures

    PubMed Central

    Ilhéu, Maria; Matono, Paula; Bernardo, João Manuel

    2014-01-01

    Invasive species are regarded as a biological pressure on natural aquatic communities. Understanding the factors promoting successful invasions is of great conceptual and practical importance: from a practical point of view, it should help to prevent future invasions and to mitigate the effects of recent invaders through early detection and prioritization of management measures. This study aims to identify the environmental determinants of fish invasions in Mediterranean-climate rivers and evaluate the relative importance of natural and human drivers. Fish communities were sampled at 182 undisturbed sites and 198 sites disturbed by human activities, belonging to 12 river types defined for continental Portugal within the implementation of the European Union's Water Framework Directive. Pumpkinseed sunfish, Lepomis gibbosus (L.), and mosquitofish, Gambusia holbrooki (Girard), were the most abundant non-native species (NNS) in the southern river types, whereas the Iberian gudgeon, Gobio lozanoi Doadrio and Madeira, was the dominant NNS in the north/centre. Small northern mountain streams showed null or low frequency of occurrence and abundance of NNS, while southern lowland river types with medium and large drainage areas presented the highest values. The occurrence of NNS was significantly lower in undisturbed sites, and the highest density of NNS was associated with high human pressure. Results from variance partitioning showed that natural environmental factors determine the distribution of the most abundant NNS, while the increase in their abundance and success is explained mainly by human-induced disturbance factors. This study stresses the high vulnerability of warm-water lowland river types to non-native fish invasions, which is amplified by human-induced degradation. PMID:25372284

  17. Trend time-series modeling and forecasting with neural networks.

    PubMed

    Qi, Min; Zhang, G Peter

    2008-05-01

    Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
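
    A minimal sketch of the differencing strategy the authors found most robust: the series is first-differenced, an NN is fit to lagged differences, and the forecast is integrated back to the original level. The toy series, lag order and network size are assumptions for illustration, not the paper's experimental setup.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy trending series: linear trend + seasonality + noise (not the GNP data).
    n = 200
    series = 0.5 * np.arange(n) + 10 * np.sin(np.arange(n) / 6) + rng.normal(0, 1, n)

    # Differencing strategy: model first differences with lagged inputs,
    # then undo the differencing to forecast the original level.
    diff = np.diff(series)
    lags = 4
    X = np.column_stack([diff[i:len(diff) - lags + i] for i in range(lags)])
    y = diff[lags:]

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X[:-1], y[:-1])                      # hold out the most recent point

    next_diff = model.predict(diff[-lags:].reshape(1, -1))[0]
    forecast = series[-1] + next_diff              # integrate the predicted difference
    print(f"one-step forecast: {forecast:.2f} (last observed: {series[-1]:.2f})")
    ```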

  18. A recurrent neural network for solving bilevel linear programming problem.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the smallest number of state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NNs is shown to converge approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model show the excellent performance of the proposed recurrent NNs.

  19. Improving Language Production Using Subtitled Similar Task Videos

    ERIC Educational Resources Information Center

    Arslanyilmaz, Abdurrahman; Pedersen, Susan

    2010-01-01

    This study examines the effects of subtitled similar task videos on language production by nonnative speakers (NNSs) in an online task-based language learning (TBLL) environment. Ten NNS-NNS dyads collaboratively completed four communicative tasks, using an online TBLL environment specifically designed for this study and a chat tool in…

  20. Aetiology of Neonatal Septicaemia in Qatif, Saudi Arabia.

    ERIC Educational Resources Information Center

    Elbashier, Ali M.; And Others

    1994-01-01

    Of the 1,797 babies admitted to a hospital in Saudi Arabia over a 3-year period, 8% were documented as having NNS (neonatal septicaemia). Identified several gram-positive bacteria, several gram-negative bacteria, and Candida albicans as etiological agents in the cases of NNS. Determined the antibiotic susceptibility of the bacteria. (BC)

  1. Why Do International Students Avoid Communicating with Americans?

    ERIC Educational Resources Information Center

    Wang, I-Ching; Ahn, Janet N.; Kim, Hyojin J.; Lin-Siegler, Xiaodong

    2017-01-01

    We explore how the communication concerns of non-native English speakers (NNS) and Americans relate to their perceptions of each other and decisions to interact. NNS identified their concerns in communicating with Americans, the perceived causes of their concerns, and the strategies they would employ to address these concerns. Americans noted…

  2. Predicting β-Turns in Protein Using Kernel Logistic Regression

    PubMed Central

    Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, FangXiang; Li, Min

    2013-01-01

    A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average, 25% of amino acids in protein structures are located in β-turns, so it is very important to develop an accurate and efficient method for β-turn prediction. Most of the currently successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully in many classification problems; however, it is rarely used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn predictions in a short evolution time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Qtotal of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method, with the right algorithm, can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields probabilistic outcomes and has a well-defined extension to the multiclass case. PMID:23509793
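
    For orientation, the sketch below approximates kernel logistic regression with an RBF feature map followed by regularized logistic regression, evaluated with MCC as in the abstract. The synthetic features stand in for windowed PSSM and secondary-structure inputs; none of this reproduces the authors' sparse KLR algorithm or the BT426 data.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for flattened residue-window features (not BT426 data).
    X, y = make_classification(n_samples=600, n_features=180, n_informative=30,
                               weights=[0.75, 0.25], random_state=0)

    # KLR approximated by an RBF (Nystroem) feature map plus an L2-regularized
    # logistic model; the outputs remain probabilistic.
    klr = make_pipeline(
        Nystroem(kernel="rbf", gamma=0.01, n_components=200, random_state=0),
        LogisticRegression(C=1.0, max_iter=2000),
    )

    scores = cross_val_score(klr, X, y, cv=5, scoring="matthews_corrcoef")
    print(f"mean MCC over 5 folds: {scores.mean():.2f}")
    ```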

  3. Predicting β-turns in protein using kernel logistic regression.

    PubMed

    Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, Fangxiang; Li, Min

    2013-01-01

    A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average, 25% of amino acids in protein structures are located in β-turns, so it is very important to develop an accurate and efficient method for β-turn prediction. Most of the currently successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully in many classification problems; however, it is rarely used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn predictions in a short evolution time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Qtotal of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method, with the right algorithm, can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields probabilistic outcomes and has a well-defined extension to the multiclass case.

  4. Taiwanese University Students' Attitudes to Non-Native Speakers English Teachers

    ERIC Educational Resources Information Center

    Chang, Feng-Ru

    2016-01-01

    Numerous studies have been conducted to explore issues surrounding non-native speakers (NNS) English teachers and native speaker (NS) teachers which concern, among others, the comparison between the two, the self-perceptions of NNS English teachers and the effectiveness of their teaching, and the students' opinions on and attitudes towards them.…

  5. The English Language Fellows Program.

    ERIC Educational Resources Information Center

    Blakely, Richard

    1995-01-01

    Discusses the English Language Fellows (ELF) Program, a pilot project that pairs specially-trained, native-speaking undergraduates with nonnative-speaking (NNS) classmates to study the content of courses that both are taking together. Woven into that study of course content, for the benefit of the NNS students, is the study of language as it is…

  6. The Development and Validation of a Novice Nurse Decision-Making Skills Education Curriculum

    ERIC Educational Resources Information Center

    Simmons, Joanne

    2017-01-01

    Novice nurses (NNs) are entering critical care environments with limited knowledge, skills, and decision-making expertise. They are expected to care for complex patients in a dynamic healthcare setting. The research question for this project examined whether NNs improve their knowledge and skills by participating in a nursing decision-making…

  7. The exosome component Rrp6 is required for RNA polymerase II termination at specific targets of the Nrd1-Nab3 pathway.

    PubMed

    Fox, Melanie J; Gao, Hongyu; Smith-Kinnaman, Whitney R; Liu, Yunlong; Mosley, Amber L

    2015-01-01

    The exosome and its nuclear specific subunit Rrp6 form a 3'-5' exonuclease complex that regulates diverse aspects of RNA biology including 3' end processing and degradation of a variety of noncoding RNAs (ncRNAs) and unstable transcripts. Known targets of the nuclear exosome include short (<1000 bp) RNAPII transcripts such as small nuclear RNAs (snRNAs), cryptic unstable transcripts (CUTs), and some stable unannotated transcripts (SUTs) that are terminated by an Nrd1, Nab3, and Sen1 (NNS) dependent mechanism. NNS-dependent termination is coupled to RNA 3' end processing and/or degradation by the Rrp6/exosome in yeast. Recent work suggests Nrd1 is necessary for transcriptome surveillance, regulating promoter directionality and suppressing antisense transcription independently of, or prior to, Rrp6 activity. It remains unclear whether Rrp6 is directly involved in termination; however, Rrp6 has been implicated in the 3' end processing and degradation of ncRNA transcripts including CUTs. To determine the role of Rrp6 in NNS termination globally, we performed RNA sequencing (RNA-Seq) on total RNA and ChIP-exo analysis of RNA Polymerase II (RNAPII) localization. Deletion of RRP6 promotes hyper-elongation of multiple NNS-dependent transcripts resulting from both improperly processed 3' RNA ends and faulty transcript termination at specific target genes. The defects in RNAPII termination cause transcriptome-wide changes in mRNA expression through transcription interference and/or antisense repression, similar to previously reported effects of depleting Nrd1 from the nucleus. Elongated transcripts were identified within all classes of known NNS targets, with the largest changes in transcription termination occurring at CUTs. Interestingly, the extended transcripts that we have detected in our studies show remarkable similarity to Nrd1-unterminated transcripts at many locations, suggesting that Rrp6 acts with the NNS complex globally to promote transcription termination in addition to 3' end RNA processing and/or degradation at specific targets.

  8. EffectS of non-nutritive sWeetened beverages on appetITe during aCtive weigHt loss (SWITCH): Protocol for a randomized, controlled trial assessing the effects of non-nutritive sweetened beverages compared to water during a 12-week weight loss period and a follow up weight maintenance period.

    PubMed

    Masic, U; Harrold, J A; Christiansen, P; Cuthbertson, D J; Hardman, C A; Robinson, E; Halford, J C G

    2017-02-01

    Acute and medium-term intervention studies suggest that non-nutritive sweeteners (NNS) are beneficial for weight loss; however, there are limited human data on the long-term effects of consuming NNS on weight loss, maintenance, and appetite. Further research is therefore required to elucidate the prolonged impact of NNS consumption on these outcome measures. A randomized parallel-groups design will be used to assess whether regular NNS beverage intake is equivalent to a water control in promoting weight loss over 12 weeks (weekly weight-loss sessions; Phase I), then supporting weight maintenance over 40 weeks (monthly sessions; Phase II) and subsequently independent weight maintenance over 52 weeks (Phase III) in 432 participants. A subset of these participants (n=116) will complete laboratory-based appetite probe days (15 sessions; 3 sessions each at baseline, at the start of Phase I and at the end of each phase). A separate subset (n=50) will complete body composition scans (DXA) at baseline and at the end of each phase. All participants will be weighed regularly and will complete questionnaires and cognitive tasks to assess changes in body weight and appetitive behaviours. Measures of physical activity and biochemical markers will also be taken. The trial will assess the efficacy of NNS beverages compared to water during a behavioural weight-loss and maintenance programme. We aim to understand whether the effects of NNS on weight, dietary adherence and well-being are beneficial or transient, and whether they support prolonged successful weight loss and weight maintenance through sustained changes in appetite and eating behaviour. Clinical Trials: NCT02591134; registered: 23.10.2015. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  9. Expression of Epistemic Stance in EFL Chinese University Students' Writing

    ERIC Educational Resources Information Center

    Chen, Zhenzhen

    2012-01-01

    This paper reported findings on a contrastive analysis of epistemic expressions in argumentative essays between NS and NNS Chinese L2 writers. Based on an examination of a NS corpus and a NNS learner corpus across four proficiency levels, the study shows there is great similarity in the total number of epistemic devices used per thousand words…

  10. Student-Initiated Attention to Form in Wiki-Based Collaborative Writing

    ERIC Educational Resources Information Center

    Kessler, Greg

    2009-01-01

    This study reports on student initiated attention to form within the collaborative construction of a wiki among pre-service Non-Native Speaker (NNS) English teachers. Forty NNS pre-service teachers from a large Mexican university were observed over a period of a sixteen week semester in an online content-based course aimed at improving their…

  11. An Online Task-Based Language Learning Environment: Is It Better for Advanced- or Intermediate-Level Second Language Learners?

    ERIC Educational Resources Information Center

    Arslanyilmaz, Abdurrahman

    2012-01-01

    This study investigates the relationship of language proficiency to language production and negotiation of meaning that non-native speakers (NNSs) produced in an online task-based language learning (TBLL) environment. Fourteen NNS-NNS dyads collaboratively completed four communicative tasks, using an online TBLL environment specifically designed…

  12. Modular neural networks: a survey.

    PubMed

    Auda, G; Kamel, M

    1999-04-01

    Modular Neural Networks (MNNs) are a rapidly growing field in artificial Neural Networks (NNs) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. The general stages of MNN design are then outlined and surveyed as well, viz., task decomposition techniques, learning schemes, and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented.
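
    A minimal sketch of the MNN design stages the survey enumerates: a class-decomposition scheme (one small expert NN per class) combined with a max-confidence decision strategy. The dataset and module sizes are illustrative assumptions, not examples taken from the survey.

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Task decomposition: one small expert NN per digit (one-vs-rest subtasks).
    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    experts = []
    for cls in np.unique(y):
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        net.fit(Xtr, (ytr == cls).astype(int))      # each module solves a simpler subtask
        experts.append(net)

    # Multi-module decision strategy: pick the module most confident its class is present.
    confidences = np.column_stack([e.predict_proba(Xte)[:, 1] for e in experts])
    pred = confidences.argmax(axis=1)
    print(f"modular accuracy: {(pred == yte).mean():.3f}")
    ```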

  13. The physical examination content of the Japanese National Health and Nutrition Survey: temporal changes.

    PubMed

    Tanaka, Hisako; Imai, Shino; Nakade, Makiko; Imai, Eri; Takimoto, Hidemi

    2016-12-01

    Survey items of the Japan National Nutrition Survey (J-NNS) have changed over time. Several papers on the dietary surveys have been published; however, to date, there are no in-depth papers regarding the physical examinations. Therefore, we investigated changes in the survey items of the physical examinations performed in the J-NNS and the National Health and Nutrition Survey (NHNS), with the aim of providing useful data for future policy decisions. We summarized the descriptions of the physical examinations and traced the changes in survey items in the J-NNS and NHNS from 1946 to 2012. The physical examination is roughly classified into six components: anthropometric measurements, clinical measurements, physical symptoms, blood tests, lifestyle and medication (assessed by interview), and others. Items related to nutritional deficiency, such as anaemia and tendon reflex disappearance, and body weight measurements were collected during the early period, according to the instructions of the General Headquarters. From 1989, blood tests and measurement of physical activity were added, and serum total protein, total cholesterol, triglycerides, HDL-cholesterol, blood glucose, red blood corpuscle and haemoglobin measurements have been performed continuously for more than 20 years. This is the first report on the physical examination items of the J-NNS and NHNS. Our results provide basic information on the use of the J-NNS and NHNS to researchers, clinicians and policy makers. Monitoring the current state correctly is essential for national health promotion, and also for improving the investigation methods to allow country-by-country comparisons.

  14. "Searching for an Entrance" and Finding a Two-Way Door: Using Poetry to Create East-West Contact Zones in TESOL Teacher Education

    ERIC Educational Resources Information Center

    Cahnmann-Taylor, Melisa; Zhang, Kuo; Bleyle, Susan Jean; Hwang, Yohan

    2015-01-01

    Discrimination against Non-Native Speakers (NNS) of English in the TESOL profession is wide-spread and well-documented, despite significant evidence of NNS contributions as TESOL educators and scholars. Several scholars have argued for the importance of aesthetic and autobiographic narratives to democratize the TESOL field and showcase varieties…

  15. Exploring How Non-Native Teachers Can Use Commonalities with Students to Teach the Target Language

    ERIC Educational Resources Information Center

    Reynolds-Case, Anne

    2012-01-01

    This article presents a qualitative study demonstrating how teachers who are non-native speakers (NNS) of the target language and who have learned the target language in a similar environment as their students can use their past learning experiences as pedagogical tools in their classes. An analysis of transcripts from classrooms with NNS and…

  16. An Investigation into Native and Non-Native Teachers' Judgments of Oral English Performance: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Kim, Youn-Hee

    2009-01-01

    This study used a mixed methods research approach to examine how native English-speaking (NS) and non-native English-speaking (NNS) teachers assess students' oral English performance. The evaluation behaviors of two groups of teachers (12 Canadian NS teachers and 12 Korean NNS teachers) were compared with regard to internal consistency, severity,…

  17. Effects of non-nutritive (artificial vs natural) sweeteners on 24-h glucose profiles.

    PubMed

    Tey, S L; Salleh, N B; Henry, C J; Forde, C G

    2017-09-01

    Replacing nutritive sweetener with non-nutritive sweeteners (NNS) has the potential to improve glycaemic control. The objective of this study was to investigate the effects of consuming artificial NNS (that is, aspartame), natural NNS (that is, monk fruit and stevia), and sucrose-sweetened beverages on 24-h glucose profiles. Ten healthy males took part in this randomised, crossover study with the following four treatments: aspartame-, monk fruit-, stevia-, and sucrose- (65 g) sweetened beverages. Participants were asked to consume the test beverage as a preload mid-morning. Medtronic iPro2 continuous glucose monitoring system was used to measure mean 24-h glucose, incremental area under the curve (iAUC) and total area under the curve (AUC) for glucose, and 24-h glycaemic variability. Overall no significant differences were found in mean 24-h glucose, iAUC and total AUC for glucose, and 24-h glycaemic variability between the four test beverages. Twenty-four-hour glucose profiles did not differ between beverages sweetened with non-nutritive (artificial vs natural) and nutritive sweeteners. The simple exchange of a single serving of sucrose-sweetened beverage with NNS over a day appears to have minimal effect on 24-h glucose profiles in healthy males.

  18. Adaptive near-optimal neuro controller for continuous-time nonaffine nonlinear systems with constrained input.

    PubMed

    Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali

    2017-09-01

    In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of NNs' weights, identification error, and system states is guaranteed using Lyapunov's direct method. Finally, simulation results are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Technical note: an R package for fitting sparse neural networks with application in animal breeding.

    PubMed

    Wang, Yangfan; Mi, Xue; Rosa, Guilherme J M; Chen, Zhihui; Lin, Ping; Wang, Shi; Bao, Zhenmin

    2018-05-04

    Neural networks (NNs) have emerged as a new tool for genomic selection (GS) in animal breeding. However, the properties of NNs used in GS for the prediction of phenotypic outcomes are not well characterized, due to the problem of over-parameterization of NNs and the difficulty of using whole-genome marker sets as high-dimensional NN input. In this note, we have developed an R package called snnR that finds an optimal sparse structure of an NN by minimizing the squared error subject to a penalty on the L1-norm of the parameters (weights and biases), thereby solving the problem of over-parameterization in NNs. We also tested models fitted with the snnR package on several example cases to demonstrate their feasibility and effectiveness. In a comparison of snnR with the R package brnn (Bayesian regularized single-layer NNs), with both using the entries of a genotype matrix or a genomic relationship matrix as inputs, snnR greatly improved computational efficiency and prediction ability for GS in animal breeding, because snnR implements a sparse NN with many hidden layers.
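
    The core idea behind the package (squared error plus an L1 penalty on weights and biases) can be sketched in a few lines. The example below does so in Python with PyTorch on a toy genotype matrix and is only an analogue of the technique, not the snnR API or its optimization algorithm.

    ```python
    import torch

    torch.manual_seed(0)

    # Toy genotype matrix (individuals x markers) and phenotypes; real GS data
    # would use SNP genotypes or a genomic relationship matrix as inputs.
    n, p = 200, 500
    X = torch.randint(0, 3, (n, p)).float()
    beta = torch.zeros(p)
    beta[:10] = 0.5                                         # only 10 markers matter
    y = X @ beta + 0.5 * torch.randn(n)

    net = torch.nn.Sequential(torch.nn.Linear(p, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    lam = 1e-3                                              # strength of the L1 penalty

    for epoch in range(500):
        opt.zero_grad()
        mse = torch.mean((net(X).squeeze() - y) ** 2)
        l1 = sum(w.abs().sum() for w in net.parameters())   # L1-norm on weights and biases
        (mse + lam * l1).backward()
        opt.step()

    sparsity = (net[0].weight.abs() < 1e-3).float().mean().item()
    print(f"final MSE: {mse.item():.3f}, fraction of near-zero input weights: {sparsity:.2f}")
    ```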

  20. Measuring neutron spectra in radiotherapy using the nested neutron spectrometer.

    PubMed

    Maglieri, Robert; Licea, Angel; Evans, Michael; Seuntjens, Jan; Kildea, John

    2015-11-01

    Out-of-field neutron doses resulting from photonuclear interactions in the head of a linear accelerator pose an iatrogenic risk to patients and an occupational risk to personnel during radiotherapy. To quantify neutron production, in-room measurements have traditionally been carried out using Bonner sphere systems (BSS) with activation foils and TLDs. In this work, a recently developed active detector, the nested neutron spectrometer (NNS), was tested in radiotherapy bunkers. The NNS is designed for easy handling and is more practical than the traditional BSS. When operated in current mode, it overcomes the problem of pulse pileup at high dose rates by measuring current, much like an ionization chamber. In a bunker housing a Varian Clinac 21EX, the performance of the NNS was evaluated in terms of reproducibility, linearity, and dose-rate effects. Using a custom maximum-likelihood expectation-maximization algorithm, measured neutron spectra at various locations inside the bunker were then compared to Monte Carlo simulations of an identical setup. In terms of dose, neutron ambient dose equivalents were calculated from the measured spectra and compared to bubble-detector neutron dose equivalent measurements. The NNS-measured spectra for neutrons at various locations in a treatment room were found to be consistent with expectations in both relative shape and absolute magnitude. Neutron fluence rate decreased with distance from the source, and the shape of the spectrum changed from a dominant fast-neutron peak near the linac head to a dominant thermal-neutron peak in the moderating conditions of the maze. Monte Carlo data and NNS-measured spectra agreed within 30% at all locations except in the maze, where the deviation was at most 40%. Neutron ambient dose equivalents calculated from the authors' measured spectra were consistent (within one standard deviation) with bubble-detector measurements in the treatment room. The NNS may be used to reliably measure the neutron spectrum of a radiotherapy beam in less than 1 h, including setup and data unfolding. This work thus represents a new, fast, and practical method for neutron spectral measurements in radiotherapy.
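
    The unfolding step referred to above follows the standard MLEM multiplicative update, in which the current spectrum estimate is rescaled by back-projected ratios of measured to predicted detector readings. The sketch below applies that update to a toy response matrix; the matrix and "true" spectrum are invented for illustration and do not represent the NNS response functions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: a response matrix R (detector shells x energy bins) and a "true"
    # two-peak spectrum; the real NNS response functions are not reproduced here.
    n_shells, n_bins = 8, 20
    bins = np.arange(n_bins)
    R = np.exp(-0.5 * ((bins - 2.5 * np.arange(n_shells)[:, None]) / 3.0) ** 2)
    true_phi = (np.exp(-0.5 * ((bins - 4) / 1.5) ** 2)
                + 0.6 * np.exp(-0.5 * ((bins - 15) / 2.0) ** 2))
    measured = (R @ true_phi) * (1 + 0.02 * rng.standard_normal(n_shells))  # noisy shell currents

    # MLEM unfolding: phi <- phi * [R^T (measured / (R phi))] / [R^T 1]
    phi = np.ones(n_bins)
    for _ in range(500):
        phi *= (R.T @ (measured / (R @ phi))) / R.sum(axis=0)

    peak_err = np.max(np.abs(phi - true_phi)) / true_phi.max()
    print(f"max unfolding error relative to the peak: {peak_err:.2f}")
    ```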

  1. Evaluating Study Withdrawal Among Biologics and Immunomodulators in Treating Ulcerative Colitis: A Meta-analysis of Controlled Clinical Trials.

    PubMed

    Shah, Eric D; Siegel, Corey A; Chong, Kelly; Melmed, Gil Y

    2016-04-01

    We conducted a systematic review and meta-analysis to evaluate the efficacy and adverse event (AE)-associated tolerability of treatment with immunomodulators and biologics in ulcerative colitis clinical trials. We performed a literature search of PubMed and the Cochrane databases to identify randomized placebo-controlled trials of immunomodulators and biologics. Tolerability was defined through study withdrawal due to AEs, and efficacy through clinical response in induction trials and clinical remission in maintenance trials. We performed meta-analyses using a random-effects model to determine relative risks (RRs) of efficacy and study withdrawal. The number needed to treat (NNT) and number needed to stop (NNS) were determined, and the ratio NNS/NNT was calculated, with a higher ratio indicating a greater number of patients in remission for every study discontinuation due to an AE. We examined 13 single-agent trials representing biologics (infliximab, adalimumab, golimumab, and vedolizumab) and immunomodulators (tacrolimus and azathioprine). Induction therapy did not result in excess study withdrawal with immunomodulators (RR = 0.9, 95% CI 0.1-12.0) or biologics (RR = 0.7, 95% CI 0.3-1.8); the NNS/NNT ratio therefore could not be assessed because of high tolerability. Maintenance immunomodulator therapy resulted in an NNS of 14 (RR = 2.8, 95% CI 0.7-10.5) and an NNS/NNT ratio of 2.4 in 2 trials. Biologics did not result in excess study withdrawal in maintenance (RR = 0.7, 95% CI 0.3-1.7) or combined induction-and-maintenance (RR = 0.6, 95% CI 0.4-1.0) trials. Biologics were not associated with a higher RR of study withdrawal due to AEs than placebo. There were insufficient data to compare these results with immunomodulators.
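
    As a worked example of the NNT, NNS and NNS/NNT ratio defined above (using invented event rates, not data from the trials analyzed in this meta-analysis):

    ```python
    # NNT = 1 / (remission rate on drug - remission rate on placebo)
    # NNS = 1 / (AE-withdrawal rate on drug - AE-withdrawal rate on placebo)
    remission_drug, remission_placebo = 35 / 100, 20 / 100
    withdraw_drug, withdraw_placebo = 9 / 100, 2 / 100

    nnt = 1 / (remission_drug - remission_placebo)
    nns = 1 / (withdraw_drug - withdraw_placebo)
    ratio = nns / nnt          # >1 means more patients gain remission per AE withdrawal

    print(f"NNT = {nnt:.1f}, NNS = {nns:.1f}, NNS/NNT = {ratio:.1f}")
    ```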

  2. Synthetic orocutaneous stimulation entrains preterm infants with feeding difficulties to suck

    PubMed Central

    Barlow, SM; Finan, DS; Lee, J; Chu, S

    2013-01-01

    Background Prematurity can disrupt the development of a specialized neural circuit known as the suck central pattern generator (sCPG), which often leads to poor feeding skills. The extent to which suck can be entrained using a synthetically patterned orocutaneous input to promote its development in preterm infants who lack a functional suck is unknown. Objective To evaluate the effects of a new motorized ‘pulsating’ pacifier capable of entraining the sCPG in tube-fed premature infants who lack a functional suck and exhibit feeding disorders. Methods Prospective cohort study of 31 preterm infants assigned to either the oral patterned entrainment intervention (study) or non-treated (controls) group, matched by gestational age, birth weight, oxygen supplementation history, and oral feed status. Study infants received a daily regimen of orocutaneous pulse trains through a pneumatically-controlled silicone pacifier concurrent with gavage feeds. Results The patterned orocutaneous stimulus was highly effective in accelerating the development of NNS in preterm infants. A repeated-measure multivariate analysis of covariance revealed significant increases in minute-rates for total oral compressions, NNS bursts, and NNS cycles, suck cycles per burst, and the ratiometric measure of NNS cycles as a percentage of total ororhythmic output. Moreover, study infants also manifested significantly greater success at achieving oral feeds, surpassing their control counterparts by a factor of 3.1× (72.8% daily oral feed versus 23.3% daily oral feed, respectively). Conclusion Functional expression of the sCPG among preterm infants who lack an organized suck can be induced through the delivery of synthetically patterned orocutaneous pulse trains. The rapid emergence of NNS in treated infants is accompanied by a significant increase in the proportion of nutrient taken orally. PMID:18548084

  3. How many individuals will need to be screened to increase colorectal cancer screening prevalence to 80% by 2018?

    PubMed

    Fedewa, Stacey A; Ma, Jiemin; Sauer, Ann Goding; Siegel, Rebecca L; Smith, Robert A; Wender, Richard C; Doroshenk, Mary K; Brawley, Otis W; Ward, Elizabeth M; Jemal, Ahmedin

    2015-12-01

    A recent study estimates that 277,000 colorectal cancer (CRC) cases and 203,000 CRC deaths will be averted between 2013 and 2030 if the National Colorectal Cancer Roundtable goal of increasing CRC screening prevalence to 80% by 2018 is reached. However, the number of individuals who need to be screened (NNS) to achieve this goal is unknown. In this communication, the authors estimate the NNS to achieve 80% by 2018 nationwide and by state. The authors estimated the NNS by subtracting the number of adults aged 50 to 75 years who are currently guideline-compliant from the number who would need to be screened to achieve an 80% CRC screening prevalence, based on population estimates for this age group. The 2013 National Health Interview Survey and the 2012 Behavioral Risk Factor Surveillance System were used to estimate CRC screening prevalence and data from the US Census Bureau were used to estimate population projections. The NNS were age-standardized and sex-standardized. Nationwide, 24.39 million individuals (95% confidence interval, 24.37-24.41 million) aged 50 to 75 years will need to be screened to achieve 80% by 2018. By state, the NNS ranged from 45,400 in Vermont to 2.72 million in California. The majority of individuals who need to be screened are aged 50 to 64 years and the largest subgroup is privately insured. The authors estimated that at least 24.4 million additional individuals in the United States will need to be screened to achieve the National Colorectal Cancer Roundtable goal of increasing CRC screening prevalence to 80% by 2018. To reach this goal, improving facilitators of CRC screening, including physician recommendation and patient awareness, is needed. © 2015 American Cancer Society.
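
    The estimation logic reduces to simple arithmetic: the additional number needed to screen is the 80% target applied to the age-eligible population minus those who are already guideline-compliant. The sketch below uses invented round numbers purely to show the calculation; it does not reproduce the NHIS/BRFSS prevalence estimates or the age- and sex-standardization used by the authors.

      # Illustrative figures only (not the study's population or prevalence estimates).
      eligible_population = 90_000_000   # adults aged 50 to 75 years, hypothetical
      current_prevalence = 0.60          # proportion already up to date with screening
      target_prevalence = 0.80

      already_screened = eligible_population * current_prevalence
      needed_for_target = eligible_population * target_prevalence
      additional_nns = needed_for_target - already_screened
      print(f"Additional individuals to screen: {additional_nns:,.0f}")   # 18,000,000 here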

  4. Predictive Feature Selection for Genetic Policy Search

    DTIC Science & Technology

    2014-05-22

    inverted pendulum balancing problem (Gomez and Miikkulainen, 1999), where the agent must learn a policy in a continuous state space using discrete...algorithms to automate the process of training and/or designing NNs, mitigate these drawbacks and allow NNs to be easily applied to RL domains (Sher, 2012...racing simulator and the double inverted pendulum balance environments. It also includes parameter settings for all algorithms included in the study

  5. Deriving Flood-Mediated Connectivity between River Channels and Floodplains: Data-Driven Approaches

    NASA Astrophysics Data System (ADS)

    Zhao, Tongtiegang; Shao, Quanxi; Zhang, Yongyong

    2017-03-01

    The flood-mediated connectivity between river channels and floodplains plays a fundamental role in flood hazard mapping and exerts profound ecological effects. The classic nearest neighbor search (NNS) fails to derive this connectivity because of spatial heterogeneity and continuity. We develop two novel data-driven connectivity-deriving approaches, namely, progressive nearest neighbor search (PNNS) and progressive iterative nearest neighbor search (PiNNS). These approaches are illustrated through a case study in Northern Australia. First, PNNS and PiNNS are employed to identify flood pathways on floodplains through forward tracking. That is, progressive search is performed to associate newly inundated cells in each time step with previously inundated cells. In particular, iterations in PiNNS ensure that the connectivity is continuous - the connection between any two cells along the pathway is built through intermediate inundated cells. Second, inundated floodplain cells are collectively connected to river channel cells through backward tracing. Certain river channel sections are identified to connect to a large number of inundated floodplain cells. That is, the floodwater from these sections causes widespread floodplain inundation. Our proposed approaches take advantage of spatial-temporal data. They can be applied to derive connectivity from hydrodynamic and remote sensing data and to assist in river basin planning and management.
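
    The sketch below is a minimal, generic rendering of the forward-tracking idea: cells newly inundated at each time step are linked to the nearest cell inundated at an earlier step. It uses scipy's KD-tree for the nearest-neighbour query and omits the iterative continuity check of PiNNS as well as the backward tracing to channel cells, so it should be read as a simplified stand-in, not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def forward_track(inundation_times):
          """Link each newly inundated cell to the nearest previously inundated cell.

          inundation_times: 2D array; entry = time step at which a cell first floods,
          np.inf for cells that never flood (hypothetical input format).
          Returns a dict {new_cell: parent_cell} of (row, col) index pairs.
          """
          links = {}
          steps = np.unique(inundation_times[np.isfinite(inundation_times)])
          for t in steps[1:]:
              previous = np.argwhere(inundation_times < t)
              new = np.argwhere(inundation_times == t)
              if len(previous) == 0 or len(new) == 0:
                  continue
              tree = cKDTree(previous)
              _, idx = tree.query(new)    # index of the nearest earlier-inundated cell
              for cell, parent in zip(map(tuple, new), map(tuple, previous[idx])):
                  links[cell] = parent
          return links

      # Tiny synthetic example: water spreads from the top-left corner.
      times = np.array([[0.0, 1.0, 2.0],
                        [1.0, 2.0, 3.0],
                        [np.inf, 3.0, 4.0]])
      print(forward_track(times))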

  6. Bringing state-of-the-art diagnostics to vulnerable populations: The use of a mobile screening unit in active case finding for tuberculosis in Palawan, the Philippines.

    PubMed

    Morishita, Fukushi; Garfin, Anna Marie Celina Gonzales; Lew, Woojin; Oh, Kyung Hyun; Yadav, Rajendra-Prasad; Reston, Janeth Cuencaho; Infante, Lenie Lucio; Acala, Maria Rebethia Crueldad; Palanca, Dean Lim; Kim, Hee Jin; Nishikiori, Nobuyuki

    2017-01-01

    Globally, case detection of tuberculosis (TB) has stabilized in recent years. Active case finding (ACF) has regained increased attention as a complementary strategy to fill the case detection gap. In the Philippines, the DetecTB project implemented an innovative ACF strategy that offered a one-stop diagnostic service with a mobile unit equipped with enhanced diagnostic tools including chest X-ray (CXR) and Xpert®MTB/RIF (Xpert). The project targeted the rural poor, the urban poor, prison inmates, indigenous population and high school students. This is a retrospective review of TB screening data from 25,103 individuals. A descriptive analysis was carried out to compare screening and treatment outcomes across target populations. Univariate and multivariate analyses were performed to identify predictors of TB for each population. The composition of bacteriologically-confirmed cases by smear and symptom status was further investigated. The highest yield with lowest number needed to screen (NNS) was found in prison (6.2%, NNS: 16), followed by indigenous population (2.9%, NNS: 34), the rural poor (2.2%, NNS: 45), the urban poor (2.1%, NNS: 48), and high school (0.2%, NNS: 495). The treatment success rate for all populations was high, with 89.5% in rifampicin-susceptible patients and 83.3% in rifampicin-resistant patients. A relatively higher loss to follow-up rate was observed in indigenous population (7.5%) and the rural poor (6.4%). Only cough more than two weeks showed a significant association with TB diagnosis in all target populations (Adjusted Odds Ratio ranging from 1.71 to 6.73) while other symptoms and demographic factors varied in their strength of association. The urban poor had the highest proportion of smear-positive patients with cough more than two weeks (72.0%). The proportion of smear-negative (Xpert-positive) patients without cough more than two weeks was the highest in indigenous population (39.3%), followed by prison inmates (27.7%), and the rural poor (22.8%). The innovative ACF strategy using a mobile unit yielded a substantial number of TB patients and achieved successful treatment outcomes. TB screening in prison, indigenous population, and urban and rural poor communities was found to be effective. The combined use of CXR and Xpert largely contributed to increased case detection.
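
    As a quick check on the reported figures, the number needed to screen is simply the reciprocal of the screening yield; the snippet below re-derives the published values from the yields quoted in the abstract (rounding of the yields explains the small discrepancy for the high school group).

      yields = {"prison": 0.062, "indigenous population": 0.029,
                "rural poor": 0.022, "urban poor": 0.021, "high school": 0.002}

      for group, y in yields.items():
          print(f"{group}: NNS = 1 / {y} = {1 / y:.0f}")
      # prison 16, indigenous 34, rural poor 45, urban poor 48, high school 500 (495 in the paper)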

  7. Non-nutritive sweeteners: review and update.

    PubMed

    Shankar, Padmini; Ahuja, Suman; Sriram, Krishnan

    2013-01-01

    Obesity has become an epidemic, not just in the United States, but also across the globe. Obesity is a result of many factors including poor dietary habits, inadequate physical activity, hormonal issues, and sedentary lifestyle, as well as many psychological issues. Direct and indirect costs associated with obesity-related morbidity and mortality have been estimated to be in the billions of dollars. Of the many avenues for treatment, dietary interventions are the most common. Numerous diets have been popularized in the media, with most being fads having little to no scientific evidence to validate their effectiveness. Amidst this rise of weight loss diets, there has been a surge of individual products advertised as assuring quick weight loss; one such product group is non-nutritive sweeteners (NNS). Sugar, a common component of our diet, is also a major contributing factor to a number of health problems, including obesity and increased dental diseases both in adults and children. Most foods marketed towards children are sugar-laden. Obesity-related health issues, such as type 2 diabetes mellitus, cardiovascular diseases, and hypertension, once only commonly seen in older adults, are increasing in youth. Manufacturers of NNS are using this as an opportunity to promote their products, and are marketing them as safe for all ages. A systematic review of several databases and reliable websites on the internet was conducted to identify literature related to NNS. Keywords that were used individually or in combination included, but were not limited to, artificial sweeteners, non-nutritive sweeteners, non-caloric sweeteners, obesity, sugar substitutes, diabetes, and cardiometabolic indicators. The clinical and epidemiologic data available at present are insufficient to make definitive conclusions regarding the benefits of NNS in displacing caloric sweeteners as related to energy balance, maintenance or decrease in body weight, and other cardiometabolic risk factors. Although the FDA and most published (especially industry-funded) studies endorse the safety of these additives, there is a lack of conclusive evidence-based research to discourage or to encourage their use on a regular basis. While moderate use of NNS may be useful as a dietary aid for someone with diabetes or on a weight loss regimen, for optimal health it is recommended that only minimal amounts of both sugar and NNS be consumed. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Bringing state-of-the-art diagnostics to vulnerable populations: The use of a mobile screening unit in active case finding for tuberculosis in Palawan, the Philippines

    PubMed Central

    Morishita, Fukushi; Garfin, Anna Marie Celina Gonzales; Lew, Woojin; Oh, Kyung Hyun; Yadav, Rajendra-Prasad; Reston, Janeth Cuencaho; Infante, Lenie Lucio; Acala, Maria Rebethia Crueldad; Palanca, Dean Lim; Kim, Hee Jin; Nishikiori, Nobuyuki

    2017-01-01

    Background Globally, case detection of tuberculosis (TB) has stabilized in recent years. Active case finding (ACF) has regained increased attention as a complementary strategy to fill the case detection gap. In the Philippines, the DetecTB project implemented an innovative ACF strategy that offered a one-stop diagnostic service with a mobile unit equipped with enhanced diagnostic tools including chest X-ray (CXR) and Xpert®MTB/RIF (Xpert). The project targeted the rural poor, the urban poor, prison inmates, indigenous population and high school students. Methods This is a retrospective review of TB screening data from 25,103 individuals. A descriptive analysis was carried out to compare screening and treatment outcomes across target populations. Univariate and multivariate analyses were performed to identify predictors of TB for each population. The composition of bacteriologically-confirmed cases by smear and symptom status was further investigated. Results The highest yield with lowest number needed to screen (NNS) was found in prison (6.2%, NNS: 16), followed by indigenous population (2.9%, NNS: 34), the rural poor (2.2%, NNS: 45), the urban poor (2.1%, NNS: 48), and high school (0.2%, NNS: 495). The treatment success rate for all populations was high, with 89.5% in rifampicin-susceptible patients and 83.3% in rifampicin-resistant patients. A relatively higher loss to follow-up rate was observed in indigenous population (7.5%) and the rural poor (6.4%). Only cough more than two weeks showed a significant association with TB diagnosis in all target populations (Adjusted Odds Ratio ranging from 1.71 to 6.73) while other symptoms and demographic factors varied in their strength of association. The urban poor had the highest proportion of smear-positive patients with cough more than two weeks (72.0%). The proportion of smear-negative (Xpert-positive) patients without cough more than two weeks was the highest in indigenous population (39.3%), followed by prison inmates (27.7%), and the rural poor (22.8%). Conclusions The innovative ACF strategy using a mobile unit yielded a substantial number of TB patients and achieved successful treatment outcomes. TB screening in prison, indigenous population, and urban and rural poor communities was found to be effective. The combined use of CXR and Xpert largely contributed to increased case detection. PMID:28152082

  9. Non-nutritive sweeteners are not super-normal stimuli

    PubMed Central

    Antenucci, Rachel G.; Hayes, John E.

    2014-01-01

    Background It is often claimed that non-nutritive sweeteners (NNS) are ‘sweeter than sugar’, with the implicit implication that high-potency sweeteners are super-normal stimuli that encourage exaggerated responses. This study aimed to investigate the perceived sweetness intensity of a variety of nutritive (Sucrose, Maple Syrup, and Agave Nectar) and NNS (Acesulfame-K (AceK), Rebaudioside A (RebA), Aspartame, and Sucralose) in a large cohort of untrained participants using contemporary psychophysical methods. Methods Participants (n=401 total) rated the intensity of sweet, bitter, and metallic sensations for nutritive and NNS in water using the general labeled magnitude scale (gLMS). Results Sigmoidal Dose-Response functions were observed for all stimuli except AceK. That is, sucrose follows a sigmoidal function if the data are not artifactually linearized via prior training. More critically, there is no evidence that NNS have a maximal sweetness (intensity) greater than sucrose; indeed, the maximal sweetness for AceK, RebA and Sucralose was significantly lower than for concentrated sucrose. For these sweeteners, mixture suppression due to endogenous dose-dependent bitter or metallic sensations appears to limit maximal perceived sweetness. Conclusions In terms of perceived sweetness, non-nutritive sweeteners cannot be considered super-normal stimuli. These data do not support the view that non-nutritive sweeteners hijack or over-stimulate sweet receptors to produce elevated sweet sensations. PMID:24942868

  10. Artificial sweeteners as a sugar substitute: Are they really safe?

    PubMed

    Sharma, Arun; Amarnath, S; Thulasimani, M; Ramaswamy, S

    2016-01-01

    Nonnutritive sweeteners (NNS) have become an important part of everyday life and are increasingly used nowadays in a variety of dietary and medicinal products. They provide fewer calories and far more intense sweetness than sugar-containing products and are used by a plethora of population subsets for varying objectives. Six of these agents (aspartame, saccharine, sucralose, neotame, acesulfame-K, and stevia) have previously received a generally recognized as safe status from the United States Food and Drug Administration, and two more (Swingle fruit extract and advantame) have been added in recent years to this ever-growing list. They are claimed to promote weight loss and deemed safe for consumption by diabetics; however, there is inconclusive evidence to support most of their uses and some recent studies even hint that these earlier established benefits regarding NNS use might not be true. There is a lack of properly designed randomized controlled studies to assess their efficacy in different populations, whereas observational studies often remain confounded due to reverse causality and often yield opposite findings. Pregnant and lactating women, children, diabetics, and patients with migraine or epilepsy are susceptible to the adverse effects of NNS-containing products and should use these products with utmost caution. The overall use of NNS remains controversial, and consumers should be amply informed about the potential risks of using them, based on current evidence-based dietary guidelines.

  11. Artificial sweeteners as a sugar substitute: Are they really safe?

    PubMed Central

    Sharma, Arun; Amarnath, S.; Thulasimani, M.; Ramaswamy, S.

    2016-01-01

    Nonnutritive sweeteners (NNS) have become an important part of everyday life and are increasingly used nowadays in a variety of dietary and medicinal products. They provide fewer calories and far more intense sweetness than sugar-containing products and are used by a plethora of population subsets for varying objectives. Six of these agents (aspartame, saccharine, sucralose, neotame, acesulfame-K, and stevia) have previously received a generally recognized as safe status from the United States Food and Drug Administration, and two more (Swingle fruit extract and advantame) have been added in recent years to this ever-growing list. They are claimed to promote weight loss and deemed safe for consumption by diabetics; however, there is inconclusive evidence to support most of their uses and some recent studies even hint that these earlier established benefits regarding NNS use might not be true. There is a lack of properly designed randomized controlled studies to assess their efficacy in different populations, whereas observational studies often remain confounded due to reverse causality and often yield opposite findings. Pregnant and lactating women, children, diabetics, and patients with migraine or epilepsy are susceptible to the adverse effects of NNS-containing products and should use these products with utmost caution. The overall use of NNS remains controversial, and consumers should be amply informed about the potential risks of using them, based on current evidence-based dietary guidelines. PMID:27298490

  12. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
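
    The sketch below illustrates the general shape of an RBF classifier of the kind described: prototype centres define Gaussian hidden units and the output weights are fitted by least squares. For brevity it substitutes k-means for Fuzzy C-Means, uses a plain linear output layer instead of the four polynomial variants, and skips the Differential Evolution tuning, so it is a simplified analogue of, not a re-implementation of, the P-RBF NNs in the paper.

      import numpy as np
      from sklearn.cluster import KMeans

      class SimpleRBFNet:
          def __init__(self, n_centers=10, width=1.0):
              self.n_centers, self.width = n_centers, width

          def _phi(self, X):
              # Gaussian activation of each sample with respect to each centre.
              d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
              phi = np.exp(-d2 / (2 * self.width ** 2))
              return np.hstack([np.ones((len(X), 1)), phi])   # bias column + RBF features

          def fit(self, X, y):
              self.centers_ = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
              self.w_, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)   # least-squares output weights
              return self

          def predict(self, X):
              return self._phi(X) @ self.w_

      # Usage on random stand-in "face features" (e.g. PCA coefficients); illustration only.
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200).astype(float)
      model = SimpleRBFNet(n_centers=8).fit(X, y)
      print(model.predict(X[:5]).round(2))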

  13. Non-nutritive sucking for gastro-oesophageal reflux disease in preterm and low birth weight infants.

    PubMed

    Psaila, Kim; Foster, Jann P; Richards, Robyn; Jeffery, Heather E

    2014-10-15

    Gastro-oesophageal reflux (GOR) is commonly diagnosed in the neonatal population (DiPietro 1994), and generally causes few or no symptoms (Vandenplas 2009). Conversely, gastro-oesophageal reflux disease (GORD) refers to GOR that causes troublesome symptoms with or without complications such as damage to the oesophagus (Vandenplas 2009). Currently there is no evidence to support the range of measures recommended to help alleviate acid reflux experienced by infants. Non-nutritive sucking (NNS) has been used as an intervention to modulate neonatal state behaviours through its pacifying effects such as decreased infant fussiness and crying during feeds (Boiron 2007; Pickler 2004). To determine if NNS reduces GORD in preterm infants (less than 37 weeks' gestation) and low birth weight (less than 2500 g) infants, three months of age and less, with signs or symptoms suggestive of GORD, or infants with a diagnosis of GORD. We performed computerised searches of the electronic databases of the Cochrane Central Register of Controlled Trials (CENTRAL) (Issue 9, 2013), MEDLINE (1966 to September 2013), CINAHL (1982 to September 2013), and EMBASE (1988 to September 2013). We applied no language restrictions. Controlled trials using random or quasi-random allocation of preterm infants (less than 37 weeks' gestation) and low birth weight (less than 2500 g) infants three months of age and less with signs or symptoms suggestive of GORD, or infants with a diagnosis of GORD. We included studies reported only by abstracts, and cluster and cross-over randomised trials. Two review authors independently reviewed and selected trials from searches, assessed and rated study quality and extracted relevant data. We identified two studies from the initial search. After further review, we excluded both studies. We identified no studies examining the effects of NNS for GORD in preterm and low birth weight infants. There was insufficient evidence to determine the effectiveness of NNS for GORD. Adequately powered RCTs on the effect of NNS in preterm and low birth weight infants diagnosed with GORD are required.

  14. Quality of nutrition services in primary health care facilities: Implications for integrating nutrition into the health system in Bangladesh

    PubMed Central

    Saha, Kuntal Kumar; Chowdhury, Ashfaqul Haq; Garnett, Sarah P.; Arifeen, Shams El; Menon, Purnima

    2017-01-01

    Background In 2011, the Bangladesh Government introduced the National Nutrition Services (NNS) by leveraging the existing health infrastructure to deliver nutrition services to pregnant women and children. This study examined the quality of nutrition services provided during antenatal care (ANC) and management of sick children younger than five years. Methods Service delivery quality was assessed across three dimensions: structural readiness, process, and outcome. Structural readiness was assessed by observing the presence of equipment, guidelines and register/reporting forms in ANC rooms and consulting areas for sick children at 37 primary healthcare facilities in 12 sub-districts. In addition, the training and knowledge relevant to nutrition service delivery of 95 healthcare providers was determined. The process of nutrition service delivery was assessed by observing 381 ANC visits and 826 sick children consultations. Satisfaction with the service was the outcome and was determined by interviewing 541 mothers/caregivers of sick children. Results Structural readiness to provide nutrition services was higher for ANC compared to management of sick children; 73% of ANC rooms had >5 of the 13 essential items while only 13% of the designated areas for management of sick children had >5 of the 13 essential items. One in five (19%) healthcare providers had received nutrition training through the NNS. Delivery of the nutrition services was poor: <30% of women received all four key antenatal nutrition services, 25% of sick children had their weight checked against a growth-chart and <1% had their height measured. Nevertheless, most mothers/caregivers rated their satisfaction with the service above average. Conclusions Strengthening the provision of equipment and increasing the coverage of training are imperative to improve nutrition services. Inherent barriers to implementing nutrition services in primary health care, especially high caseloads during the management of sick under-five children, should be considered to identify alternative and appropriate service delivery platforms before nationwide scale up. PMID:28542530

  15. Development and validation of risk models to select ever-smokers for CT lung-cancer screening

    PubMed Central

    Katki, Hormuzd A.; Kovalchik, Stephanie A.; Berg, Christine D.; Cheung, Li C.; Chaturvedi, Anil K.

    2016-01-01

    Importance The US Preventive Services Task Force (USPSTF) recommends computed-tomography (CT) lung-cancer screening for ever-smokers ages 55-80 years who smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung-cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Objective Comparison of modeled outcomes from risk-based CT lung-screening strategies versus USPSTF recommendations. Design/Setting/Participants Empirical risk models for lung-cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age, education, sex, race, smoking intensity/duration/quit-years, Body Mass Index, family history of lung-cancer, and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the US. Models applied to US ever-smokers ages 50-80 (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung-screening, assuming screening for all ever-smokers yields the percent changes in lung-cancer detection and death observed in the NLST. Exposure Annual CT lung-screening for 3 years. Main Outcomes and Measures Model validity: calibration (number of model-predicted cases divided by number of observed cases (Estimated/Observed)) and discrimination (Area-Under-Curve (AUC)). Modeled screening outcomes: estimated number of screen-avertable lung-cancer deaths, estimated screening effectiveness (number needed to screen (NNS) to prevent 1 lung-cancer death). Results Lung-cancer incidence and death risk models were well-calibrated in PLCO and NLST. The lung-cancer death model calibrated and discriminated well for US ever-smokers ages 50-80 (NHIS 1997-2001: Estimated/Observed=0.94, 95%CI=0.84-1.05; AUC=0.78, 95%CI=0.76-0.80). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung-cancer screening and 46,488 (95%CI=43,924-49,053) lung-cancer deaths were estimated as screen-avertable over 5 years (estimated NNS=194, 95%CI=187-201). In contrast, risk-based selection screening the same number of ever-smokers (9.0 million) at highest 5-year lung-cancer risk (≥1.9%), was estimated to avert 20% more deaths (55,717; 95%CI=53,033-58,400) and was estimated to reduce the estimated NNS by 17% (NNS=162, 95%CI=157-166). Conclusions and Relevance Among a cohort of US ever-smokers age 50-80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung-cancer deaths prevented over 5 years along with a lower NNS to prevent 1 lung-cancer death. PMID:27179989
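
    The headline NNS figures follow directly from the ratio of ever-smokers screened to screen-avertable deaths; the arithmetic below simply re-derives them from the point estimates quoted in the abstract.

      screened = 9_000_000
      deaths_averted = {"USPSTF criteria": 46_488, "risk-based (5-year risk >= 1.9%)": 55_717}

      for strategy, averted in deaths_averted.items():
          print(f"{strategy}: NNS = {screened / averted:.0f}")
      # USPSTF: 194; risk-based: 162, i.e. roughly a 17% reduction in NNS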

  16. Neural network retrieval of soil moisture: application to SMOS

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Aires, Filipe; Prigent, Catherine; Kerr, Yann; Kolasssa, Jana; Jimenez, Carlos; Cabot, Francois; Mahmoodi, Ali

    2014-05-01

    We present an efficient statistical soil moisture (SM) retrieval method using SMOS brightness temperatures (BTs) complemented with MODIS NDVI and ASCAT backscattering data. The method is based on a feed-forward neural network (hereafter NN) trained with SM from ECMWF model predictions or from the SMOS operational algorithm. The best compromise to retrieve SM with NNs from SMOS brightness temperatures in a large fraction of the swath (~ 670 km) is to use incidence angles from 25 to 60 degrees (in 7 bins of 5 deg width) for both H and V polarizations. The correlation coefficient (R) of the SM retrieved by the NN and the reference SM dataset (ECMWF or SMOS L3) is 0.8. The correlation coefficient increases to 0.91 when adding as input MODIS NDVI, ECOCLIMAP sand and clay fractions and one of the following data: (i) active microwave observations (ASCAT backscattering coefficient at 40 deg incidence angle), (ii) ECMWF soil temperature. Finally, the correlation coefficient increases to R=0.94 when using a normalization index computed locally for each latitude-longitude point with the maximum and minimum BTs and the associated SM values from the local time series. Global maps of SM obtained with NNs reproduce well the spatial structures present in the reference SM datasets, implying that the NN works well for a wide range of ecosystems and physical conditions. In addition, the results of the NNs have been evaluated at selected locations for which in situ measurements are available such as the USDA-ARS watersheds (USA), the OzNet network (AUS) and USDA-NRCS SCAN network (USA). The time series of SM obtained with NNs reproduce the temporal behavior measured with in situ sensors. For well known sites where the in situ measurement is representative of a 40 km scale like the Little Washita watershed, the NN models show a very high correlation (R = 0.8-0.9) and a low standard deviation of 0.02-0.04 m3/m3 with respect to the in situ measurements. When comparing with all the in situ stations, the average correlation coefficients and bias of NN SM with respect to in situ measurements are comparable to those of ECMWF and SMOS L3 SM (R = 0.6). The standard deviation of the difference (STDD) of those products with respect to in situ measurements is lower for NN SM, in particular for the NN models that use local information on the extreme BTs and associated SM values, for which the average STDD is 0.03 m3/m3, half the average STDD values obtained with ECMWF and L3 SM (0.05-0.07 m3/m3). In conclusion, SM obtained using NNs gives results of quality comparable to or better than other SM products. In addition, NNs are an efficient method to obtain SM from SMOS data (one year of SMOS observations can be inverted in less than 60 seconds). These results have been obtained in the framework of the SMOS+NN project funded by ESA and they open interesting perspectives such as a near real time processor and data assimilation in weather prediction models.
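
    A minimal sketch of the statistical retrieval idea is given below: a small feed-forward network is trained to map multi-angle, dual-polarization brightness temperatures plus an NDVI-like ancillary input to a reference soil-moisture value. The synthetic data, the network size, and the use of scikit-learn's MLPRegressor are stand-ins chosen for brevity; they do not correspond to the SMOS processing chain or to the ECMWF/L3 training sets used in the study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n_samples = 5000

      # Synthetic inputs: 14 brightness temperatures (7 angle bins x 2 polarizations) + 1 NDVI value.
      bt = rng.normal(loc=250.0, scale=15.0, size=(n_samples, 14))
      ndvi = rng.uniform(0.1, 0.8, size=(n_samples, 1))
      X = np.hstack([bt, ndvi])

      # Synthetic "reference" soil moisture, loosely anti-correlated with mean BT (illustration only).
      sm = 0.45 - 0.002 * (bt.mean(axis=1) - 250.0) + 0.1 * ndvi[:, 0] + rng.normal(0, 0.02, n_samples)

      X_train, X_test, y_train, y_test = train_test_split(X, sm, random_state=0)
      nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_train, y_train)
      print(f"R on held-out samples: {np.corrcoef(nn.predict(X_test), y_test)[0, 1]:.2f}")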

  17. Development and Validation of Risk Models to Select Ever-Smokers for CT Lung Cancer Screening.

    PubMed

    Katki, Hormuzd A; Kovalchik, Stephanie A; Berg, Christine D; Cheung, Li C; Chaturvedi, Anil K

    2016-06-07

    The US Preventive Services Task Force (USPSTF) recommends computed tomography (CT) lung cancer screening for ever-smokers aged 55 to 80 years who have smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Comparison of modeled outcomes from risk-based CT lung-screening strategies vs USPSTF recommendations. Empirical risk models for lung cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age; education; sex; race; smoking intensity, duration, and quit-years; body mass index; family history of lung cancer; and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the United States. Models were applied to US ever-smokers aged 50 to 80 years (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung screening, assuming screening for all ever-smokers yields the percent changes in lung cancer detection and death observed in the NLST. Annual CT lung screening for 3 years beginning at age 50 years. For model validity: calibration (number of model-predicted cases divided by number of observed cases [estimated/observed]) and discrimination (area under curve [AUC]). For modeled screening outcomes: estimated number of screen-avertable lung cancer deaths and estimated screening effectiveness (number needed to screen [NNS] to prevent 1 lung cancer death). Lung cancer incidence and death risk models were well calibrated in PLCO and NLST. The lung cancer death model calibrated and discriminated well for US ever-smokers aged 50 to 80 years (NHIS 1997-2001: estimated/observed = 0.94 [95%CI, 0.84-1.05]; AUC, 0.78 [95%CI, 0.76-0.80]). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung cancer screening and 46,488 (95% CI, 43,924-49,053) lung cancer deaths were estimated as screen-avertable over 5 years (estimated NNS, 194 [95% CI, 187-201]). In contrast, risk-based selection screening of the same number of ever-smokers (9.0 million) at highest 5-year lung cancer risk (≥1.9%) was estimated to avert 20% more deaths (55,717 [95% CI, 53,033-58,400]) and was estimated to reduce the estimated NNS by 17% (NNS, 162 [95% CI, 157-166]). Among a cohort of US ever-smokers aged 50 to 80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung cancer deaths prevented over 5 years, along with a lower NNS to prevent 1 lung cancer death.

  18. Navy Virginia (SSN 774) Class Attack Submarine Procurement: Background and Issues for Congress

    DTIC Science & Technology

    2016-10-25

    proposed plan include the following: • GD/EB is to be the prime contractor for designing and building Ohio replacement boats; • HII/NNS is to be a...boats, and HII/NNS would receive 22%-23%; • GD/EB is to continue as prime contractor for the Virginia-class program, but to help balance out projected...execute this strategy, GDEB has been selected as the prime contractor for OR with the responsibilities to deliver the twelve OR [Ohio replacement

  19. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of the drive system with an elastic joint. The signals estimated by NNs are used in the control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variables estimation depends on the generalization properties of NNs. A short review of optimization methods of the NN is presented. Two techniques typical for regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also changed parameters of the drive system. The simulation results are verified in a laboratory setup.
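
    For readers unfamiliar with the second pruning method mentioned above, the sketch below shows the core of Optimal Brain Damage in its textbook form: weight saliencies are estimated from a diagonal Hessian approximation and the least salient weights are zeroed out. The toy weight matrix, the stand-in "Hessian", and the pruning fraction are illustrative choices, not the estimator design or training procedure used in the paper.

      import numpy as np

      def obd_saliency(weights, diag_hessian):
          """OBD saliency s_i = 0.5 * H_ii * w_i**2 for each weight."""
          return 0.5 * diag_hessian * weights ** 2

      def prune_least_salient(weights, diag_hessian, fraction=0.2):
          """Zero out the given fraction of weights with the smallest saliency."""
          s = obd_saliency(weights, diag_hessian)
          n_prune = int(fraction * weights.size)
          idx = np.argsort(s.ravel())[:n_prune]
          pruned = weights.copy().ravel()
          pruned[idx] = 0.0
          return pruned.reshape(weights.shape)

      # Toy example: random weights and a crude positive "diagonal Hessian" stand-in.
      rng = np.random.default_rng(0)
      w = rng.normal(size=(4, 4))
      h_diag = rng.uniform(0.1, 1.0, size=(4, 4))   # would normally come from second derivatives of the loss
      print(prune_least_salient(w, h_diag, fraction=0.25))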

  20. Molecular Mechanisms of Innate Immune Inhibition by Non-Segmented Negative-Sense RNA Viruses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Srirupa; Basler, Christopher F.; Amarasinghe, Gaya K.

    The host innate immune system serves as the first line of defense against viral infections. Germline-encoded pattern recognition receptors detect molecular patterns associated with pathogens and activate innate immune responses. Of particular relevance to viral infections are those pattern recognition receptors that activate type I interferon responses, which establish an antiviral state. The order Mononegavirales is composed of viruses that possess single-stranded, non-segmented negative-sense (NNS) RNA genomes and are important human pathogens that consistently antagonize signaling related to type I interferon responses. NNS viruses have limited encoding capacity compared to many DNA viruses, and as a likely consequence, most open reading frames encode multifunctional viral proteins that interact with host factors in order to evade host cell defenses while promoting viral replication. In this review, we will discuss the molecular mechanisms of innate immune evasion by select NNS viruses. A greater understanding of these interactions will be critical in facilitating the development of effective therapeutics and viral countermeasures.

  1. A novel constructive-optimizer neural network for the traveling salesman problem.

    PubMed

    Saadatmand-Tarzjan, Mahdi; Khademi, Morteza; Akbarzadeh-T, Mohammad-R; Moghaddam, Hamid Abrishami

    2007-08-01

    In this paper, a novel constructive-optimizer neural network (CONN) is proposed for the traveling salesman problem (TSP). CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to the Kohonen-type self-organizing maps (K-SOMs). Consequently, CONN is composed of a constructive part, which grows the tour, and an optimizer part to optimize it. In the training algorithm, an initial tour is created first and introduced to CONN. Then, it is trained in the constructive phase for adding a number of cities to the tour. Next, the training algorithm switches to the optimizer phase for optimizing the current tour by displacing the tour cities. After convergence in this phase, the training algorithm switches to the constructive phase anew and is continued until all cities are added to the tour. Furthermore, we investigate a relationship between the number of TSP cities and the number of cities to be added in each constructive phase. CONN was tested on nine sets of benchmark TSPs from TSPLIB to demonstrate its performance and efficiency. It performed better than several typical neural networks (NNs), including KNIES_TSP_Local, KNIES_TSP_Global, Budinich's SOM, Co-Adaptive Net, and multivalued Hopfield network as well as computationally comparable variants of the simulated annealing algorithm, in terms of both CPU time and accuracy. Furthermore, CONN converged considerably faster than expanding SOM and evolved integrated SOM and generated shorter tours compared to KNIES_DECOMPOSE. Although CONN is not yet comparable in terms of accuracy with some sophisticated computationally intensive algorithms, it converges significantly faster than they do. Generally speaking, CONN provides the best compromise between CPU time and accuracy among currently reported NNs for TSP.
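
    To make the constructive/optimizer split concrete, the sketch below alternates a simple nearest-neighbour construction step with a 2-opt improvement pass on a random instance. This is a conventional heuristic used only to illustrate the two alternating phases; it is not the CONN network itself, which grows and optimizes the tour with Hopfield- and SOM-like dynamics.

      import numpy as np

      def tour_length(tour, dist):
          return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def construct_step(tour, remaining, dist, batch=5):
          """Constructive phase: append up to `batch` nearest unvisited cities to the tour."""
          for _ in range(min(batch, len(remaining))):
              last = tour[-1]
              nxt = min(remaining, key=lambda c: dist[last, c])
              tour.append(nxt)
              remaining.remove(nxt)

      def two_opt_step(tour, dist):
          """Optimizer phase: one pass of 2-opt edge-exchange improvements."""
          n = len(tour)
          for i in range(1, n - 2):
              for j in range(i + 1, n - 1):
                  if (dist[tour[i - 1], tour[j]] + dist[tour[i], tour[j + 1]]
                          < dist[tour[i - 1], tour[i]] + dist[tour[j], tour[j + 1]]):
                      tour[i:j + 1] = reversed(tour[i:j + 1])

      # Random instance, illustration only.
      rng = np.random.default_rng(0)
      pts = rng.random((30, 2))
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      tour, remaining = [0], set(range(1, len(pts)))
      while remaining:
          construct_step(tour, remaining, dist, batch=5)   # grow the tour
          two_opt_step(tour, dist)                         # then locally optimize it
      print(f"final tour length: {tour_length(tour, dist):.3f}")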

  2. Review of Electromagnetic Frequency (EMF) Safety Program for Homestead ARB, FL

    DTIC Science & Technology

    2012-11-28

    using technique number MSL-7. Calibrations below 1 GHz were performed in a electrical characteristics of the cell and the measured net power...Altimeter (CARA) 1 per F-16 4.2-4.2 GHz 100/10 0.8 2.4 482 AMXS/MXAAS AN/APG68 Fire Control Radar (FCR) 1 per F-16 Classified Classified Classified...A. MPE for Upper Tier [table header: Frequency Range, Electric Field strength - rms (E), Magnetic field strength - rms, Power Density - rms (S), Averaging time (t)]

  3. Hybrid NN/SVM Computational System for Optimizing Designs

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2009-01-01

    A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily oversimplified to fit the scope of this article, an SVM can be characterized as an algorithm that (1) effects a nonlinear mapping of input vectors into a higher-dimensional feature space and (2) involves a dual formulation of governing equations and constraints. One advantageous feature of the SVM approach is that an objective function (which one seeks to minimize to obtain coefficients that define an SVM mathematical model) is convex, so that unlike in the cases of many NN models, any local minimum of an SVM model is also a global minimum.
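
    The sketch below conveys the general idea of optimizing over a learned surrogate of a response function: the response is sampled, a support vector regression model is fitted to the samples, and the cheap surrogate is then minimized. It uses a single scikit-learn SVR rather than the hybrid NN/SVM procedure described in the report, so treat it as a generic illustration of surrogate-based design optimization with hypothetical settings, not the reported method.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.svm import SVR

      # Hypothetical "response function" standing in for an expensive simulation.
      def response(x):
          return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2 + 0.05 * np.sin(8 * x[0])

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 2))            # sampled design points
      y = np.array([response(x) for x in X])

      surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X, y)

      # Minimize the cheap surrogate instead of the expensive response.
      result = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                        x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
      print("surrogate optimum near:", result.x.round(2))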

  4. Child anthropometry data quality from Demographic and Health Surveys, Multiple Indicator Cluster Surveys, and National Nutrition Surveys in the West Central Africa region: are we comparing apples and oranges?

    PubMed Central

    Corsi, Daniel J.; Perkins, Jessica M.; Subramanian, S. V.

    2017-01-01

    Background: There has been limited work comparing survey characteristics and assessing the quality of child anthropometric data from population-based surveys. Objective: To investigate survey characteristics and indicators of quality of anthropometric data in children aged 0–59 months from 23 countries in the West Central Africa region. Methods: Using established methodologies and criteria to examine child age, sex, height, and weight, we conducted a comprehensive assessment and scoring of the quality of anthropometric data collected in 100 national surveys. Results: The Multiple Indicator Cluster Surveys (MICS) and Demographic and Health Surveys (DHS) collected data from a greater number of younger children than older children while the opposite was found for the National Nutrition Surveys (NNS). Missing or implausible height/weight data proportions were 12% and 8% in MICS and DHS compared to 3% in NNS. Average data quality scores were 14 in NNS, 33 in DHS, and 41 in MICS. Conclusions: Although our metric of data quality suggests that data from the NNS appear more consistent and robust, it is equally important to consider its disadvantages related to access and lack of broader socioeconomic information. In comparison, the DHS and MICS are publicly accessible for research and provide socioeconomic context essential for assessing and addressing the burden of undernutrition within and between countries. The strengths and weaknesses of data from these three sources should be carefully considered when seeking to determine the burden of child undernutrition and its variation within countries. PMID:28641057

  5. Force spectroscopy of quadruple H-bonded dimers by AFM: dynamic bond rupture and molecular time-temperature superposition.

    PubMed

    Zou, Shan; Schönherr, Holger; Vancso, G Julius

    2005-08-17

    We report on the application of the time-temperature superposition principle to supramolecular bond-rupture forces on the single-molecule level. The construction of force-loading rate master curves using atomic force microscopy (AFM)-based single-molecule force spectroscopy (SMFS) experiments carried out in situ at different temperatures allows one to extend the limited range of the experimentally accessible loading rates and hence to cross from thermodynamic nonequilibrium to quasi-equilibrium states. The approach is demonstrated for quadruple H-bonded ureido-4[1H]-pyrimidinone (UPy) moieties studied by variable-temperature SMFS in organic media. The unbinding forces of single quadruple H-bonding (UPy)2 complexes, which were identified based on a polymeric spacer strategy, were found to depend on the loading rate in the range of 5 nN/s to 500 nN/s at 301 K in hexadecane. By contrast, these rupture forces were independent of the loading rate from 5 to 200 nN/s at 330 K. These results indicate that the unbinding behavior of individual supramolecular complexes can be directly probed under both thermodynamic nonequilibrium and quasi-equilibrium conditions. On the basis of the time-temperature superposition principle, a master curve was constructed for a reference temperature of 301 K, and the crossover force (from loading-rate independent to -dependent regimes) was determined as approximately 145 pN (at a loading rate of approximately 5.6 nN/s). This approach significantly broadens the accessible loading-rate range and hence provides access to fine details of the potential energy landscape of supramolecular complexes based on SMFS experiments.

  6. Child anthropometry data quality from Demographic and Health Surveys, Multiple Indicator Cluster Surveys, and National Nutrition Surveys in the West Central Africa region: are we comparing apples and oranges?

    PubMed

    Corsi, Daniel J; Perkins, Jessica M; Subramanian, S V

    2017-01-01

    There has been limited work comparing survey characteristics and assessing the quality of child anthropometric data from population-based surveys. To investigate survey characteristics and indicators of quality of anthropometric data in children aged 0-59 months from 23 countries in the West Central Africa region. Using established methodologies and criteria to examine child age, sex, height, and weight, we conducted a comprehensive assessment and scoring of the quality of anthropometric data collected in 100 national surveys. The Multiple Indicator Cluster Surveys (MICS) and Demographic and Health Surveys (DHS) collected data from a greater number of younger children than older children while the opposite was found for the National Nutrition Surveys (NNS). Missing or implausible height/weight data proportions were 12% and 8% in MICS and DHS compared to 3% in NNS. Average data quality scores were 14 in NNS, 33 in DHS, and 41 in MICS. Although our metric of data quality suggests that data from the NNS appear more consistent and robust, it is equally important to consider its disadvantages related to access and lack of broader socioeconomic information. In comparison, the DHS and MICS are publicly accessible for research and provide socioeconomic context essential for assessing and addressing the burden of undernutrition within and between countries. The strengths and weaknesses of data from these three sources should be carefully considered when seeking to determine the burden of child undernutrition and its variation within countries.

  7. Some linguistic and pragmatic considerations affecting science reporting in English by non-native speakers of the language

    PubMed Central

    2012-01-01

    Approximately 50% of publications in English peer reviewed journals are contributed by non-native speakers (NNS) of the language. Basic thought processes are considered to be universal, yet there are differences in thought patterns and particularly in discourse management of writers with different linguistic and cultural backgrounds. The study highlights some areas of potential incompatibility in native and NNS processing of English scientific papers. Principles and conventions in generating academic discourse are considered in terms of frequently occurring failures of NNS to meet expectations of editors, reviewers, and readers. Major problem areas concern organization and flow of information, principles of cohesion and clarity, cultural constraints, especially those of politeness and negotiability of ideas, and the complicated area of English modality pragmatics. The aim of the paper is to sensitize NNS authors of English academic reports to problem areas of discourse processing which are stumbling blocks, often affecting acceptance of manuscripts. The problems discussed are essential for acquiring pragmalinguistic and sociocultural competence in producing effective communication. PMID:23118596

  8. Approximate N-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System.

    PubMed

    Johnson, Marcus; Kamalapurkar, Rushikesh; Bhasin, Shubhendu; Dixon, Warren E

    2015-08-01

    An approximate online equilibrium solution is developed for an N-player nonzero-sum game subject to continuous-time nonlinear unknown dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier structure is used, wherein a robust dynamic neural network is used to asymptotically identify the uncertain system with additive disturbances, and a set of critic and actor NNs are used to approximate the value functions and equilibrium policies, respectively. The weight update laws for the actor neural networks (NNs) are generated using a gradient-descent method, and the critic NNs are generated by least square regression, which are both based on the modified Bellman error that is independent of the system dynamics. A Lyapunov-based stability analysis shows that uniformly ultimately bounded tracking is achieved, and a convergence analysis demonstrates that the approximate control policies converge to a neighborhood of the optimal solutions. The actor, critic, and identifier structures are implemented in real time continuously and simultaneously. Simulations on two- and three-player games illustrate the performance of the developed method.

  9. European national healthy city networks: the impact of an elite epistemic community.

    PubMed

    Heritage, Zoë; Green, Geoff

    2013-10-01

    National healthy cities networks (NNs) were created 20 years ago to support the development of healthy cities within the WHO Europe Region. Using the concept of epistemic communities, the evolution and impact of NNs are considered, as is their future development. Healthy cities national networks are providing information, training and support to member cities. In many cases, they are also involved in supporting national public health policy development and disseminating healthy city principles to other local authorities. National networks are a fragile but extremely valuable resource for sharing public health knowledge.

  10. Risk Reduction and Training using Simulation Based Tools - 12180

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Irin P.

    2012-07-01

    Process Modeling and Simulation (M and S) has been used for many years in manufacturing and similar domains, as part of an industrial engineer's tool box. Traditionally, however, this technique has been employed in small, isolated projects where models were created from scratch, often making it time and cost prohibitive. Newport News Shipbuilding (NNS) has recognized the value of this predictive technique and what it offers in terms of risk reduction, cost avoidance and on-schedule performance of highly complex work. To facilitate implementation, NNS has been maturing a process and the software to rapidly deploy and reuse M and S based decision support tools in a variety of environments. Some examples of successful applications by NNS of this technique in the nuclear domain are a reactor refueling simulation based tool, a fuel handling facility simulation based tool and a tool for dynamic radiation exposure tracking. The next generation of M and S applications includes expanding simulation based tools into immersive and interactive training. The applications discussed here take a tool box approach to creating simulation based decision support tools for maximum utility and return on investment. This approach involves creating a collection of simulation tools that can be used individually or integrated together for a larger application. The refueling simulation integrates with the fuel handling facility simulation to understand every aspect and dependency of the fuel handling evolutions. This approach translates nicely to other complex domains where real system experimentation is not feasible, such as nuclear fuel lifecycle and waste management. Similar concepts can also be applied to different types of simulation techniques. For example, a process simulation of liquid waste operations may be useful to streamline and plan operations, while a chemical model of the liquid waste composition is an important tool for making decisions with respect to waste disposition. Integrating these tools into a larger virtual system provides a tool for making larger strategic decisions. The key to integrating and creating these virtual environments is the software and the process used to build them. Although important steps in the direction of using simulation based tools for the nuclear domain, the applications described here represent only a small cross section of possible benefits. The next generation of applications will, likely, focus on situational awareness and adaptive planning. Situational awareness refers to the ability to visualize in real time the state of operations. Some useful tools in this area are Geographic Information Systems (GIS), which help monitor and analyze geographically referenced information. Combined with such situational awareness capability, simulation tools can serve as the platform for adaptive planning tools. These are the tools that allow the decision maker to react to the changing environment in real time by synthesizing massive amounts of data into easily understood information. For the nuclear domains, this may mean creation of Virtual Nuclear Systems, from Virtual Waste Processing Plants to Virtual Nuclear Reactors. (authors)

  11. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.
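
    For illustration only, the sketch below shows the general flavour of a clustered NN ensemble with distance-based adaptive weights. It assumes scikit-learn is available, substitutes plain k-means for the fuzzy clustering, and omits the GA refinement of cluster centers described above; all data, names and settings are hypothetical rather than the authors' configuration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(500, 3))                # stand-in predictors (e.g., meteorological drivers)
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # stand-in carbon-flux target

    # 1) Partition the training data into clusters (overlapping fuzzy clusters in the paper; disjoint here).
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # 2) Train one member NN per cluster.
    members = []
    for k in range(km.n_clusters):
        idx = km.labels_ == k
        nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=k).fit(X[idx], y[idx])
        members.append(nn)

    def ensemble_predict(Xq):
        """Combine member NNs with adaptive weights ~ inverse distance to each cluster center."""
        d = np.linalg.norm(Xq[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
        w = 1.0 / (d + 1e-9)
        w /= w.sum(axis=1, keepdims=True)
        preds = np.column_stack([m.predict(Xq) for m in members])
        return (w * preds).sum(axis=1)

    print("ensemble MSE:", np.mean((ensemble_predict(X) - y) ** 2))
    ```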

  12. Construction of diabatic energy surfaces for LiFH with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Guan, Yafu; Fu, Bina; Zhang, Dong H.

    2017-12-01

    A new set of diabatic potential energy surfaces (PESs) for LiFH is constructed with artificial neural networks (NNs). The adiabatic PESs of the ground state and the first excited state are directly fitted with NNs. Meanwhile, the adiabatic-to-diabatic transformation (ADT) angles (mixing angles) are obtained by simultaneously fitting energy difference and interstate coupling gradients. No prior assumptions of the functional form of ADT angles are used before fitting, and the ab initio data including energy difference and interstate coupling gradients are well reproduced. Converged dynamical results show remarkable differences between adiabatic and diabatic PESs, which suggests the significance of non-adiabatic processes.
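
    As a point of reference (not the authors' code), the standard two-state adiabatic-to-diabatic transformation that such a mixing-angle fit feeds into can be written down directly. The function below assumes fitted adiabatic energies V1, V2 and an ADT angle theta at a given geometry; the numerical inputs are hypothetical.

    ```python
    import numpy as np

    def diabatic_matrix(v1, v2, theta):
        """Two-state diabatization: rotate the adiabatic energies (v1 <= v2) by the
        ADT mixing angle theta to obtain the diabatic potential matrix
            W11 = v1*cos^2(theta) + v2*sin^2(theta)
            W22 = v1*sin^2(theta) + v2*cos^2(theta)
            W12 = W21 = (v2 - v1)*sin(theta)*cos(theta)
        NN fits of the kind described above would supply v1, v2 and theta as functions of geometry."""
        c, s = np.cos(theta), np.sin(theta)
        w11 = v1 * c**2 + v2 * s**2
        w22 = v1 * s**2 + v2 * c**2
        w12 = (v2 - v1) * s * c
        return np.array([[w11, w12], [w12, w22]])

    W = diabatic_matrix(-0.10, 0.05, 0.3)   # hypothetical energies (hartree) and angle (rad)
    print(W)
    print(np.linalg.eigvalsh(W))            # diagonalizing W recovers the adiabatic energies [-0.10, 0.05]
    ```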

  13. Performance Evaluation of 14 Neural Network Architectures Used for Predicting Heat Transfer Characteristics of Engine Oils

    NASA Astrophysics Data System (ADS)

    Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.

    2012-01-01

    This paper reports the results of a comprehensive study that aimed at identifying the best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs) that were derived from 14 different NN architectures were evaluated for four different prediction cases. The NNs were trained on experimental datasets obtained for five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of the smoothness of the predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered, either for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analysis, such as R2, mean error, standard deviation, and T and slope tests, is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves, either visually or by plotting the slopes of these curves.
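
    The statistical and smoothness checks mentioned above can be mocked up in a few lines; this is a generic illustration with made-up data, not the study's evaluation code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    q_true = np.linspace(5, 500, 60)                      # hypothetical boiling-curve values (kW/m^2)
    q_pred = q_true * 1.02 + rng.normal(0, 5, size=60)    # hypothetical NN predictions

    # Descriptive statistics (necessary but, as the study notes, not sufficient).
    resid = q_pred - q_true
    r2 = 1 - np.sum(resid**2) / np.sum((q_true - q_true.mean())**2)
    slope, intercept = np.polyfit(q_true, q_pred, 1)
    print(f"R2={r2:.4f}  mean error={resid.mean():.2f}  std={resid.std(ddof=1):.2f}  slope={slope:.3f}")

    # Smoothness check: large, sign-changing second differences flag a non-physical predicted curve.
    second_diff = np.diff(q_pred, n=2)
    print("max |second difference|:", np.abs(second_diff).max())
    ```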

  14. Short-Term Effects of Pacifier Texture on NNS in Neurotypical Infants

    PubMed Central

    Oder, Austin L.; Stalling, David L.; Barlow, Steven M.

    2013-01-01

    The dense representation of trigeminal mechanosensitive afferents in the lip vermilion, anterior tongue, intraoral mucosa, and temporomandibular joint allows the infant's orofacial system to encode a wide range of somatosensory experiences during the critical period associated with feeding development. Our understanding of how this complex sensorium processes texture is very limited in adults, and the putative role of texture encoding in the infant is unknown. The purpose of this study was to examine the short-term effects of a novel textured pacifier experience in healthy term infants (N = 28). Nonnutritive suck (NNS) compression pressure waveforms were digitized in real time using a variety of custom-molded textured pacifiers varying in spatial array density of touch domes. MANCOVA, adjusted for postmenstrual age at test and sex, revealed that infants exhibited an increase in NNS burst attempts at the expense of a degraded suck burst structure with the textured pacifiers, suggesting that the suck central pattern generator (sCPG) is significantly disrupted and reorganized by this novel orocutaneous experience. The current findings provide new insight into oromotor control as a function of the oral somatosensory environment in neurotypically developing infants. PMID:23737804

  15. Neural-network-based state feedback control of a nonlinear discrete-time system in nonstrict feedback form.

    PubMed

    Jagannathan, Sarangapani; He, Pingan

    2008-12-01

    In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with tracking error as the feedback variable whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third critic NN approximates certain strategic utility function and its output is employed for tuning action NN weights in order to attain the near-optimal control action. Both the NN control methods present a well-defined controller design and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase and the NN weights can be initialized at zero or random. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting the closed-loop stability.

  16. Consensus-based distributed cooperative learning from closed-loop neural control systems.

    PubMed

    Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang

    2015-02-01

    In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel distributed cooperative learning (DCL) control scheme by establishing the communication topology among adaptive laws of NN weights to share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of NNs can converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary it is shown that the conclusion on the deterministic learning still holds in the decentralized adaptive neural control scheme where, however, the estimated weights of NNs just converge to small neighborhoods of the optimal values along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the control schemes proposed in this paper.

  17. First-row transition metal complexes of ENENES ligands: the ability of the thioether donor to impact the coordination chemistry

    DOE PAGES

    Dub, Pavel A.; Scott, Brian L.; Gordon, John C.

    2015-12-21

    We report that the reactions of two variants of ENENES ligands, E(CH2)2NH(CH2)2SR, where E = 4-morpholinyl and R = Ph (a) or Bn (b), with MCl2 (M = Mn, Fe, Co, Ni and Cu) in coordinating solvents (MeCN, EtOH) afford isolable complexes, whose magnetic susceptibility measurements suggest paramagnetism and a high-spin formulation. X-Ray diffraction studies of available crystals show that the ligand coordinates to the metal in either a bidentate κ2[N,N'] or tridentate κ3[N,N',S] fashion, depending on the nature of the ligand and/or the identity of the metal atom. In the case of the less basic SPh moiety, a bidentate coordination mode was identified for harder metals (Mn, Fe), whereas a tridentate coordination mode was identified in the case of the more basic SBn moiety with softer metals (Ni, Cu). In the intermediate case of Co, ligands a and b coordinate via κ2[N,N'] and κ3[N,N',S] coordination modes, which can be conveniently predicted by DFT calculations. Finally, for the softest metal (Cu), ligand a coordinates in a κ3[N,N',S] fashion.

  18. Anticancer effects of Bilberry anthocyanins compared with NutraNanoSphere encapsulated Bilberry anthocyanins.

    PubMed

    Thibado, Seth P; Thornthwaite, Jerry T; Ballard, Thomas K; Goodman, Brandon T

    2018-02-01

    Rapidly accumulating laboratory and clinical research evidence indicates that anthocyanins exhibit anticancer activity and the evaluation of bilberry anthocyanins as chemo-preventive agents is progressing. It has previously been demonstrated that anthocyanins upregulate tumor suppressor genes, induce apoptosis in cancer cells, repair and protect genomic DNA integrity, which is important in reducing age-associated oxidative stress, and improve neuronal and cognitive brain function. Bilberry anthocyanins have pronounced health effects, even though they have a low bioavailability. To increase the bioavailability, Bilberry was encapsulated in 5.5 nm diameter liposomal micelles, called NutraNanoSpheres (NNS), at a concentration of 2.5 mg/50 µl [25% (w/w) anthocyanins]. These Bilberry NNS were used to study the apoptotic/cytotoxic effects on K562 Human Erythroleukemic cancer cells. Flow cytometric fluorescent quantification of the uptake of propidium iodide in a special cell viability formulation into dead K562 cells was used to determine the effects of Bilberry on the viability of K562 cells. The concentrations of Bilberry that demonstrated the greatest levels of percentage inhibition, relative to the control populations, were biphasic, revealing a 60-70% inhibition between 0.018-1.14 mg/ml (n=6) and 60% inhibition at 4 mg/ml. The lowest percentage inhibition (30%) occurred at 2 mg/ml. The lethal dose 50 was determined to be 0.01-0.04 mg/ml of Bilberry per 10^5 K562 cells at 72 h of cell culture exposure. At 48 h incubation, the highest percentage of inhibition was only 27%, suggesting involvement of a long-term apoptotic event. These levels, which demonstrated direct cytotoxic effects, were 8-40 times lower than levels required for Bilberry that is not encapsulated. The increase in bioavailability with the Bilberry NNS and its water solubility demonstrated the feasibility of using Bilberry NNS in cancer patient clinical trials.

  19. Anticancer effects of Bilberry anthocyanins compared with NutraNanoSphere encapsulated Bilberry anthocyanins

    PubMed Central

    Thibado, Seth P.; Thornthwaite, Jerry T.; Ballard, Thomas K.; Goodman, Brandon T.

    2018-01-01

    Rapidly accumulating laboratory and clinical research evidence indicates that anthocyanins exhibit anticancer activity and the evaluation of bilberry anthocyanins as chemo-preventive agents is progressing. It has previously been demonstrated that anthocyanins upregulate tumor suppressor genes, induce apoptosis in cancer cells, repair and protect genomic DNA integrity, which is important in reducing age-associated oxidative stress, and improve neuronal and cognitive brain function. Bilberry anthocyanins have pronounced health effects, even though they have a low bioavailability. To increase the bioavailability, Bilberry was encapsulated in 5.5 nm diameter liposomal micelles, called NutraNanoSpheres (NNS), at a concentration of 2.5 mg/50 µl [25% (w/w) anthocyanins]. These Bilberry NNS were used to study the apoptotic/cytotoxic effects on K562 Human Erythroleukemic cancer cells. Flow cytometric fluorescent quantification of the uptake of propidium iodide in a special cell viability formulation into dead K562 cells was used to determine the effects of Bilberry on the viability of K562 cells. The concentrations of Bilberry that demonstrated the greatest levels of percentage inhibition, relative to the control populations, were biphasic, revealing a 60–70% inhibition between 0.018–1.14 mg/ml (n=6) and 60% inhibition at 4 mg/ml. The lowest percentage inhibition (30%) occurred at 2 mg/ml. The lethal dose 50 was determined to be 0.01–0.04 mg/ml of Bilberry per 10^5 K562 cells at 72 h of cell culture exposure. At 48 h incubation, the highest percentage of inhibition was only 27%, suggesting involvement of a long-term apoptotic event. These levels, which demonstrated direct cytotoxic effects, were 8–40 times lower than levels required for Bilberry that is not encapsulated. The increase in bioavailability with the Bilberry NNS and its water solubility demonstrated the feasibility of using Bilberry NNS in cancer patient clinical trials. PMID:29399357

  20. Handling limited datasets with neural networks in medical applications: A small-data approach.

    PubMed

    Shaikhina, Torgyn; Khovanova, Natalia A

    2017-01-01

    Single-centre studies in the medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for application of artificial neural networks (NNs) for regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the methods of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to the state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows for an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with an 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
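
    A heavily simplified sketch of the two ideas named above (multiple runs and surrogate-data checking), using scikit-learn and synthetic data; the dataset, architecture, metrics and thresholds are all assumptions, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    X = rng.normal(size=(56, 4))                          # a small dataset, as in the concrete example above
    y = 2*X[:, 0] - X[:, 1]**2 + 0.1*rng.normal(size=56)  # stand-in "compressive strength" target

    def fit_score(X, y, seed):
        nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed).fit(X, y)
        return nn.score(X, y)                             # in-sample R^2, for illustration only

    # Multiple runs: average out the sporadic fluctuations of individual small-data NNs.
    real_scores = [fit_score(X, y, s) for s in range(10)]

    # Surrogate data: retrain on permuted targets; genuine structure should clearly beat this null.
    null_scores = [fit_score(X, rng.permutation(y), 100 + s) for s in range(10)]

    print("median R^2, real data :", np.median(real_scores))
    print("median R^2, surrogates:", np.median(null_scores))
    ```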

  1. Chromosomal abnormalities in azoospermic and non-azoospermic infertile men: numbers needed to be screened to prevent adverse pregnancy outcomes.

    PubMed

    Dul, E C; van Echten-Arends, J; Groen, H; Dijkhuizen, T; Land, J A; van Ravenswaaij-Arts, C M A

    2012-09-01

    How many infertile men who wish to conceive need to be screened for chromosomal abnormalities to prevent one miscarriage or the birth of one child with congenital anomalies (CAs)? In azoospermic men, the prevalence of chromosomal abnormalities is 15.2% and the number needed to be screened (NNS; minimum-maximum estimate) for a miscarriage is 80-88 and for a child with CAs is 790-3951. The prevalence of chromosomal abnormalities in non-azoospermic men is 2.3% and the NNS are 315-347 and 2543-12 723, respectively. Guidelines advise the screening of infertile men for chromosomal abnormalities to prevent miscarriages and children with congenital abnormalities, but no studies have been published on the effectiveness of this screening strategy. Retrospective cohort study of 1223 infertile men between 1994 and 2007. Men with azoospermia and men eligible for ICSI treatment visiting a university hospital fertility clinic in The Netherlands who underwent chromosomal analysis between 1994 and 2007 were identified retrospectively in a registry. Only cases of which at least one sperm analysis was available were included. Data were collected by chart review, with a follow-up of pregnancies and their outcomes until 2010. The chromosomal abnormalities were categorized according to their risk of unbalanced offspring, i.e. miscarriage and/or child with CAs. Multi-level analysis was used to estimate the impact of chromosomal abnormalities on the outcome of pregnancies in the different subgroups of our cohort. NNS for miscarriages and children with CAs were calculated based on data from our cohort and data published in the literature. A chromosomal abnormality was found in 12 of 79 men with azoospermia (15.2%) and in 26 of 1144 non-azoospermic men (2.3%). The chromosomal abnormalities were categorized based on the literature, into abnormalities with and abnormalities without increased risk for miscarriage and/or child with CAs. In our study group, there was no statistically significant difference between the subgroups with and without increased risk respectively, regarding the frequency of children born with CAs (1/20; 5.0% versus 1/14; 7.1%), miscarriage (9/20; 45.0% versus 2/14; 14.3%) or unaffected liveborn children (9/20; 45.0% versus 9/14; 64.3%). The prevalence of chromosomal abnormalities with a theoretically increased risk of unbalanced progeny was 1.0% in non-azoospermic men and 3.8% in men with azoospermia. For the calculation of the NNS, the risk of an adverse pregnancy outcome in our cohort was compared with the incidence ranges of miscarriage and children with CAs in the general population. The number of azoospermic men that needs to be screened to prevent one miscarriage (80-88) or one child with CAs (790-3951) was considerably lower compared with the NNS in the non-azoospermic group (315-347 and 2543-12 723, respectively). The prevalence of chromosomal abnormalities in infertile men is low, and although we included 1223 men, our conclusions are based on a small number (38) of abnormal karyotypes. As there are no large series on outcomes of pregnancies in infertile men with chromosomal abnormalities, our conclusions had to be partly based on assumptions derived from the literature. Based on the NNS calculated in our study, screening for chromosomal abnormalities is recommended in all azoospermic men. In non-azoospermic infertile men, screening might be limited to men with an additional risk factor (e.g. 
a history of recurrent miscarriage or a positive family history for recurrent miscarriage or children with CAs). The NNS can be used in future cost-effectiveness studies and the evaluation of current guidelines on karyotyping infertile men.
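
    As a rough, purely illustrative way to see how a number needed to screen of this kind arises, one can take the reciprocal of the product of carrier prevalence and the chance that detection averts one adverse outcome. The figures below are hypothetical; the published estimates rest on additional literature-based assumptions and patient-level data.

    ```python
    def number_needed_to_screen(prevalence_risk_carrier, events_prevented_per_carrier):
        """NNS ~ how many men must be karyotyped so that detected carriers account for
        one prevented adverse pregnancy outcome. Both arguments are illustrative
        probabilities, not values taken from the study."""
        return 1.0 / (prevalence_risk_carrier * events_prevented_per_carrier)

    # Hypothetical example: 3.8% of azoospermic men carry a risk-conferring abnormality and
    # detection is assumed to avert a miscarriage in roughly 1 of 3 such cases.
    print(round(number_needed_to_screen(0.038, 1/3)))   # ~79, the same order as the 80-88 reported
    ```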

  2. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural-network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of the PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the structure of the designed neural controllers is much simpler than in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  3. Corrected Position Estimation in PET Detector Modules With Multi-Anode PMTs Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aliaga, R. J.; Martinez, J. D.; Gadea, R.; Sebastia, A.; Benlloch, J. M.; Sanchez, F.; Pavon, N.; Lerche, Ch.

    2006-06-01

    This paper studies the use of Neural Networks (NNs) for estimating the position of impinging photons in gamma ray detector modules for PET cameras based on continuous scintillators and Multi-Anode Photomultiplier Tubes (MA-PMTs). The detector under study is composed of a 49×49×10 mm^3 continuous slab of LSO coupled to a flat panel H8500 MA-PMT. Four digitized signals from a charge division circuit, which collects currents from the 8×8 anode matrix of the photomultiplier, are used as inputs to the NN, thus reducing drastically the number of electronic channels required. We have simulated the computation of the position for 511 keV gamma photons impacting perpendicularly to the detector surface. Thus, we have performed a thorough analysis of the NN architecture and training procedures in order to achieve the best results in terms of spatial resolution and bias correction. Results obtained using the GEANT4 simulation toolkit show a resolution of 1.3 mm/1.9 mm FWHM at the center/edge of the detector and less than 1 mm of systematic error in the position near the edges of the scintillator. The results confirm that NNs can partially model and correct the non-uniform detector response using only the position-weighted signals from a simple 2D DPC circuit. Linearity degradation for oblique incidence is also investigated. Finally, the NN can be implemented in hardware for parallel real time corrected Line-of-Response (LOR) estimation. Results on resource occupancy and throughput in an FPGA are presented.

  4. Early neurophysiological indices of second language morphosyntax learning

    PubMed Central

    Hanna, Jeff; Shtyrov, Yury; Williams, John; Pulvermüller, Friedemann

    2016-01-01

    Humans show variable degrees of success in acquiring a second language (L2). In many cases, morphological and syntactic knowledge remain deficient, although some learners succeed in reaching nativelike levels, even if they begin acquiring their L2 relatively late. In this study, we use psycholinguistic, online language proficiency tests and a neurophysiological index of syntactic processing, the syntactic mismatch negativity (sMMN) to local agreement violations, to compare behavioural and neurophysiological markers of grammar processing between native speakers (NS) of English and non-native speakers (NNS). Variable grammar proficiency was measured by psycholinguistic tests. When NS heard ungrammatical word sequences lacking agreement between subject and verb (e.g. *we kicks), the MMN was enhanced compared with syntactically legal sentences (e.g. he kicks). More proficient NNS also showed this difference, but less proficient NNS did not. The main cortical sources of the MMN responses were localised in bilateral superior temporal areas, where, crucially, source strength of grammar-related neuronal activity correlated significantly with grammatical proficiency of individual L2 speakers as revealed by the psycholinguistic tests. As our results show similar, early MMN indices to morpho-syntactic agreement violations among both native speakers and non-native speakers with high grammar proficiency, they appear consistent with the use of similar brain mechanisms for at least certain aspects of L1 and L2 grammars. PMID:26752451

  5. Low cost fabrication development for oxide dispersion strengthened alloy vanes

    NASA Technical Reports Server (NTRS)

    Perkins, R. J.; Bailey, P. G.

    1978-01-01

    Viable processes were developed for secondary working of oxide dispersion strengthened (ODS) alloys to near-net shapes (NNS) for aircraft turbine vanes. These processes were shown capable of producing required microstructure and properties for vane applications. Material cost savings of 40 to 50% are projected for the NNS process over the current procedures which involve machining from rectangular bar. Additional machining cost savings are projected. Of three secondary working processes evaluated, directional forging and plate bending were determined to be viable NNS processes for ODS vanes. Directional forging was deemed most applicable to high pressure turbine (HPT) vanes with their large thickness variations while plate bending was determined to be most cost effective for low pressure turbine (LPT) vanes because of their limited thickness variations. Since the F101 LPT vane was selected for study in this program, development of plate bending was carried through to establishment of a preliminary process. Preparation of ODS alloy plate for bending was found to be a straightforward process using currently available bar stock, provided that the capability for reheating between roll passes is available. Advanced ODS-NiCrAl and ODS-FeCrAl alloys were utilized in this program. Workability of all alloys was adequate for directional forging and plate bending, but only the ODS-FeCrAl had adequate workability for shaped preform extrusion.

  6. Early neurophysiological indices of second language morphosyntax learning.

    PubMed

    Hanna, Jeff; Shtyrov, Yury; Williams, John; Pulvermüller, Friedemann

    2016-02-01

    Humans show variable degrees of success in acquiring a second language (L2). In many cases, morphological and syntactic knowledge remain deficient, although some learners succeed in reaching nativelike levels, even if they begin acquiring their L2 relatively late. In this study, we use psycholinguistic, online language proficiency tests and a neurophysiological index of syntactic processing, the syntactic mismatch negativity (sMMN) to local agreement violations, to compare behavioural and neurophysiological markers of grammar processing between native speakers (NS) of English and non-native speakers (NNS). Variable grammar proficiency was measured by psycholinguistic tests. When NS heard ungrammatical word sequences lacking agreement between subject and verb (e.g. *we kicks), the MMN was enhanced compared with syntactically legal sentences (e.g. he kicks). More proficient NNS also showed this difference, but less proficient NNS did not. The main cortical sources of the MMN responses were localised in bilateral superior temporal areas, where, crucially, source strength of grammar-related neuronal activity correlated significantly with grammatical proficiency of individual L2 speakers as revealed by the psycholinguistic tests. As our results show similar, early MMN indices to morpho-syntactic agreement violations among both native speakers and non-native speakers with high grammar proficiency, they appear consistent with the use of similar brain mechanisms for at least certain aspects of L1 and L2 grammars. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    NASA Astrophysics Data System (ADS)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community, regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated good performance, though not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology in the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are still necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
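
    A compact, generic constrained GA for peak fitting is sketched below purely to make the idea concrete; it fits a single Gaussian peak under box constraints to a synthetic noisy signal and is not the adaptively defined fitness function of the paper. Population size, mutation scale and bounds are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 200)
    signal = 2.0 * np.exp(-((t - 0.4) / 0.05) ** 2) + 0.2 * rng.normal(size=t.size)  # noisy synthetic peak

    bounds = np.array([[0.1, 5.0], [0.0, 1.0], [0.01, 0.2]])   # amplitude, position, width constraints

    def fitness(p):
        a, mu, w = p
        model = a * np.exp(-((t - mu) / w) ** 2)
        return -np.mean((model - signal) ** 2)                 # higher is better

    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
    for gen in range(200):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-15:]]                  # keep the best quarter
        children = elite[rng.integers(0, 15, size=(45,))] + rng.normal(0, 0.02, size=(45, 3))
        pop = np.clip(np.vstack([elite, children]), bounds[:, 0], bounds[:, 1])  # enforce the constraints

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("estimated amplitude, position, width:", np.round(best, 3))
    ```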

  8. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel computationally efficient adaptive nonlinear equalizer based on the combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate linear and nonlinear distortions in nonlinear communication channels. This convex nonlinear combination improves the convergence speed while retaining the lower steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than traditional neural networks (NNs) and requires less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived based on the modified least mean square (MLMS) algorithm. Simulation results clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of eigenvalue ratio (EVR) of the input correlation matrix are presented.
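
    To make the combination idea concrete, here is a bare-bones sketch (not the authors' algorithm): an adaptive FIR filter and a trigonometric functional-link expansion filter run in parallel, both updated by plain LMS, and their outputs are mixed through a sigmoid-parameterised convex weight that is itself adapted. The channel model, step sizes and filter lengths are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, M = 5000, 8
    x = rng.choice([-1.0, 1.0], size=N)                       # transmitted symbols
    chan = np.convolve(x, [0.9, 0.4, 0.2])[:N]
    d = x                                                     # desired output: the transmitted symbol
    r = chan + 0.1 * chan**2 + 0.05 * rng.normal(size=N)      # hypothetical nonlinear channel + noise

    def flann_features(u):
        """Trigonometric functional-link expansion of the tap vector u (no hidden layer)."""
        return np.concatenate([u, np.sin(np.pi * u), np.cos(np.pi * u)])

    w_fir = np.zeros(M)
    w_fl = np.zeros(3 * M)
    a = 0.0                                                   # mixing parameter, lam = sigmoid(a)
    mu_fir, mu_fl, mu_a = 0.01, 0.01, 0.5

    for n in range(M, N):
        u = r[n - M:n][::-1]
        y1, y2 = w_fir @ u, w_fl @ flann_features(u)
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y1 + (1.0 - lam) * y2
        e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
        w_fir += mu_fir * e1 * u                              # component filters adapt on their own errors
        w_fl += mu_fl * e2 * flann_features(u)
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)         # the convex weight adapts on the overall error
        a = np.clip(a, -4.0, 4.0)

    print("final mixing weight lam =", round(1 / (1 + np.exp(-a)), 3))
    ```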

  9. Interactive learning in 2×2 normal form games by neural network agents

    NASA Astrophysics Data System (ADS)

    Spiliopoulos, Leonidas

    2012-11-01

    This paper models the learning process of populations of randomly rematched tabula rasa neural network (NN) agents playing randomly generated 2×2 normal form games of all strategic classes. This approach has greater external validity than the existing models in the literature, each of which is usually applicable to narrow subsets of classes of games (often a single game) and/or to fixed matching protocols. The learning prowess of NNs with hidden layers was impressive as they learned to play unique pure strategy equilibria with near certainty, adhered to principles of dominance and iterated dominance, and exhibited a preference for risk-dominant equilibria. In contrast, perceptron NNs were found to perform significantly worse than hidden layer NN agents and human subjects in experimental studies.

  10. Association between breastfeeding duration, non-nutritive sucking habits and dental arch dimensions in deciduous dentition: a cross-sectional study.

    PubMed

    Agarwal, Shiv Shankar; Nehra, Karan; Sharma, Mohit; Jayan, Balakrishna; Poonia, Anish; Bhattal, Hiteshwar

    2014-10-31

    This cross-sectional retrospective study was conducted to determine association between breastfeeding duration, non-nutritive sucking habits, dental arch transverse diameters, posterior crossbite and anterior open bite in deciduous dentition. 415 children (228 males and 187 females), 4 to 6 years old, from a mixed Indian population were clinically examined. Based on written questionnaire answered by parents, children were divided into two groups: group 1 (breastfed for <6 months (n = 158)) and group 2 (breastfed for ≥6 months (n = 257)). The associations were analysed using chi-square test (P < 0.05 taken as statistically significant). Odds ratio (OR) was calculated to determine the strength of associations tested. Multivariate logistic regression analysis was done for obtaining independent predictors of posterior crossbite and maxillary and mandibular IMD (Inter-molar distance) and ICD (Inter-canine distance). Non-nutritive sucking (NNS) was present in 15.18% children (20.3% in group 1 as compared to 12.1% in group 2 (P = 0.024)). The average ICD and IMD in maxilla and average IMD in mandible were significantly higher among group 2 as compared to group 1 (P < 0.01). In mandible, average ICD did not differ significantly between the two groups (P = 0.342). The distribution of anterior open bite did not differ significantly between the two groups (P = 0.865). The distribution of posterior crossbite was significantly different between the two groups (P = 0.001). OR assessment (OR = 1.852) revealed that group 1 had almost twofold higher prevalence of NNS habits than group 2. Multivariate logistic regression analysis revealed that the first group had independently fourfold increased risk of developing crossbite compared to the second group (OR = 4.3). Multivariate linear regression analysis also revealed that age and breastfeeding duration were the most significant determinants of ICD and IMD. An increased prevalence of NNS in the first group suggests that NNS is a dominant variable in the association between breastfeeding duration and reduced intra-arch transverse diameters which leads to increased prevalence of posterior crossbites as seen in our study. Mandibular inter-canine width is however unaffected due to a lowered tongue posture seen in these children.

  11. A comparative DFT study on CO oxidation reaction over Si-doped BC2N nanosheet and nanotube

    NASA Astrophysics Data System (ADS)

    Nematollahi, Parisa; Neyts, Erik C.

    2018-05-01

    In this study, we performed density functional theory (DFT) calculations to investigate different reaction mechanisms of CO oxidation catalyzed by Si-atom-embedded defective BC2N nanostructures, together with an analysis of their structural and electronic properties. The structures of all the complexes are optimized and characterized by frequency calculations at the M062X/6-31G∗ computational level. Also, the electronic structures and thermodynamic parameters of adsorbed CO and O2 molecules over Si-doped BC2N nanostructures are examined in detail. Moreover, to investigate the curvature effect on the CO oxidation reaction, all the adsorption and CO oxidation reactions on a finite-sized armchair (6,6) Si-BC2NNT are also studied. Our results indicate that there can be two possible pathways for the CO oxidation with the O2 molecule: O2(g) + CO(g) → O2(ads) + CO(ads) → CO2(g) + O(ads) and O(ads) + CO(g) → CO2(g). The first reaction proceeds via the Langmuir-Hinshelwood (LH) mechanism while the second goes through the Eley-Rideal (ER) mechanism. On the other hand, by increasing the tube diameter, the energy barrier increases due to the strong adsorption energy of the O2 molecule which is related to its dissociation over the tube surface. Our calculations indicate that the two-step energy barrier of the oxidation reaction over Si-BC2NNS is less than that over the Si-BC2NNT. Hence, Si-BC2NNS may serve as an efficient and highly activated substrate for CO oxidation rather than the (4,4) Si-BC2NNT.

  12. Observer-Based Adaptive Neural Network Control for Nonlinear Systems in Nonstrict-Feedback Form.

    PubMed

    Chen, Bing; Zhang, Huaguang; Lin, Chong

    2016-01-01

    This paper focuses on the problem of adaptive neural network (NN) control for a class of nonlinear nonstrict-feedback systems via output feedback. A novel adaptive NN backstepping output-feedback control approach is first proposed for nonlinear nonstrict-feedback systems. The monotonicity of system bounding functions and the structure character of radial basis function (RBF) NNs are used to overcome the difficulties that arise from nonstrict-feedback structure. A state observer is constructed to estimate the immeasurable state variables. By combining adaptive backstepping technique with approximation capability of radial basis function NNs, an output-feedback adaptive NN controller is designed through backstepping approach. It is shown that the proposed controller guarantees semiglobal boundedness of all the signals in the closed-loop systems. Two examples are used to illustrate the effectiveness of the proposed approach.
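
    The approximation building block referred to above (an RBF NN with adaptively tuned output weights) can be illustrated in isolation. This toy gradient update on a static function is only meant to show the mechanism, not the paper's observer-based backstepping controller; all settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    centers = np.linspace(-3, 3, 11)          # fixed Gaussian RBF centers
    width = 0.8

    def phi(x):
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))   # RBF regressor vector

    def f_unknown(x):
        return np.sin(x) + 0.3 * x            # the "unknown" nonlinearity to be approximated online

    w = np.zeros_like(centers)                # adaptive output weights
    eta = 0.05
    for step in range(20000):
        x = rng.uniform(-3, 3)
        e = f_unknown(x) - w @ phi(x)         # approximation error drives the weight update
        w += eta * e * phi(x)                 # gradient-type adaptive law, roughly w_dot ~ eta * phi * e

    xs = np.linspace(-3, 3, 7)
    print(np.round([f_unknown(x) - w @ phi(x) for x in xs], 3))   # residual approximation errors
    ```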

  13. Improving preterm infant outcomes: implementing an evidence-based oral feeding advancement protocol in the neonatal intensive care unit.

    PubMed

    Kish, Mary Z

    2014-10-01

    The ability of a preterm infant to exclusively oral feed is a necessary standard for discharge readiness from the neonatal intensive care unit (NICU). Many of the interventions related to oral feeding advancement currently employed for preterm infants in the NICU are based on individual nursing observations and judgment. Studies involving standardized feeding protocols for oral feeding advancement have been shown to decrease variability in feeding practices, facilitate shortened transition times from gavage to oral feedings, improve bottle feeding performance, and significantly decrease the length of stay (LOS) in the NICU. This project critically evaluated the implementation of an oral feeding advancement protocol in a 74-bed level III NICU in an attempt to standardize the process of advancing oral feedings in medically stable preterm infants. A comprehensive review of the literature identified key features for successful oral feeding in preterm infants. Strong levels of evidence suggested an association between both nonnutritive sucking (NNS) opportunities and standardized feeding advancement protocols with successful oral feeding in preterm infants. These findings prompted a pilot practice change using a feeding advancement protocol that consisted of NNS opportunities and standardized oral feeding advancement. Time to exclusive oral feedings and LOS were compared pre- and postprotocol implementation over an evaluation period of more than 2 months. Infants using NNS and the standardized oral feeding advancement protocol had an observed reduction in time to exclusive oral feedings and LOS, although statistical significance was not achieved.

  14. Quantitative workflow based on NN for weighting criteria in landfill suitability mapping

    NASA Astrophysics Data System (ADS)

    Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Alkhasawneh, Mutasem Sh.; Aziz, Hamidi Abdul

    2017-10-01

    Our study aims to introduce a new quantitative workflow that integrates neural networks (NNs) and multi criteria decision analysis (MCDA). Existing MCDA workflows reveal a number of drawbacks because of their reliance on human knowledge in the weighting stage. Thus, a new workflow is presented to form suitability maps at the regional scale for solid waste planning based on NNs. A feed-forward neural network is employed in the workflow. A total of 34 criteria were pre-processed to establish the input dataset for NN modelling. The final learned network is used to acquire the weights of the criteria. Accuracies of 95.2% and 93.2% were achieved for the training dataset and testing dataset, respectively. The workflow was found to be capable of reducing human interference to generate highly reliable maps. The proposed workflow reveals the applicability of NN in generating landfill suitability maps and the feasibility of integrating them with existing MCDA workflows.
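
    Sketched below is one common way (Garson-style weight analysis) to pull criterion weights out of a trained feed-forward network. The paper does not spell out its extraction rule, so treat the function, its synthetic data and its settings as assumptions rather than the published workflow.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(6)
    X = rng.uniform(size=(400, 6))                       # 6 stand-in suitability criteria (the paper uses 34)
    y = ((0.6*X[:, 0] + 0.3*X[:, 3] + 0.1*rng.uniform(size=400)) > 0.5).astype(int)  # synthetic label

    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0).fit(X, y)

    def garson_weights(net):
        """Relative importance of each input from |input-hidden| x |hidden-output| weight products."""
        w_ih = np.abs(net.coefs_[0])                     # shape (n_inputs, n_hidden)
        w_ho = np.abs(net.coefs_[1]).ravel()             # shape (n_hidden,)
        contrib = (w_ih / w_ih.sum(axis=0)) * w_ho       # share of each input in each hidden unit
        imp = contrib.sum(axis=1)
        return imp / imp.sum()

    print(np.round(garson_weights(net), 3))              # criterion weights for the MCDA overlay
    ```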

  15. Multistability of second-order competitive neural networks with nondecreasing saturated activation functions.

    PubMed

    Nie, Xiaobing; Cao, Jinde

    2011-11-01

    In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of state space, Cauchy convergence principle, and inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.

  16. Identification of a broad-spectrum inhibitor of virus RNA synthesis: validation of a prototype virus-based approach

    PubMed Central

    Filone, Claire Marie; Hodges, Erin N.; Honeyman, Brian; Bushkin, G. Guy; Boyd, Karla; Platt, Andrew; Ni, Feng; Strom, Kyle; Hensley, Lisa; Snyder, John K.; Connor, John H.

    2013-01-01

    There are no approved therapeutics for the most deadly nonsegmented negative-strand (NNS) RNA viruses, including Ebola (EBOV). To identify new chemical scaffolds for development of broad-spectrum antivirals, we undertook a prototype-based lead identification screen. Using the prototype NNS virus, vesicular stomatitis virus (VSV), multiple inhibitory compounds were identified. Three compounds were investigated for broad-spectrum activity, and inhibited EBOV infection. The most potent, CMLDBU3402, was selected for further study. CMLDBU3402 did not show significant activity against segmented negative-strand RNA viruses suggesting proscribed broad-spectrum activity. Mechanistic analysis indicated that CMLDBU3402 blocked VSV viral RNA synthesis and inhibited EBOV RNA transcription, demonstrating a consistent mechanism of action against genetically distinct viruses. The identification of this chemical backbone as a broad-spectrum inhibitor of viral RNA synthesis offers significant potential for the development of new therapies for highly pathogenic viruses. PMID:23521799

  17. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  18. Supporting English-medium pedagogy through an online corpus of science and engineering lectures

    NASA Astrophysics Data System (ADS)

    Kunioshi, Nílson; Noguchi, Judy; Tojo, Kazuko; Hayashi, Hiroko

    2016-05-01

    As English-medium instruction (EMI) spreads around the world, university teachers and students who are non-native speakers of English (NNS) need to put much effort into the delivery or reception of content. Construction of scientific meaning in the process of learning is already complex when instruction is delivered in the first language of the teachers and students, and may become even more challenging in a second language, because science education depends greatly on language. In order to identify important pedagogical functions that teachers use to deliver content and to present different ways to realise each function, a corpus of lectures related to science and engineering courses was created and analysed. NNS teachers and students in science and engineering involved in EMI higher education can obtain insights for delivering and listening to lectures from the Online Corpus of Academic Lectures (OnCAL).

  19. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project where we applied the augmented-neural-networks (AugNN) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
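
    For readers unfamiliar with the kind of priority rule that AugNN-type approaches start from, a first-fit-decreasing heuristic for the BPP is sketched below; this is the classical rule only, not the augmented-neural-network search itself, and the item sizes are hypothetical.

    ```python
    def first_fit_decreasing(items, capacity):
        """Classical priority-rule heuristic for the bin-packing problem:
        sort items by non-increasing size, place each in the first bin it fits."""
        bins = []
        for size in sorted(items, reverse=True):
            for b in bins:
                if sum(b) + size <= capacity:
                    b.append(size)
                    break
            else:
                bins.append([size])
        return bins

    items = [7, 5, 6, 4, 2, 3, 8, 5]     # hypothetical item sizes
    print(len(first_fit_decreasing(items, capacity=10)), "bins used")   # 4 bins for this example
    ```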

  20. Estimating the cost of skin cancer detection by dermatology providers in a large health care system.

    PubMed

    Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K

    2018-04-01

    Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
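
    The efficiency ratios reported above are, at heart, simple quotients of counts. The helper below shows that arithmetic with entirely hypothetical inputs; the study's own point estimates and confidence intervals come from patient-level models, so they are not reproduced here.

    ```python
    def screening_efficiency(n_exams, n_biopsies, n_cases, cost_per_exam, cost_per_biopsy):
        """Number needed to screen (NNS), number needed to biopsy (NNB), and cost per case
        detected, computed from aggregate counts. All inputs here are illustrative."""
        nns = n_exams / n_cases                 # screening exams per case detected
        nnb = n_biopsies / n_cases              # biopsies per case detected
        cost = (n_exams * cost_per_exam + n_biopsies * cost_per_biopsy) / n_cases
        return nns, nnb, cost

    # Hypothetical counts and unit costs, not the study's data.
    nns, nnb, cost = screening_efficiency(10000, 3000, 800, cost_per_exam=100.0, cost_per_biopsy=200.0)
    print(f"NNS ~{nns:.1f}, NNB ~{nnb:.2f}, cost per case ~${cost:,.0f}")
    ```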

  1. Reinforcement-learning-based output-feedback control of nonstrict nonlinear discrete-time systems with application to engine emission control.

    PubMed

    Shih, Peter; Kaul, Brian C; Jagannathan, Sarangapani; Drallmeier, James A

    2009-10-01

    A novel reinforcement-learning-based output adaptive neural network (NN) controller, which is also referred to as the adaptive-critic NN controller, is developed to deliver the desired tracking performance for a class of nonlinear discrete-time systems expressed in nonstrict feedback form in the presence of bounded and unknown disturbances. The adaptive-critic NN controller consists of an observer, a critic, and two action NNs. The observer estimates the states and output, and the two action NNs provide virtual and actual control inputs to the nonlinear discrete-time system. The critic approximates a certain strategic utility function, and the action NNs minimize the strategic utility function and control inputs. All NN weights adapt online toward minimization of a performance index, utilizing the gradient-descent-based rule, in contrast with iteration-based adaptive-critic schemes. Lyapunov functions are used to show the stability of the closed-loop tracking error, weights, and observer estimates. Separation and certainty equivalence principles, persistency of excitation condition, and linearity in the unknown parameter assumption are not needed. Experimental results on a spark ignition (SI) engine operating lean at an equivalence ratio of 0.75 show a significant (25%) reduction in cyclic dispersion in heat release with control, while the average fuel input changes by less than 1% compared with the uncontrolled case. Consequently, oxides of nitrogen (NO(x)) drop by 30%, and unburned hydrocarbons drop by 16% with control. Overall, NO(x)'s are reduced by over 80% compared with stoichiometric levels.

  2. Neural network based adaptive control for nonlinear dynamic regimes

    NASA Astrophysics Data System (ADS)

    Shin, Yoonghyun

    Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles which are operated at highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation to approximately cancel the effect of inversion error, which subsequently improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named 'composite model reference adaptive control' is further developed so that it can be applied to multi-input multi-output output feedback dynamic inversion. It can have adaptive elements in both the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility and thus hopefully greater potential than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems, and validated with simulations. The control designs in this thesis also include the use of 'pseudo-control hedging' techniques which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle, wing-rock, is included in this research, and the NN-based adaptive control designs demonstrate their performances for two highly maneuverable aerial vehicles, NASA F-15 ACTIVE and FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.

  3. Template based protein structure modeling by global optimization in CASP11.

    PubMed

    Joo, Keehyoung; Joung, InSuk; Lee, Sun Young; Kim, Jong Yun; Cheng, Qianyi; Manavalan, Balachandran; Joung, Jong Young; Heo, Seungryong; Lee, Juyong; Nam, Mikyung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

    For the template-based modeling (TBM) of CASP11 targets, we have developed three new protein modeling protocols (nns for server prediction and LEE and LEER for human prediction) by improving upon our previous CASP protocols (CASP7 through CASP10). We applied the powerful global optimization method of conformational space annealing to three stages of optimization, including multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain remodeling. For more successful fold recognition, a new alignment method called CRFalign was developed. It can incorporate sensitive positional and environmental dependence in alignment scores as well as strong nonlinear correlations among various features. Modifications and adjustments were made to the form of the energy function and weight parameters pertaining to the chain building procedure. For the side-chain remodeling step, residue-type dependence was introduced to the cutoff value that determines the entry of a rotamer to the side-chain modeling library. The improved performance of the nns server method is attributed to successful fold recognition achieved by combining several methods including CRFalign and to the current modeling formulation that can incorporate native-like structural aspects present in multiple templates. The LEE protocol is identical to the nns one except that CASP11-released server models are used as templates. The success of LEE in utilizing CASP11 server models indicates that proper template screening and template clustering assisted by appropriate cluster ranking promises a new direction to enhance protein 3D modeling. Proteins 2016; 84(Suppl 1):221-232. © 2015 Wiley Periodicals, Inc.

  4. Determination of the Interaction Position of Gamma Photons in Monolithic Scintillators Using Neural Network Fitting

    NASA Astrophysics Data System (ADS)

    Conde, P.; Iborra, A.; González, A. J.; Hernández, L.; Bellido, P.; Moliner, L.; Rigla, J. P.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Seimetz, M.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2016-02-01

    In Positron Emission Tomography (PET) detectors based on monolithic scintillators, the photon interaction position needs to be estimated from the light distribution (LD) on the photodetector pixels. Due to the finite size of the scintillator volume, the symmetry of the LD is truncated everywhere except for the crystal center. This effect produces a poor estimation of the interaction positions towards the edges, an especially critical situation when linear algorithms, such as Center of Gravity (CoG), are used. When all the crystal faces are painted black, except the one in contact with the photodetector, the LD can be assumed to behave as the inverse square law, providing a simple theoretical model. Using this LD model, the interaction coordinates can be determined by means of fitting each event to a theoretical distribution. In that sense, the use of neural networks (NNs) has been shown to be an effective alternative to more traditional fitting techniques as nonlinear least squares (LS). The multilayer perceptron is one type of NN which can model non-linear functions well and can be trained to accurately generalize when presented with new data. In this work we have shown the capability of NNs to approximate the LD and provide the interaction coordinates of γ-photons with two different photodetector setups. One experimental setup was based on analog Silicon Photomultipliers (SiPMs) and a charge division diode network, whereas the second setup was based on digital SiPMs (dSiPMs). In both experiments NNs minimized border effects. Average spatial resolutions of 1.9 ±0.2 mm and 1.7 ±0.2 mm for the entire crystal surface were obtained for the analog and dSiPMs approaches, respectively.
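
    A toy version of the idea (generate light distributions with an inverse-square-law model, then train a multilayer perceptron to return the interaction coordinates) is sketched below with made-up geometry and noise; it is not the experimental calibration pipeline of the paper, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    pix = np.linspace(-21, 21, 8)                      # 8x8 photodetector pixel centres (mm), assumed geometry
    PX, PY = np.meshgrid(pix, pix)

    def light_distribution(x, y, depth=5.0):
        """Inverse-square-law model of the scintillation light reaching each pixel."""
        r2 = (PX - x) ** 2 + (PY - y) ** 2 + depth ** 2
        ld = 1.0 / r2
        return (ld / ld.sum()).ravel()

    # Synthetic training events uniformly distributed over a 49x49 mm^2 crystal face.
    xy = rng.uniform(-24.5, 24.5, size=(4000, 2))
    L = np.array([light_distribution(x, y) + 0.002 * rng.normal(size=64) for x, y in xy])

    net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0).fit(L, xy)

    test_xy = rng.uniform(-24.5, 24.5, size=(500, 2))
    test_L = np.array([light_distribution(x, y) + 0.002 * rng.normal(size=64) for x, y in test_xy])
    err = net.predict(test_L) - test_xy
    print("RMS position error (mm):", np.sqrt((err ** 2).mean(axis=0)).round(2))
    ```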

  5. Sucralose Affects Glycemic and Hormonal Responses to an Oral Glucose Load

    PubMed Central

    Pepino, M. Yanina; Tiemann, Courtney D.; Patterson, Bruce W.; Wice, Burton M.; Klein, Samuel

    2013-01-01

    OBJECTIVE Nonnutritive sweeteners (NNS), such as sucralose, have been reported to have metabolic effects in animal models. However, the relevance of these findings to human subjects is not clear. We evaluated the acute effects of sucralose ingestion on the metabolic response to an oral glucose load in obese subjects. RESEARCH DESIGN AND METHODS Seventeen obese subjects (BMI 42.3 ± 1.6 kg/m²) who did not use NNS and were insulin sensitive (based on a homeostasis model assessment of insulin resistance score ≤2.6) underwent a 5-h modified oral glucose tolerance test on two separate occasions preceded by consuming either sucralose (experimental condition) or water (control condition) 10 min before the glucose load in a randomized crossover design. Indices of β-cell function, insulin sensitivity (SI), and insulin clearance rates were estimated by using minimal models of glucose, insulin, and C-peptide kinetics. RESULTS Compared with the control condition, sucralose ingestion caused 1) a greater incremental increase in peak plasma glucose concentrations (4.2 ± 0.2 vs. 4.8 ± 0.3 mmol/L; P = 0.03), 2) a 20 ± 8% greater incremental increase in insulin area under the curve (AUC) (P < 0.03), 3) a 22 ± 7% greater peak insulin secretion rate (P < 0.02), 4) a 7 ± 4% decrease in insulin clearance (P = 0.04), and 5) a 23 ± 20% decrease in SI (P = 0.01). There were no significant differences between conditions in active glucagon-like peptide 1, glucose-dependent insulinotropic polypeptide, glucagon incremental AUC, or indices of the sensitivity of the β-cell response to glucose. CONCLUSIONS These data demonstrate that sucralose affects the glycemic and insulin responses to an oral glucose load in obese people who do not normally consume NNS. PMID:23633524

  6. The reliability of in-hospital diagnoses of diabetes mellitus in the setting of an acute myocardial infarction.

    PubMed

    Arnold, Suzanne V; Lipska, Kasia J; Inzucchi, Silvio E; Li, Yan; Jones, Philip G; McGuire, Darren K; Goyal, Abhinav; Stolker, Joshua M; Lind, Marcus; Spertus, John A; Kosiborod, Mikhail

    2014-01-01

    Incident diabetes mellitus (DM) is important to recognize in patients with acute myocardial infarction (AMI). To develop an efficient screening strategy, we explored the use of random plasma glucose (RPG) at admission and fasting plasma glucose (FPG) to select patients with AMI for glycosylated hemoglobin (HbA1c) testing. Prospective registry of 1574 patients with AMI not taking glucose-lowering medication from 24 US hospitals. All patients had HbA1c measured at a core laboratory and admission RPG and ≥2 FPGs recorded during hospitalization. We examined potential combinations of RPG and FPG and compared these with HbA1c≥6.5%-considered the gold standard for DM diagnosis in these analyses. An RPG>140 mg/dL or FPG≥126 mg/dL had high sensitivity for DM diagnosis. Combining these into a screening protocol (if admission RPG>140, check HbA1c; or if FPG≥126 on a subsequent day, check HbA1c) led to HbA1c testing in 50% of patients and identified 86% with incident DM (number needed to screen (NNS)=3.3 to identify 1 case of DM; vs NNS=5.6 with universal HbA1c screening). Alternatively, using an RPG>180 led to HbA1c testing in 40% of patients with AMI and identified 82% of DM (NNS=2.7). We have established two potential selective screening methods for DM in the setting of AMI that could identify the vast majority of incident DM by targeted screening of 40-50% of patients with AMI with HbA1c testing. Using these methods may efficiently identify patients with AMI with DM so that appropriate education and treatment can be promptly initiated.
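
    A plain-Python sketch of the selective screening rule described above (admission RPG > 140 mg/dL, or any subsequent FPG ≥ 126 mg/dL, triggers HbA1c testing) together with the number-needed-to-screen calculation. The patient records below are hypothetical and are only used to exercise the logic.

      def needs_hba1c(rpg_admission, fpg_list, rpg_cut=140, fpg_cut=126):
          """Selective screening rule: test HbA1c if the admission RPG exceeds the cut-off
          or any fasting glucose on a subsequent day reaches the FPG cut-off (mg/dL)."""
          return rpg_admission > rpg_cut or any(f >= fpg_cut for f in fpg_list)

      def number_needed_to_screen(n_screened, n_cases_found):
          """NNS = patients tested per case of diabetes identified."""
          return n_screened / n_cases_found

      # Hypothetical patients: (admission RPG, [fasting glucoses], HbA1c >= 6.5%?)
      patients = [(150, [118, 121], True), (130, [128, 119], True),
                  (110, [101, 99], False), (190, [135, 140], True),
                  (125, [105, 110], False)]

      screened = [p for p in patients if needs_hba1c(p[0], p[1])]
      found = sum(1 for p in screened if p[2])
      print("screened:", len(screened), "cases found:", found,
            "NNS:", round(number_needed_to_screen(len(screened), found), 1))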

  7. Effect of Different Phases of Menstrual Cycle on Heart Rate Variability (HRV).

    PubMed

    Brar, Tejinder Kaur; Singh, K D; Kumar, Avnish

    2015-10-01

    Heart Rate Variability (HRV), which is a measure of the cardiac autonomic tone, displays physiological changes throughout the menstrual cycle. The functions of the ANS in various phases of the menstrual cycle were examined in some studies. The aim of our study was to observe the effect of the menstrual cycle on cardiac autonomic function parameters in healthy females. A cross-sectional (observational) study was conducted on 50 healthy females, in the age group of 18-25 years. Heart Rate Variability (HRV) was recorded by Physio Pac (PC-2004). The data consisted of Time Domain Analysis and Frequency Domain Analysis in the menstrual, proliferative and secretory phases of the menstrual cycle. Data collected were analysed statistically using Student's paired t-test. The difference in mean heart rate, LF power%, LFnu and HFnu in menstrual and proliferative phase was found to be statistically significant. The difference in mean RR, Mean HR, RMSSD (the square root of the mean of the squares of the successive differences between adjacent NN intervals), NN50 (the number of pairs of successive NNs that differ by more than 50 ms), pNN50 (NN50 divided by the total number of NN intervals), VLF (very low frequency) power, LF (low frequency) power, LF power%, HF power%, LF/HF ratio, LFnu and HFnu was found to be statistically significant in proliferative and secretory phase. The difference in Mean RR, Mean HR, LFnu and HFnu was found to be statistically significant in secretory and menstrual phases. From the study it can be concluded that sympathetic nervous activity in the secretory phase is greater than in the proliferative phase, whereas parasympathetic nervous activity is predominant in the proliferative phase.
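
    A short numpy sketch of the time-domain HRV measures defined above (mean RR, mean HR, RMSSD, NN50, pNN50), computed from an illustrative series of NN (normal-to-normal) intervals in milliseconds; the interval values are invented, not data from the study.

      import numpy as np

      def time_domain_hrv(nn_ms):
          """Time-domain HRV measures from NN intervals (ms)."""
          nn = np.asarray(nn_ms, dtype=float)
          diffs = np.diff(nn)                          # successive differences between adjacent NNs
          mean_rr = nn.mean()                          # mean NN/RR interval (ms)
          mean_hr = 60000.0 / mean_rr                  # mean heart rate (beats/min)
          rmssd = np.sqrt(np.mean(diffs ** 2))         # root mean square of successive differences
          nn50 = int(np.sum(np.abs(diffs) > 50))       # pairs of adjacent NNs differing by > 50 ms
          pnn50 = 100.0 * nn50 / len(nn)               # NN50 over the total number of NN intervals,
                                                       # per the definition above (some sources divide
                                                       # by the number of NN pairs instead)
          return dict(mean_rr=mean_rr, mean_hr=mean_hr, rmssd=rmssd, nn50=nn50, pnn50=pnn50)

      example = [812, 845, 790, 860, 805, 830, 910, 840, 795, 850]   # illustrative NN series (ms)
      print(time_domain_hrv(example))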

  8. Effect of Different Phases of Menstrual Cycle on Heart Rate Variability (HRV)

    PubMed Central

    Singh, K. D.; Kumar, Avnish

    2015-01-01

    Background Heart Rate Variability (HRV), which is a measure of the cardiac autonomic tone, displays physiological changes throughout the menstrual cycle. The functions of the ANS in various phases of the menstrual cycle were examined in some studies. Aims and Objectives The aim of our study was to observe the effect of the menstrual cycle on cardiac autonomic function parameters in healthy females. Materials and Methods A cross-sectional (observational) study was conducted on 50 healthy females, in the age group of 18-25 years. Heart Rate Variability (HRV) was recorded by Physio Pac (PC-2004). The data consisted of Time Domain Analysis and Frequency Domain Analysis in the menstrual, proliferative and secretory phases of the menstrual cycle. Data collected were analysed statistically using Student's paired t-test. Results The difference in mean heart rate, LF power%, LFnu and HFnu in menstrual and proliferative phase was found to be statistically significant. The difference in mean RR, Mean HR, RMSSD (the square root of the mean of the squares of the successive differences between adjacent NN intervals), NN50 (the number of pairs of successive NNs that differ by more than 50 ms), pNN50 (NN50 divided by the total number of NN intervals), VLF (very low frequency) power, LF (low frequency) power, LF power%, HF power%, LF/HF ratio, LFnu and HFnu was found to be statistically significant in proliferative and secretory phase. The difference in Mean RR, Mean HR, LFnu and HFnu was found to be statistically significant in secretory and menstrual phases. Conclusion From the study it can be concluded that sympathetic nervous activity in the secretory phase is greater than in the proliferative phase, whereas parasympathetic nervous activity is predominant in the proliferative phase. PMID:26557512

  9. Conclusions from the Mexican National Nutrition Survey 1999: translating results into nutrition policy.

    PubMed

    Rivera, Juan A; Sepúlveda Amor, Jaime

    2003-01-01

    This article presents an overview of the main results and conclusions from the Mexican National Nutrition Survey 1999 (NNS-1999) and the principal nutrition policy implications of the findings. The NNS-1999 was conducted on a national probabilistic sample of almost 18,000 households, representative of the national, regional, as well as urban and rural levels in Mexico. Subjects included were children < 12 years and women 12-49 years. Anthropometry, blood specimens, diet and socioeconomic information of the family were collected. The principal public nutrition problems are stunting in children < 5 years of age; anemia, iron and zinc deficiency, and low serum vitamin C concentrations at all ages; and vitamin A deficiency in children. Undernutrition (stunting and micronutrient deficiencies) was generally more prevalent in the lower socioeconomic groups, in rural areas, in the south and in Indigenous populations. Overweight and obesity are serious public health problems in women and are already a concern in school-age children. A number of programs aimed at preventing undernutrition are currently in progress; several of them were designed or modified as a result of the NNS-1999 findings. Most of them have an evaluation component that will inform adjustments or modifications of their design and implementation. However, little is being done for the prevention and control of overweight and obesity, and there is limited experience with effective interventions. The design and evaluation of prevention strategies for controlling obesity in the population, based on existing evidence, is urgently needed, and success stories should be brought to scale quickly to maximize impact. The English version of this paper is also available at: http://www.insp.mx/salud/index.html.

  10. Comparison of Multiple Linear Regressions and Neural Networks based QSAR models for the design of new antitubercular compounds.

    PubMed

    Ventura, Cristina; Latino, Diogo A R S; Martins, Filomena

    2013-01-01

    The performance of two QSAR methodologies, namely Multiple Linear Regressions (MLR) and Neural Networks (NN), towards the modeling and prediction of antitubercular activity was evaluated and compared. A data set of 173 potentially active compounds belonging to the hydrazide family and represented by 96 descriptors was analyzed. Models were built with Multiple Linear Regressions (MLR), single Feed-Forward Neural Networks (FFNNs), ensembles of FFNNs and Associative Neural Networks (AsNNs) using four different data sets and different types of descriptors. The predictive ability of the different techniques used was assessed and discussed on the basis of different validation criteria, and the results show, in general, a better performance of AsNNs in terms of learning ability and prediction of antitubercular behaviors when compared with all other methods. MLR has, however, the advantage of pinpointing the most relevant molecular characteristics responsible for the behavior of these compounds against Mycobacterium tuberculosis. The best results for the larger data set (94 compounds in training set and 18 in test set) were obtained with AsNNs using seven descriptors (R(2) of 0.874 and RMSE of 0.437 against R(2) of 0.845 and RMSE of 0.472 in MLRs, for test set). Counter-Propagation Neural Networks (CPNNs) were trained with the same data sets and descriptors. From the scrutiny of the weight levels in each CPNN and the information retrieved from MLRs, a rational design of potentially active compounds was attempted. Two new compounds were synthesized and tested against M. tuberculosis showing an activity close to that predicted by the majority of the models. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
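
    A minimal scikit-learn sketch of the kind of comparison described above: a multiple linear regression and a small feed-forward NN fitted to descriptor data and scored by R2 and RMSE on a held-out set. The descriptors and activities below are randomly generated stand-ins, not the hydrazide data set of the paper, and the ensemble/AsNN variants are not reproduced.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(42)
      X = rng.normal(size=(112, 7))                      # 7 descriptors, 112 compounds (illustrative)
      y = X @ rng.normal(size=7) + 0.3 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, 112)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=18, random_state=0)

      models = {
          "MLR": LinearRegression(),
          "FFNN": MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
      }
      for name, model in models.items():
          model.fit(X_tr, y_tr)
          pred = model.predict(X_te)
          rmse = mean_squared_error(y_te, pred) ** 0.5
          print(f"{name}: R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.3f}")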

  11. Metaheuristic and Machine Learning Models for TFE-731-2, PW4056, and JT8D-9 Cruise Thrust

    NASA Astrophysics Data System (ADS)

    Baklacioglu, Tolga

    2017-08-01

    An accurate engine thrust model is of major importance for airline fuel-saving programs, assessment of the environmental effects of fuel consumption, emissions-reduction studies, and air traffic management applications. In this study, utilizing engine manufacturers' real data, a metaheuristic model based on genetic algorithms (GAs) and a machine learning model based on neural networks (NNs) trained with Levenberg-Marquardt (LM), delta-bar-delta (DBD), and conjugate gradient (CG) algorithms were developed to incorporate the effect of both flight altitude and Mach number in the estimation of thrust. For the GA model, the impact of population size on the model's accuracy and the effect of the amount of data on the model coefficients were also analysed. For the NN model, the optimum topology was searched for among one- and two-hidden-layer networks. Predicted thrust values showed close agreement with the real thrust data for both models, with LM-trained NNs giving the best accuracy.
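
    A compact numpy sketch of the metaheuristic idea: a simple genetic algorithm searching for coefficients of a polynomial thrust model T(h, M) over altitude and Mach number. The model form, the coefficients and the "measured" data are invented for illustration; they are not the manufacturers' data or the exact GA of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Invented "measured" thrust data over altitude h (km) and Mach number M.
      h = rng.uniform(0, 12, 200)
      M = rng.uniform(0.2, 0.9, 200)
      true_c = np.array([30.0, -1.2, -8.0, 0.05, 2.0])
      def thrust_model(c, h, M):
          # Simple polynomial form: T = c0 + c1*h + c2*M + c3*h**2 + c4*h*M
          return c[0] + c[1]*h + c[2]*M + c[3]*h**2 + c[4]*h*M
      T_meas = thrust_model(true_c, h, M) + rng.normal(0, 0.2, h.size)

      def fitness(pop):
          preds = np.array([thrust_model(c, h, M) for c in pop])
          return -np.mean((preds - T_meas) ** 2, axis=1)        # negative MSE (to maximise)

      pop = rng.normal(0, 10, size=(60, 5))                     # population of coefficient vectors
      for gen in range(300):
          f = fitness(pop)
          parents = pop[np.argsort(f)[-30:]]                    # selection: keep the best half
          mates = parents[rng.integers(0, 30, size=(30, 2))]
          alpha = rng.uniform(size=(30, 1))
          children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]   # blend crossover
          children += rng.normal(0, 0.1, children.shape)        # Gaussian mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax(fitness(pop))]
      print("recovered coefficients:", np.round(best, 2))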

  12. Real-time identification of vehicle motion-modes using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Lifu; Zhang, Nong; Du, Haiping

    2015-01-01

    A four-wheel ground vehicle has three body-dominated motion-modes, that is, bounce, roll, and pitch motion-modes. Real-time identification of these motion-modes can make vehicle suspensions, in particular, active suspensions, target on the dominant motion-mode and apply appropriate control strategies to improve its performance with less power consumption. Recently, a motion-mode energy method (MEM) was developed to identify the vehicle body motion-modes. However, this method requires the measurement of full vehicle states and road inputs, which are not always available in practice. This paper proposes an alternative approach to identify vehicle primary motion-modes with acceptable accuracy by employing neural networks (NNs). The effectiveness of the trained NNs is verified on a 10-DOF full-car model under various types of excitation inputs. The results confirm that the proposed method is effective in determining vehicle primary motion-modes with comparable accuracy to the MEM method. Experimental data is further used to validate the proposed method.

  13. Neural-Learning-Based Telerobot Control With Guaranteed Performance.

    PubMed

    Yang, Chenguang; Wang, Xinyu; Cheng, Long; Ma, Hongbin

    2017-10-01

    In this paper, a neural network (NN)-enhanced telerobot control system is designed and tested on a Baxter robot. Guaranteed performance of the telerobot control system is achieved at both the kinematic and dynamic levels. At the kinematic level, automatic collision avoidance is achieved by a control design that exploits joint-space redundancy, so the human operator can concentrate on the motion of the robot's end-effector without concern about possible collisions. A posture-restoration scheme based on a simulated parallel system is also integrated to enable the manipulator to return to its natural posture in the absence of obstacles. At the dynamic level, adaptive control using radial basis function NNs is developed to compensate for the effects of internal and external uncertainties, e.g., an unknown payload. Both the steady-state and the transient performance are guaranteed to satisfy a prescribed performance requirement. Comparative experiments have been performed to test the effectiveness and to demonstrate the guaranteed performance of the proposed methods.
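
    A small numpy sketch of the approximation building block named above: a radial basis function network fitted to a stand-in uncertainty term (e.g., an unknown payload effect) over a joint coordinate. The target function, centers and widths are illustrative assumptions; the paper's online adaptive weight law and stability analysis are not reproduced, and batch least squares is used instead.

      import numpy as np

      rng = np.random.default_rng(0)

      def rbf_features(x, centers, width):
          """Gaussian radial basis functions evaluated at scalar inputs x."""
          return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

      # Stand-in for an unknown uncertainty term to be compensated.
      f_unknown = lambda q: 0.5 * np.sin(2 * q) + 0.1 * q ** 2

      q_train = rng.uniform(-np.pi, np.pi, 200)
      y_train = f_unknown(q_train) + rng.normal(0, 0.01, q_train.size)

      centers = np.linspace(-np.pi, np.pi, 15)              # fixed RBF centers over the joint range
      Phi = rbf_features(q_train, centers, width=0.5)
      W, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)     # batch least-squares output weights

      q_test = np.linspace(-np.pi, np.pi, 5)
      approx = rbf_features(q_test, centers, 0.5) @ W
      print("RBF approximation:", np.round(approx, 3))
      print("true values:      ", np.round(f_unknown(q_test), 3))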

  14. Adaptive Neural Networks Prescribed Performance Control Design for Switched Interconnected Uncertain Nonlinear Systems.

    PubMed

    Li, Yongming; Tong, Shaocheng

    2017-06-28

    In this paper, an adaptive neural networks (NNs)-based decentralized control scheme with the prescribed performance is proposed for uncertain switched nonstrict-feedback interconnected nonlinear systems. It is assumed that nonlinear interconnected terms and nonlinear functions of the concerned systems are unknown, and also the switching signals are unknown and arbitrary. A linear state estimator is constructed to solve the problem of unmeasured states. The NNs are employed to approximate unknown interconnected terms and nonlinear functions. A new output feedback decentralized control scheme is developed by using the adaptive backstepping design technique. The control design problem of nonlinear interconnected switched systems with unknown switching signals can be solved by the proposed scheme, and only a tuning parameter is needed for each subsystem. The proposed scheme can ensure that all variables of the control systems are semi-globally uniformly ultimately bounded and the tracking errors converge to a small residual set with the prescribed performance bound. The effectiveness of the proposed control approach is verified by some simulation results.

  15. Protein subcellular localization prediction using artificial intelligence technology.

    PubMed

    Nair, Rajesh; Rost, Burkhard

    2008-01-01

    Proteins perform many important tasks in living organisms, such as catalysis of biochemical reactions, transport of nutrients, and recognition and transmission of signals. The plethora of aspects of the role of any particular protein is referred to as its "function." One aspect of protein function that has been the target of intensive research by computational biologists is its subcellular localization. Proteins must be localized in the same subcellular compartment to cooperate toward a common physiological function. Aberrant subcellular localization of proteins can result in several diseases, including kidney stones, cancer, and Alzheimer's disease. To date, sequence homology remains the most widely used method for inferring the function of a protein. However, the application of advanced artificial intelligence (AI)-based techniques in recent years has resulted in significant improvements in our ability to predict the subcellular localization of a protein. The prediction accuracy has risen steadily over the years, in large part due to the application of AI-based methods such as hidden Markov models (HMMs), neural networks (NNs), and support vector machines (SVMs), although the availability of larger experimental datasets has also played a role. Automatic methods that mine textual information from the biological literature and molecular biology databases have considerably sped up the process of annotation for proteins for which some information regarding function is available in the literature. State-of-the-art methods based on NNs and HMMs can predict the presence of N-terminal sorting signals extremely accurately. Ab initio methods that predict subcellular localization for any protein sequence using only the native amino acid sequence and features predicted from the native sequence have shown the most remarkable improvements. The prediction accuracy of these methods has increased by over 30% in the past decade. The accuracy of these methods is now on par with high-throughput methods for predicting localization, and they are beginning to play an important role in directing experimental research. In this chapter, we review some of the most important methods for the prediction of subcellular localization.

  16. Neural networks in astronomy.

    PubMed

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has also begun to spread in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  17. Listening and Note-Taking in Higher Education.

    ERIC Educational Resources Information Center

    Fahmy, Jane Jackson; Bilton, Linda

    A study at Sultan Qaboos University in Oman investigated the listening comprehension problems of students who were non-native speakers of English (NNS), in lectures by native English-speaking professors. Two professors with no previous experience in teaching non-native speakers introduced geology in 4 weeks of lectures. Instances of vocabulary…

  18. Co-Construction of Nonnative Speaker Identity in Cross-Cultural Interaction

    ERIC Educational Resources Information Center

    Park, Jae-Eun

    2007-01-01

    Informed by Conversation Analysis, this paper examines discursive practices through which nonnative speaker (NNS) identity is constituted in relation to native speaker (NS) identity in naturally occurring English conversations. Drawing on studies of social interaction that view identity as intrinsically a social, dialogic, negotiable entity, I…

  19. Chat-Line Interaction and Negative Feedback.

    ERIC Educational Resources Information Center

    Iwasaki, Junko; Oliver, Rhonda

    2003-01-01

    Examines communicative interactions between native speakers (NSs) and nonnative speakers (NNSs) of Japanese on Internet relay chat, with a special focus on implicit negative feedback in the interactions. Reports that NSs of Japanese gave implicit negative feedback to their NNS partners and NNSs used the feedback in their subsequent production, but…

  20. Comprehension in NS-NNS Conversation.

    ERIC Educational Resources Information Center

    Nikko, Tuija

    A study of interlanguage comprehension, part of a larger project by the Gothenburg research group, investigated the telephone conversations between advanced learners and native speakers of Swedish. In four of the eight conversations, the non-native speakers called the public library to get information on how to borrow books; in the other four the…

  1. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High-Performance Computing (HPC) User Facility at NREL provides computational resources, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. The page describes these systems and how to access them.

  2. Refusals in Chinese: How Do L1 and L2 Differ?

    ERIC Educational Resources Information Center

    Hong, Wei

    2011-01-01

    This article reports on an empirical study of refusal strategies in Chinese by native speakers (NS) and nonnative Chinese learners (NNS). Sixty subjects (perceived as "students") were to refuse an invitation by "the professor" to a Chinese New Year's party. The study found that the NS group produced 10 strategies, whereas the…

  3. Epistemic Modality in the Argumentative Essays of Chinese EFL Learners

    ERIC Educational Resources Information Center

    Hu, Chunyu; Li, Xuyan

    2015-01-01

    Central to argumentative writing is the proper use of epistemic devices (EDs), which distinguish writers' opinions from facts and evaluate the degree of certainty expressed in their statements. Important as these devices are, they turn out to constitute a thorny area for non-native speakers (NNS). Previous research indicates that Chinese EFL…

  4. Conceptualizing and Confronting Inequity: Approaches within and New Directions for the "NNEST Movement"

    ERIC Educational Resources Information Center

    Rudolph, Nathanael; Selvi, Ali Fuad; Yazan, Bedrettin

    2015-01-01

    This article examines inequity as conceptualized and approached within and through the non-native English speakers in TESOL (NNEST) "movement." The authors unpack critical approaches to the NNEST experience, conceptualized via binaries (NS/NNS; NEST/NNEST). The authors then explore postmodern and poststructural approaches to identity and…

  5. NNS Students' Arguments in English: Observations in Formal and Informal Contexts

    ERIC Educational Resources Information Center

    Chandrasegaran, Antonia

    2008-01-01

    The ability to construct supported arguments in English is important for academic success in educational contexts where English is the language of instruction and student assessment is mediated through the academic essay. Starting from the hypothesis that students schooled in an English-medium education system do engage in friendly argument in…

  6. Finding Inquiry in Discourses of Audit and Reform in Primary Schools

    ERIC Educational Resources Information Center

    Williams, Julian; Corbin, Brian; McNamara, Olwen

    2007-01-01

    In this paper we examine the discourses of Primary school numeracy coordinators responsible for auditing, monitoring and supporting their colleagues in relation to the introduction and embedding of the National Numeracy Strategy (NNS) in the UK. Cultural-Historical Activity Theory (CHAT) focuses our analysis on the contradictory coupling of…

  7. The Acquisition of the Korean Honorific Affix "(u)si" by Advanced L2 Learners

    ERIC Educational Resources Information Center

    Mueller, Jeansue; Jiang, Nan

    2013-01-01

    An experiment investigated adult language learners' ability to develop fully integrated cognitive representations of a difficult second language (L2) morphosyntactic feature: the Korean honorific verbal affix "(u)si." Native speaker (NS) and nonnative speaker (NNS) latencies during a word-by-word self-paced reading comprehension task…

  8. Teaching Scientific/Academic Writing in the Digital Age

    ERIC Educational Resources Information Center

    Peretz, Arna

    2005-01-01

    This paper describes a graduate-level scientific/academic writing course for non-native speakers (NNS) of English at Ben-Gurion University of the Negev (BGU), Israel, which is taught in a technology-enhanced or blended learning environment. The use and integration of electronic discourses, such as email and Powerpoint, on-screen marking…

  9. The Interpretability Hypothesis: Evidence from Wh-Interrogatives in Second Language Acquisition

    ERIC Educational Resources Information Center

    Tsimpli, Ianthi Maria; Dimitrakopoulou, Maria

    2007-01-01

    The second language acquisition (SLA) literature reports numerous studies of proficient second language (L2) speakers who diverge significantly from native speakers despite the evidence offered by the L2 input. Recent SLA theories have attempted to account for native speaker/non-native speaker (NS/NNS) divergence by arguing for the dissociation…

  10. Evolution on neutral networks accelerates the ticking rate of the molecular clock.

    PubMed

    Manrubia, Susanna; Cuesta, José A

    2015-01-06

    Large sets of genotypes give rise to the same phenotype, because phenotypic expression is highly redundant. Accordingly, a population can accept mutations without altering its phenotype, as long as the genotype mutates into another one on the same set. By linking every pair of genotypes that are mutually accessible through mutation, genotypes organize themselves into neutral networks (NNs). These networks are known to be heterogeneous and assortative, and these properties affect the evolutionary dynamics of the population. By studying the dynamics of populations on NNs with arbitrary topology, we analyse the effect of assortativity, of NN (phenotype) fitness and of network size. We find that the probability that the population leaves the network is smaller the longer the time spent on it. This progressive 'phenotypic entrapment' entails a systematic increase in the overdispersion of the process with time and an acceleration in the fixation rate of neutral mutations. We also quantify the variation of these effects with the size of the phenotype and with its fitness relative to that of neighbouring alternatives. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Evolution on neutral networks accelerates the ticking rate of the molecular clock

    PubMed Central

    Manrubia, Susanna; Cuesta, José A.

    2015-01-01

    Large sets of genotypes give rise to the same phenotype, because phenotypic expression is highly redundant. Accordingly, a population can accept mutations without altering its phenotype, as long as the genotype mutates into another one on the same set. By linking every pair of genotypes that are mutually accessible through mutation, genotypes organize themselves into neutral networks (NNs). These networks are known to be heterogeneous and assortative, and these properties affect the evolutionary dynamics of the population. By studying the dynamics of populations on NNs with arbitrary topology, we analyse the effect of assortativity, of NN (phenotype) fitness and of network size. We find that the probability that the population leaves the network is smaller the longer the time spent on it. This progressive ‘phenotypic entrapment’ entails a systematic increase in the overdispersion of the process with time and an acceleration in the fixation rate of neutral mutations. We also quantify the variation of these effects with the size of the phenotype and with its fitness relative to that of neighbouring alternatives. PMID:25392402

  12. The reliability of in-hospital diagnoses of diabetes mellitus in the setting of an acute myocardial infarction

    PubMed Central

    Arnold, Suzanne V; Lipska, Kasia J; Inzucchi, Silvio E; Li, Yan; Jones, Philip G; McGuire, Darren K; Goyal, Abhinav; Stolker, Joshua M; Lind, Marcus; Spertus, John A; Kosiborod, Mikhail

    2014-01-01

    Objective Incident diabetes mellitus (DM) is important to recognize in patients with acute myocardial infarction (AMI). To develop an efficient screening strategy, we explored the use of random plasma glucose (RPG) at admission and fasting plasma glucose (FPG) to select patients with AMI for glycosylated hemoglobin (HbA1c) testing. Design, setting, and participants Prospective registry of 1574 patients with AMI not taking glucose-lowering medication from 24 US hospitals. All patients had HbA1c measured at a core laboratory and admission RPG and ≥2 FPGs recorded during hospitalization. We examined potential combinations of RPG and FPG and compared these with HbA1c≥6.5%, considered the gold standard for DM diagnosis in these analyses. Results An RPG>140 mg/dL or FPG≥126 mg/dL had high sensitivity for DM diagnosis. Combining these into a screening protocol (if admission RPG>140, check HbA1c; or if FPG≥126 on a subsequent day, check HbA1c) led to HbA1c testing in 50% of patients and identified 86% with incident DM (number needed to screen (NNS)=3.3 to identify 1 case of DM; vs NNS=5.6 with universal HbA1c screening). Alternatively, using an RPG>180 led to HbA1c testing in 40% of patients with AMI and identified 82% of DM (NNS=2.7). Conclusions We have established two potential selective screening methods for DM in the setting of AMI that could identify the vast majority of incident DM by targeted screening of 40–50% of patients with AMI with HbA1c testing. Using these methods may efficiently identify patients with AMI with DM so that appropriate education and treatment can be promptly initiated. PMID:25452878

  13. Extraction of cesium, strontium and the platinium group metals from acidic high activity nuclear waste using a Purex process compatible organic extractant. Final report, December 15, 1980-August 15, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, M.W. Jr.; Van Brunt, V.

    1984-09-14

    Purex process compatible organic systems which selectively and reversibly extract cesium, strontium, and palladium from synthetic mixed fission product solutions containing 3M HNO3 have been developed. This advance makes the development of continuous solvent extraction processes for their recovery more likely. The most favorable cesium and strontium complexing solutions have been tested for radiation stability to 10^7 rad using a 0.4 x 10^7 rad/h 60Co source. The distribution coefficients dropped somewhat but remained above unity. For cesium the complexing organic solution is 5 vol % (0.1M) NNS, 27 vol % TBP and 68 vol % kerosene containing 0.05m Bis 4,4'(5')(1-hydroxy-2-ethylhexyl)-benzo-18-crown-6 (Crown XVII). The NNS is a sulfonic acid cation exchanger. With an aqueous phase containing 0.006M Cs+ in contact with an equal volume of extractant, the D(org/aq) = 1.6 at a temperature of 25 to 35°C. For strontium the complexing organic solution is 5 vol % (0.1M) NNS, 27 vol % TBP and 68 vol % kerosene containing 0.02M Bis 4,4'(5')(1-hydroxyheptyl)-cyclohexo-18-crown-6 (Crown XVI). With an aqueous phase containing 0.003M Sr2+ in contact with an equal volume of extractant, the D(org/aq) = 1.98 at a temperature of 25 to 35°C. For palladium the complexing organic solution consisted of a TBP/kerosene ratio of 0.667 containing 0.3M Alamine 336, which is a tertiary amine anion exchanger. With an aqueous phase containing 0.0045M Pd+ in contact with an equal volume of extractant, the D(org/aq) = 1.95 at a temperature of 25 to 35°C.

  14. Using Deep Learning for Compound Selectivity Prediction.

    PubMed

    Zhang, Ruisheng; Li, Juan; Lu, Jingjing; Hu, Rongjing; Yuan, Yongna; Zhao, Zhili

    2016-01-01

    Compound selectivity prediction plays an important role in identifying potential compounds that bind to the target of interest with high affinity. However, efficient and accurate computational approaches to analyze and predict compound selectivity are still lacking. In this paper, we propose two methods to improve compound selectivity prediction. We employ an improved multitask learning method in Neural Networks (NNs), which not only incorporates both activity and selectivity for other targets, but also uses a probabilistic classifier with a logistic regression. We further improve compound selectivity prediction by using the multitask learning method in Deep Belief Networks (DBNs), which can build a distributed representation model and improve the generalization of the shared tasks. In addition, we assign different weights to the auxiliary tasks that are related to the primary selectivity prediction task. In contrast to other related work, our methods greatly improve the accuracy of compound selectivity prediction; in particular, multitask learning in DBNs with modified weights obtains the best performance.
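
    A hedged scikit-learn sketch of the multitask idea: one network with a shared hidden layer predicting two related binary labels (a stand-in for "active" and "selective"), compared with two single-task networks. The fingerprints and labels are random stand-ins, and the DBN architecture and task-weighting scheme of the paper are not reproduced.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)

      # Random stand-ins for binary compound fingerprints and two correlated labels.
      X = rng.integers(0, 2, size=(400, 64)).astype(float)
      w = rng.normal(size=64)
      score = X @ w
      score -= score.mean()
      activity = (score + rng.normal(0, 1, 400) > 0).astype(int)      # "active on primary target"
      selectivity = (score + rng.normal(0, 2, 400) > 1).astype(int)   # "selective vs. off-target"
      Y = np.column_stack([activity, selectivity])

      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

      # Multitask: one network with a shared hidden layer and two sigmoid outputs.
      multi = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
      multi.fit(X_tr, Y_tr)
      acc_multi = (multi.predict(X_te) == Y_te).mean(axis=0)

      # Single-task baselines: a separate network per label.
      acc_single = []
      for k in range(2):
          single = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
          single.fit(X_tr, Y_tr[:, k])
          acc_single.append((single.predict(X_te) == Y_te[:, k]).mean())

      print("multitask accuracy (activity, selectivity):", np.round(acc_multi, 3))
      print("single-task accuracy:", np.round(acc_single, 3))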

  15. Navy POD

    Science.gov Websites

    This Day in Naval History - May 27. Story Number: NNS020131-28.

  16. Dialogues in the "Global Village:" NNS/NS Collaboration in Classroom Interaction

    ERIC Educational Resources Information Center

    Hall, Joan Kelly; Hendricks, Sean; Orr, Jeffery Lee

    2004-01-01

    This study is concerned with interactional involvement and identity construction in a university seminar comprised of native and non-native English speaking students. Findings reveal that in their classroom interactions, these two groups of students take on and perceive others to take on identities that have little to do with their language status…

  17. Hedging, Inflating, and Persuading in L2 Academic Writing

    ERIC Educational Resources Information Center

    Hinkel, Eli

    2005-01-01

    This study analyzes the types and frequencies of hedges and intensifiers employed in NS and NNS academic essays included in a corpus of L1 and L2 student academic texts (745 essays/220,747 words). The overarching goal of this investigation is to focus on these lexical and syntactic features of written discourse because they effectively lend…

  18. Native Speaker Dichotomy: Stakeholders' Preferences and Perceptions of Native and Non-Native Speaking English Language Teachers

    ERIC Educational Resources Information Center

    Atamturk, Nurdan; Atamturk, Hakan; Dimililer, Celen

    2018-01-01

    Addressing the perceptions and the preferences of the upper-secondary school students, teachers, parents and administrators of the native speaking (NS) and non-native speaking (NNS) English teachers as well as investigating the variables affecting these preferences and perceptions, this study explores whether or not the native speaker myth is…

  19. The relative reinforcing value of snack foods in response to consumption of sugar- or non-nutritive-sweetened beverages

    USDA-ARS?s Scientific Manuscript database

    The effects of sugar and non-nutritive sweetener on regulation of appetite and energy intake remain controversial. Using a behavioral economic choice paradigm, we sought to determine the effects of consuming a sugar-sweetened (S) or a non-nutritive sweetened (NNS) beverage on appetite and the relati...

  20. "My Major Is English, Believe It or Not:)" -- Participant Orientations in Nonnative/Native Text Chat

    ERIC Educational Resources Information Center

    Vandergriff, Ilona

    2013-01-01

    In their interactions with native speakers (NS), nonnative speakers (NNS) often position themselves as relative novices. For example, they may orient to the language expertise differential by apologizing for their linguistic ineptness or by making self-disparaging remarks about their second language (L2). This is true even for advanced learners in…

  1. The Role of Interaction in Native Speaker Comprehension of Nonnative Speaker Speech.

    ERIC Educational Resources Information Center

    Polio, Charlene; Gass, Susan M.

    1998-01-01

    Because interaction gives language learners an opportunity to modify their speech upon a signal of noncomprehension, it should also have a positive effect on native speakers' (NS) comprehension of nonnative speakers (NNS). This study shows that interaction does help NSs comprehend NNSs, contrasting the claims of an earlier study that found no…

  2. Computational intelligence for target assessment in Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Micheli-Tzanakou, Evangelia; Hamilton, J. L.; Zheng, J.; Lehman, Richard M.

    2001-11-01

    Recent advances in image and signal processing have created a new challenging environment for biomedical engineers. Methods that were developed for different fields are now finding a fertile ground in biomedicine, especially in the analysis of bio-signals and in the understanding of images. More and more, these methods are used in the operating room, helping surgeons, and in the physician's office as aids for diagnostic purposes. Neural network (NN) research, on the other hand, has come a long way in the past decade. NNs now consist of many thousands of highly interconnected processing elements that can encode, store and recall relationships between different patterns by altering the weighting coefficients of inputs in a systematic way. Although they can generate reasonable outputs from unknown input patterns, and can tolerate a great deal of noise, they are very slow when run on a serial machine. We have used advanced signal processing and innovative image processing methods along with computational intelligence for diagnostic purposes and as visualization aids inside and outside the operating room. Applications discussed include EEGs and field potentials in Parkinson's disease, along with 3D reconstruction of MR or fMR brain images of Parkinson's patients, which are currently used in the operating room for pallidotomies and deep brain stimulation (DBS).

  3. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  4. Supporting English-Medium Pedagogy through an Online Corpus of Science and Engineering Lectures

    ERIC Educational Resources Information Center

    Kunioshi, Nílson; Noguchi, Judy; Tojo, Kazuko; Hayashi, Hiroko

    2016-01-01

    As English-medium instruction (EMI) spreads around the world, university teachers and students who are non-native speakers of English (NNS) need to put much effort into the delivery or reception of content. Construction of scientific meaning in the process of learning is already complex when instruction is delivered in the first language of the…

  5. Cracking the Code of Press Headlines: From Difficulty to Opportunity for the Foreign Language Learner

    ERIC Educational Resources Information Center

    White, Michael

    2011-01-01

    While press materials, are widely used both as an ESP materials resource and as a research source by ESP practitioners, press headlines in English confront the Non Native Speaker (NNS) and to some extent the Native Speaker (NS) with a notorious paradox: headlines are crafted to raise communication potential and yet, rather than communicate, they…

  6. When the Native Is Also a Non-Native: "Retrodicting" the Complexity of Language Teacher Cognition

    ERIC Educational Resources Information Center

    Aslan, Erhan

    2015-01-01

    The impact of native (NS) and non-native speaker (NNS) identities on second or foreign language teachers' cognition and practices in the classroom has mainly been investigated in ESL/EFL contexts. Using complexity theory as a framework, this case study attempts to fill the gap in the literature by presenting a foreign language teacher in the…

  7. Interactional Features of Repair Negotiation in NS-NNS Interaction on Two Task Types: Information Gap and Personal Information Exchange

    ERIC Educational Resources Information Center

    Kitajima, Ryu

    2013-01-01

    Studies of task-based approaches in second language acquisition claim that controlled, goal-convergent tasks such as information gap tasks surpass open-ended conversations such as personal information exchange tasks for the development of the learner's interlanguage, in that the former promote more repair negotiation. And yet, few studies…

  8. Phonetic Parameters and Perceptual Judgments of Accent in English by American and Japanese Listeners

    ERIC Educational Resources Information Center

    Riney, Timothy J.; Takagi, Naoyuki; Inutsuka, Kumiko

    2005-01-01

    In this study we identify some of the phonetic parameters that correlate with nonnative speakers' (NNSs) perceptual judgments of accent in English and investigate NNS listener perceptions of English from a World Englishes point of view. Our main experiment involved 3,200 assessments of the perceived degree of accent in English of two speaker…

  9. Merging Imagery and Models for River Current Prediction

    DTIC Science & Technology

    2011-01-01

    synthetically generated bathymetry. [OCR residue of a table of mean differences (cm/s) and correlations between measured and synthetic bathymetry at moorings A1, A2, B1, and B2; the individual values are too garbled to reconstruct] … is that mean differences between the

  10. Teaching in the Foreign Language Classroom: How Being a Native or Non-Native Speaker of German Influences Culture Teaching

    ERIC Educational Resources Information Center

    Ghanem, Carla

    2015-01-01

    The study explores the complexities associated with graduate language instructors' NS/NNS identities and teaching of culture. Researchers, who work mainly in the English as a Second/Foreign Language field, have been discussing this divide and have examined the advantages and disadvantages each group brings to the profession, but not the influence…

  11. A Corpus-Based EAP Course for NNS Doctoral Students: Moving from Available Specialized Corpora to Self-Compiled Corpora

    ERIC Educational Resources Information Center

    Lee, David; Swales, John

    2006-01-01

    This paper presents a discussion of an experimental, innovative course in corpus-informed EAP for doctoral students. Participants were given access to specialized corpora of academic writing and speaking, instructed in the tools of the trade (web- and PC-based concordancers) and gradually inducted into the skills needed to best exploit the data…

  12. Assessment of Metacognitive Knowledge among Science Students, a Case Study of Two Bilingual and Two NNS Students

    ERIC Educational Resources Information Center

    Ali, Gadacha

    2007-01-01

    This investigation aims to assess awareness of genre and writing skills among science students via an abstract writing task, with recall and follow-up protocols to monitor the students, and to characterize the relationship between the abstract and the base article. Abstract writing involves specific data selection techniques of activities involved…

  13. The Decolonial Option in English Teaching: Can the Subaltern Act?

    ERIC Educational Resources Information Center

    Kumaravadivelu, B.

    2016-01-01

    In this reflective article that straddles the personal and the professional, the author shares his critical thoughts on the impact of the steady stream of discourse on the native speaker/nonnative speaker (NS/NNS) inequity in the field of TESOL. His contention is that more than a quarter century of the discoursal output has not in any significant…

  14. NS and NNS Scientists' Amendments of Dutch Scientific English and Their Impact on Hedging

    ERIC Educational Resources Information Center

    Burrough-Boenisch, Joy

    2005-01-01

    When 45 biologists from eight countries were asked to critically read and amend the English in Discussion sections of three Dutch-authored draft research papers, many of their alterations impacted on the hedging. This article discusses these alterations. In particular, it focuses on the hotspots in the texts, i.e., the points on which several…

  15. Negotiable Acceptability: Reflections on the Interactions between Language Professionals in Europe and NNS Scientists Wishing to Publish in English

    ERIC Educational Resources Information Center

    Burrough-Boenisch, Joy

    2006-01-01

    Prior to submitting a paper to a science journal, many European scientists employ language professionals to check that the English is acceptable. What influences these language professionals' criteria of acceptability? How do they interact with the authors for whom they work? And how do journals' criteria of acceptability affect their work? In…

  16. The Development of Pragmatic Competence and Perception of Requests by American Learners of the Russian Language

    ERIC Educational Resources Information Center

    Dunn, Valentina Nikolayevna Amelkina

    2012-01-01

    This study investigates the cross-cultural realization of request patterns. The goal of the study is to compare the realization of requests produced by adult American English speaking learners of Russian (NNS) to that of native speakers of Russian and native speakers of English to identify the similarities and differences between native and…

  17. Voice vs. Text Chats: Their Efficacy for Learning Probing Questions by Non-Native Speaking Medical Professionals in Online Courses

    ERIC Educational Resources Information Center

    Ellis, Olga

    2012-01-01

    Through an English for Specific Purposes (ESP): Communication in Nursing online course, the present study examines the efficacy of synchronous voice-based and text-based chats as instructional and communicative modes in learning to use open questions for probing in therapeutic dialogues by non-native speaking (NNS) participants, students of a…

  18. The Presentation of EIL in Kuwait: Students' Expectations and Needs

    ERIC Educational Resources Information Center

    Taqi, Hanan A.; Akbar, Rahima S.

    2015-01-01

    The teaching of native-like accents has long been the aim of many EFL educationists; however, this concept is heading towards a major change. Hence, the idea of this paper is based on Jenkins' (2000 & 2002) theory of English as an International Language (EIL). Jenkins' theory analyses the use of English by non-native speakers (NNS) where…

  19. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
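
    To make the policy iteration loop concrete, here is a tabular sketch on a tiny discrete MDP: alternating policy evaluation and greedy policy improvement until the policy is stable. This is a standard finite-state simplification, not the continuous-time actor-critic NN scheme of the paper, and the MDP (transition model, rewards, discount) is invented for illustration.

      import numpy as np

      # Tiny invented MDP: 4 states, 2 actions, known transitions P[s, a, s'] and rewards R[s, a].
      n_s, n_a, gamma = 4, 2, 0.9
      rng = np.random.default_rng(3)
      P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))       # random stochastic transitions
      R = rng.uniform(-1, 1, size=(n_s, n_a))                # random rewards

      def policy_evaluation(policy):
          """Solve V = R_pi + gamma * P_pi V exactly for a deterministic policy."""
          P_pi = P[np.arange(n_s), policy]                   # (n_s, n_s)
          R_pi = R[np.arange(n_s), policy]                   # (n_s,)
          return np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)

      policy = np.zeros(n_s, dtype=int)
      while True:
          V = policy_evaluation(policy)                      # policy evaluation step
          Q = R + gamma * P @ V                              # state-action values Q[s, a]
          new_policy = Q.argmax(axis=1)                      # greedy policy improvement step
          if np.array_equal(new_policy, policy):
              break
          policy = new_policy

      print("optimal policy:", policy, "values:", np.round(V, 3))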

  20. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  1. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  2. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  3. "I'm Very Not About the Law Part": Nonnative Speakers of English and the Miranda Warnings

    ERIC Educational Resources Information Center

    Pavlenko, Aneta

    2008-01-01

    This article presents a case study of a police interrogation of a nonnative speaker (NNS) of English. I show that the high linguistic and conceptual complexity of police cautions, such as the Miranda warnings, complicates understanding of these texts even by NNSs of English with a high level of interactional competence. I argue that the U.S.…

  4. An Exploratory Study of Criticism Realization Strategies Used By NS and NNS of New Zealand English

    ERIC Educational Resources Information Center

    Nguyen, Thi Thuy Minh

    2013-01-01

    This study explores how a group of learners of English as a second language (ESL) criticize in everyday situations compared to the native speaker (NS) with a view to expanding the range of speech acts under inquiry in the interlanguage pragmatics (ILP) literature. Data were collected from five NSs of New Zealand English and five intermediate…

  5. A Longitudinal Study of the Use of the First Language (L1) in French Foreign Language (FL) Classes

    ERIC Educational Resources Information Center

    White, Erin; Storch, Neomy

    2012-01-01

    This longitudinal study investigated teachers' use of the first language (L1) in two French foreign language (FL) intermediate level classes at two Australian universities. A native French-speaking teacher (NS) and a non-native French-speaking teacher (NNS) were observed and audio-recorded approximately every two weeks over a 12- week semester.…

  6. Genital Flora, Prolonged Rupture of the Membranes and the Risk of Early Onset Neonatal Septicemia in Qatif Central Hospital, Kingdom of Saudi Arabia.

    ERIC Educational Resources Information Center

    Srair, Hussain Abu; And Others

    1993-01-01

    Evaluated 108 mothers and their newborn babies for bacterial colonization and neonatal septicemia (NNS) after membranes had ruptured for 24 hours or more. Nearly 40% of the babies were already colonized at birth. The three most common bacteria isolated from the babies were Escherichia coli, Group B Streptococcus, and Streptococcus faecalis. (MDM)

  7. Defense.gov - Special Report - H1N1 Flu: Facing the H1N1 Flu

    Science.gov Websites

    Excerpts from a Defense.gov special report on facing the H1N1 flu: "WASHINGTON, Nov. 6, 2009 - Senior medical officials who successfully slowed the spread of H1N1 flu …"; "… Crucial To Fleet Readiness - NORFOLK (NNS) -- Commands and medical clinics throughout U.S. Fleet Forces …"; "Naval Medical Center Portsmouth Works to Immunize Against Flu - PORTSMOUTH, Va., Dec. 15 …".

  8. Health outcomes of non-nutritive sweeteners: analysis of the research landscape.

    PubMed

    Lohner, Szimonetta; Toews, Ingrid; Meerpohl, Joerg J

    2017-09-08

    Food products containing non-nutritive sweeteners (NNSs) instead of sugar have become increasingly popular in recent decades. Their appeal is obviously related to their calorie-free sweet taste. However, with the dramatic increase in their consumption, it is reasonable and timely to evaluate their potential health benefits and, more importantly, potential adverse effects. The main aim of this scoping review was to map the evidence about health outcomes possibly associated with regular NNS consumption by examining the extent, range, and nature of research activity in this area. We systematically searched Ovid MEDLINE, EMBASE and the Cochrane CENTRAL databases for studies on NNSs (artificial sweeteners or natural, non-caloric sweeteners, either used individually or in combination) using text terms with appropriate truncation and relevant indexing terms. All human studies investigating any health outcomes of a NNS intervention or exposure were eligible for inclusion. No studies were excluded based on language, study design or methodological quality. Data for each health outcome were summarized in tabular form and were discussed narratively. Finally, we included 372 studies in our scoping review, comprising 15 systematic reviews, 155 randomized controlled trials (RCTs), 23 non-randomized controlled trials, 57 cohort studies, 52 case-control studies, 28 cross-sectional studies and 42 case series/case reports. In healthy subjects, appetite and short-term food intake, risk of cancer, risk of diabetes, risk of dental caries, weight gain and risk of obesity are the most investigated health outcomes. Overall, there is no conclusive evidence of beneficial or harmful effects on these outcomes. Numerous health outcomes including headaches, depression, behavioral and cognitive effects, neurological effects, risk of preterm delivery, cardiovascular effects or risk of chronic kidney disease were investigated in fewer studies and further research is needed. In subjects with diabetes and hypertension, the evidence regarding health outcomes of NNS use is also inconsistent. This scoping review identifies the need for future research to address the numerous evidence gaps related to health effects of NNS use. It also specifies the research questions and areas where a systematic review with meta-analyses is required for the proper evaluation of health outcomes associated with regular NNS consumption.

  9. Entry Abort Determination Using Non-Adaptive Neural Networks for Mars Precision Landers

    NASA Technical Reports Server (NTRS)

    Graybeal, Sarah R.; Kranzusch, Kara M.

    2005-01-01

    The 2009 Mars Science Laboratory (MSL) will attempt the first precision landing on Mars using a modified version of the Apollo Earth entry guidance program. The guidance routine, Entry Terminal Point Controller (ETPC), commands the deployment of a supersonic parachute after converging the range to the landing target. For very dispersed cases, ETPC may not converge the range to the target and safely command parachute deployment within Mach number and dynamic pressure constraints. A full-lift up abort can save 85% of these failed trajectories while abandoning the precision landing objective. Though current MSL requirements do not call for an abort capability, an autonomous abort capability may be desired, for this mission or future Mars precision landers, to make the vehicle more robust. The application of artificial neural networks (NNs) as an abort determination technique was evaluated by personnel at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC). In order to implement an abort, a failed trajectory needs to be recognized in real time. Abort determination is dependent upon several trajectory parameters whose relationships to vehicle survival are not well understood, and yet the lander must be trained to recognize unsafe situations. Artificial neural networks (NNs) provide a way to model these parameters and can provide MSL with the artificial intelligence necessary to independently declare an abort. Using the 2009 Mars Science Laboratory (MSL) mission as a case study, a non-adaptive NN was designed, trained and tested using Monte Carlo simulations of MSL descent and incorporated into ETPC. Neural network theory, the development history of the MSL NN, and initial testing with severe dust storm entry trajectory cases are discussed in Reference 1 and will not be repeated here. That analysis demonstrated that NNs are capable of recognizing failed descent trajectories and can significantly increase the survivability of MSL for very dispersed cases. NN testing was then broadened to evaluate fully dispersed entry trajectories. The NN correctly classified 99.7% of descent trajectories as abort or nonabort and reduced the probability of an unsafe parachute deployment by 83%. This second, broader testing phase is discussed in this paper.
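
    A hedged sketch of the classification task described above: a small feed-forward NN trained on simulated trajectory features to output an abort / no-abort decision. The features, the Monte Carlo-style data and the labeling rule below are invented stand-ins, not the MSL guidance parameters, constraints or trained network of the paper.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)

      # Invented features: [range-to-target error (km), Mach number, dynamic pressure (kPa)].
      n = 2000
      X = np.column_stack([rng.normal(0, 20, n),
                           rng.uniform(1.2, 2.6, n),
                           rng.uniform(0.2, 1.2, n)])
      # Invented "unsafe deployment" label: large range error, or Mach/q outside assumed limits.
      abort = ((np.abs(X[:, 0]) > 30) | (X[:, 1] > 2.3) | (X[:, 2] > 1.0)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, abort, test_size=0.25, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", round(clf.score(X_te, y_te), 3))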

  10. Evaluating cytology for the detection of invasive cervical cancer.

    PubMed

    Landy, R; Castanon, A; Hamilton, W; Lim, A W W; Dudding, N; Hollingworth, A; Sasieni, P D

    2016-06-01

    To assess the sensitivity, the number needed to screen (NNS) and the positive predictive value (PPV) of cervical cytology for the diagnosis of cancer by age in a screening population. A retrospective cohort of women with invasive cervical cancer nested within a census of cervical cytology. All (c. 8 million) women aged 20-64 years with cervical cytology (excluding tests after an earlier abnormality). From April 2007 to March 2010, 3372 women had cervical cancer diagnosed within 12 months of such cytology in England. The sensitivity of cervical cytology to cancer, the NNS to detect one cancer and the predictive values of cytology were calculated for various 'referral' thresholds. These were calculated for ages 20-24, 25-34, 35-49 and 50-64 years. The sensitivity of at least moderate dyskaryosis [equivalent to a high-grade squamous intraepithelial lesion (HSIL) or worse] for cancer of 89.4% [95% confidence interval (CI) 88.3-90.4%] in women offered screening was independent of age. At all ages, women with borderline-early recall or mild dyskaryosis on cytology (equivalent to ASC-US and LSIL, respectively, in the Bethesda system) had a similar risk of cervical cancer to the risk in all women tested. The PPVs of severe dyskaryosis/?invasive and ?glandular neoplasia cytology (equivalent to squamous cell carcinoma and adenocarcinoma/adenocarcinoma in situ, respectively, in the Bethesda System) were 34% and 12%, respectively; the PPV of severe dyskaryosis (HSIL: severe dysplasia) was 4%. The NNS was lowest when the incidence of cervical cancer was highest, at ages 25-39 years, but the proportion of those with abnormal cytology who have cancer was also lowest in younger women. The PPV of at least severe dyskaryosis (HSIL: severe dysplasia) for cancer was 4-10% among women aged 25-64 years, justifying a 2-week referral to colposcopy and demonstrating the importance of failsafe monitoring for such patients. The sensitivity of cytology for cervical cancer was excellent across all age groups. © 2015 The Authors Cytopathology Published by John Wiley & Sons Ltd.
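
    The three quantities reported above follow directly from screening counts. The sketch below shows the arithmetic with hypothetical numbers; the counts are not taken from the study and are chosen only to illustrate the formulas.

    ```python
    # Worked example of the screening metrics used above (sensitivity, PPV, NNS).
    # The counts below are hypothetical and only illustrate the formulas.
    def screening_metrics(tp, fn, fp, n_screened):
        sensitivity = tp / (tp + fn)   # cancers with abnormal cytology / all cancers
        ppv = tp / (tp + fp)           # cancers / all positives at this threshold
        nns = n_screened / tp          # tests needed to detect one cancer
        return sensitivity, ppv, nns

    tp, fn, fp, n_screened = 900, 100, 21_600, 2_000_000   # hypothetical counts
    sens, ppv, nns = screening_metrics(tp, fn, fp, n_screened)
    print(f"sensitivity={sens:.1%}, PPV={ppv:.1%}, NNS={nns:,.0f}")
    ```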

  11. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.
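
    Hopfield-Tank formulations of static channel assignment encode the assignment as a binary cell-by-channel matrix and minimize an energy built from a demand term and a co-channel interference term. The sketch below writes down such an energy and minimizes it by greedy single-bit descent rather than by an actual analog network; the interference matrix, demands, and penalty weights are illustrative assumptions, not taken from the paper.

    ```python
    # Sketch of a Hopfield-Tank-style energy for static channel assignment (SCA).
    # Interference matrix, demands, and penalty weights are illustrative; descent
    # is done by greedy single-bit flips, not by an analog Hopfield network.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_channels = 4, 6
    demand = np.array([2, 1, 2, 1])              # channels required per cell
    interfere = np.array([[0, 1, 0, 1],          # 1 = cells may not share a channel
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [1, 0, 1, 0]])
    A, B = 2.0, 1.0                              # penalty weights

    def energy(V):
        demand_term = A * np.sum((V.sum(axis=1) - demand) ** 2)
        cochannel_term = B * np.sum(interfere * (V @ V.T)) / 2.0
        return demand_term + cochannel_term

    V = rng.integers(0, 2, size=(n_cells, n_channels))   # random initial assignment
    best = energy(V)
    improved = True
    while improved:                              # greedy energy descent
        improved = False
        for i in range(n_cells):
            for k in range(n_channels):
                V[i, k] ^= 1                     # trial flip
                e = energy(V)
                if e < best:
                    best, improved = e, True
                else:
                    V[i, k] ^= 1                 # revert
    print("final energy:", best, "(0 means all constraints satisfied)")
    print("channel assignment (cells x channels):\n", V)
    ```

    As the abstract notes, such descent can stall in local minima, which is the weakness that motivates the genetic-algorithm comparison.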

  12. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  13. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  14. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentrations are non-linear and sub-Nernstian. This task represents function approximation of multi-variate, multi-response, correlated, non-linear data with unknown noise structure, i.e., multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages, TRAJAN and Professional II, are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data driven information technology, NN does not require a model, or a prior or posterior distribution of the data or noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets confirms AI2 (artificial intelligence, 2nd generation) as a promising technology for quantitation.
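
    A minimal Gaussian RBF calibration sketch in plain numpy is shown below, not the TRAJAN or Professional II implementation: a synthetic, roughly sub-Nernstian single-ISE response is inverted back to log concentration. The toy response curve, the placement of the centers, and the basis width are assumptions chosen only to show the mechanics.

    ```python
    # Minimal Gaussian RBF calibration sketch (not the TRAJAN / Professional II
    # models): map a synthetic, sub-Nernstian ISE response (mV) back to
    # log-concentration. The response curve and RBF parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    logc = rng.uniform(-5, -1, 60)                    # log10 concentration (training)
    # Toy non-Nernstian response (mV) with measurement noise.
    emf = 45.0 * logc + 8.0 * np.tanh(logc + 3) + rng.normal(0, 0.5, logc.size)

    centers = np.linspace(emf.min(), emf.max(), 12)   # RBF centers on the response axis
    width = (emf.max() - emf.min()) / 12

    def design(x):
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

    W, *_ = np.linalg.lstsq(design(emf), logc, rcond=None)   # linear output weights

    emf_new = np.array([-180.0, -120.0, -60.0])       # unseen sensor readings (mV)
    print("predicted log10 concentration:", design(emf_new) @ W)
    ```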

  15. Feasibility and yield of screening for non-communicable diseases among treated tuberculosis patients in Peru.

    PubMed

    Byrne, A L; Marais, B J; Mitnick, C D; Garden, F L; Lecca, L; Contreras, C; Yauri, Y; Garcia, F; Marks, G B

    2018-01-01

    The increasing prevalence of non-communicable diseases (NCDs) poses a major challenge to low- and middle-income countries. Patients' engagement with health services for anti-tuberculosis treatment provides an opportunity for screening for NCDs and for linkage to care. We explored the feasibility and yield of screening for NCDs in patients treated for tuberculosis (TB) in Lima, Peru, as part of a study focused on chronic respiratory sequelae. A representative sample of community controls was recruited from the same geographical area. Screening entailed taking a medical history and performing ambulatory blood pressure measurement and urinalysis. A total of 177 participants with previous TB (33 with multidrug-resistant TB) and 161 community controls were evaluated. There was an almost four-fold increased prevalence of self-reported diabetes mellitus (DM) in the TB group (adjusted prevalence ratio 3.66, 95%CI 1.68-8.01). Among those without self-reported DM, 3.3% had glycosuria, with a number needed to screen (NNS) of 31. The NNS to find one (new) case of hypertension or proteinuria in the TB group was respectively 24 and 5. Patient-centred care that includes pragmatic NCD screening is feasible in TB patients, and the treatment period provides a good opportunity to link patients to ongoing care.

  16. Changes in core food intake among Australian children between 1995 and 2007.

    PubMed

    Rangan, A M; Kwan, J S L; Louie, J C Y; Flood, V M; Gill, T P

    2011-11-01

    To assess the changes in the consumption of core foods among Australian children between the 1995 National Nutrition Survey (1995 NNS) and the 2007 Australian National Children's Nutrition and Physical Activity Survey (2007 Children's Survey). Core food consumption was analysed using 24-h recall data from 2-16-year-old children in the 1995 NNS (n=2435) and the 2007 Children's Survey (n=4380). Differences in the percent consuming, amounts consumed and percent energy contribution were assessed. The consumption of core foods increased significantly between the 1995 and 2007 surveys, including per-capita consumption and percent energy contribution (both P < 0.001). Core foods contributed 59% of energy intake in 1995 compared with 65% in 2007. The types of core foods consumed also changed during this time period, with more children reporting eating healthy options such as wholemeal bread, reduced-fat milk, reduced-fat cheese and fruit in the 2007 Children's Survey. Conversely, the consumption of white bread, full-fat milk and low-fibre breakfast cereals was lower in 2007. Overall, reported dietary intake had improved from 1995 to 2007 among Australian children, with an increase in the amounts of core foods consumed and healthier types of foods being chosen. Continued health-promotion activities and monitoring of food consumption are highly warranted.

  17. Development and applications of nondestructive evaluation at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Whitaker, Ann F.

    1990-01-01

    A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.

  18. Manpower Issues Involving Visit, Board, Search, and Seizure (VBSS)

    DTIC Science & Technology

    2012-03-01

    Excerpt fragments from the report: an Associated Press article by Lolita C. Baldor, "USS Kidd rescues Iran boat from pirates" (January 6), describes American forces flying off the guided-missile destroyer USS Kidd responding to a distress call from an Iranian vessel; the report also cites Navy Public Affairs, "VBSS: Evolving the Mission," story number NNS090425-03, http://www.navy.mil/search/display.asp?story_id=44692.

  19. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  20. Academic Computing Facilities and Services in Higher Education--A Survey.

    ERIC Educational Resources Information Center

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  1. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  2. Global retrieval of soil moisture and vegetation properties using data-driven methods

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann

    2017-04-01

    Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as the reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infrared). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. An NN with input data from AMSR-E observations and SMOS soil moisture as reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the scarce distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative to implement new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training. This approach has recently been tested at the European Centre for Medium-Range Weather Forecasts (ECMWF). Finally, retrievals using radiative transfer models can also be used as a reference SM dataset for the training phase. This approach was used to retrieve soil moisture from AMSR-E, as mentioned above, and also to implement the official European Space Agency (ESA) SMOS soil moisture product in Near-Real-Time. We will finish with a discussion of the retrieval of vegetation parameters from SMOS observations using data-driven methods.
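
    A minimal sketch of the supervised setup described above is given below: a small NN maps multi-sensor inputs to a reference soil-moisture dataset used for training. The two synthetic "brightness temperatures", the "backscatter" value, and the toy forward relations are assumptions made for illustration; this is not the SMOS processing chain.

    ```python
    # Sketch of the supervised retrieval setup: an MLP maps multi-sensor inputs
    # (here two synthetic brightness temperatures and one backscatter value) to a
    # reference soil-moisture dataset. All data and relations are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    n = 8_000
    sm_ref = rng.uniform(0.05, 0.45, n)                 # reference SM (m3/m3), e.g. a model or SMOS
    tb_h = 280 - 180 * sm_ref + rng.normal(0, 3, n)     # toy H-pol brightness temperature (K)
    tb_v = 290 - 120 * sm_ref + rng.normal(0, 3, n)     # toy V-pol brightness temperature (K)
    sigma0 = -20 + 25 * sm_ref + rng.normal(0, 1, n)    # toy ASCAT-like backscatter (dB)
    X = np.column_stack([tb_h, tb_v, sigma0])

    nn = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=500,
                                    random_state=0))
    nn.fit(X[:6_000], sm_ref[:6_000])                   # "training" period
    pred = nn.predict(X[6_000:])                        # independent "retrieval" period
    print("RMSE (m3/m3):", float(np.sqrt(np.mean((pred - sm_ref[6_000:]) ** 2))))
    ```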

  3. Structural and thermoelectric properties of epitaxially grown Bi2Te3 thin films and superlattices

    NASA Astrophysics Data System (ADS)

    Peranio, N.; Eibl, O.; Nurnus, J.

    2006-12-01

    Multi-quantum-well structures of Bi2Te3 are predicted to have a high thermoelectric figure of merit ZT. Bi2Te3 thin films and Bi2Te3/Bi2(Te0.88Se0.12)3 superlattices (SLs) were grown epitaxially by molecular beam epitaxy on BaF2 substrates with periods of 12 and 6 nm, respectively. Reflection high-energy electron diffraction confirmed a layer-by-layer growth; x-ray diffraction yielded the lattice parameters and SL periods and proved epitaxial growth. The in-plane transport coefficients were measured, and the thin films and SL had power factors between 28 and 35 μW/(cm K²). The lattice thermal conductivity varied between 1.60 W/(m K) for Bi2Te3 thin films and 1.01 W/(m K) for a 10 nm SL. The best figures of merit ZT were achieved for the SL; however, the values are slightly smaller than those in bulk materials. Thin films and superlattices were investigated in plan view and cross section by transmission electron microscopy. In the Bi2Te3 thin film and SL the dislocation density was found to be 2×10¹⁰ cm⁻². Bending of the SL with amplitudes of 30 nm (12 nm SL) and 15 nm (6 nm SL) and a wavelength of 400 nm was determined. Threading dislocations were found with a density greater than 2×10⁹ cm⁻². The superlattice interfaces are strongly bent in the region of the threading dislocations; undisturbed regions have a maximum lateral size of 500 nm. Thin films and SL showed a structural modulation [natural nanostructure (nns)] with a wavelength of 10 nm and a wave vector parallel to (1,0,10). This nns was also observed in Bi2Te3 bulk materials and turned out to be of general character for Bi2Te3. The effect of the microstructure on the thermoelectric properties is discussed. The microstructure is governed by the superlattice, the nns, and the dislocations that are present in the films. Our results indicate that the microstructure directly affects the lattice thermal conductivity. Thermopower and electrical conductivity were found to be negatively correlated, and no clear dependence of the two quantities on the microstructure could be found.

  4. Transcriptional Regulation in Ebola Virus: Effects of Gene Border Structure and Regulatory Elements on Gene Expression and Polymerase Scanning Behavior

    PubMed Central

    Brauburger, Kristina; Boehmann, Yannik; Krähling, Verena

    2015-01-01

    ABSTRACT The highly pathogenic Ebola virus (EBOV) has a nonsegmented negative-strand (NNS) RNA genome containing seven genes. The viral genes either are separated by intergenic regions (IRs) of variable length or overlap. The structure of the EBOV gene overlaps is conserved throughout all filovirus genomes and is distinct from that of the overlaps found in other NNS RNA viruses. Here, we analyzed how diverse gene borders and noncoding regions surrounding the gene borders influence transcript levels and govern polymerase behavior during viral transcription. Transcription of overlapping genes in EBOV bicistronic minigenomes followed the stop-start mechanism, similar to that followed by IR-containing gene borders. When the gene overlaps were extended, the EBOV polymerase was able to scan the template in an upstream direction. This polymerase feature seems to be generally conserved among NNS RNA virus polymerases. Analysis of IR-containing gene borders showed that the IR sequence plays only a minor role in transcription regulation. Changes in IR length were generally well tolerated, but specific IR lengths led to a strong decrease in downstream gene expression. Correlation analysis revealed that these effects were largely independent of the surrounding gene borders. Each EBOV gene contains exceptionally long untranslated regions (UTRs) flanking the open reading frame. Our data suggest that the UTRs adjacent to the gene borders are the main regulators of transcript levels. A highly complex interplay between the different cis-acting elements to modulate transcription was revealed for specific combinations of IRs and UTRs, emphasizing the importance of the noncoding regions in EBOV gene expression control. IMPORTANCE Our data extend those from previous analyses investigating the implication of noncoding regions at the EBOV gene borders for gene expression control. We show that EBOV transcription is regulated in a highly complex yet not easily predictable manner by a set of interacting cis-active elements. These findings are important not only for the design of recombinant filoviruses but also for the design of other replicon systems widely used as surrogate systems to study the filovirus replication cycle under low biosafety levels. Insights into the complex regulation of EBOV transcription conveyed by noncoding sequences will also help to interpret the importance of mutations that have been detected within these regions, including in isolates of the current outbreak. PMID:26656691

  5. Transcriptional Regulation in Ebola Virus: Effects of Gene Border Structure and Regulatory Elements on Gene Expression and Polymerase Scanning Behavior.

    PubMed

    Brauburger, Kristina; Boehmann, Yannik; Krähling, Verena; Mühlberger, Elke

    2016-02-15

    The highly pathogenic Ebola virus (EBOV) has a nonsegmented negative-strand (NNS) RNA genome containing seven genes. The viral genes either are separated by intergenic regions (IRs) of variable length or overlap. The structure of the EBOV gene overlaps is conserved throughout all filovirus genomes and is distinct from that of the overlaps found in other NNS RNA viruses. Here, we analyzed how diverse gene borders and noncoding regions surrounding the gene borders influence transcript levels and govern polymerase behavior during viral transcription. Transcription of overlapping genes in EBOV bicistronic minigenomes followed the stop-start mechanism, similar to that followed by IR-containing gene borders. When the gene overlaps were extended, the EBOV polymerase was able to scan the template in an upstream direction. This polymerase feature seems to be generally conserved among NNS RNA virus polymerases. Analysis of IR-containing gene borders showed that the IR sequence plays only a minor role in transcription regulation. Changes in IR length were generally well tolerated, but specific IR lengths led to a strong decrease in downstream gene expression. Correlation analysis revealed that these effects were largely independent of the surrounding gene borders. Each EBOV gene contains exceptionally long untranslated regions (UTRs) flanking the open reading frame. Our data suggest that the UTRs adjacent to the gene borders are the main regulators of transcript levels. A highly complex interplay between the different cis-acting elements to modulate transcription was revealed for specific combinations of IRs and UTRs, emphasizing the importance of the noncoding regions in EBOV gene expression control. Our data extend those from previous analyses investigating the implication of noncoding regions at the EBOV gene borders for gene expression control. We show that EBOV transcription is regulated in a highly complex yet not easily predictable manner by a set of interacting cis-active elements. These findings are important not only for the design of recombinant filoviruses but also for the design of other replicon systems widely used as surrogate systems to study the filovirus replication cycle under low biosafety levels. Insights into the complex regulation of EBOV transcription conveyed by noncoding sequences will also help to interpret the importance of mutations that have been detected within these regions, including in isolates of the current outbreak. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  6. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  7. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  8. Controlling basins of attraction in a neural network-based telemetry monitor

    NASA Technical Reports Server (NTRS)

    Bell, Benjamin; Eilbert, James L.

    1988-01-01

    The size of the basins of attraction around fixed points in recurrent neural nets (NNs) can be modified by a training process. Controlling these attractive regions by presenting training data with various amounts of noise added to the prototype signal vectors is discussed. Application of this technique to signal processing results in a classification system whose sensitivity can be controlled. This new technique is applied to the classification of temporal sequences in telemetry data.
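
    One minimal way to see the idea of basin size in a recurrent net is sketched below, assuming a Hopfield-style associative memory trained by a Hebbian rule on prototypes plus noisy copies, with the basin probed by measuring recall from increasingly corrupted inputs. The network size, noise levels, and recall procedure are illustrative assumptions, not the paper's telemetry classifier.

    ```python
    # Minimal sketch of probing a basin of attraction in a Hopfield-style net:
    # store prototypes (plus noisy variants) with a Hebbian rule, then measure how
    # much input corruption still relaxes back to the target prototype.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 100
    protos = [rng.choice([-1, 1], size=N) for _ in range(3)]
    target = protos[0]

    def noisy(p, flip_frac):
        flips = rng.random(N) < flip_frac
        return np.where(flips, -p, p)

    # Hebbian storage over each prototype plus several noisy variants of it.
    train_set = [noisy(p, 0.1) for p in protos for _ in range(20)] + protos
    W = sum(np.outer(v, v) for v in train_set) / len(train_set)
    np.fill_diagonal(W, 0.0)

    def recall(x, steps=20):
        for _ in range(steps):          # synchronous sign updates
            x = np.sign(W @ x)
            x[x == 0] = 1
        return x

    for flip in (0.1, 0.25, 0.4, 0.5):
        hits = sum(np.array_equal(recall(noisy(target, flip)), target) for _ in range(50))
        print(f"input corruption {flip:.0%}: prototype recovered in {hits}/50 trials")
    ```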

  9. Research on the New Nursery School. Part I, A Summary of the Evaluation of the Experimental Program for Deprived Children at the New Nursery School Using Some Experimental Measures. Interim Report.

    ERIC Educational Resources Information Center

    Nimnicht, Glen; And Others

    The New Nursery School (NNS) program was set up to help 3- and 4-year-old, Spanish-surnamed, environmentally deprived children. The objectives set were (1) to improve self-image, (2) to increase perceptual acuity, (3) to improve language ability, and (4) to improve problem-solving and concept-formation skills. The school is organized as an…

  10. Artificial sweetener saccharin disrupts intestinal epithelial cells' barrier function in vitro.

    PubMed

    Santos, P S; Caria, C R P; Gotardo, E M F; Ribeiro, M L; Pedrazzoli, J; Gambero, A

    2018-06-25

    Consumption of non-nutritive sweeteners (NNS) is a dietary practice used by those who wish to lose weight or by patients on a sugar-restricted diet such as those with DM2. Although these substances are safe, possible biological interactions with the digestive tract, particularly in relation to intestinal permeability, have not been studied. Thus, the current work sought to investigate the action of different NNS on intestinal permeability using an in vitro Caco-2 cell model. Caco-2 cells were incubated with acesulfame K, aspartame, saccharin, or sucralose at equimolar concentrations. Acesulfame K, aspartame, and sucralose did not disrupt monolayer integrity in the cells. However, saccharin increased paracellular permeability and decreased transepithelial electrical resistance (TEER) via a non-cytotoxic mechanism. The levels of the tight junction protein claudin-1 were reduced in Caco-2 cells that had previously been exposed to saccharin. The inhibition of nuclear factor-κB (NF-κB) was able to prevent the reduction in TEER induced by saccharin treatment. Thalidomide, as an inhibitor of ubiquitin ligase, was able to prevent the decrease in claudin-1 protein expression and the TEER reduction in Caco-2 cells. Saccharin disrupts monolayer integrity and alters paracellular permeability in a Caco-2 cell monolayer model, via a mechanism involving NF-κB activation, resulting in the ubiquitination of the tight junction protein claudin-1. Saccharin consumption may potentially alter the intestinal integrity in humans.

  11. The discriminatory capability of existing scores to predict advanced colorectal neoplasia: a prospective colonoscopy study of 5,899 screening participants.

    PubMed

    Wong, Martin C S; Ching, Jessica Y L; Ng, Simpson; Lam, Thomas Y T; Luk, Arthur K C; Wong, Sunny H; Ng, Siew C; Ng, Simon S M; Wu, Justin C Y; Chan, Francis K L; Sung, Joseph J Y

    2016-02-03

    We evaluated the performance of seven existing risk scoring systems in predicting advanced colorectal neoplasia in an asymptomatic Chinese cohort. We prospectively recruited 5,899 Chinese subjects aged 50-70 years in a colonoscopy screening programme (2008-2014). Scoring systems under evaluation included two scoring tools from the US; one each from Spain, Germany, and Poland; the Korean Colorectal Screening (KCS) scores; and the modified Asia Pacific Colorectal Screening (APCS) scores. The c-statistics, sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of these systems were evaluated. The resources required were estimated based on the Number Needed to Screen (NNS) and the Number Needed to Refer for colonoscopy (NNR). Advanced neoplasia was detected in 364 (6.2%) subjects. The German system referred the lowest proportion of subjects (11.2%) for colonoscopy, whilst the KCS scoring system referred the highest (27.4%). The c-statistics of all systems ranged from 0.56 to 0.65, with sensitivities ranging from 0.04 to 0.44 and specificities from 0.74 to 0.99. The modified APCS scoring system had the highest c-statistic (0.65, 95% C.I. 0.58-0.72). The NNS (12-19) and NNR (5-10) were similar among the scoring systems. The existing scoring systems have variable capability to predict advanced neoplasia among asymptomatic Chinese subjects, and further external validation should be performed.

  12. Energy and nutrient intake in preschool and school age Mexican children: National Nutrition Survey 1999.

    PubMed

    Barquera, Simón; Rivera, Juan A; Safdie, Margarita; Flores, Mario; Campos-Nonato, Ismael; Campirano, Fabricio

    2003-01-01

    To estimate energy and nutrient intake and adequacy in preschool and school-age Mexican children, using the National Nutrition Survey 1999 (NNS-1999). Twenty-four-hour dietary recalls from preschool (n = 1,309) and school-age (n = 2,611) children obtained from a representative sub-sample of the NNS-1999 were analyzed. Intakes and adequacies were estimated and compared across four regions, socio-economic strata, between urban and rural areas, and between indigenous and non-indigenous children. Median energy intake in preschool children was 949 kcal and in school-age children 1,377 kcal, with adequacies < 70% for both groups. Protein adequacy was > 150% in both age groups. The North and Mexico City regions had the highest fat intake and the lowest fiber intake. Children in the South region, indigenous children, and those in the lowest socio-economic stratum had higher fiber and carbohydrate intakes and the lowest fat intake. These children also showed the highest risks of inadequacies for vitamin A, vitamin C, folate, iron, zinc and calcium. Mexico is experiencing a nutrition transition with internal inequalities across regions and socio-economic strata. Food policy must account for these differences in order to optimize resources directed at social programs. The English version of this paper is also available at: http://www.insp.mx/salud/index.html.

  13. Effects of Oral Stimulus Frequency Spectra on the Development of Non-nutritive Suck in Preterm Infants with Respiratory Distress Syndrome or Chronic Lung Disease, and Preterm Infants of Diabetic Mothers

    PubMed Central

    Barlow, SM; Lee, Jaehoon; Wang, Jingyan; Oder, Austin; Oh, Hyuntaek; Hall, Sue; Knox, Kendi; Weatherstone, Kathleen; Thompson, Diane

    2013-01-01

    The precocial nature of orofacial sensorimotor control underscores the biological importance of establishing ororhythmic activity in human infants. The purpose of this study was to assess the effects of comparable doses of three forms of orosensory experience, including a low-velocity spectrally reduced orocutaneous stimulus (NT1), a high-velocity broad spectrum orocutaneous stimulus (NT2), and a SHAM stimulus consisting of a blind pacifier. Each orosensory experience condition was paired with gavage feedings 3x/day for 10 days in the neonatal intensive care unit (NICU). Four groups of preterm infants (N=214), including those with respiratory distress syndrome (RDS), chronic lung disease (CLD), infants of diabetic mothers (IDM), and healthy controls (HI), were randomized to the type of orosensory condition. Mixed modeling, adjusted for gender, gestational age, postmenstrual age, and birth weight, demonstrated the most significant gains in non-nutritive suck (NNS) development among CLD infants who were treated with the NT2 stimulus, with smaller gains realized among RDS and IDM infants. The broader spectrum of the NT2 stimulus maps closely to known response properties of mechanoreceptors in the lip, tongue, and oral mucosa and is more effective in promoting NNS development among preterm infants with impaired oromotor function compared to the low-velocity, spectrally reduced NT1 orosensory stimulus. PMID:25018662

  14. Data-Driven H∞ Control for Nonlinear Distributed Parameter Systems.

    PubMed

    Luo, Biao; Huang, Tingwen; Wu, Huai-Ning; Yang, Xiong

    2015-11-01

    The data-driven H∞ control problem of nonlinear distributed parameter systems is considered in this paper. An off-policy learning method is developed to learn the H∞ control policy from real system data rather than from a mathematical model. First, Karhunen-Loève decomposition is used to compute the empirical eigenfunctions, which are then employed to derive a reduced-order model (ROM) of the slow subsystem based on singular perturbation theory. The H∞ control problem is reformulated based on the ROM and can, in theory, be transformed into the problem of solving the Hamilton-Jacobi-Isaacs (HJI) equation. To learn the solution of the HJI equation from real system data, a data-driven off-policy learning approach is proposed based on the simultaneous policy update algorithm, and its convergence is proved. For implementation purposes, a neural network (NN)-based actor-critic structure is developed, where a critic NN and two action NNs are employed to approximate the value function, control policy, and disturbance policy, respectively. Subsequently, a least-squares NN weight-tuning rule is derived with the method of weighted residuals. Finally, the developed data-driven off-policy learning approach is applied to a nonlinear diffusion-reaction process, and the obtained results demonstrate its effectiveness.
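
    The Karhunen-Loève (proper orthogonal decomposition) step that produces the reduced-order model can be sketched with a plain SVD of snapshot data, as below. The synthetic diffusion-reaction-like field and the 99.9% energy cutoff are assumptions for illustration; the H∞ learning itself is not reproduced here.

    ```python
    # Sketch of the Karhunen-Loève / POD step used to build a reduced-order model:
    # take snapshots of a distributed state, extract empirical eigenfunctions via
    # SVD, and project onto the leading modes. The snapshot field is synthetic.
    import numpy as np

    x = np.linspace(0, 1, 200)                       # spatial grid
    t = np.linspace(0, 5, 120)                       # snapshot times
    # Synthetic field: two spatial modes with simple time dynamics.
    snapshots = (np.outer(np.sin(np.pi * x), np.exp(-0.5 * t))
                 + 0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * t)))

    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(captured, 0.999)) + 1    # modes capturing 99.9% of energy
    phi = U[:, :r]                                   # empirical eigenfunctions
    a = phi.T @ snapshots                            # reduced (slow) coordinates
    print(f"kept {r} modes; relative reconstruction error:",
          float(np.linalg.norm(snapshots - phi @ a) / np.linalg.norm(snapshots)))
    ```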

  15. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
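
    Stripped of the ADP and NN machinery, the event-triggered feedback idea reduces to transmitting a new state sample only when the gap between the current state and the last broadcast state exceeds a threshold. The sketch below illustrates just that triggering rule on a hypothetical scalar linear plant; the plant, gain, and threshold are assumptions and the paper's nonlinear optimal design is not reproduced.

    ```python
    # Minimal sketch of event-triggered state feedback: the controller holds the
    # last broadcast state; a new sample is sent only when the measurement gap
    # exceeds a relative threshold. Plant, gain, and threshold are illustrative.
    import numpy as np

    dt, T = 0.01, 5.0
    a, b, k = 0.5, 1.0, 2.0            # unstable plant x' = a x + b u, feedback u = -k x_hat
    sigma = 0.1                        # relative trigger threshold

    x, x_hat = 1.0, 1.0                # x_hat = last broadcast state held by the controller
    events = 0
    for step in range(int(T / dt)):
        if abs(x - x_hat) > sigma * abs(x):   # event-trigger condition
            x_hat = x                          # broadcast the current state
            events += 1
        u = -k * x_hat
        x += dt * (a * x + b * u)              # Euler step of the plant

    print(f"events: {events} of {int(T / dt)} possible samples, final |x| = {abs(x):.4f}")
    ```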

  16. Neural Networks Technique for Filling Gaps in Satellite Measurements: Application to Ocean Color Observations.

    PubMed

    Krasnopolsky, Vladimir; Nadiga, Sudhir; Mehra, Avichal; Bayler, Eric; Behringer, David

    2016-01-01

    A neural network (NN) technique to fill gaps in satellite data is introduced, linking satellite-derived fields of interest with other satellites and in situ physical observations. Satellite-derived "ocean color" (OC) data are used in this study because OC variability is primarily driven by biological processes related and correlated in complex, nonlinear relationships with the physical processes of the upper ocean. Specifically, ocean color chlorophyll-a fields from NOAA's operational Visible Imaging Infrared Radiometer Suite (VIIRS) are used, as well as NOAA and NASA ocean surface and upper-ocean observations employed--signatures of upper-ocean dynamics. An NN transfer function is trained, using global data for two years (2012 and 2013), and tested on independent data for 2014. To reduce the impact of noise in the data and to calculate a stable NN Jacobian for sensitivity studies, an ensemble of NNs with different weights is constructed and compared with a single NN. The impact of the NN training period on the NN's generalization ability is evaluated. The NN technique provides an accurate and computationally cheap method for filling in gaps in satellite ocean color observation fields and time series.
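
    The ensemble gap-filling idea can be sketched as below: several small NNs with different random initializations map physical predictors to log chlorophyll-a, and the ensemble mean fills pixels where the satellite retrieval is missing. The predictors (SST, SSH, SSS), the toy relation, and the "cloudy" mask are assumptions for illustration, not the VIIRS algorithm or the operational data.

    ```python
    # Sketch of ensemble gap filling: NNs with different initializations map
    # physical predictors to log chlorophyll-a; their mean fills missing pixels.
    # Predictors and the toy relation below are illustrative assumptions only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n = 6_000
    sst = rng.uniform(0, 30, n)                      # sea-surface temperature (deg C)
    ssh = rng.uniform(-0.5, 0.5, n)                  # sea-surface height anomaly (m)
    sss = rng.uniform(30, 37, n)                     # sea-surface salinity (psu)
    log_chl = -1.0 - 0.05 * sst + 1.5 * ssh + 0.02 * (sss - 34) + rng.normal(0, 0.1, n)
    X = np.column_stack([sst, ssh, sss])

    cloudy = rng.random(n) < 0.3                     # 30% of pixels lack an OC retrieval
    ensemble = [make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(15,), max_iter=500,
                                           random_state=seed))
                for seed in range(5)]
    for nn in ensemble:
        nn.fit(X[~cloudy], log_chl[~cloudy])         # train only where OC data exist

    filled = np.mean([nn.predict(X[cloudy]) for nn in ensemble], axis=0)
    rmse = float(np.sqrt(np.mean((filled - log_chl[cloudy]) ** 2)))
    print(f"ensemble fill RMSE over {cloudy.sum()} gap pixels: {rmse:.3f} (log chl)")
    ```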

  17. Neural Networks Technique for Filling Gaps in Satellite Measurements: Application to Ocean Color Observations

    PubMed Central

    Nadiga, Sudhir; Mehra, Avichal; Bayler, Eric; Behringer, David

    2016-01-01

    A neural network (NN) technique to fill gaps in satellite data is introduced, linking satellite-derived fields of interest with other satellites and in situ physical observations. Satellite-derived “ocean color” (OC) data are used in this study because OC variability is primarily driven by biological processes related and correlated in complex, nonlinear relationships with the physical processes of the upper ocean. Specifically, ocean color chlorophyll-a fields from NOAA's operational Visible Imaging Infrared Radiometer Suite (VIIRS) are used, as well as NOAA and NASA ocean surface and upper-ocean observations employed—signatures of upper-ocean dynamics. An NN transfer function is trained, using global data for two years (2012 and 2013), and tested on independent data for 2014. To reduce the impact of noise in the data and to calculate a stable NN Jacobian for sensitivity studies, an ensemble of NNs with different weights is constructed and compared with a single NN. The impact of the NN training period on the NN's generalization ability is evaluated. The NN technique provides an accurate and computationally cheap method for filling in gaps in satellite ocean color observation fields and time series. PMID:26819586

  18. Near Optimal Event-Triggered Control of Nonlinear Discrete-Time Systems Using Neurodynamic Programming.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-09-01

    This paper presents an event-triggered near optimal control of uncertain nonlinear discrete-time systems. Event-driven neurodynamic programming (NDP) is utilized to design the control policy. A neural network (NN)-based identifier, with event-based state and input vectors, is utilized to learn the system dynamics. An actor-critic framework is used to learn the cost function and the optimal control input. The NN weights of the identifier, the critic, and the actor NNs are tuned aperiodically once every triggered instant. An adaptive event-trigger condition to decide the trigger instants is derived. Thus, a suitable number of events are generated to ensure a desired accuracy of approximation. A near optimal performance is achieved without using value and/or policy iterations. A detailed analysis of nontrivial inter-event times with an explicit formula to show the reduction in computation is also derived. The Lyapunov technique is used in conjunction with the event-trigger condition to guarantee the ultimate boundedness of the closed-loop system. The simulation results are included to verify the performance of the controller. The net result is the development of event-driven NDP.

  19. High-Performance Computing Data Center | Energy Systems Integration Facility | NREL

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing

  20. Flying a College on the Computer. The Use of the Computer in Planning Buildings.

    ERIC Educational Resources Information Center

    Saint Louis Community Coll., MO.

    Upon establishment of the St. Louis Junior College District, it was decided to make use of computer simulation facilities of a nearby aero-space contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…

  1. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
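
    The claimed relationship is a simple per-period sum of three terms. The sketch below just illustrates that arithmetic; the function name and the dollar figures are hypothetical and not taken from the patent.

    ```python
    # Per-period sum described in the embodiment above; the figures are hypothetical.
    def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
        """Future facility conditions for one time period."""
        return maintenance_cost + modernization_factor + backlog_factor

    periods = {"FY2024": (1.2e6, 0.4e6, 0.9e6),   # (maintenance, modernization, backlog), hypothetical $
               "FY2025": (1.3e6, 0.5e6, 1.1e6)}
    for fy, (m, z, b) in periods.items():
        print(fy, f"${future_facility_conditions(m, z, b):,.0f}")
    ```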

  2. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft can be computed at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress by application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  3. High Resolution Modeling of Coastal Inundation: User Requirements and Current Practice, A Navy Perspective

    DTIC Science & Technology

    2007-07-27

    This paper summarizes the user requirements and current practice for Navy operational modeling of circulation and pollutant transport in estuarine and coastal waters at the cusp of the land-sea interface, where algorithms must account for vegetation and sediment properties; global, regional, and local systems and the Navy's operational role are discussed (see Navy story NNS060820-01, www.news.navy.mil).

  4. Design of fuzzy system by NNs and realization of adaptability

    NASA Technical Reports Server (NTRS)

    Takagi, Hideyuki

    1993-01-01

    The issue of designing and tuning fuzzy membership functions by neural networks (NNs) was started by NN-driven Fuzzy Reasoning in 1988. NN-driven fuzzy reasoning involves an NN embedded in the fuzzy system which generates membership values. In conventional fuzzy system design, the membership functions are hand-crafted by trial and error for each input variable. In contrast, NN-driven fuzzy reasoning considers several variables simultaneously and can design a multidimensional, nonlinear membership function for the entire subspace.
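
    A minimal sketch of the core idea follows, not the original NN-driven fuzzy reasoning design: a small NN is trained to output membership values over a two-dimensional input space, with an MLP's class probabilities standing in for the membership degrees of three hypothetical rules; the data and partition are illustrative assumptions.

    ```python
    # Minimal sketch: an NN produces membership values over a 2-D input space
    # instead of hand-crafted one-dimensional membership functions. An MLP's
    # class probabilities stand in for the membership degrees of three rules.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(3000, 2))
    # Three overlapping regions of the 2-D input space, one per hypothetical "rule".
    rule = np.select([X[:, 0] + X[:, 1] < 0.8,
                      X[:, 0] - X[:, 1] > 0.2],
                     [0, 1], default=2)

    nn = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=1000, random_state=0)
    nn.fit(X, rule)

    query = np.array([[0.45, 0.40], [0.9, 0.2], [0.2, 0.9]])
    membership = nn.predict_proba(query)          # one membership value per rule
    for q, m in zip(query, membership):
        print(q, "->", np.round(m, 2))
    ```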

  5. Instrument Systems Analysis and Verification Facility (ISAVF) users guide

    NASA Technical Reports Server (NTRS)

    Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.

    1985-01-01

    The ISAVF facility is primarily an interconnected system of computers, special-purpose real-time hardware, and associated generalized software systems, which will permit Instrument System Analysts, Design Engineers, and Instrument Scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.

  6. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  7. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    strategies needed to optimize our entire energy system. High-Performance Computing Data Center: High-performance computing facilities at NREL provide high-speed

  8. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  9. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    [Docket No. ...; formerly FDA-2007D-0393] Notice concerning the guidance document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility,'' dated April 2013.

  10. Refined sugar intake in Australian children.

    PubMed

    Somerset, Shawn M

    2003-12-01

    To estimate the intake of refined sugar in Australian children and adolescents, aged 2-18 years. Foods contributing to total sugar intake were identified using data from the National Nutrition Survey 1995 (NNS95), the most recent national dietary survey of the Australian population. The top 100 foods represented means of 85% (range 79-91%) and 82% (range 78-85%) of total sugar intake for boys and girls, respectively. Using published Australian food composition data (NUTTAB95), the proportion of total sugar being refined sugar was estimated for each food. Where published food composition data were not available, calculations from ingredients and manufacturer's information were used. The NNS95 assessed the dietary intake of a random sample of the Australian population, aged 2-18 years (n=3007). Mean daily intakes of refined sugar ranged from 26.9 to 78.3 g for 2-18-year-old girls, representing 6.6-14.8% of total energy intake. Corresponding figures for boys were 27.0 to 81.6 g and 8.0-14.0%, respectively. Of the 10 highest sources of refined sugar for each age group, sweetened beverages, especially cola-type beverages, were the most prominent. Refined sugar is an important contributor to dietary energy in Australian children. Sweetened beverages such as soft drinks and cordials were substantial sources of refined sugar and represent a potential target for campaigns to reduce refined sugar intake. Better access to information on the amounts of sugar added to processed food is essential for appropriate monitoring of this important energy source.

  11. Local TEC modelling and forecasting using neural networks

    NASA Astrophysics Data System (ADS)

    Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.

    2018-07-01

    Modelling the Earth's ionospheric characteristics is a focal task for the ionospheric community in order to mitigate ionospheric effects on radio communication and satellite navigation. However, several aspects of modelling are still challenging, for example, the storm-time characteristics. This paper presents modelling efforts for TEC taking into account solar and geomagnetic activity, time of day and day of year using the neural network (NN) modelling technique. The NNs have been designed with GPS-TEC data measured at low- and mid-latitude GPS stations. The training was conducted using data obtained for the period from 2011 to 2014. The model prediction accuracy was evaluated using data from 2015. The model results show that the diurnal and seasonal trends of GPS-TEC are well reproduced by the model for the two stations. The seasonal characteristics of GPS-TEC are compared with the NN and NeQuick 2 model predictions, where the latter is driven by the monthly average value of solar flux. It is found that the NN model performs better than the corresponding NeQuick 2 model for the low-latitude region. For the mid-latitude station, both the NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. A one-day-ahead forecast of TEC at the two locations has also been attempted by introducing the previous day's solar flux and geomagnetic index values as drivers. The results show that a reasonable day-ahead forecast of local TEC can be achieved.
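
    A minimal sketch of the setup described above is given below, assuming an MLP that maps day of year, local time, F10.7 and Kp (with cyclic encodings for the time inputs) to TEC. The "TEC" values come from a toy formula rather than GPS data, and the split only mimics the train-on-2011-2014, test-on-2015 scheme in spirit.

    ```python
    # Sketch of the described setup: an MLP maps day of year, local time, solar
    # flux (F10.7) and a geomagnetic index (Kp) to TEC, with cyclic time encodings.
    # The "TEC" here comes from a toy formula, not from GPS measurements.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 12_000
    doy = rng.integers(1, 366, n)
    lt = rng.uniform(0, 24, n)                       # local time (h)
    f107 = rng.uniform(70, 200, n)                   # solar flux
    kp = rng.uniform(0, 7, n)                        # geomagnetic index
    # Toy TEC: diurnal and seasonal variation scaled by solar activity, plus storms.
    tec = (5 + 0.15 * f107 * (1 + 0.8 * np.sin(2 * np.pi * (lt - 8) / 24))
           * (1 + 0.3 * np.cos(2 * np.pi * doy / 365.25)) + 2.0 * kp
           + rng.normal(0, 2, n))

    X = np.column_stack([np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
                         np.sin(2 * np.pi * lt / 24), np.cos(2 * np.pi * lt / 24),
                         f107, kp])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=300,
                                       random_state=0))
    model.fit(X[:10_000], tec[:10_000])              # "2011-2014" training analogue
    pred = model.predict(X[10_000:])                 # held-out "2015" analogue
    print("TEC RMSE (TECU):", float(np.sqrt(np.mean((pred - tec[10_000:]) ** 2))))
    ```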

  12. Altered functional connectivity links in neuroleptic-naïve and neuroleptic-treated patients with schizophrenia, and their relation to symptoms including volition

    PubMed Central

    Pu, Weidan; Rolls, Edmund T.; Guo, Shuixia; Liu, Haihong; Yu, Yun; Xue, Zhimin; Feng, Jianfeng; Liu, Zhening

    2014-01-01

    In order to analyze functional connectivity in untreated and treated patients with schizophrenia, resting-state fMRI data were obtained for whole-brain functional connectivity analysis from 22 first-episode neuroleptic-naïve schizophrenia (NNS), 61 first-episode neuroleptic-treated schizophrenia (NTS) patients, and 60 healthy controls (HC). Reductions were found in untreated and treated patients in the functional connectivity between the posterior cingulate gyrus and precuneus, and this was correlated with the reduction in volition from the Positive and Negative Symptoms Scale (PANSS), that is in the willful initiation, sustenance, and control of thoughts, behavior, movements, and speech, and with the general and negative symptoms. In addition in both patient groups interhemispheric functional connectivity was weaker between the orbitofrontal cortex, amygdala and temporal pole. These functional connectivity changes and the related symptoms were not treated by the neuroleptics. Differences between the patient groups were that there were more strong functional connectivity links in the NNS patients (including in hippocampal, frontal, and striatal circuits) than in the NTS patients. These findings with a whole brain analysis in untreated and treated patients with schizophrenia provide evidence on some of the brain regions implicated in the volitional, other general, and negative symptoms, of schizophrenia that are not treated by neuroleptics so have implications for the development of other treatments; and provide evidence on some brain systems in which neuroleptics do alter the functional connectivity. PMID:25389520

  13. Breastfeeding and maternal employment: results from three national nutritional surveys in Mexico.

    PubMed

    Rivera-Pasquel, Marta; Escobar-Zaragoza, Leticia; González de Cosío, Teresita

    2015-05-01

    To evaluate the association between maternal employment and breastfeeding (both duration and status) in Mexican mothers using data from three National Health and Nutrition Surveys conducted in 1999, 2006 and 2012. We analyzed data from the 1999 National Nutrition Survey, the 2006 National Nutrition and Health Survey, and the 2012 National Nutrition and Health Survey (NNS-1999, NHNS-2006 and NHNS-2012) on 5,385 mothers aged 12-49 years, with infants under 1 year. Multivariate logistic regression models were used to analyze the association between breastfeeding and maternal employment adjusted for maternal and infant socio-demographic covariates. Maternal formal employment was negatively associated with breastfeeding in Mexican mothers with infants under 1 year. Formally employed mothers were 20% less likely to breastfeed compared to non-formally employed mothers and 27% less likely to breastfeed compared to unemployed mothers. The difference in median duration of breastfeeding between formally employed and unemployed mothers was 5.7 months for NNS-1999, 4.7 months for NHNS-2006 and 6.7 months for NHNS-2012 (p < 0.05). In NHNS-2006 and NHNS-2012, health care access was associated with longer breastfeeding duration. Maternal employment has been negatively associated with breastfeeding in Mexican mothers of infants under 1 year for at least the last 15 years. For Mexicans involved in policy design, implementation or modification, these data offer robust evidence on this negative association, and can be used confidently as a basis for conceiving more just legislation for working lactating women.

  14. The trends in total energy, macronutrients and sodium intake among Japanese: findings from the 1995-2016 National Health and Nutrition Survey.

    PubMed

    Saito, Aki; Imai, Shino; Htun, Nay Chi; Okada, Emiko; Yoshita, Katsushi; Yoshiike, Nobuo; Takimoto, Hidemi

    2018-06-04

    Monitoring the nutritional status of the population is essential in the development and evaluation of national or local health policies. In this study, we aimed to analyse the trends in dietary intake of energy and macronutrients, as well as Na, in the Japanese population using data from a series of cross-sectional national surveys - the National Nutrition Survey (NNS) and the National Health and Nutrition Survey (NHNS) - during the period from 1995 to 2016. NNS and NHNS participants aged 20-79 years were included in the analysis. Dietary intake was estimated using a 1-d household-based dietary record. The trends in total energy intake, energy intake from macronutrients (fat and protein), Na intake and energy-adjusted Na intake were analysed using regression models adjusted to the 2010 age distribution and anthropometric status. A total of 94 270 men and 107 890 women were included in the analysis. Total energy intake showed a decreasing trend in both men and women. Similarly, energy intake from protein decreased, but energy intake (%) from fat increased in both sexes. Energy-adjusted Na intake showed a decreasing trend in both men and women. This study identified decreases in total energy intake and energy intake from protein, whereas there was an inverse trend in energy intake from fat among Japanese adults. Continued monitoring of trends in dietary intake will be needed, and there should be efforts to increase the accuracy of current survey procedures.

  15. Quality-of-life effects of prostate-specific antigen screening

    PubMed Central

    Heijnsdijk, EAM; Wever, EM; Auvinen, A; Hugosson, J; Ciatto, S; Nelen, V; Kwiatkowski, M; Villers, A; Páez, A; Moss, SM; Zappa, M; Tammela, TLJ; Mäkinen, T; Carlsson, S; Korfage, IJ; Essink-Bot, ML; Otto, SJ; Draisma, G; Bangma, CH; Roobol, MJ; Schröder, FH; de Koning, HJ

    2016-01-01

    Background: The European Randomized Study of Screening for Prostate Cancer (ERSPC) reported a 29% prostate cancer mortality reduction among screened men after 11 years. However, it is uncertain to what extent the harms from overdiagnosis and treatment to quality of life counterbalance this benefit. Methods: Based on ERSPC follow-up data, we used micro-simulation modeling (MISCAN) to predict the number of prostate cancers, treatments, deaths and quality-adjusted life-years (QALYs) gained following the introduction of screening. Various screening strategies, efficacies, and quality-of-life assumptions were modeled. Results: Per 1,000 men of all ages followed for their entire lifespan, we predicted for annual screening from age 55–69 years: 9 fewer deaths due to prostate cancer (28% reduction), 14 fewer men receiving palliative therapy (35% reduction), and 73 life-years gained (average 8.4 years per prostate cancer death avoided). QALYs gained were 56 (range: −21 to 97), a reduction of 23% from the unadjusted life-years gained. The number needed to screen (NNS) was 98 and the number needed to detect (NND) was 5. Also inviting men aged 70–74 resulted in more life-years gained (82) but similar QALYs (56). Conclusions: Although the NNS and NND are more favorable than previously calculated, the benefit of PSA screening is diminished by the loss of QALYs, which depends primarily on long-term post-diagnosis effects. Longer follow-up data from both the ERSPC and quality-of-life studies are essential before making universal recommendations regarding screening. PMID:22894572
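
    As a back-of-the-envelope illustration, the QALY figure quoted above follows from applying the reported 23% reduction to the 73 unadjusted life-years; a minimal sketch using only the numbers given in the abstract:

```python
# Illustrative re-derivation of the QALY figure from the numbers quoted in the
# abstract; this is simple arithmetic, not the MISCAN microsimulation itself.
life_years_gained = 73      # per 1,000 men, annual screening at ages 55-69
qaly_reduction = 0.23       # reported 23% reduction from unadjusted life-years

qalys_gained = life_years_gained * (1.0 - qaly_reduction)
print(f"QALYs gained per 1,000 men: {qalys_gained:.0f}")   # ~56, matching the abstract
```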

  16. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which make it possible to implement an exact penalty method. A new method, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics, is exploited to address the convergence of trajectories. The method makes it possible to prove that each forward trajectory of the NN has finite length and, as a consequence, converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, are true independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.
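
    The exact-penalty idea can be illustrated with a plain gradient flow on a non-smooth penalty function; the small convex QP, penalty weight and Euler integration below are illustrative assumptions for a sketch, not the paper's diode-based circuit model:

```python
import numpy as np

# Gradient-flow sketch of an exact-penalty optimization network for a small convex QP:
#   minimize (x1 - 1)^2 + (x2 - 2.5)^2
#   subject to x1 >= 0, x2 >= 0, x1 + x2 <= 2
Q = 2.0 * np.eye(2)                     # f(x) = 0.5 x'Qx + c'x (+ constant)
c = np.array([-2.0, -5.0])
A = np.array([[-1.0, 0.0],              # -x1      <= 0
              [0.0, -1.0],              # -x2      <= 0
              [1.0, 1.0]])              #  x1 + x2 <= 2
b = np.array([0.0, 0.0, 2.0])

mu = 10.0                               # penalty weight, large enough for exactness
x = np.array([2.0, 0.0])                # arbitrary initial state
dt = 0.002                              # Euler step for dx/dt = -grad E(x)

for _ in range(20000):
    violated = (A @ x - b) > 0.0                          # currently violated constraints
    grad = Q @ x + c + mu * A.T @ violated.astype(float)  # subgradient of the penalty energy
    x = x - dt * grad

print(x)    # approaches the constrained minimizer, roughly (0.25, 1.75)
```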

  17. Phase transition of social learning collectives and the echo chamber.

    PubMed

    Mori, Shintaro; Nakayama, Kazuaki; Hisakado, Masato

    2016-11-01

    We study a simple model for social learning agents in a restless multiarmed bandit. There are N agents, and the bandit has M good arms that change to bad with the probability q_{c}/N. If the agents do not know a good arm, they look for it by a random search (with the success probability q_{I}) or copy the information of other agents' good arms (with the success probability q_{O}) with probabilities 1-p or p, respectively. The distribution of the agents over the M good arms obeys the Yule distribution with the power-law exponent 1+γ in the limit N,M→∞, where γ=1+(1-p)q_{I}/(pq_{O}). The system shows a phase transition at p_{c}=q_{I}/(q_{I}+q_{O}). For p<p_{c} (p>p_{c}), the variance per agent of N_{1}, the number of agents with a good arm, is finite (diverges as ∝N^{2-γ} with N). There is a threshold value N_{s} for the system size that scales as ln N_{s}∝1/(γ-1). The expected value of N_{1} increases with p for N>N_{s}. For p>p_{c} and N
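
    The copy-or-search mechanics can be illustrated with a toy Monte Carlo simulation; the renewal rule for arms that turn bad (each is assumed to be replaced by a fresh good arm with no followers) and all parameter values are assumptions made only for this sketch:

```python
import numpy as np

# Toy Monte Carlo sketch of the copy-or-search dynamics described in the abstract.
# Assumption for this illustration only: when a good arm turns bad, its holders lose
# it and the arm index is immediately reused as a fresh good arm with no followers.
rng = np.random.default_rng(0)
N, M = 500, 25                  # agents and good arms
q_c, q_I, q_O = 1.0, 0.1, 0.9   # decay, search-success and copy-success probabilities
p = 0.3                         # probability of copying instead of searching

arm = np.full(N, -1)            # good arm held by each agent (-1 = none)

for _ in range(1000):
    # each good arm turns bad with probability q_c / N
    for a in np.nonzero(rng.random(M) < q_c / N)[0]:
        arm[arm == a] = -1
    informed = np.nonzero(arm >= 0)[0]
    for i in np.nonzero(arm == -1)[0]:
        if rng.random() < p:                                   # copy another agent
            if informed.size > 0 and rng.random() < q_O:
                arm[i] = arm[rng.choice(informed)]
        elif rng.random() < q_I:                               # independent random search
            arm[i] = rng.integers(M)

counts = np.bincount(arm[arm >= 0], minlength=M)
print("N1 (agents holding a good arm):", (arm >= 0).sum())
print("share of the most-followed arm:", counts.max() / max(1, (arm >= 0).sum()))
```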

  18. Behavioral plasticity through the modulation of switch neurons.

    PubMed

    Vassiliades, Vassilis; Christodoulou, Chris

    2016-02-01

    A central question in artificial intelligence is how to design agents capable of switching between different behaviors in response to environmental changes. Taking inspiration from neuroscience, we address this problem by utilizing artificial neural networks (NNs) as agent controllers, and mechanisms such as neuromodulation and synaptic gating. The novel aspect of this work is the introduction of a type of artificial neuron we call "switch neuron". A switch neuron regulates the flow of information in NNs by selectively gating all but one of its incoming synaptic connections, effectively allowing only one signal to propagate forward. The allowed connection is determined by the switch neuron's level of modulatory activation which is affected by modulatory signals, such as signals that encode some information about the reward received by the agent. An important aspect of the switch neuron is that it can be used in appropriate "switch modules" in order to modulate other switch neurons. As we show, the introduction of the switch modules enables the creation of sequences of gating events. This is achieved through the design of a modulatory pathway capable of exploring in a principled manner all permutations of the connections arriving on the switch neurons. We test the model by presenting appropriate architectures in nonstationary binary association problems and T-maze tasks. The results show that for all tasks, the switch neuron architectures generate optimal adaptive behaviors, providing evidence that the switch neuron model could be a valuable tool in simulations where behavioral plasticity is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
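
    A minimal sketch of the gating behaviour of a switch neuron is shown below; the way the modulatory activation is discretized into the index of the open connection is an illustrative assumption, not the paper's exact update rule:

```python
# A minimal sketch (assumed discretization, not the paper's exact rule) of a switch
# neuron: all but one incoming connection are gated off, and the open connection is
# selected by the neuron's modulatory activation level.
class SwitchNeuron:
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.mod = 0.0                       # modulatory activation in [0, 1)

    def modulate(self, signal):
        # modulatory input (e.g., reward/punishment related) shifts the open gate
        self.mod = (self.mod + signal) % 1.0

    def activate(self, inputs):
        # only the selected incoming signal propagates; the rest are gated off
        selected = int(self.mod * self.n_inputs) % self.n_inputs
        return inputs[selected]

sw = SwitchNeuron(4)
print(sw.activate([1.0, -1.0, 0.5, 0.0]))    # 1.0  (first connection is open)
sw.modulate(1.0 / 4)                         # e.g., "no reward" steps the switch
print(sw.activate([1.0, -1.0, 0.5, 0.0]))    # -1.0 (second connection is open)
```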

  19. Local TEC Modelling and Forecasting using Neural Networks

    NASA Astrophysics Data System (ADS)

    Tebabal, A.; Radicella, S. M.; Nigussie, M.; Damtie, B.; Nava, B.; Yizengaw, E.

    2017-12-01

    Modelling the Earth's ionospheric characteristics is a focal task for the ionospheric community in order to mitigate ionospheric effects on radio communication, satellite navigation and related technologies. However, several aspects of modelling are still challenging, for example the storm-time characteristics. This paper presents modelling efforts of TEC that take into account solar and geomagnetic activity, time of day and day of year using a neural network (NN) modelling technique. The NNs have been designed with GPS-TEC data measured at low- and mid-latitude GPS stations. The training was conducted using data obtained for the period from 2011 to 2014, and the model prediction accuracy was evaluated using data from 2015. The model results show that the diurnal and seasonal trends of GPS-TEC are well reproduced for the two stations. The seasonal characteristics of GPS-TEC are compared with the NN and NeQuick 2 model predictions when the latter is driven by the monthly average value of solar flux. The NN model is found to perform better than the corresponding NeQuick 2 model for the low-latitude region, while for the mid-latitude station both the NN and NeQuick 2 models reproduce the average characteristics of TEC variability quite successfully. A one-day-ahead forecast of TEC at the two locations has also been attempted by introducing the previous day's solar flux and geomagnetic index values as drivers. The results show that a reasonable day-ahead forecast of local TEC can be achieved.
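
    A minimal sketch of this type of feed-forward NN TEC model is given below using synthetic data; the toy TEC formula and the scikit-learn regressor are assumptions made only so the example runs, and do not reproduce the paper's GPS-TEC data or network design:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs encode day of year, local time, solar flux (F10.7) and a geomagnetic index;
# the output is TEC. The synthetic TEC formula is an assumption for illustration only.
rng = np.random.default_rng(42)
n = 5000
doy = rng.uniform(1, 365, n)
hour = rng.uniform(0, 24, n)
f107 = rng.uniform(70, 200, n)
kp = rng.uniform(0, 7, n)

# cyclic encoding so that hour 0/24 and day 1/365 are treated as neighbours
X = np.column_stack([
    np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
    np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
    f107, kp,
])
tec = (10 + 0.2 * f107 * np.maximum(0.0, np.sin(np.pi * (hour - 6) / 12))
       + 2 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 1, n))   # toy TECU values

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                                   random_state=0))
model.fit(X[:4000], tec[:4000])
pred = model.predict(X[4000:])
print("held-out RMSE (TECU):", np.sqrt(np.mean((pred - tec[4000:]) ** 2)))
```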

  20. Future Computer Requirements for Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  1. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)

  2. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  3. 2014 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  4. 2015 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  5. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  6. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  7. On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Alunni, Antonella I.

    2012-01-01

    This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.

  8. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  9. 2016 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  10. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources they have available. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier, cross-instrument and cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.

  11. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  12. Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community

    NASA Astrophysics Data System (ADS)

    Ball, John E.; Anderson, Derek T.; Chan, Chee Seng

    2017-10-01

    In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL.

  13. Have computers, will travel: providing on-site library instruction in rural health facilities using a portable computer lab.

    PubMed

    Neilson, Christine J

    2010-01-01

    The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.

  14. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  15. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996 and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).

  16. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  17. Leaf Area Index Estimation Using Chinese GF-1 Wide Field View Data in an Agriculture Region.

    PubMed

    Wei, Xiangqin; Gu, Xingfa; Meng, Qingyan; Yu, Tao; Zhou, Xiang; Wei, Zheng; Jia, Kun; Wang, Chunmei

    2017-07-08

    Leaf area index (LAI) is an important vegetation parameter that characterizes leaf density and canopy structure, and plays an important role in global change studies, land surface process simulation and agricultural monitoring. The wide field view (WFV) sensor on board the Chinese GF-1 satellite can acquire multi-spectral data with decametric spatial resolution, high temporal resolution and wide coverage, which are valuable data sources for dynamic monitoring of LAI. Therefore, an automatic LAI estimation algorithm for GF-1 WFV data was developed based on a radiative transfer model, and the estimation accuracy of the developed algorithm was assessed in an agricultural region with maize as the dominant crop type. The radiative transfer model was first used to simulate the physical relationship between canopy reflectance and LAI under different soil and vegetation conditions to form the training sample dataset. Neural networks (NNs) were then used to develop the LAI estimation algorithm using this training dataset. Green, red and near-infrared band reflectances of GF-1 WFV data were used as the input variables of the NNs, and the corresponding LAI was the output variable. Validation against field LAI measurements in the agricultural region indicated that the LAI estimation algorithm achieved satisfactory results (R² = 0.818, RMSE = 0.50). In addition, the developed algorithm has the potential to operationally generate LAI datasets from GF-1 WFV land surface reflectance data, providing high spatial and temporal resolution LAI data for agriculture, ecosystem and environmental management research.
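
    The two-step strategy described above (simulate reflectance-LAI pairs with a canopy model, then train an NN to invert them) can be sketched as follows; the toy Beer-Lambert-style reflectance model stands in for a full radiative transfer code and is an assumption made only so the example runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Step 1 (assumed toy canopy model): simulate (reflectance, LAI) training pairs.
rng = np.random.default_rng(1)
n = 3000
lai = rng.uniform(0.0, 6.0, n)
soil = rng.uniform(0.05, 0.30, n)                      # variable soil background

green = soil * np.exp(-0.4 * lai) + 0.10 * (1 - np.exp(-0.4 * lai))
red   = soil * np.exp(-0.6 * lai) + 0.05 * (1 - np.exp(-0.6 * lai))
nir   = soil * np.exp(-0.5 * lai) + 0.45 * (1 - np.exp(-0.5 * lai))
X = np.column_stack([green, red, nir]) + rng.normal(0, 0.01, (n, 3))   # sensor noise

# Step 2: train an NN that maps green/red/NIR reflectance to LAI.
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                                random_state=0))
nn.fit(X[:2500], lai[:2500])
pred = nn.predict(X[2500:])
print("validation RMSE:", np.sqrt(np.mean((pred - lai[2500:]) ** 2)))
```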

  18. The effect of music-reinforced nonnutritive sucking on state of preterm, low birthweight infants experiencing heelstick.

    PubMed

    Whipple, Jennifer

    2008-01-01

    This study examined the physiologic and behavioral effects of music-reinforced nonnutritive sucking (NNS) for preterm, low birthweight (LBW) infants experiencing heelstick. Subjects were 60 infants, age 32 to 37 weeks post conceptional age in a neonatal intensive care unit. Infants were randomly assigned to one of three treatment groups: pacifier-activated lullaby (PAL), pacifier-only, and no-contact. Experimental infants were provided the Sondrex PAL System, which plays music contingent on infant sucking. Pacifier-only infants did not receive music reinforcement for sucking, and no-contact infants were not provided a pacifier or music at any point during the procedure. Stress level and behavior state were assessed continuously and heart, respiratory, and oxygen saturation rates were recorded at 15-second intervals for all infants. Most physiologic data results were inconclusive. However, analysis of behavior state and stress level revealed the following significant differences for the PAL and pacifier-only groups compared to the no-contact group, all of which were greatest between the PAL and no-contact groups: lower during-heelstick behavior state means, less time in undesirable behavior states, lower during- and post-heelstick stress level means, and smaller behavior state and stress level differences between intervals. In addition, the PAL group had a significantly lower pre-heelstick stress level mean than the no-contact group. Behavior state and stress level were also more stable across time for the PAL group than the other groups, and patterns of changes in oxygen saturation, behavior state, and stress level indicate that music-reinforced NNS may facilitate return to homeostasis.

  19. Screening diabetes in tuberculosis patients in eastern rural China: a community-based cross-sectional study.

    PubMed

    Zhao, Q; Xiao, X; Lu, W; Qiu, L-X; Zhou, C-M; Jiang, W-L; Xu, B; Diwan, V

    2016-10-01

    To understand the prevalence of diabetes mellitus (DM) and tuberculosis (TB) comorbidity in rural China and to identify factors associated with TB-DM comorbidity and screening efficacy. A community-based cross-sectional study was carried out in four counties in eastern rural China. All TB patients newly registered from April 2013 to March 2014 were screened for DM using fasting blood glucose (FBG). Screening-positive patients were further examined using glycosylated haemoglobin A1C (HbA1c). Ninety-seven (7.7%) of the 1252 recruited TB patients had DM, 44 (45.4%) of whom were newly diagnosed. The DM-TB patients were significantly older than non-diabetics (mean age 57 ± 13 years vs. 49 ± 19 years, P < 0.001). The risk of DM-TB was higher in patients aged >40 years (OR 3.039) and in overweight patients (OR 2.595). The number needed to screen (NNS) among TB patients to identify one case of DM was 12.97. The NNS to identify one new DM patient (27.4) was lower in participants aged >40 years (20.5), those who were illiterate (19.9), those with a family history of DM (9.3), those with missing bacille Calmette-Guérin vaccination (11.3), current smokers (14.2) and those with body mass index >24 (11.4). Regular DM screening in TB patients is practical in rural China. Better efficacy of DM-TB detection could be obtained by screening high-risk populations, such as overweight TB patients or those with a family history of DM.
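
    For reference, the reported number-needed-to-screen values follow, up to rounding and the exact denominators used, from simple ratios of the counts quoted in the abstract:

```python
# Number needed to screen (NNS) as the number of TB patients screened per DM case
# identified, using only the counts quoted in the abstract. Small differences from
# the published values (12.97 and 27.4) likely reflect the exact denominators used.
screened = 1252
dm_cases = 97        # all DM identified among TB patients
new_dm_cases = 44    # newly diagnosed DM

print(round(screened / dm_cases, 2))      # ~12.91 vs. reported 12.97
print(round(screened / new_dm_cases, 1))  # ~28.5 vs. reported 27.4
```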

  20. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  1. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  2. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  3. Neural networks vs Gaussian process regression for representing potential energy surfaces: A comparative study of fit quality and vibrational spectrum accuracy

    NASA Astrophysics Data System (ADS)

    Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei

    2018-06-01

    For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
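
    The fitting workflow being compared can be sketched on a toy one-dimensional Morse "PES"; the real study fits a six-dimensional formaldehyde surface, so the code below only illustrates the NN-versus-GP comparison, not the reported accuracies:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy illustration: fit the same sampled points with an NN and a GP, then compare
# errors on a dense test grid (stand-in for the spectrum-relevant point set).
rng = np.random.default_rng(0)
D, a, re = 5.0, 1.5, 1.0
morse = lambda r: D * (1 - np.exp(-a * (r - re))) ** 2

r_train = np.sort(rng.uniform(0.7, 3.0, 60))[:, None]     # sampled geometries
V_train = morse(r_train.ravel())
r_test = np.linspace(0.7, 3.0, 400)[:, None]
V_test = morse(r_test.ravel())

nn = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                  max_iter=20000, random_state=0).fit(r_train, V_train)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True).fit(r_train, V_train)

for name, model in [("NN", nn), ("GP", gp)]:
    rmse = np.sqrt(np.mean((model.predict(r_test) - V_test) ** 2))
    print(f"{name} test RMSE: {rmse:.4f}")
```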

  4. Stability and Hopf bifurcation in a simplified BAM neural network with two time delays.

    PubMed

    Cao, Jinde; Xiao, Min

    2007-03-01

    Various local periodic solutions may represent different classes of storage patterns or memory patterns, and arise from the different equilibrium points of neural networks (NNs) by applying Hopf bifurcation technique. In this paper, a bidirectional associative memory NN with four neurons and multiple delays is considered. By applying the normal form theory and the center manifold theorem, analysis of its linear stability and Hopf bifurcation is performed. An algorithm is worked out for determining the direction and stability of the bifurcated periodic solutions. Numerical simulation results supporting the theoretical analysis are also given.
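
    A generic two-neuron loop with two delays can be integrated numerically to show the onset of a periodic orbit of the kind analysed above; the coefficients and delays below are illustrative assumptions, not the paper's four-neuron BAM model:

```python
import numpy as np

# Euler-integration sketch of a generic two-neuron loop with two time delays,
#   du/dt = -u(t) + a*tanh(v(t - tau1)),   dv/dt = -v(t) + b*tanh(u(t - tau2)).
dt, T = 0.01, 200.0
tau1, tau2 = 1.0, 1.5
d1, d2 = int(tau1 / dt), int(tau2 / dt)
a, b = -2.0, 1.8                        # strong delayed negative feedback

steps = int(T / dt)
u = np.zeros(steps)
v = np.zeros(steps)
u[:max(d1, d2)] = 0.1                   # small constant initial history

for k in range(max(d1, d2), steps - 1):
    u[k + 1] = u[k] + dt * (-u[k] + a * np.tanh(v[k - d1]))
    v[k + 1] = v[k] + dt * (-v[k] + b * np.tanh(u[k - d2]))

# a nonzero late-time peak-to-peak amplitude indicates a sustained periodic orbit,
# i.e., the trivial equilibrium has lost stability (Hopf-type bifurcation)
print("late-time peak-to-peak amplitude of u:", np.ptp(u[-2000:]))
```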

  5. A subset polynomial neural networks approach for breast cancer diagnosis.

    PubMed

    O'Neill, T J; Penm, Jack; Penm, Jonathan

    2007-01-01

    Breast cancer is a very common and serious cancer, diagnosed in one in every eight Australian women before the age of 85. The conventional method of breast cancer diagnosis is mammography. However, mammography has been reported to have poor diagnostic capability. In this paper we have used subset polynomial neural network techniques in conjunction with fine needle aspiration cytology to undertake the difficult task of predicting breast cancer. The successful findings indicate that the adoption of NNs is likely to lead to increased survival of women with breast cancer, improved electronic healthcare, and enhanced quality of life.

  6. Poster - 25: Neutron Spectral Measurements around a Scanning Proton Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kildea, John; Enger, Shirin; Maglieri, Robert

    We describe the measurements of neutron spectra that we undertook around a scanning proton beam at the Skandion proton therapy clinic in Uppsala, Sweden. Measurements were undertaken using an extended energy range Nested Neutron Spectrometer (NNS, Detec Inc., Gatineau, QC) operated in pulsed and current mode. Spectra were measured as a function of location in the treatment room and for various Bragg peak depths. Our preliminary unfolded data clearly show the direct, evaporation and thermal neutron peaks and we can show the effect on the neutron spectrum of a water phantom in the primary proton beam.

  7. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  8. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  9. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  10. LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2018-01-24

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  11. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yelick, Kathy

    2012-02-02

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  12. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2017-12-09

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  13. Ethics and the 7 'P's' of computer use policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, T.J.; Voss, R.B.

    1994-12-31

    A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven-phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.

  14. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, ×) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional CNNs, autoassociative morphological memories (AMM) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns have been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM-based classification techniques to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and simulated camera optical distortions. In particular, we examine two critical cases: (1) classification of multiple closely spaced signatures that are difficult to separate using distance measures, and (2) classification of materials in simulated hyperspectral images of spaceborne satellites. In each case, test data are derived from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to those of classical NN-based techniques.
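
    A minimal sketch of the lattice-algebra memory construction referred to above is given below; it only demonstrates recall of noiseless stored patterns and does not reproduce the paper's dendritic computation or erosive/dilative noise analysis:

```python
import numpy as np

# Minimal sketch of an autoassociative morphological memory (AMM) over a lattice
# (max/min-plus) algebra rather than the usual sum-product algebra: patterns are
# stored in W via elementwise minima of pairwise differences and recalled with a
# max-plus product. Perfect recall of the noiseless stored patterns is expected.
X = np.array([[1.0, 3.0, 2.0],
              [4.0, 0.0, 2.0],
              [2.0, 5.0, 1.0]])     # X[:, p] is the p-th stored pattern (3 patterns in R^3)

# W[i, j] = min over stored patterns p of (X[i, p] - X[j, p])
W = np.min(X[:, None, :] - X[None, :, :], axis=2)

def maxplus_recall(W, x):
    """Max-plus matrix-vector product: result_i = max_j (W[i, j] + x[j])."""
    return np.max(W + x[None, :], axis=1)

for p in range(X.shape[1]):
    recalled = maxplus_recall(W, X[:, p])
    print(recalled, "perfect recall:", np.allclose(recalled, X[:, p]))
```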

  15. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  16. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    PubMed

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  17. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366

  18. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  19. The H2A-H2B dimeric kinetic intermediate is stabilized by widespread hydrophobic burial with few fully native interactions.

    PubMed

    Guyett, Paul J; Gloss, Lisa M

    2012-01-20

    The H2A-H2B histone heterodimer folds via monomeric and dimeric kinetic intermediates. Within ∼5 ms, the H2A and H2B polypeptides associate in a nearly diffusion limited reaction to form a dimeric ensemble, denoted I₂ and I₂*, the latter being a subpopulation characterized by a higher content of nonnative structure (NNS). The I₂ ensemble folds to the native heterodimer, N₂, through an observable, first-order kinetic phase. To determine the regions of structure in the I₂ ensemble, we characterized 26 Ala mutants of buried hydrophobic residues, spanning the three helices of the canonical histone folds of H2A and H2B and the H2B C-terminal helix. All but one targeted residue contributed significantly to the stability of I₂, the transition state and N₂; however, only residues in the hydrophobic core of the dimer interface perturbed the I₂* population. Destabilization of I₂* correlated with slower folding rates, implying that NNS is not a kinetic trap but rather accelerates folding. The pattern of Φ values indicated that residues forming intramolecular interactions in the peripheral helices contributed similar stability to I₂ and N₂, but residues involved in intermolecular interactions in the hydrophobic core are only partially folded in I₂. These findings suggest a dimerize-then-rearrange model. Residues throughout the histone fold contribute to the stability of I₂, but after the rapid dimerization reaction, the hydrophobic core of the dimer interface has few fully native interactions. In the transition state leading to N₂, more native-like interactions are developed and nonnative interactions are rearranged. Copyright © 2011 Elsevier Ltd. All rights reserved.
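
    For readers unfamiliar with the notation, Φ values are conventionally defined from mutation-induced stability changes; a standard textbook form (not quoted from this paper) is

        \Phi = \frac{\Delta\Delta G_{\ddagger}}{\Delta\Delta G_{N}}

    where ΔΔG‡ is the mutational destabilization of the transition state (or intermediate) relative to the unfolded chains and ΔΔG_N that of the native dimer. A Φ near 1 implies native-like interactions at the mutated site in that species, while a Φ near 0 implies the site is still unstructured, which is the logic behind the dimerize-then-rearrange interpretation above.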

  20. Quantifying the causes of differences in tropospheric OH within global models

    NASA Astrophysics Data System (ADS)

    Nicely, Julie M.; Salawitch, Ross J.; Canty, Timothy; Anderson, Daniel C.; Arnold, Steve R.; Chipperfield, Martyn P.; Emmons, Louisa K.; Flemming, Johannes; Huijnen, Vincent; Kinnison, Douglas E.; Lamarque, Jean-François; Mao, Jingqiu; Monks, Sarah A.; Steenrod, Stephen D.; Tilmes, Simone; Turquety, Solene

    2017-02-01

    The hydroxyl radical (OH) is the primary daytime oxidant in the troposphere and provides the main loss mechanism for many pollutants and greenhouse gases, including methane (CH4). Global mean tropospheric OH differs by as much as 80% among various global models, for reasons that are not well understood. We use neural networks (NNs), trained using archived output from eight chemical transport models (CTMs) that participated in the Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, of Climate, Chemistry, Aerosols and Transport Model Intercomparison Project (POLMIP), to quantify the factors responsible for differences in tropospheric OH and resulting CH4 lifetime (τCH4) between these models. Annual average τCH4, for loss by OH only, ranges from 8.0 to 11.6 years for the eight POLMIP CTMs. The factors driving these differences were quantified by inputting 3-D chemical fields from one CTM into the trained NN of another CTM. Across all CTMs, the largest mean differences in τCH4 (ΔτCH4) result from variations in chemical mechanisms (ΔτCH4 = 0.46 years), the photolysis frequency (J) of O3 → O(1D) (0.31 years), local O3 (0.30 years), and CO (0.23 years). The ΔτCH4 due to CTM differences in NOx (NO + NO2) is relatively low (0.17 years), although large regional variation in OH between the CTMs is attributed to NOx. Differences in isoprene and J(NO2) have negligible overall effect on globally averaged tropospheric OH, although the extent of OH variations due to each factor depends on the model being examined. This study demonstrates that NNs can serve as a useful tool for quantifying why tropospheric OH varies between global models, provided that essential chemical fields are archived.
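
    As a hedged sketch of the swap methodology described above (not the authors' code; field names, shapes, and values are invented), the example below trains a small regression NN to emulate one model's OH from its own chemical fields and then drives that emulator with a second model's fields to estimate how much of the OH difference the inputs explain.

        # Illustrative sketch of the neural-network "swap" diagnostic: emulate OH
        # for CTM A from its own chemical fields, then drive the emulator with
        # CTM B's fields to attribute OH (and methane-lifetime) differences to
        # the inputs. Synthetic data stand in for archived POLMIP output.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 5000  # grid cells sampled from the troposphere

        def fake_ctm(bias):
            """Invent chemical fields (O3, CO, NOx, H2O, J(O1D)) and an OH proxy."""
            X = rng.lognormal(mean=bias, sigma=0.3, size=(n, 5))
            oh = 1e6 * X[:, 4] * X[:, 0] / (1.0 + 0.5 * X[:, 1]) * (1 + 0.2 * X[:, 2])
            return X, oh

        X_a, oh_a = fake_ctm(bias=0.0)   # "CTM A"
        X_b, _ = fake_ctm(bias=0.15)     # "CTM B" with systematically different fields

        scaler = StandardScaler().fit(X_a)
        nn_a = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        nn_a.fit(scaler.transform(X_a), oh_a)

        oh_a_self = nn_a.predict(scaler.transform(X_a))   # A's NN with A's fields
        oh_a_swap = nn_a.predict(scaler.transform(X_b))   # A's NN with B's fields

        print("mean OH, A fields:", oh_a_self.mean())
        print("mean OH, B fields:", oh_a_swap.mean())
        print("difference attributable to the input fields:",
              oh_a_swap.mean() - oh_a_self.mean())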

  1. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.
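
    The following minimal sketch shows the generic flavor of adaptive NN control for a scalar system, with a radial-basis-function network whose weights are updated online from the tracking error; it is not the paper's backstepping or fault-tolerant design, and all dynamics and gains are invented.

        # Minimal adaptive NN tracking-control sketch for a scalar system
        #   x_dot = f(x) + u,  with f unknown and approximated online by an RBF NN.
        # Control law: u = -k*e + x_d_dot - W_hat^T phi(x); weights adapt by the
        # standard gradient rule W_hat_dot = gamma * phi(x) * e. Illustrative only.
        import numpy as np

        def f_true(x):                      # unknown plant nonlinearity
            return np.sin(x) + 0.5 * x**2

        centers = np.linspace(-3, 3, 11)    # RBF centers and width
        width = 0.8

        def phi(x):
            return np.exp(-((x - centers) ** 2) / (2 * width**2))

        dt, T = 1e-3, 10.0
        k, gamma = 5.0, 20.0                # control and adaptation gains
        x, W_hat = 0.5, np.zeros_like(centers)

        for step in range(int(T / dt)):
            t = step * dt
            x_d, x_d_dot = np.sin(t), np.cos(t)       # reference trajectory
            e = x - x_d                                # tracking error
            u = -k * e + x_d_dot - W_hat @ phi(x)      # NN compensates f(x)
            x += dt * (f_true(x) + u)                  # Euler step of the plant
            W_hat += dt * gamma * phi(x) * e           # adaptive weight update

        print("final tracking error:", abs(x - np.sin(T)))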

  2. Single-base-pair discrimination of terminal mismatches by using oligonucleotide microarrays and neural network analyses

    NASA Technical Reports Server (NTRS)

    Urakawa, Hidetoshi; Noble, Peter A.; El Fantroussi, Said; Kelly, John J.; Stahl, David A.

    2002-01-01

    The effects of single-base-pair near-terminal and terminal mismatches on the dissociation temperature (T(d)) and signal intensity of short DNA duplexes were determined by using oligonucleotide microarrays and neural network (NN) analyses. Two perfect-match probes and 29 probes having a single-base-pair mismatch at positions 1 to 5 from the 5' terminus of the probe were designed to target one of two short sequences representing 16S rRNA. Nonequilibrium dissociation rates (i.e., melting profiles) of all probe-target duplexes were determined simultaneously. Analysis of variance revealed that position of the mismatch, type of mismatch, and formamide concentration significantly affected the T(d) and signal intensity. Increasing the concentration of formamide in the washing buffer decreased the T(d) and signal intensity, and it decreased the variability of the signal. Although T(d)s of probe-target duplexes with mismatches in the first or second position were not significantly different from one another, duplexes with mismatches in the third to fifth positions had significantly lower T(d)s than those with mismatches in the first or second position. The trained NNs predicted the T(d) with high accuracies (R(2) = 0.93). However, the NNs predicted the signal intensity only moderately accurately (R(2) = 0.67), presumably due to increased noise in the signal intensity at low formamide concentrations. Sensitivity analysis revealed that the concentration of formamide explained most (75%) of the variability in T(d)s, followed by position of the mismatch (19%) and type of mismatch (6%). The results suggest that position of the mismatch at or near the 5' terminus plays a greater role in determining the T(d) and signal intensity of duplexes than the type of mismatch.
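
    As a small illustration of how a dissociation temperature can be read off a nonequilibrium melting profile (a generic approach, not necessarily the authors' exact procedure), the sketch below interpolates the temperature at which the normalized signal falls to 50%; the profile values are invented.

        # Estimate T(d) as the temperature at which the normalized melting profile
        # crosses 50% remaining signal. Generic sketch with an invented profile.
        import numpy as np

        temps = np.arange(20, 75, 5.0)                       # wash temperatures, deg C
        signal = np.array([9800, 9650, 9300, 8700, 7600,     # probe-target signal
                           6000, 4100, 2400, 1200, 600, 350], dtype=float)

        norm = (signal - signal.min()) / (signal.max() - signal.min())
        # np.interp needs increasing x, so interpolate temperature against the
        # falling signal by reversing both arrays.
        t_d = np.interp(0.5, norm[::-1], temps[::-1])
        print(f"estimated T(d): {t_d:.1f} C")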

  3. Development of Methodologies for IV and V of Neural Networks

    NASA Technical Reports Server (NTRS)

    Taylor, Brian; Darrah, Marjorie

    2003-01-01

    Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research of non-deterministic software control systems such as Adaptive Flight Controllers (AFCs) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFCs learning in real time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real time, including developing a prototype test bed for IV&V of AFCs. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and to provide training to meet the increasing need for IV&V expertise in such systems.

  4. Biomarker-based risk prediction in the community.

    PubMed

    AbouEzzeddine, Omar F; McKie, Paul M; Scott, Christopher G; Rodeheffer, Richard J; Chen, Horng H; Michael Felker, G; Jaffe, Allan S; Burnett, John C; Redfield, Margaret M

    2016-11-01

    Guided by predictive characteristics of cardiovascular biomarkers, we explored the clinical implications of a simulated biomarker-guided heart failure (HF) and major adverse cardiovascular events (MACE) prevention strategy in the community. In a community cohort (n = 1824), the predictive characteristics for HF and MACE of galectin-3 (Gal-3), ST2, high-sensitivity cardiac troponin I (hscTnI), high-sensitivity C-reactive protein (hsCRP), N-terminal pro-brain natriuretic peptide (NT-proBNP) and B-type natriuretic peptide (BNP) were established. We performed number-needed-to-screen (NNS) and number-needed-to-treat (NNT) analyses for the intervention, according to the biomarker screening strategy and intervention efficacy, in persons with at least one cardiovascular risk factor. In the entire cohort, for both HF and MACE, the predictive characteristics of NT-proBNP and hscTnI were superior to those of the other biomarkers: alone, in a multimarker model, and after adjusting for clinical risk factors. An NT-proBNP-guided preventative intervention with an intervention effect size (4-year hazard ratio for intervention in the biomarker-positive cohort) of ≤0.7 would reduce the global burden of HF by ≥20% and MACE by ≥15%. From this simulation, the NNS to prevent one HF event or MACE in 4 years would be ≤100, with an NNT to prevent one HF event of ≤20 and one MACE of ≤10. The predictive characteristics of NT-proBNP and hscTnI for HF or MACE in the community are superior to those of other biomarkers. Biomarker-guided preventative interventions with reasonable efficacy would compare favourably to established preventative interventions. These data provide a framework for biomarker selection which may inform the design of biomarker-guided preventative intervention trials. © 2016 The Authors. European Journal of Heart Failure © 2016 European Society of Cardiology.
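
    For orientation, the screening metrics above are commonly related through the standard epidemiological definitions (paraphrased here, not quoted from the paper): with ARR the absolute risk reduction achieved by the intervention in the biomarker-positive group and p₊ the proportion of screened persons who test positive,

        \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{r_{\mathrm{untreated}} - r_{\mathrm{treated}}},
        \qquad
        \mathrm{NNS} \approx \frac{\mathrm{NNT}}{p_{+}}

    so a hazard ratio of 0.7 applied to a modest 4-year event risk plausibly yields NNT values in the tens and NNS values near 100, in line with the figures reported.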

  5. Evaluation of different methods used to assess disease activity in rheumatoid arthritis: analyses of abatacept clinical trial data

    PubMed Central

    Dougados, M; Schmidely, N; Le Bars, M; Lafosse, C; Schiff, M; Smolen, J S; Aletaha, D; van Riel, P; Wells, G

    2009-01-01

    Objectives: To evaluate different methods of reporting response to treatment or disease status for their ability to discriminate between active therapy and placebo, or to reflect structural progression or patient satisfaction with treatment using an exploratory analysis of the Abatacept in Inadequate Responders to Methotrexate (AIM) trial. Methods: 424 active (abatacept ∼10 mg/kg) and 214 placebo-treated patients with rheumatoid arthritis (RA) were evaluated. Methods of reporting included: (1) response (American College of Rheumatology (ACR) criteria) versus state (disease activity score in 28 joints (DAS28) criteria); (2) stringency (ACR20 vs 50 vs 70; moderate disease activity state (MDAS; DAS28 <5.1) vs low disease activity state (LDAS; DAS28 ⩽3.2) vs DAS28-defined remission (DAS28 <2.6)); (3) time to onset (time to first ACR50/LDAS) and (4) sustainability of ACR50/LDAS for consecutive visits. Methods were assessed according to: (1) discriminatory capacity (number of patients needed to study (NNS)); (2) structural progression (Genant-modified Sharp score) and (3) patient satisfaction with treatment. Positive likelihood ratios (LR) evaluated the ability of the above methods to reflect structural damage and patient satisfaction. Results: MDAS and ACR20 had the highest discriminatory capacity (NNS 49 and 69). Sustained LDAS best reflected no radiographic progression (positive LR ⩾2). More stringent criteria (at least ACR50/LDAS), faster onset (⩽3 months) and sustainability (>3 visits) of ACR50/LDAS best reflected patient satisfaction (positive LR >10). Conclusions: The optimal method for reporting a measure of disease activity may differ depending on the outcome of interest. Time to onset and sustainability can be important factors when evaluating treatment response and disease status in patients with RA. PMID:19074177
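
    The positive likelihood ratios cited above follow the usual definition (a standard formula, not specific to this trial):

        \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}

    so an LR⁺ above 10 indicates that meeting a given response criterion is far more common among patients with the outcome of interest (treatment satisfaction or absence of radiographic progression) than among those without it.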

  6. RNA Polymerase II Transcription Attenuation at the Yeast DNA Repair Gene, DEF1, Involves Sen1-Dependent and Polyadenylation Site-Dependent Termination.

    PubMed

    Whalen, Courtney; Tuohy, Christine; Tallo, Thomas; Kaufman, James W; Moore, Claire; Kuehner, Jason N

    2018-04-23

    Termination of RNA Polymerase II (Pol II) activity serves a vital cellular function by separating ubiquitous transcription units and influencing RNA fate and function. In the yeast Saccharomyces cerevisiae, Pol II termination is carried out by cleavage and polyadenylation factor (CPF-CF) and Nrd1-Nab3-Sen1 (NNS) complexes, which operate primarily at mRNA and non-coding RNA genes, respectively. Premature Pol II termination (attenuation) contributes to gene regulation, but there is limited knowledge of its prevalence and biological significance. In particular, it is unclear how much crosstalk occurs between CPF-CF and NNS complexes and how Pol II attenuation is modulated during stress adaptation. In this study, we have identified an attenuator in the DEF1 DNA repair gene, which includes a portion of the 5'-untranslated region (UTR) and upstream open reading frame (ORF). Using a plasmid-based reporter gene system, we conducted a genetic screen of 14 termination mutants and their ability to confer Pol II read-through defects. The DEF1 attenuator behaved as a hybrid terminator, relying heavily on CPF-CF and Sen1 but without Nrd1 and Nab3 involvement. Our genetic selection identified 22 cis-acting point mutations that clustered into four regions, including a polyadenylation site efficiency element that genetically interacts with its cognate binding protein Hrp1. Outside of the reporter gene context, a DEF1 attenuator mutant increased mRNA and protein expression, exacerbating the toxicity of a constitutively active Def1 protein. Overall, our data support a biologically significant role for transcription attenuation in regulating DEF1 expression, which can be modulated during the DNA damage response. Copyright © 2018, G3: Genes, Genomes, Genetics.

  7. Three-dimensional fusion of spaceborne and ground radar reflectivity data using a neural network-based approach

    NASA Astrophysics Data System (ADS)

    Kou, Leilei; Wang, Zhuihui; Xu, Fen

    2018-03-01

    The spaceborne precipitation radar onboard the Tropical Rainfall Measuring Mission satellite (TRMM PR) can provide good measurement of the vertical structure of reflectivity, while ground radar (GR) has a relatively high horizontal resolution and greater sensitivity. Fusion of TRMM PR and GR reflectivity data may maximize the advantages from both instruments. In this paper, TRMM PR and GR reflectivity data are fused using a neural network (NN)-based approach. The main steps included are: quality control of TRMM PR and GR reflectivity data; spatiotemporal matchup; GR calibration bias correction; conversion of TRMM PR data from Ku to S band; fusion of TRMM PR and GR reflectivity data with an NN method; interpolation of reflectivity data that are below PR's sensitivity; blind areas compensation with a distance weighting-based merging approach; combination of three types of data: data with the NN method, data below PR's sensitivity and data within compensated blind areas. During the NN fusion step, the TRMM PR data are taken as targets of the training NNs, and gridded GR data after horizontal downsampling at different heights are used as the input. The trained NNs are then used to obtain 3D high-resolution reflectivity from the original GR gridded data. After 3D fusion of the TRMM PR and GR reflectivity data, a more complete and finer-scale 3D radar reflectivity dataset incorporating characteristics from both the TRMM PR and GR observations can be obtained. The fused reflectivity data are evaluated based on a convective precipitation event through comparison with the high resolution TRMM PR and GR data with an interpolation algorithm.
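
    One step that is easy to make concrete is the distance-weighting-based merging used for blind-area compensation; below is a generic inverse-distance-weighting sketch in Python (illustrative values, not the authors' implementation), averaging in linear reflectivity units before converting back to dBZ.

        # Inverse-distance-weighted merging of reflectivity estimates from nearby
        # grid points into a blind-area voxel. Generic sketch with invented values.
        import numpy as np

        def idw_merge(values_dbz, distances_km, power=2.0):
            """Weight each neighbour by 1/d^power, averaging in linear Z units."""
            values_dbz = np.asarray(values_dbz, dtype=float)
            distances_km = np.asarray(distances_km, dtype=float)
            linear = 10.0 ** (values_dbz / 10.0)            # dBZ -> linear Z
            weights = 1.0 / np.maximum(distances_km, 1e-3) ** power
            merged_linear = np.sum(weights * linear) / np.sum(weights)
            return 10.0 * np.log10(merged_linear)           # back to dBZ

        print(idw_merge(values_dbz=[31.0, 28.5, 35.2], distances_km=[2.0, 3.5, 5.0]))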

  8. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  9. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  10. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  11. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  12. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, currently 10 of the systems are scheduled for refurbishment and/or replacement as part of a 5-year implementation. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one Facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Insignificant anomalies have occurred and were resolved with minimal impact to testing and operations. The amount of work remaining to be performed will occur over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a Database server to allow efficient test management and data analysis.

  13. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  14. Designing Facilities for Collaborative Operations

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of .4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication and Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.

  15. Planning and Designing School Computer Facilities. Interim Report.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Finance and Administration Div.

    This publication provides suggestions and considerations that may be useful for school jurisdictions developing facilities for computers in schools. An interim report for both use and review, it is intended to assist school system planners in clarifying the specifications needed by the architects, other design consultants, and purchasers involved.…

  16. Molecular Modeling and Computational Chemistry at Humboldt State University.

    ERIC Educational Resources Information Center

    Paselk, Richard A.; Zoellner, Robert W.

    2002-01-01

    Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…

  17. Strategy and methodology for rank-ordering Virginia state agencies regarding solar attractiveness and identification of specific project possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewett, R.

    1997-12-31

    This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of that agency's having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia State agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve the goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing and evaluating solar water heating systems in commercial and government facilities.

  18. A survey of the computer literacy of undergraduate dental students at a University Dental School in Ireland during the academic year 1997-98.

    PubMed

    Ray, N J; Hannigan, A

    1999-05-01

    As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students at a University Dental School in Ireland (years 1-5), in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used either word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major use of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses in secondary level (age 13-18 years) and/or tertiary level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.

  19. Adolescents' physical activity: competition between perceived neighborhood sport facilities and home media resources.

    PubMed

    Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing

    2010-04-01

    To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA and the presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may have a positive impact on adolescents' level of physical activity. However, having computer/Internet access may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting access to computer-mediated communication as an important intervention component.

  20. Description and operational status of the National Transonic Facility computer complex

    NASA Technical Reports Server (NTRS)

    Boyles, G. B., Jr.

    1986-01-01

    This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities for research data acquisition and reduction are discussed, along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, along with a discussion of how the computer complex monitors the tunnel control processes and provides the tunnel operators with the information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.

  1. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and has increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites; this is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e. opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.

  2. Trends in Facility Management Technology: The Emergence of the Internet, GIS, and Facility Assessment Decision Support.

    ERIC Educational Resources Information Center

    Teicholz, Eric

    1997-01-01

    Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…

  3. An ECG signals compression method and its validation using NNs.

    PubMed

    Fira, Catalina Monica; Goras, Liviu

    2008-04-01

    This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extreme extraction, adaptive hysteretic filtering and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.
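
    As a hedged illustration of how such a combined figure of merit can be computed, the sketch below evaluates a compression ratio (CR), the percent root-mean-square difference (PRD) between the original and reconstructed signals, and a quality score taken here as CR/PRD; the authors' exact definition and weighting may differ, and the signals are synthetic.

        # Compression-quality metrics for an ECG codec: compression ratio (CR),
        # percent root-mean-square difference (PRD), and a combined quality score
        # taken here as QS = CR / PRD (the paper's exact definition may differ).
        import numpy as np

        def compression_ratio(original_bits, compressed_bits):
            return original_bits / compressed_bits

        def prd(x, x_rec):
            x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

        def quality_score(original_bits, compressed_bits, x, x_rec):
            return compression_ratio(original_bits, compressed_bits) / prd(x, x_rec)

        # Toy example: a synthetic "ECG" and a slightly distorted reconstruction.
        t = np.linspace(0, 1, 360)
        x = np.sin(2 * np.pi * 1.2 * t) + 0.25 * np.sin(2 * np.pi * 25 * t)
        x_rec = x + 0.02 * np.random.default_rng(1).standard_normal(x.size)
        print("PRD %:", prd(x, x_rec))
        print("QS   :", quality_score(11 * x.size, 1200, x, x_rec))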

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Intrator, Miranda Huang

    Many industrial catalysts used for homogeneous hydrogenation and dehydrogenation of unsaturated substrates are derived from metal complexes that include (air-sensitive) ligands that are often expensive and difficult to synthesize. In particular, catalysts used for many hydrogenations are based on phosphorus-containing ligands (in particular PNP pincer systems). These ligands are often difficult to make, are costly, are constrained to having two carbon atoms in the ligand backbone, and are susceptible to oxidation at phosphorus, making their use somewhat complicated. Los Alamos researchers have recently developed a new and novel set of ligands that are based on an NNS (ENENES) skeleton (i.e. no phosphorus donors, just nitrogen and sulfur).

  5. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  6. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
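
    A toy version of this approach can be sketched as a graph of physical and cyber areas joined by pathways, over which repeated simulated attacks estimate the probability that an adversary reaches the target. The topology, traversal probabilities, and trial count below are invented for illustration and are far simpler than the patented method.

        # Toy Monte Carlo over a facility model: physical and cyber "areas" are
        # graph nodes, pathways are edges with per-traversal success
        # probabilities, and repeated simulated attacks estimate the chance an
        # adversary reaches the target. All values are invented.
        import random

        # (from_area, to_area): probability the adversary traverses the pathway
        pathways = {
            ("outside", "lobby"): 0.9,          # physical
            ("lobby", "server_room"): 0.2,      # physical
            ("outside", "vpn"): 0.4,            # cyber
            ("vpn", "scada_lan"): 0.3,          # cyber
            ("scada_lan", "server_room"): 0.5,  # cyber-to-physical effect
        }
        start, target, trials = "outside", "server_room", 100_000

        def one_attack(rng):
            location, visited = start, {start}
            while location != target:
                options = [(dst, p) for (src, dst), p in pathways.items()
                           if src == location and dst not in visited]
                if not options:
                    return False                  # adversary is stuck
                dst, p = rng.choice(options)
                if rng.random() > p:
                    return False                  # traversal attempt failed
                location = dst
                visited.add(dst)
            return True

        rng = random.Random(42)
        hits = sum(one_attack(rng) for _ in range(trials))
        print(f"estimated probability of reaching {target}: {hits / trials:.3f}")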

  7. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Poonam Khanijo Ahluwalia; Nema, Arvind K

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding the capacities of the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities assigned to cost and to associated risks, such as environmental risk, health risk, or the risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to address uncertainty in waste generation quantities while simultaneously analyzing the trade-offs between cost and associated risks. The present study aimed to address the above-mentioned issues with a multi-time-step, multi-objective decision-support model, which can handle multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).
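
    A drastically simplified sketch of the underlying trade-off analysis (not the authors' multi-time-step model) is to enumerate candidate facility configurations and score each with a weighted sum of normalized cost and risk terms, then vary the weights to see how the preferred configuration shifts, as shown below.

        # Simplified weighted-sum screening of facility configurations for
        # e-waste capacity planning. Costs and risk scores are invented; the
        # paper's model is a multi-objective optimization, not this enumeration.
        candidates = {
            "one large facility":    {"cost": 10.0, "env_risk": 0.8, "social_risk": 0.9},
            "two medium facilities": {"cost": 12.0, "env_risk": 0.5, "social_risk": 0.6},
            "four small facilities": {"cost": 15.0, "env_risk": 0.3, "social_risk": 0.4},
        }

        def score(cfg, w_cost, w_env, w_soc):
            max_cost = max(c["cost"] for c in candidates.values())
            return (w_cost * cfg["cost"] / max_cost
                    + w_env * cfg["env_risk"] + w_soc * cfg["social_risk"])

        for weights in [(0.8, 0.1, 0.1), (0.3, 0.4, 0.3)]:  # cost-driven vs risk-averse
            best = min(candidates, key=lambda k: score(candidates[k], *weights))
            print(f"weights {weights}: choose {best}")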

  8. Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities

    NASA Technical Reports Server (NTRS)

    Volkman, Gregrey T.; Dunn, Steven C.

    2013-01-01

    The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty plus years, computational methods have replaced some physical experiments and this trend is continuing. The result is decreased utilization of ground test capabilities and, along with market forces, industry consolidation, and other factors, has resulted in the stand down and oftentimes closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies to accomplish the mothball of a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.

  9. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  10. THE EFFECTS OF COMPUTER-BASED FIRE SAFETY TRAINING ON THE KNOWLEDGE, ATTITUDES, AND PRACTICES OF CAREGIVERS

    PubMed Central

    Harrington, Susan S.; Walker, Bonnie L.

    2010-01-01

    Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929

  11. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  12. The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.

    ERIC Educational Resources Information Center

    Lach, Ivan J.

    The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…

  13. Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.

    PubMed

    Brodish, D L

    1998-01-01

    The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.

  14. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  15. Computer-Assisted School Facility Planning with ONPASS.

    ERIC Educational Resources Information Center

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  16. Technology in the Service of Creativity: Computer Assisted Writing Project--Stetson Middle School, Philadelphia, Pennsylvania. Final Report.

    ERIC Educational Resources Information Center

    Bender, Evelyn

    The American Library Association's Carroll Preston Baber Research Award supported this project on the use, impact and feasibility of a computer assisted writing facility located in the library of Stetson Middle School in Philadelphia, an inner city school with a population of minority, "at risk" students. The writing facility consisted…

  17. Sigma 2 Graphic Display Software Program Description

    NASA Technical Reports Server (NTRS)

    Johnson, B. T.

    1973-01-01

    A general purpose, user oriented graphic support package was implemented. A comprehensive description of the two software components comprising this package is given: Display Librarian and Display Controller. These programs have been implemented in FORTRAN on the XDS Sigma 2 Computer Facility. This facility consists of an XDS Sigma 2 general purpose computer coupled to a Computek Display Terminal.

  18. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  19. Some propulsion system noise data handling conventions and computer programs used at the Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Montegani, F. J.

    1974-01-01

    Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.

  20. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  1. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1998-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  2. Nonequilibrium Supersonic Freestream Studied Using Coherent Anti-Stokes Raman Spectroscopy

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Cantu, Luca M.; Gallo, Emanuela C. A.; Baurle, Rob; Danehy, Paul M.; Rockwell, Robert; Goyne, Christopher; McDaniel, Jim

    2015-01-01

    Measurements were conducted at the University of Virginia Supersonic Combustion Facility of the flow in a constant-area duct downstream of a Mach 2 nozzle. The airflow was heated to approximately 1200 K in the facility heater upstream of the nozzle. Dual-pump coherent anti-Stokes Raman spectroscopy was used to measure the rotational and vibrational temperatures of N2 and O2 at two planes in the duct. The expectation was that the vibrational temperature would be in equilibrium, because most scramjet facilities are vitiated air facilities and are in vibrational equilibrium. However, with a flow of clean air, the vibrational temperature of N2 along a streamline remains approximately constant between the measurement plane and the facility heater, the vibrational temperature of O2 in the duct is about 1000 K, and the rotational temperature is consistent with the isentropic flow. The measurements of N2 vibrational temperature enabled cross-stream nonuniformities in the temperature exiting the facility heater to be documented. The measurements are in agreement with computational fluid dynamics models employing separate lumped vibrational and translational/rotational temperatures. Measurements and computations are also reported for a few percent steam addition to the air. The effect of the steam is to bring the flow to thermal equilibrium, also in agreement with the computational fluid dynamics.

  3. JESS facility modification and environmental/power plans

    NASA Technical Reports Server (NTRS)

    Bordeaux, T. A.

    1984-01-01

    Preliminary plans for facility modifications and environmental/power systems for the JESS (Joint Exercise Support System) computer laboratory and Freedom Hall are presented. Blueprints are provided for each of the facilities and an estimate of the air conditioning requirements is given.

  4. Ergonomic and Anthropometric Considerations of the Use of Computers in Schools by Adolescents

    ERIC Educational Resources Information Center

    Jermolajew, Anna M.; Newhouse, C. Paul

    2003-01-01

    Over the past decade there has been an explosion in the provision of computing facilities in schools for student use. However, there is concern that the development of these facilities has often given little regard to the ergonomics of the design for use by children, particularly adolescents. This paper reports on a study that investigated the…

  5. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...

  6. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...

  7. 117. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  8. 122. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  9. 118. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  10. 121. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  11. Making Cloud Computing Available For Researchers and Innovators (Invited)

    NASA Astrophysics Data System (ADS)

    Winsor, R.

    2010-12-01

    High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing on the other hand, and particularly commercial clouds, draw flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.

  12. The development of the Canadian Mobile Servicing System Kinematic Simulation Facility

    NASA Technical Reports Server (NTRS)

    Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.

    1989-01-01

    Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.

  13. High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system…

  14. Public computing options for individuals with cognitive impairments: survey outcomes.

    PubMed

    Fox, Lynn Elizabeth; Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Prideaux, Jason

    2009-09-01

    To examine availability and accessibility of public computing for individuals with cognitive impairment (CI) who reside in the USA. A telephone survey was administered as a semi-structured interview to 145 informants representing seven types of public facilities across three geographically distinct regions using a snowball sampling technique. An Internet search of wireless (Wi-Fi) hotspots supplemented the survey. Survey results showed the availability of public computer terminals and Internet hotspots was greatest in the urban sample, followed by the mid-sized and rural cities. Across seven facility types surveyed, libraries had the highest percentage of access barriers, including complex queue procedures, login and password requirements, and limited technical support. University assistive technology centres and facilities with a restricted user policy, such as brain injury centres, had the lowest incidence of access barriers. Findings suggest optimal outcomes for people with CI will result from a careful match of technology and the user that takes into account potential barriers and opportunities to computing in an individual's preferred public environments. Trends in public computing, including the emergence of widespread Wi-Fi and limited access to terminals that permit auto-launch applications, should guide development of technology designed for use in public computing environments.

  15. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman,, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study addresses the well-known capacitated dynamic facility location problem (DFLP), which satisfies customer demand at minimum cost by determining the time periods for opening, closing, or retaining an existing facility at a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP. Computational results indicate that the hybrid Benders-based rolling horizon algorithm consistently offers high-quality feasible solutions in a much shorter computational time than the standalone rolling horizon and accelerated Benders decomposition algorithms over the experimental range.
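
    The following is a toy sketch of the rolling-horizon idea only: each pass optimizes a short look-ahead window (here by brute force over an uncapacitated toy instance), freezes the first period's decisions, and rolls forward. The instance data, window length, and brute-force window solver are illustrative assumptions; the paper's capacity constraints and accelerated Benders decomposition of the window subproblem are not reproduced.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        T, F, window = 6, 3, 2                       # periods, candidate facilities, look-ahead
        fixed = rng.uniform(20, 40, size=F)          # per-period cost of keeping a site open
        assign = rng.uniform(1, 10, size=(T, F, 8))  # period x facility x customer assignment cost

        def window_cost(open_plan, t0):
            """Cost of an open-facility plan over periods t0 .. t0 + len(open_plan) - 1."""
            cost = 0.0
            for dt, open_mask in enumerate(open_plan):
                if not open_mask:                    # every period needs at least one open site
                    return np.inf
                cost += fixed[list(open_mask)].sum()
                cost += assign[t0 + dt][list(open_mask), :].min(axis=0).sum()
            return cost

        subsets = [s for r in range(F + 1) for s in itertools.combinations(range(F), r)]
        plan = []
        for t in range(T):
            w = min(window, T - t)
            best = min(itertools.product(*[subsets] * w), key=lambda p: window_cost(p, t))
            plan.append(best[0])                     # freeze only the first period's decision
        print("open facilities per period:", plan)
        print("total cost:", sum(window_cost((p,), t) for t, p in enumerate(plan)))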

  16. Solid State Clipper Diodes for High Power Modulators.

    DTIC Science & Technology

    1978-11-01

    Clipper diode behavior was modeled at low powers and later confirmed in actual pulser operation. … CG is the diode capacitance to ground. In the design described, the worst-case diode leakage was 15 milliamperes (mA) at 1 kV. … The diode junction capacitance and stray capacitance affect the voltage division whenever the …

  17. 1981 Beaufort Sea Current Records: A Data Report. Volume I.

    DTIC Science & Technology

    1982-06-01

    Tabulated 1981 Beaufort Sea current-meter records, listing for each sampling interval the date, time, east and north velocity components (mm/s), current direction (deg), speed (mm/s), and water temperature (deg C).

  18. Near-Net Shape Powder Metallurgy Rhenium Thruster

    NASA Technical Reports Server (NTRS)

    Leonhardt, Todd; Hamister, Mark; Carlen, Jan C.; Biaglow, James; Reed, Brian

    2001-01-01

    This paper describes the development of a method to produce a near-net shape (NNS) powder metallurgy (PM) rhenium combustion chamber of the 445 N (100 lbf) size used in a high-performance liquid apogee engine. These engines are used in low Earth orbit and geostationary orbit for satellite positioning systems. The developments in near-net shape powder metallurgy rhenium combustion chambers reported in this paper will reduce the manufacturing cost of the rhenium chambers by 25 percent and reduce the manufacturing time by 30 to 40 percent. The quantity of rhenium metal powder used to produce a rhenium chamber is reduced by approximately 70 percent, and the subsequent reduction in machining schedule and cost is nearly 50 percent.

  19. A new learning algorithm for a fully connected neuro-fuzzy inference system.

    PubMed

    Chen, C L Philip; Wang, Jing; Wang, Chi-Hsu; Chen, Long

    2014-10-01

    A traditional neuro-fuzzy system is transformed into an equivalent fully connected three-layer neural network (NN), namely, the fully connected neuro-fuzzy inference system (F-CONFIS). The F-CONFIS differs from traditional NNs by its dependent and repeated weights between the input and hidden layers and can be considered a variant of the multilayer NN. Therefore, an efficient learning algorithm is derived for the F-CONFIS to cope with these repeated weights. Furthermore, a dynamic learning rate is proposed for neuro-fuzzy systems via F-CONFIS, where both the premise (hidden) and consequent portions are considered. Several simulation results indicate that the proposed approach achieves much better accuracy and fast convergence.

  20. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows on interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  1. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  2. Key Issues in Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Wozny, Michael J.

    1981-01-01

    Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…

  3. EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL - 2002

    EPA Science Inventory

    To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rate of solid, liquid waste generation and air emissions. This progr...

  4. Telecommunications and Data Communication in Korea.

    ERIC Educational Resources Information Center

    Ahn, Moon-Suk

    All facilities of the Ministry of Communications of Korea, which monopolizes telecommunications services in the country, are listed and described. Both domestic facilities, including long-distance telephone and telegraph circuits, and international connections are included. Computer facilities are also listed. The nation's regulatory policies are…

  5. Overview of the NASA Dryden Flight Research Facility aeronautical flight projects

    NASA Technical Reports Server (NTRS)

    Meyer, Robert R., Jr.

    1992-01-01

    Several principal aerodynamics flight projects of the NASA Dryden Flight Research Facility are discussed. Key vehicle technology areas from a wide range of flight vehicles are highlighted. These areas include flight research data obtained for ground facility and computation correlation, applied research in areas not well suited to ground facilities (wind tunnels), and concept demonstration.

  6. Sea/Lake Water Air Conditioning at Naval Facilities.

    DTIC Science & Technology

    1980-05-01

    Contents include sections on the economics at two facilities, the facilities themselves, and computer models. … of an operational test at Naval Security Group Activity (NSGA) Winter Harbor, Me., and the economics of Navywide application. In FY76 an assessment of … economics of Navywide application of sea/lake water AC indicated that cost and energy savings at the sites of some Naval facilities are possible, depending …

  7. Automated smear counting and data processing using a notebook computer in a biomedical research facility.

    PubMed

    Ogata, Y; Nishizawa, K

    1995-10-01

    An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
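
    As a minimal illustration of the surface-density evaluation described above, the sketch below converts a smear (wipe) count into a removable-contamination estimate. The counting efficiency, wipe removal factor, and wiped area are generic survey assumptions, not values from the paper, and the function name is hypothetical.

        def surface_density(gross_cpm, background_cpm, efficiency=0.30,
                            removal_factor=0.10, wiped_area_cm2=100.0):
            """Return removable surface contamination in dpm per 100 cm^2."""
            net_cpm = max(gross_cpm - background_cpm, 0.0)
            dpm = net_cpm / efficiency               # counts -> disintegrations per minute
            removable_dpm = dpm / removal_factor     # correct for the fraction picked up by the wipe
            return removable_dpm * (100.0 / wiped_area_cm2)

        # Example: a smear reading 250 cpm gross over a 40 cpm background.
        print(surface_density(gross_cpm=250.0, background_cpm=40.0))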

  8. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focussed on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  9. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for the increased understanding of the physical processes governing ice accretion, ice shedding, and iced aerodynamics is examined.

  10. Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers.

    PubMed

    Mougiakakou, Stavroula G; Valavanis, Ioannis K; Nikita, Alexandra; Nikita, Konstantina S

    2007-09-01

    The aim of the present study is to define an optimally performing computer-aided diagnosis (CAD) architecture for the classification of liver tissue from non-enhanced computed tomography (CT) images into normal liver (C1), hepatic cyst (C2), hemangioma (C3), and hepatocellular carcinoma (C4). To this end, various CAD architectures, based on texture features and ensembles of classifiers (ECs), are comparatively assessed. A number of regions of interest (ROIs) corresponding to C1-C4 were defined by experienced radiologists in non-enhanced liver CT images. For each ROI, five distinct sets of texture features were extracted using first order statistics, spatial gray level dependence matrix, gray level difference method, Laws' texture energy measures, and fractal dimension measurements. Two different ECs were constructed and compared. The first one consists of five multilayer perceptron neural networks (NNs), each using as input one of the computed texture feature sets or its reduced version after genetic algorithm-based feature selection. The second EC comprised five different primary classifiers, namely one multilayer perceptron NN, one probabilistic NN, and three k-nearest neighbor classifiers, each fed with the combination of the five texture feature sets or their reduced versions. The final decision of each EC was extracted by using appropriate voting schemes, while bootstrap re-sampling was utilized in order to estimate the generalization ability of the CAD architectures based on the available, relatively small-sized data set. The best mean classification accuracy (84.96%) is achieved by the second EC using a fused feature set and the weighted voting scheme. The fused feature set was obtained after appropriate feature selection applied to specific subsets of the original feature set. The comparative assessment of the various CAD architectures shows that combining three types of classifiers with a voting scheme, fed with identical feature sets obtained after appropriate feature selection and fusion, may result in an accurate system able to assist differential diagnosis of focal liver lesions from non-enhanced CT images.
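
    A minimal sketch of the weighted-voting step over several primary classifiers is given below, assuming scikit-learn-style classifiers and integer class labels 0-3 standing in for C1-C4. The classifier choices and weights are illustrative assumptions, not the paper's exact configuration.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.neighbors import KNeighborsClassifier

        def weighted_vote(classifiers, weights, X):
            """Combine per-classifier predictions into one label per sample."""
            n_classes = int(max(c.classes_.max() for c in classifiers)) + 1
            votes = np.zeros((X.shape[0], n_classes))
            for clf, w in zip(classifiers, weights):
                for i, label in enumerate(clf.predict(X)):
                    votes[i, int(label)] += w
            return votes.argmax(axis=1)

        # Usage sketch: X_train / y_train hold texture-feature vectors and labels.
        # members = [MLPClassifier(max_iter=500), KNeighborsClassifier(3), KNeighborsClassifier(5)]
        # for m in members:
        #     m.fit(X_train, y_train)
        # y_pred = weighted_vote(members, weights=[0.4, 0.3, 0.3], X=X_test)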

  11. Energy consumption and load profiling at major airports. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.

    1998-12-01

    This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.

  12. THE COMPUTER AS A MANAGEMENT TOOL--PHYSICAL FACILITIES INVENTORIES, UTILIZATION, AND PROJECTIONS. 11TH ANNUAL MACHINE RECORDS CONFERENCE PROCEEDINGS (UNIVERSITY OF TENNESSEE, KNOXVILLE, APRIL 25-27, 1966).

    ERIC Educational Resources Information Center

    WITMER, DAVID R.

    WISCONSIN STATE UNIVERSITIES HAVE BEEN USING THE COMPUTER AS A MANAGEMENT TOOL TO STUDY PHYSICAL FACILITIES INVENTORIES, SPACE UTILIZATION, AND ENROLLMENT AND PLANT PROJECTIONS. EXAMPLES ARE SHOWN GRAPHICALLY AND DESCRIBED FOR DIFFERENT TYPES OF ANALYSIS, SHOWING THE CARD FORMAT, CODING SYSTEMS, AND PRINTOUT. EQUATIONS ARE PROVIDED FOR DETERMINING…

  13. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  14. 120. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  15. 119. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  16. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  17. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  18. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  19. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  20. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system.

    PubMed

    Al-Masni, Mohammed A; Al-Antari, Mugahed A; Park, Jeong-Min; Gi, Geon; Kim, Tae-Yeon; Rivera, Patricio; Valarezo, Edwin; Choi, Mun-Taek; Han, Seung-Moo; Kim, Tae-Seong

    2018-04-01

    Automatic detection and classification of masses in mammograms remain a major challenge and play a crucial role in assisting radiologists toward accurate diagnosis. In this paper, we propose a novel Computer-Aided Diagnosis (CAD) system based on one of the regional deep learning techniques, a ROI-based Convolutional Neural Network (CNN) called You Only Look Once (YOLO). Although most previous studies only deal with classification of masses, our proposed YOLO-based CAD system can handle detection and classification simultaneously in one framework. The proposed CAD system contains four main stages: preprocessing of mammograms, feature extraction utilizing deep convolutional networks, mass detection with confidence, and finally mass classification using Fully Connected Neural Networks (FC-NNs). In this study, we utilized 600 original mammograms from the Digital Database for Screening Mammography (DDSM) and their 2,400 augmented mammograms, with the information of the masses and their types, in training and testing our CAD. The trained YOLO-based CAD system detects the masses and then classifies their types into benign or malignant. Our results with five-fold cross validation tests show that the proposed CAD system detects the mass location with an overall accuracy of 99.7%. The system also distinguishes between benign and malignant lesions with an overall accuracy of 97%. Our proposed system even works on some challenging breast cancer cases where the masses exist over the pectoral muscles or dense regions. Copyright © 2018 Elsevier B.V. All rights reserved.
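
    A minimal sketch of the final classification stage (benign versus malignant) applied to feature vectors of detected mass ROIs is shown below. The use of PyTorch, the layer sizes, and the feature dimension are illustrative assumptions; the detection stage (the YOLO network itself) is not reproduced here.

        import torch
        import torch.nn as nn

        class MassClassifier(nn.Module):
            """Fully connected head mapping an ROI feature vector to two classes."""
            def __init__(self, n_features=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_features, 64), nn.ReLU(),
                    nn.Linear(64, 2))        # logits for benign / malignant

            def forward(self, x):
                return self.net(x)           # train with nn.CrossEntropyLoss

        # Usage sketch: predictions = MassClassifier()(roi_features).argmax(dim=1)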

  1. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.

    The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information on the physical inventory of the whole plant with radiological survey data. Estimates of the radiological inventory for all components and civil structures of the plant can be obtained with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: A computer application that is able to estimate the radiological inventory from the radiological measurements or the characterization program has been developed. This application includes the statistical functions needed for the estimation of central tendency and variability, e.g. mean, median, variance, confidence intervals, variation coefficients, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
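
    A minimal sketch of the central-tendency and variability statistics such an application computes from a set of survey measurements is given below. The function name, the use of SciPy, and the example readings are illustrative assumptions, not details of the application itself.

        import numpy as np
        from scipy import stats

        def summarize(measurements, confidence=0.95):
            """Mean, median, variance, coefficient of variation, and a t-based CI for the mean."""
            x = np.asarray(measurements, dtype=float)
            n = x.size
            mean, median = x.mean(), np.median(x)
            var = x.var(ddof=1)                              # sample variance
            cv = np.sqrt(var) / mean                         # coefficient of variation
            half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * np.sqrt(var / n)
            return {"mean": mean, "median": median, "variance": var,
                    "cv": cv, "ci": (mean - half_width, mean + half_width)}

        # Example with hypothetical dose-rate readings (arbitrary units):
        print(summarize([0.8, 1.1, 0.9, 1.4, 1.0, 1.2]))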

  2. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  3. The Education Value of Cloud Computing

    ERIC Educational Resources Information Center

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  4. Writing Apprehension, Computer Anxiety and Telecomputing: A Pilot Study.

    ERIC Educational Resources Information Center

    Harris, Judith; Grandgenett, Neal

    1992-01-01

    A study measured graduate students' writing apprehension and computer anxiety levels before and after using electronic mail, computer conferencing, and remote database searching facilities during an educational technology course. Results indicated that postcourse computer anxiety levels were significantly related to usage statistics. Precourse writing…

  5. Los Alamos Plutonium Facility Waste Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K.; Montoya, A.; Wieneke, R.

    1997-02-01

    This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process.

  6. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Health monitoring of the major gas path components of gas turbines mostly relies on model-based methods such as Gas Path Analysis (GPA). GPA estimates changes in component performance parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the clean, fault-free engine performance calculated by a base engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic, and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among these, NNs are most often used for engine fault diagnosis because of their good learning performance, but they suffer from limited accuracy and long training times when the learning database is large, and they require a very complex structure to find single or multiple gas path component faults effectively. This work inversely builds a base performance model of a turboprop engine intended for a high-altitude UAV using measured performance data, and proposes a fault diagnostic system that combines the base engine performance model with Fuzzy Logic and a Neural Network. The proposed system first isolates the faulted components using Fuzzy Logic and then quantifies the faults of the identified components using an NN trained on a fault learning database generated from the base performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are correctly isolated and quantified by the proposed diagnostic system.
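
    A minimal sketch of the fault-quantification step is given below: a one-hidden-layer feed-forward network trained by back-propagation (the FFBP idea) to map measured performance deviations to component fault magnitudes. The layer sizes, learning rate, and synthetic data are illustrative assumptions, not the engine model or learning database of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))            # measured performance deviations (6 sensors)
        Y = X @ rng.normal(size=(6, 2))          # component fault magnitudes (2 components)

        W1 = rng.normal(scale=0.1, size=(6, 10)); b1 = np.zeros(10)
        W2 = rng.normal(scale=0.1, size=(10, 2)); b2 = np.zeros(2)
        lr = 0.05

        for epoch in range(5000):
            H = np.tanh(X @ W1 + b1)             # hidden layer
            P = H @ W2 + b2                      # predicted fault magnitudes
            G = (P - Y) / len(X)                 # gradient of the mean squared error (x 1/2)
            dW2 = H.T @ G; db2 = G.sum(axis=0)
            dZ1 = (G @ W2.T) * (1 - H ** 2)      # back-propagate through tanh
            dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0)
            W2 -= lr * dW2; b2 -= lr * db2
            W1 -= lr * dW1; b1 -= lr * db1

        P = np.tanh(X @ W1 + b1) @ W2 + b2
        print("final mean squared error:", float(((P - Y) ** 2).mean()))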

  7. BUTIMBA: Intensifying the Hunt for Child TB in Swaziland through Household Contact Tracing

    PubMed Central

    Alonso Ustero, Pilar; Golin, Rachel; Anabwani, Florence; Mzileni, Bulisile; Sikhondze, Welile; Stevens, Robert

    2017-01-01

    Background Limited data exist to inform contact tracing guidelines in children and HIV-affected populations. We evaluated the yield and additionality of household contact and source case investigations in Swaziland, a TB/HIV high-burden setting, while prioritizing identification of childhood TB. Methods In partnership with 7 local TB clinics, we implemented standardized contact tracing of index cases (IC) receiving TB treatment. Prioritizing child contacts and HIV-affected households, screening officers screened contacts for TB symptoms and for risk factors associated with TB. We ascertained factors moderating the yield of contact tracing and measured the impact of our program by additional notifications. Results From March 2013 to November 2015, 3,258 ICs (54% bacteriologically confirmed; 70% HIV-infected; 85% adults) were enrolled, leading to evaluation of 12,175 contacts (median age 18 years, IQR 24–42; 45% children; 9% HIV-infected). Among contacts, 196 TB cases (56% bacteriologically confirmed) were diagnosed, resulting in a program yield of 1.6% for all forms of TB. The number needed to screen (NNS) to identify a bacteriologically confirmed TB case or an all-forms TB case traced from a child IC <5 years was respectively 62% and 40% greater than the NNS for tracing from an adult IC. In year one, we demonstrated a 32% increase in detection of bacteriologically confirmed child TB. Contacts were more likely to have TB if <5 years (OR = 2.0), HIV-infected (OR = 4.9), reporting ≥1 TB symptoms (OR = 7.7), and sharing a bed (OR = 1.7) or home (OR = 1.4) with the IC. There was a 1.4-fold increased chance of detecting a TB case in households known to be HIV-affected. Conclusion Contact tracing prioritizing children is not only feasible in a TB/HIV high-burden setting but contributes to overall case detection. Our findings support WHO guidelines prioritizing contact tracing among children and HIV-infected populations while highlighting potential to integrate TB and HIV case finding. PMID:28107473
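
    A minimal worked example of the number-needed-to-screen (NNS) comparison referenced above is shown below. The counts are hypothetical and chosen only to illustrate the arithmetic; they are not the study's data.

        def nns(contacts_screened, cases_detected):
            """Number needed to screen to find one case."""
            return contacts_screened / cases_detected

        nns_child_index = nns(contacts_screened=500, cases_detected=5)   # 100.0
        nns_adult_index = nns(contacts_screened=500, cases_detected=8)   #  62.5
        relative_increase = nns_child_index / nns_adult_index - 1        #  0.6
        print(f"NNS is {relative_increase:.0%} greater when tracing from a child index case")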

  8. The relationship between erythrocyte membrane fatty acid levels and cardiac autonomic function in obese children.

    PubMed

    Mustafa, Gulgun; Kursat, Fidanci Muzaffer; Ahmet, Tas; Alparslan, Genc Fatih; Omer, Gunes; Sertoglu, Erdem; Erkan, Sarı; Ediz, Yesilkaya; Turker, Turker; Ayhan, Kılıc

    Childhood obesity is a worldwide health concern. Studies have shown autonomic dysfunction in obese children. The exact mechanism of this dysfunction is still unknown. The aim of this study was to assess the relationship between erythrocyte membrane fatty acid (EMFA) levels and cardiac autonomic function in obese children using heart rate variability (HRV). A total of 48 obese and 32 healthy children were included in this case-control study. Anthropometric and biochemical data, HRV indices, and EMFA levels in both groups were compared statistically. HRV parameters including standard deviation of normal-to-normal R-R intervals (NN), root mean square of successive differences, the number of pairs of successive NNs that differ by >50 ms (NN50), the proportion of NN50 divided by the total number of NNs, high-frequency power, and low-frequency power were lower in obese children compared to controls, implying parasympathetic impairment. Eicosapentaenoic acid and docosahexaenoic acid levels were lower in the obese group (p<0.001 and p=0.012, respectively). In correlation analysis, in the obese group, body mass index standard deviation and linoleic acid, arachidonic acid, triglycerides, and high-density lipoprotein levels showed a linear correlation with one or more HRV parameter, and age, eicosapentaenoic acid, and systolic and diastolic blood pressure correlated with mean heart rate. In linear regression analysis, age, dihomo-gamma-linolenic acid, linoleic acid, arachidonic acid, body mass index standard deviation, systolic blood pressure, triglycerides, low-density lipoprotein and high-density lipoprotein were related to HRV parameters, implying an effect on cardiac autonomic function. There is impairment of cardiac autonomic function in obese children. It appears that levels of EMFAs such as linoleic acid, arachidonic acid and dihomo-gamma-linolenic acid play a role in the regulation of cardiac autonomic function in obese children. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
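
    A minimal sketch of the time-domain heart rate variability indices named above, computed from a series of normal-to-normal (NN) intervals, is given below. The interval values are hypothetical, and pNN50 follows the definition quoted in the abstract (NN50 divided by the total number of NNs).

        import numpy as np

        nn = np.array([812, 790, 805, 828, 760, 795, 840, 810], dtype=float)  # NN intervals, ms
        diffs = np.diff(nn)

        sdnn = nn.std(ddof=1)                    # standard deviation of NN intervals
        rmssd = np.sqrt(np.mean(diffs ** 2))     # root mean square of successive differences
        nn50 = int(np.sum(np.abs(diffs) > 50))   # successive differences greater than 50 ms
        pnn50 = nn50 / len(nn)                   # proportion relative to the total number of NNs
        mean_hr = 60000.0 / nn.mean()            # mean heart rate, beats per minute

        print(sdnn, rmssd, nn50, pnn50, mean_hr)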

  9. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). The accuracy of QLSM methods differs depending on the type of landslides, the nature of the triggers, and the LIF. Moreover, how to balance the number of 0s (nonoccurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been widely investigated in the literature, the challenge of training set construction has not been adequately investigated. To tackle this challenge, this study uses three different training set selection strategies, along with the original data set, to test the performance of three regression methods: Logistic Regression (LR), Bayesian Logistic Regression (BLR), and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which applies a weighted selection of landslide occurrences to the sample set. The second, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to capture the impact of clustering. The third, selective nearby sampling (SNS), concentrates on the group of 1s and their surrounding neighborhood; a randomly selected group of landslide sites and their neighborhood are considered in the analyses, with parameters similar to NNS. It is found that the LR-PRS, FLR-PRS, and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in regression-based QLSM, avoiding spatial correlation in the data set is critical for model performance.
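
    A minimal sketch of building a training set by proportional random sampling and fitting a logistic-regression susceptibility model is given below. The synthetic data, the equal-sized draw from each class (one simple case of weighted selection), and the feature construction are illustrative assumptions, not the study's setup.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def proportional_random_sample(X, y, n_per_class, rng):
            """Draw n_per_class cells at random from the 1s (landslide) and the 0s (stable)."""
            idx = np.concatenate([
                rng.choice(np.flatnonzero(y == label), size=n_per_class, replace=False)
                for label in (0, 1)])
            return X[idx], y[idx]

        rng = np.random.default_rng(42)
        # X: rows of landslide-influencing factors; y: 1 = occurrence, 0 = nonoccurrence.
        X = rng.normal(size=(5000, 4))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 1.5).astype(int)

        X_tr, y_tr = proportional_random_sample(X, y, n_per_class=200, rng=rng)
        model = LogisticRegression().fit(X_tr, y_tr)
        print("training accuracy:", model.score(X_tr, y_tr))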

  10. Quantifying the Causes of Differences in Tropospheric OH Within Global Models

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Salawitch, Ross J.; Canty, Timothy; Anderson, Daniel C.; Arnold, Steve R.; Chipperfield, Martyn P.; Emmons, Louisa K.; Flemming, Johannes; Huijnen, Vincent; Kinnison, Douglas E.

    2017-01-01

    The hydroxyl radical (OH) is the primary daytime oxidant in the troposphere and provides the main loss mechanism for many pollutants and greenhouse gases, including methane (CH4). Global mean tropospheric OH differs by as much as 80% among various global models, for reasons that are not well understood. We use neural networks (NNs), trained using archived output from eight chemical transport models (CTMs) that participated in the Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, of Climate, Chemistry, Aerosols and Transport Model Intercomparison Project (POLMIP), to quantify the factors responsible for differences in tropospheric OH and resulting CH4 lifetime (τCH4) between these models. Annual average τCH4, for loss by OH only, ranges from 8.0 to 11.6 years for the eight POLMIP CTMs. The factors driving these differences were quantified by inputting 3-D chemical fields from one CTM into the trained NN of another CTM. Across all CTMs, the largest mean differences in τCH4 (ΔτCH4) result from variations in chemical mechanisms (ΔτCH4 = 0.46 years), the photolysis frequency J(O3 → O(1D)) (0.31 years), local O3 (0.30 years), and CO (0.23 years). The ΔτCH4 due to CTM differences in NOx (NO + NO2) is relatively low (0.17 years), although large regional variation in OH between the CTMs is attributed to NOx. Differences in isoprene and J(NO2) have negligible overall effect on globally averaged tropospheric OH, although the extent of OH variations due to each factor depends on the model being examined. This study demonstrates that NNs can serve as a useful tool for quantifying why tropospheric OH varies between global models, provided that essential chemical fields are archived.

  11. Groundwater phosphorus in forage-based landscape with cow-calf operation.

    PubMed

    Sigua, Gilbert C; Chase, Chad C

    2014-02-01

    Forage-based cow-calf operations may have detrimental impacts on the chemical status of groundwater and streams and consequently on the ecological and environmental status of surrounding ecosystems. Assessing and controlling phosphorus (P) inputs are, thus, considered the key to reducing eutrophication and managing ecological integrity. In this paper, we monitored and evaluated P concentrations of groundwater (GW) compared to the concentration of surface water (SW) P in forage-based landscape with managed cow-calf operations for 3 years (2007-2009). Groundwater samples were collected from three landscape locations along the slope gradient (GW1 10-30% slope, GW2 5-10% slope, and GW3 0-5% slope). Surface water samples were collected from the seepage area (SW 0% slope) located at the bottom of the landscape. Of the total P collected (averaged across year) in the landscape, 62.64% was observed from the seepage area or SW compared with 37.36% from GW (GW1 = 8.01%; GW2 = 10.92%; GW3 = 18.43%). Phosphorus in GW ranged from 0.02 to 0.20 mg L(-1) while P concentration in SW ranged from 0.25 to 0.71 mg L(-1). The 3-year average of P in GW of 0.09 mg L(-1) was lower than the recommended goal or Florida's numeric nutrient standards (NNS) of 0.12 mg P L(-1). The 3-year average of P concentration in SW of 0.45 mg L(-1) was about fourfold higher than Florida's NNS value. Results suggest that cow-calf operation in pasture-based landscape would contribute more P to SW than to GW. The risk of GW contamination by P from animal agriculture production systems is limited, while the solid forms of P subject to loss via soil erosion could be the major water quality risk from P.

  12. Sequence analysis of the L protein of the Ebola 2014 outbreak: Insight into conserved regions and mutations.

    PubMed

    Ayub, Gohar; Waheed, Yasir

    2016-06-01

    The 2014 Ebola outbreak was one of the largest that have occurred; it started in Guinea and spread to Nigeria, Liberia and Sierra Leone. Phylogenetic analysis of the current virus species indicated that this outbreak is the result of a divergent lineage of the Zaire ebolavirus. The L protein of Ebola virus (EBOV) is the catalytic subunit of the RNA-dependent RNA polymerase complex, which, with VP35, is key for the replication and transcription of viral RNA. Earlier sequence analysis demonstrated that the L protein of all non-segmented negative-sense (NNS) RNA viruses consists of six domains containing conserved functional motifs. The aim of the present study was to analyze the presence of these motifs in 2014 EBOV isolates, highlight their function and how they may contribute to the overall pathogenicity of the isolates. For this purpose, 81 2014 EBOV L protein sequences were aligned with 475 other NNS RNA viruses, including Paramyxoviridae and Rhabdoviridae viruses. Phylogenetic analysis of all EBOV outbreak L protein sequences was also performed. Analysis of the amino acid substitutions in the 2014 EBOV outbreak was conducted using sequence analysis. The alignment demonstrated the presence of previously conserved motifs in the 2014 EBOV isolates and novel residues. Notably, not all of the mutations identified in the 2014 EBOV isolates were tolerated; some were pathogenic, with certain examples occurring within previously determined functional conserved motifs, possibly altering viral pathogenicity, replication and virulence. The phylogenetic analysis demonstrated that all sequences, with the exception of the 2014 EBOV sequences, clustered together. The 2014 EBOV outbreak has acquired a great number of mutations, which may explain the reasons behind this unprecedented outbreak. Certain residues critical to the function of the polymerase remain conserved and may be targets for the development of antiviral therapeutic agents.
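
    As a simplified illustration of how conserved motifs are located once sequences are aligned, the short sketch below scans a toy alignment for columns that are identical across all sequences; the sequences are fabricated, whereas the study worked with full-length L protein alignments of EBOV and other NNS RNA viruses.

      # Toy example: find runs of fully conserved columns in a protein alignment.
      # The aligned sequences below are fabricated; real input would be the aligned
      # L protein sequences (e.g., read from a FASTA/Clustal alignment file).
      aligned = [
          "GDNQ-AIKRLLT",
          "GDNQ-SIKRMLT",
          "GDNQ-AIKRILT",
      ]

      length = len(aligned[0])
      conserved = [all(seq[i] == aligned[0][i] for seq in aligned) for i in range(length)]

      # Report maximal stretches of conserved positions (candidate functional motifs).
      start = None
      for i, flag in enumerate(conserved + [False]):      # sentinel flushes the last run
          if flag and start is None:
              start = i
          elif not flag and start is not None:
              print(f"conserved block at columns {start}-{i - 1}: {aligned[0][start:i]}")
              start = None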

  13. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.
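
    The full decentralized observer-based design is involved, but its core ingredient, an NN that adapts online to approximate an unknown nonlinearity inside a feedback law, can be sketched for a single first-order loop as below; the plant, RBF network size, and gains are invented for illustration and this is not the paper's controller.

      # Minimal adaptive RBF-NN control sketch (illustrative only, not the paper's FTC scheme).
      # Plant: x' = f(x) + u with f unknown to the controller; the RBF network W^T phi(x)
      # learns f online while a simple feedback law tracks a sinusoidal reference.
      import numpy as np

      def f_true(x):                           # unknown nonlinearity (hidden from the controller)
          return -x**3 + np.sin(2 * x)

      centers = np.linspace(-2.0, 2.0, 9)      # RBF centers
      width = 0.5
      phi = lambda x: np.exp(-((x - centers) ** 2) / (2 * width**2))

      dt, T = 1e-3, 20.0
      k, gamma = 5.0, 50.0                     # feedback gain, adaptation gain
      x, W = 0.0, np.zeros_like(centers)

      for step in range(int(T / dt)):
          t = step * dt
          r, r_dot = np.sin(t), np.cos(t)      # reference and its derivative
          e = x - r
          u = r_dot - k * e - W @ phi(x)       # feedback plus NN compensation
          W = W + dt * gamma * phi(x) * e      # gradient-type adaptive law
          x = x + dt * (f_true(x) + u)         # Euler step of the plant

      print(f"final tracking error: {abs(x - np.sin(T)):.3e}")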

  14. Neutron spectrometry and dosimetry study at two research nuclear reactors using Bonner sphere spectrometer (BSS), rotational spectrometer (ROSPEC) and cylindrical nested neutron spectrometer (NNS).

    PubMed

    Atanackovic, J; Matysiak, W; Hakmana Witharana, S S; Aslam, I; Dubeau, J; Waker, A J

    2013-01-01

    Neutron spectrometry and subsequent dosimetry measurements were undertaken at the McMaster Nuclear Reactor (MNR) and AECL Chalk River National Research Universal (NRU) Reactor. The instruments used were a Bonner sphere spectrometer (BSS), a cylindrical nested neutron spectrometer (NNS) and a commercially available rotational proton recoil spectrometer. The purposes of these measurements were to: (1) compare the results obtained by three different neutron measuring instruments and (2) quantify neutron fields of interest. The results showed vastly different neutron spectral shapes for the two different reactors. This is not surprising, considering the type of the reactors and the locations where the measurements were performed. MNR is a heavily shielded light water moderated reactor, while NRU is a heavy water moderated reactor. The measurements at MNR were taken at the base of the reactor pool, where a large amount of water and concrete shielding is present, while measurements at NRU were taken at the top of the reactor (TOR) plate, where there is only heavy water and steel between the reactor core and the measuring instrument. As a result, a large component of the thermal neutron fluence was measured at MNR, while a negligible amount of thermal neutrons was measured at NRU. The neutron ambient dose rates at NRU TOR were measured to be between 0.03 and 0.06 mSv h⁻¹, while at MNR, these values were between 0.07 and 2.8 mSv h⁻¹ inside the beam port and <0.2 mSv h⁻¹ between two operating beam ports. The conservative uncertainty of these values is 15 %. The conservative uncertainty of the measured integral neutron fluence is 5 %. It was also found that BSS over-responded slightly due to a non-calibrated response matrix.

  15. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  16. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients.

    PubMed

    Taylor, Michael J; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2017-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study involved implementing a questionnaire investigating attitudes and access to computer technologies of respiratory outpatients, in order to assess potential for use of virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a Chest Clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore using virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility.

  17. Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.

    PubMed

    Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton

    2007-09-15

    Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples from near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at an electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline at an automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.

  18. Virtual worlds to support patient group communication? A questionnaire study investigating potential for virtual world focus group use by respiratory patients

    PubMed Central

    Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah

    2015-01-01

    Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study involved implementing a questionnaire investigating attitudes and access to computer technologies of respiratory outpatients, in order to assess potential for use of virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a Chest Clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore using virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility. PMID:28239187

  19. Providing security for automated process control systems at hydropower engineering facilities

    NASA Astrophysics Data System (ADS)

    Vasiliev, Y. S.; Zegzhda, P. D.; Zegzhda, D. P.

    2016-12-01

    This article suggests the concept of a cyberphysical system to manage computer security of automated process control systems at hydropower engineering facilities. According to the authors, this system consists of a set of information processing tools and computer-controlled physical devices. Examples of cyber attacks on power engineering facilities are provided, and a strategy for improving the cybersecurity of hydropower engineering systems is suggested. The architecture of the multilevel protection of the automated process control system (APCS) of power engineering facilities is given, including security systems, control systems, access control, encryption, and a secure virtual private network of subsystems for monitoring and analysis of security events. The distinctive aspect of the approach is its consideration of the interrelations and cyber threats that arise when SCADA is integrated with the unified enterprise information system.

  20. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space-vehicle- and space-program-agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The IT industry has been trending toward a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, and expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).

  1. IMPLEMENTATION OF USEPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL (MFFP2T) - 2003

    EPA Science Inventory

    To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rate of solid, liquid waste generation and air emissions. This progr...

  2. An inventory of aeronautical ground research facilities. Volume 4: Engineering flight simulation facilities

    NASA Technical Reports Server (NTRS)

    Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.

    1971-01-01

    The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.

  3. An Electronic Pressure Profile Display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
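
    As a loose illustration of the conversion-and-display step described above, the sketch below turns raw transducer counts into engineering units with a linear calibration and prints a text bar graph; the calibration constants, channel count, and data values are hypothetical, since the DPT message format is not given in the abstract.

      # Illustrative conversion of raw pressure-transducer counts to engineering units
      # and a crude bar-graph display. Calibration and frame format are hypothetical.
      RAW_FULL_SCALE = 4095          # e.g., 12-bit counts (assumed)
      PSI_FULL_SCALE = 50.0          # assumed transducer range in psi

      def counts_to_psi(counts: int) -> float:
          """Linear calibration from raw counts to psi."""
          return counts / RAW_FULL_SCALE * PSI_FULL_SCALE

      def show(channels: list[int]) -> None:
          for ch, counts in enumerate(channels):
              psi = counts_to_psi(counts)
              bar = "#" * int(psi / PSI_FULL_SCALE * 40)   # 40-character bar
              print(f"ch{ch:02d} {psi:6.2f} psi |{bar}")

      # One simulated frame of 8 channels (a real system would read these over serial).
      show([512, 1024, 2048, 3000, 4095, 100, 0, 1500])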

  4. An electronic pressure profile display system for aeronautic test facilities

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.

    1990-01-01

    The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.

  5. ComputerTown: A Do-It-Yourself Community Computer Project. [Computer Town, USA and Other Microcomputer Based Alternatives to Traditional Learning Environments].

    ERIC Educational Resources Information Center

    Zamora, Ramon M.

    Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…

  6. Race, Wealth, and Solid Waste Facilities in North Carolina

    PubMed Central

    Norton, Jennifer M.; Wing, Steve; Lipscomb, Hester J.; Kaufman, Jay S.; Marshall, Stephen W.; Cravey, Altha J.

    2007-01-01

    Background Concern has been expressed in North Carolina that solid waste facilities may be disproportionately located in poor communities and in communities of color, that this represents an environmental injustice, and that solid waste facilities negatively impact the health of host communities. Objective Our goal in this study was to conduct a statewide analysis of the location of solid waste facilities in relation to community race and wealth. Methods We used census block groups to obtain racial and economic characteristics, and information on solid waste facilities was abstracted from solid waste facility permit records. We used logistic regression to compute prevalence odds ratios for 2003, and Cox regression to compute hazard ratios of facilities issued permits between 1990 and 2003. Results The adjusted prevalence odds of a solid waste facility was 2.8 times greater in block groups with ≥50% people of color compared with block groups with < 10% people of color, and 1.5 times greater in block groups with median house values < $60,000 compared with block groups with median house values ≥$100,000. Among block groups that did not have a previously permitted solid waste facility, the adjusted hazard of a new permitted facility was 2.7 times higher in block groups with ≥50% people of color compared with block groups with < 10% people of color. Conclusion Solid waste facilities present numerous public health concerns. In North Carolina solid waste facilities are disproportionately located in communities of color and low wealth. In the absence of action to promote environmental justice, the continued need for new facilities could exacerbate this environmental injustice. PMID:17805426
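
    For readers unfamiliar with the prevalence odds ratios reported here, the sketch below computes a crude (unadjusted) odds ratio and its confidence interval from a hypothetical 2x2 table of block groups; the counts are invented, and the study's estimates were additionally adjusted using logistic and Cox regression.

      # Crude prevalence odds ratio from a hypothetical 2x2 table of census block groups.
      # Rows: >=50% people of color vs <10%; columns: has facility / no facility.
      import math

      a, b = 30, 170     # >=50% people of color: with facility, without facility (hypothetical)
      c, d = 25, 475     # <10% people of color:  with facility, without facility (hypothetical)

      odds_ratio = (a / b) / (c / d)
      se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
      ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
      ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
      print(f"crude OR = {odds_ratio:.2f}  (95% CI {ci_low:.2f}-{ci_high:.2f})")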

  7. Bi-periodicity evoked by periodic external inputs in delayed Cohen-Grossberg-type bidirectional associative memory networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Wang, Yanyan

    2010-05-01

    In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
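
    For context, a delayed Cohen-Grossberg-type BAM network of the kind studied here is commonly written in a form similar to the following generic system (standard textbook notation, not necessarily the exact model of this paper):

      \begin{aligned}
      \dot{x}_i(t) &= -a_i(x_i(t))\Big[b_i(x_i(t)) - \sum_{j=1}^{m} c_{ji}\, f_j\big(y_j(t-\tau_{ji}(t))\big) - I_i(t)\Big], \quad i = 1,\dots,n,\\
      \dot{y}_j(t) &= -\bar{a}_j(y_j(t))\Big[\bar{b}_j(y_j(t)) - \sum_{i=1}^{n} d_{ij}\, g_i\big(x_i(t-\sigma_{ij}(t))\big) - J_j(t)\Big], \quad j = 1,\dots,m,
      \end{aligned}

    where the a and b terms are the amplification and behaved functions of the two layers, f_j and g_i are the activation functions, τ and σ are time-varying delays, and I_i(t) and J_j(t) are the periodic external inputs whose role in evoking bi-periodicity is analyzed.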

  8. How Non-nutritive Sweeteners Influence Hormones and Health.

    PubMed

    Rother, Kristina I; Conway, Ellen M; Sylvetsky, Allison C

    2018-07-01

    Non-nutritive sweeteners (NNSs) elicit a multitude of endocrine effects in vitro, in animal models, and in humans. The best-characterized consequences of NNS exposure are metabolic changes, which may be mediated by activation of sweet taste receptors in oral and extraoral tissues (e.g., intestine, pancreatic β cells, and brain), and alterations of the gut microbiome. These mechanisms are likely synergistic and may differ across species and chemically distinct NNSs. However, the extent to which these hormonal effects are clinically relevant in the context of human consumption is unclear. Further investigation following prolonged exposure is required to better understand the role of NNSs in human health, with careful consideration of genetic, dietary, anthropometric, and other interindividual differences. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Antibody VH and VL recombination using phage and ribosome display technologies reveals distinct structural routes to affinity improvements with VH-VL interface residues providing important structural diversity

    PubMed Central

    Groves, Maria AT; Amanuel, Lily; Campbell, Jamie I; Rees, D Gareth; Sridharan, Sudharsan; Finch, Donna K; Lowe, David C; Vaughan, Tristan J

    2014-01-01

    In vitro selection technologies are an important means of affinity maturing antibodies to generate the optimal therapeutic profile for a particular disease target. Here, we describe the isolation of a parent antibody, KENB061, using phage display and solution phase selections with soluble biotinylated human IL-1R1. KENB061 was affinity matured using phage display and targeted mutagenesis of VH and VL CDR3 using NNS randomization. Affinity matured VHCDR3 and VLCDR3 library blocks were recombined and selected using phage and ribosome display protocols. A direct comparison of the phage and ribosome display antibodies generated was made to determine their functional characteristics. PMID:24256948
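
    To make the NNS randomization scheme concrete, the sketch below enumerates the 32 NNS codons (any base at the first two positions, G or C at the third) and tallies the amino acids they encode; it assumes Biopython is available for standard-table translation.

      # Enumerate NNS codons (N = A/C/G/T, S = G/C) and the amino acids they encode.
      # Requires Biopython for the standard-table translation (pip install biopython).
      from collections import Counter
      from itertools import product
      from Bio.Seq import Seq

      codons = ["".join(p) + s for p in product("ACGT", repeat=2) for s in "GC"]
      amino_acids = Counter(str(Seq(c).translate()) for c in codons)

      print(f"{len(codons)} NNS codons")                                         # 32
      print(f"{len(amino_acids) - ('*' in amino_acids)} distinct amino acids")   # all 20
      print("stop codons:", amino_acids.get("*", 0))                             # 1 (TAG)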

  10. Space technology test facilities at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Rodrigues, Annette T.

    1990-01-01

    The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and the Space Exploration Test Facilities. The paper discusses selected facilities within each of the five categories and discusses some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.

  11. NASA low-speed centrifugal compressor for 3-D viscous code assessment and fundamental flow physics research

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.

    1991-01-01

    A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.

  12. Computers in Schools: White Boys Only?

    ERIC Educational Resources Information Center

    Hammett, Roberta F.

    1997-01-01

    Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)

  13. 20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  14. Launch Site Computer Simulation and its Application to Processes

    NASA Technical Reports Server (NTRS)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
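
    As a toy illustration of this kind of discrete-event process model, the sketch below uses the SimPy library to push a few vehicles through a shared processing facility and a single launch pad; all durations, resource counts, and names are invented and are unrelated to the actual STS Processing Model.

      # Toy discrete-event model of vehicle processing (illustrative; not the STS model).
      # Requires the SimPy package (pip install simpy). Times are in days.
      import simpy

      PROCESSING_DAYS, PAD_DAYS = 30, 10      # hypothetical durations

      def orbiter(env, name, opf, pad):
          """One vehicle flows through the processing facility, then the launch pad."""
          with opf.request() as slot:
              yield slot
              yield env.timeout(PROCESSING_DAYS)
          with pad.request() as slot:
              yield slot
              yield env.timeout(PAD_DAYS)
          print(f"{name} launched on day {env.now:.0f}")

      env = simpy.Environment()
      opf = simpy.Resource(env, capacity=2)   # two processing bays (assumed)
      pad = simpy.Resource(env, capacity=1)   # single launch pad (assumed)

      for i in range(4):
          env.process(orbiter(env, f"OV-{i}", opf, pad))

      env.run()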

  15. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  16. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  17. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  18. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...

  19. Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Marine Barracks, Intersection of Tower Drive & Morse Street, Makaha, Honolulu County, HI

  20. Logistics in the Computer Lab.

    ERIC Educational Resources Information Center

    Cowles, Jim

    1989-01-01

    Discusses ways to provide good computer laboratory facilities for elementary and secondary schools. Topics discussed include establishing the computer lab and selecting hardware; types of software; physical layout of the room; printers; networking possibilities; considerations relating to the physical environment; and scheduling methods. (LRW)

  1. Computer-Aided Engineering Education at the K.U. Leuven.

    ERIC Educational Resources Information Center

    Snoeys, R.; Gobin, R.

    1987-01-01

    Describes some recent initiatives and developments in the computer-aided design program in the engineering faculty of the Katholieke Universiteit Leuven (Belgium). Provides a survey of the engineering curriculum, the computer facilities, and the main software packages available. (TW)

  2. 76 FR 59803 - Children's Online Privacy Protection Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...

  3. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  4. Influence of Computer-Aided Detection on Performance of Screening Mammography

    PubMed Central

    Fenton, Joshua J.; Taplin, Stephen H.; Carney, Patricia A.; Abraham, Linn; Sickles, Edward A.; D'Orsi, Carl; Berns, Eric A.; Cutter, Gary; Hendrick, R. Edward; Barlow, William E.; Elmore, Joann G.

    2011-01-01

    Background Computer-aided detection identifies suspicious findings on mammograms to assist radiologists. Since the Food and Drug Administration approved the technology in 1998, it has been disseminated into practice, but its effect on the accuracy of interpretation is unclear. Methods We determined the association between the use of computer-aided detection at mammography facilities and the performance of screening mammography from 1998 through 2002 at 43 facilities in three states. We had complete data for 222,135 women (a total of 429,345 mammograms), including 2351 women who received a diagnosis of breast cancer within 1 year after screening. We calculated the specificity, sensitivity, and positive predictive value of screening mammography with and without computer-aided detection, as well as the rates of biopsy and breast-cancer detection and the overall accuracy, measured as the area under the receiver-operating-characteristic (ROC) curve. Results Seven facilities (16%) implemented computer-aided detection during the study period. Diagnostic specificity decreased from 90.2% before implementation to 87.2% after implementation (P<0.001), the positive predictive value decreased from 4.1% to 3.2% (P = 0.01), and the rate of biopsy increased by 19.7% (P<0.001). The increase in sensitivity from 80.4% before implementation of computer-aided detection to 84.0% after implementation was not significant (P = 0.32). The change in the cancer-detection rate (including invasive breast cancers and ductal carcinomas in situ) was not significant (4.15 cases per 1000 screening mammograms before implementation and 4.20 cases after implementation, P = 0.90). Analyses of data from all 43 facilities showed that the use of computer-aided detection was associated with significantly lower overall accuracy than was nonuse (area under the ROC curve, 0.871 vs. 0.919; P = 0.005). Conclusions The use of computer-aided detection is associated with reduced accuracy of interpretation of screening mammograms. The increased rate of biopsy with the use of computer-aided detection is not clearly associated with improved detection of invasive breast cancer. PMID:17409321
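
    For reference, the performance measures reported above follow directly from the four cells of a screening outcome table; the sketch below computes them for invented counts (the study's actual counts are not reproduced here).

      # Screening-mammography performance measures from hypothetical outcome counts.
      tp, fn = 84, 16        # cancers detected / missed at screening (invented numbers)
      fp, tn = 1280, 8620    # false alarms / correct negatives (invented numbers)

      sensitivity = tp / (tp + fn)               # probability a cancer is flagged
      specificity = tn / (tn + fp)               # probability a non-cancer is not flagged
      ppv         = tp / (tp + fp)               # chance a positive screen is a cancer
      recall_rate = (tp + fp) / (tp + fp + tn + fn)

      print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
            f"PPV {ppv:.1%}, recall rate {recall_rate:.1%}")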

  5. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude greater than what is available currently, and in some cases more. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  6. How Data Becomes Physics: Inside the RACF

    ScienceCinema

    Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris

    2018-06-22

    The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.

  7. Real-Gas Flow Properties for NASA Langley Research Center Aerothermodynamic Facilities Complex Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamic Facilities Complex wind tunnels.
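
    A multiple-coefficient virial equation of state of the general kind referred to above expresses the departure from ideal-gas behavior as a power series in density; one common generic form (not necessarily the specific coefficient set used in the facility code) is

      p = \rho R T \left( 1 + B(T)\,\rho + C(T)\,\rho^{2} + D(T)\,\rho^{3} + \cdots \right)

    where B(T), C(T), and D(T) are temperature-dependent virial coefficients fitted for each test gas, and the isentropic-flow assumption then fixes the state along the tunnel once the stagnation conditions are specified.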

  8. Core commands across airway facilities systems.

    DOT National Transportation Integrated Search

    2003-05-01

    This study takes a high-level approach to evaluate computer systems without regard to the specific method of interaction. This document analyzes the commands that Airway Facilities (AF) use across different systems and the meanings attributed to ...

  9. Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, AD; Page, Christina; Lytle, Bob

    The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.

  10. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  11. ARC-1980-AC80-0512-2

    NASA Image and Video Library

    1980-06-05

    N-231 High Reynolds Number Channel Facility (an example of a versatile wind tunnel). Tunnel 1 is a blowdown facility that utilizes interchangeable test sections and nozzles. The facility provides experimental support for fluid mechanics research, including experimental verification of aerodynamic computer codes and boundary-layer and airfoil studies that require high Reynolds number simulation. (Tunnel 1)

  12. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various design variants of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler at the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes makes co-combustion of the char gas necessary. A burner facility of the required capacity therefore had to be developed that burns char gas effectively while meeting reliability and environmental requirements. For this purpose, the burner design was based on staged combustion of the fuel with gas recirculation. A preliminary analysis of possible design variants led to the selection of three types of previously proven burner facilities: a vortex burner with the supply of recirculation gases into the secondary air, a vortex burner with a baffled supply of recirculation gases between the primary and secondary air flows, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, carried out with the ANSYS CFX computational fluid dynamics package, simulated the mixing, ignition, and burning of the char gas. For every type of burner facility, the numerical experiments determined the structural and operating parameters that provide effective char gas burning and meet the required environmental standard on nitrogen oxide emissions. Based on the computational results, a burner facility for char gas burning with a pilot diffusion burner in the central part was developed and manufactured. Preliminary full-scale verification tests on the TP-101 boiler showed that the actual content of nitrogen oxides in the char gas burner flames did not exceed the declared concentration of 150 ppm (200 mg/m3).

  13. Evaluation of Visual Computer Simulator for Computer Architecture Education

    ERIC Educational Resources Information Center

    Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio

    2013-01-01

    This paper presents a trial evaluation, conducted in 2009-2011, of a visual computer simulator that has been developed to serve as both an instruction facility and a learning tool. It also illustrates an example of Computer Architecture education for university students and the usage of an e-Learning tool for Assembly Programming in order to…

  14. Hybrid Computation at Louisiana State University.

    ERIC Educational Resources Information Center

    Corripio, Armando B.

    Hybrid computation facilities have been in operation at Louisiana State University since the spring of 1969. In part, they consist of an Electronics Associates, Inc. (EAI) Model 680 analog computer, an EAI Model 693 interface, and a Xerox Data Systems (XDS) Sigma 5 digital computer. The hybrid laboratory is used in a course on hybrid computation…

  15. Computer Augmented Video Education.

    ERIC Educational Resources Information Center

    Sousa, M. B.

    1979-01-01

    Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)

  16. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  17. Hydrocode simulations of air and water shocks for facility vulnerability assessments.

    PubMed

    Clutter, J Keith; Stahl, Michael

    2004-01-02

    Hydrocodes are widely used in the study of explosive systems, but their use in routine facility vulnerability assessments has been limited due to the computational resources typically required. These requirements are due to the fact that the majority of hydrocodes have been developed primarily for the simulation of weapon-scale phenomena. It is not practical to use these same numerical frameworks on the large domains found in facility vulnerability studies. Here, a hydrocode formulated specifically for facility vulnerability assessments is reviewed. Techniques used to accurately represent the explosive source while maintaining computational efficiency are described. Submodels for addressing other issues found in typical terrorist attack scenarios are presented. In terrorist attack scenarios, loads produced by shocks play an important role in vulnerability. Because of the differences in the material properties of water and air, and because of interface phenomena, wave propagation differs significantly between the two media. These physical variations also require that special attention be paid to the mathematical and numerical models used in the hydrocodes. Simulations for a variety of air and water shock scenarios are presented to validate the computational models used in the hydrocode and highlight the phenomenological issues.
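
    The air/water contrast noted above is visible even in a linear-acoustics idealization; the sketch below compares the characteristic impedances of the two media and the pressure reflection/transmission coefficients at a plane air-water interface (an idealized illustration only, not the hydrocode's nonlinear treatment).

      # Linear-acoustics illustration of the air/water contrast at a plane interface.
      # Nominal densities (kg/m^3) and sound speeds (m/s); an idealization, not the hydrocode model.
      rho_air,   c_air   = 1.2,    340.0
      rho_water, c_water = 1000.0, 1480.0

      z_air   = rho_air * c_air          # characteristic acoustic impedance (Pa*s/m)
      z_water = rho_water * c_water

      # Pressure reflection/transmission coefficients for a wave in air hitting water.
      r = (z_water - z_air) / (z_water + z_air)
      t = 2 * z_water / (z_water + z_air)

      print(f"Z_air = {z_air:.0f}, Z_water = {z_water:.0f} Pa*s/m")
      print(f"air->water: reflection {r:.3f}, transmission {t:.3f} (pressure amplitudes)")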

  18. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra high-bandwidth networks, pervasive wireless communication; knowledge based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The Virtual product creation environment will significantly enhance the productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  19. Facilities | Computational Science | NREL

    Science.gov Websites

    Accelerates technology innovation by providing scientists and engineers the ability to tackle energy challenges, and helps scientists and engineers take full advantage of advanced computing hardware and software resources.

  20. Sandia National Laboratories: Locations: Kauai Test Facility

    Science.gov Websites


  1. Human use regulatory affairs advisor (HURAA): learning about research ethics with intelligent learning modules.

    PubMed

    Hu, Xiangen; Graesser, Arthur C

    2004-05-01

    The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.

  2. Oak Ridge Leadership Computing Facility Position Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  3. Astronaut Thomas Jones anchored to bunk facility while working on computer

    NASA Image and Video Library

    1994-04-14

    STS059-10-011 (9-20 April 1994) --- Astronaut Thomas D. Jones appears to have climbed out of bed right into his work in this onboard 35mm frame. Actually, Jones had anchored himself in the bunk facility while working on one of the onboard computers, which transferred data to the ground via modem. The mission specialist was joined in space by five other NASA astronauts for a week and a half of support to the Space Radar Laboratory (SRL-1)/STS-59 mission.

  4. 77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    .... Facilities update. ESnet-5. Early Career technical talks. Co-design. Innovative and Novel Computational Impact on Theory and Experiment (INCITE). Public Comment (10-minute rule). Public Participation: The...

  5. Argonne's Magellan Cloud Computing Research Project

    ScienceCinema

    Beckman, Pete

    2017-12-11

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  6. Argonne's Magellan Cloud Computing Research Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Pete

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  7. A Plan for Community College Instructional Computing.

    ERIC Educational Resources Information Center

    Howard, Alan; And Others

    This document presents a comprehensive plan for future growth in instructional computing in the Washington community colleges. Two chapters define the curriculum objectives and content recommended for instructional courses in the community colleges which require access to computing facilities. The courses described include data processing…

  8. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an iconic-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.
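
    As a rough illustration of the kind of discrete-event model described above, the sketch below simulates a single processing facility serving a stream of jobs and reports utilization and average wait time. The arrival and service parameters are illustrative assumptions, not values from the STS Processing Model or from Extend.

```python
import random

# Minimal single-facility discrete-event simulation: jobs arrive at random,
# wait if the facility is busy, and are served in order. Parameters below
# are illustrative assumptions only.
random.seed(1)

ARRIVAL_MEAN = 5.0    # mean time between job arrivals (assumed)
SERVICE_MEAN = 4.0    # mean service time at the facility (assumed)
N_JOBS = 10_000

clock = 0.0            # time of the current arrival
facility_free_at = 0.0 # time at which the facility next becomes idle
busy_time = 0.0
total_wait = 0.0

for _ in range(N_JOBS):
    clock += random.expovariate(1.0 / ARRIVAL_MEAN)   # next arrival
    start = max(clock, facility_free_at)              # wait if facility busy
    service = random.expovariate(1.0 / SERVICE_MEAN)
    total_wait += start - clock
    busy_time += service
    facility_free_at = start + service

print(f"facility utilization : {busy_time / facility_free_at:.2%}")
print(f"average wait time    : {total_wait / N_JOBS:.2f} time units")
```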

  9. Users Guide for the National Transonic Facility Research Data System

    NASA Technical Reports Server (NTRS)

    Foster, Jean M.; Adcock, Jerry B.

    1996-01-01

    The National Transonic Facility is a complex cryogenic wind tunnel facility. This report briefly describes the facility, the data systems, and the instrumentation used to acquire research data. The computational methods and equations are discussed in detail and many references are listed for those who need additional technical information. This report is intended to be a user's guide, not a programmer's guide; therefore, the data reduction code itself is not documented. The purpose of this report is to assist personnel involved in conducting a test in the National Transonic Facility.

  10. Autonomous Electrothermal Facility for Oil Recovery Intensification Fed by Wind Driven Power Unit

    NASA Astrophysics Data System (ADS)

    Belsky, Aleksey A.; Dobush, Vasiliy S.

    2017-10-01

    This paper describes the structure of an autonomous facility, fed by a wind-driven power unit, for intensification of viscous and heavy crude oil recovery by means of heat impact on the productive strata. A computer-based simulation of this facility was performed, and operational energy characteristics were obtained for its various operating modes. The optimal resistance of the downhole heater's heating element was determined for maximum operating efficiency of the wind power unit.

  11. Signal peptide discrimination and cleavage site identification using SVM and NN.

    PubMed

    Kazemian, H B; Yusuf, S A; White, K

    2014-02-01

    About 15% of all proteins in a genome contain a signal peptide (SP) sequence, at the N-terminus, that targets the protein to intracellular secretory pathways. Once the protein is targeted correctly in the cell, the SP is cleaved, releasing the mature protein. Accurate prediction of the presence of these short amino-acid SP chains is crucial for modelling the topology of membrane proteins, since SP sequences can be confused with transmembrane domains due to similar composition of hydrophobic amino acids. This paper presents a cascaded Support Vector Machine (SVM)-Neural Network (NN) classification methodology for SP discrimination and cleavage site identification. The proposed method utilises a dual-phase classification approach, using SVM as a primary classifier to discriminate SP sequences from non-SP ones. The methodology further employs NNs to predict the most suitable cleavage site candidates. In phase one, an SVM classification utilises hydrophobic propensities as the primary feature vector, extracted using symmetric sliding-window amino-acid sequence analysis, for discrimination of SP and non-SP sequences. In phase two, an NN classification uses asymmetric sliding-window sequence analysis for prediction of the cleavage site. The proposed SVM-NN method was tested using UniProt non-redundant datasets of eukaryotic and prokaryotic proteins with SP and non-SP N-termini. Computer simulation results demonstrate an overall accuracy of 0.90 for SP and non-SP discrimination based on Matthews Correlation Coefficient (MCC) tests using SVM. For SP cleavage site prediction, the overall accuracy is 91.5% based on cross-validation tests using the novel SVM-NN model. © 2013 Published by Elsevier Ltd.
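
    As a hedged sketch of the phase-one idea only (discriminating SP from non-SP N-termini using hydrophobicity features and an SVM), the snippet below uses scikit-learn; the window length, toy sequences, and SVM settings are illustrative assumptions rather than the paper's datasets or parameters.

```python
from sklearn.svm import SVC

# Approximate Kyte-Doolittle hydrophobicity scale (per-residue values).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

WINDOW = 20   # number of N-terminal residues used as the feature window (assumed)

def features(seq):
    """Hydrophobicity profile of the first WINDOW residues, zero-padded."""
    profile = [KD.get(aa, 0.0) for aa in seq[:WINDOW]]
    return profile + [0.0] * (WINDOW - len(profile))

# Toy training examples: hydrophobic N-termini labelled 1 (SP-like), others 0.
train_seqs = ["MKLLVLLFAALAAASA", "MRALLLLGLLLLPAAQA",
              "MSTNPKPQRKTKRNTN", "MEEPQSDPSVEPPLSQ"]
labels = [1, 1, 0, 0]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit([features(s) for s in train_seqs], labels)

print(clf.predict([features("MKWVTFISLLFLFSSAYS")]))   # classify a query N-terminus
```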

  12. Ten Commandments for Microcomputer Facility Planners.

    ERIC Educational Resources Information Center

    Espinosa, Leonard J.

    1991-01-01

    Presents factors involved in designing a microcomputer facility, including how computers will be used in the instructional program; educational specifications; planning committees; user input; quality of purchases; visual supervision considerations; location; workstation design; turnkey systems; electrical requirements; local area networks;…

  13. Supporting NASA Facilities Through GIS

    NASA Technical Reports Server (NTRS)

    Ingham, Mary E.

    2000-01-01

    The NASA GIS Team supports NASA facilities and partners in the analysis of spatial data. A Geographic Information System (GIS) is an integration of computer hardware, software, and personnel linking topographic, demographic, utility, facility, image, and other geo-referenced data. The system provides a graphic interface to relational databases and supports decision-making processes such as planning, design, maintenance and repair, and emergency response.

  14. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems and the strict requirements on minimizing weight and dimensions, along with the desire to reduce development expenditures, demand experimental work whose results are used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and "iron" substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor were created at the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience should be analyzed in terms of the methodological approach to the development of future space nuclear systems (that analysis is beyond the scope of this paper). Because these facilities are available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  15. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed in which the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously: sample average approximation (SAA) is used to approximate the two-stage stochastic problem via sampling, and a multiple-cut Benders decomposition approach is applied to enhance computational performance. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
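
    The snippet below sketches only the sampling idea behind sample average approximation: estimating the expected second-stage cost of a fixed hub set under random, independent hub failures. The network data, failure probabilities, and recourse rule are illustrative assumptions, not the paper's formulation.

```python
import random

# SAA-style Monte Carlo estimate of expected recourse cost for a fixed hub set.
# All data below are illustrative assumptions.
random.seed(42)

hubs = {"H1": 0.10, "H2": 0.05, "H3": 0.20}       # hub -> failure probability
route_cost = {"H1": 12.0, "H2": 15.0, "H3": 9.0}  # cost of serving a demand via each hub
PENALTY = 40.0                                    # cost if no hub survives (assumed)

def second_stage_cost(surviving):
    # Recourse rule: route through the cheapest surviving hub, else pay a penalty.
    return min((route_cost[h] for h in surviving), default=PENALTY)

def saa_estimate(n_scenarios):
    total = 0.0
    for _ in range(n_scenarios):
        surviving = [h for h, p in hubs.items() if random.random() > p]
        total += second_stage_cost(surviving)
    return total / n_scenarios

for n in (10, 100, 10_000):
    print(f"{n:>6} scenarios -> estimated expected cost {saa_estimate(n):.3f}")
```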

  16. Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds

    DOE PAGES

    Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...

    2016-02-18

    In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.

  17. Expanding Your Laboratory by Accessing Collaboratory Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, David W.; Burton, Sarah D.; Peterson, Michael R.

    2004-03-01

    The Environmental Molecular Sciences Laboratory (EMSL) in Richland, Washington, is the home of a research facility set up by the United States Department of Energy (DOE). The facility is atypical because it houses over 100 cutting-edge research systems for the use of researchers all over the United States and the world. Access to the lab is requested through a peer-review proposal process and the scientists who use the facility are generally referred to as ‘users’. There are six main research facilities housed in EMSL, all of which host visiting researchers. Several of these facilities also participate in the EMSL Collaboratory, a remote access capability supported by EMSL operations funds. Of these, the High-Field Magnetic Resonance Facility (HFMRF) and Molecular Science Computing Facility (MSCF) have a significant number of their users performing remote work. The HFMRF in EMSL currently houses 12 NMR spectrometers that range in magnet field strength from 7.05T to 21.1T. Staff associated with the NMR facility offer scientific expertise in the areas of structural biology, solid-state materials/catalyst characterization, and magnetic resonance imaging (MRI) techniques. The way in which the HFMRF operates, with a high level of dedication to remote operation across the full suite of High-Field NMR spectrometers, has earned it the name “Virtual NMR Facility”. This review will focus on the operational aspects of remote research done in the High-Field Magnetic Resonance Facility and the computer tools that make remote experiments possible.

  18. Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn

    In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.

  19. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  20. The NASA Ames 16-Inch Shock Tunnel Nozzle Simulations and Experimental Comparison

    NASA Technical Reports Server (NTRS)

    Tokarcik-Polsky, S.; Papadopoulos, P.; Venkatapathy, E.; Delwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    The 16-Inch Shock Tunnel at NASA Ames Research Center is a unique test facility used for hypersonic propulsion testing. To provide information necessary to understand the hypersonic testing of the combustor model, computational simulations of the facility nozzle were performed and results are compared with available experimental data, namely static pressure along the nozzle walls and pitot pressure at the exit of the nozzle section. Both quasi-one-dimensional and axisymmetric approaches were used to study the numerous modeling issues involved. The facility nozzle flow was examined for three hypersonic test conditions, and the computational results are presented in detail. The effects of variations in reservoir conditions, boundary layer growth, and parameters of numerical modeling are explored.

  1. Ground Software Maintenance Facility (GSMF) system manual

    NASA Technical Reports Server (NTRS)

    Derrig, D.; Griffith, G.

    1986-01-01

    The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, Perkin Elmer host, and MITRA ATE.

  2. Some Computer-Based Developments in Sociology.

    ERIC Educational Resources Information Center

    Heise, David R.; Simmons, Roberta G.

    1985-01-01

    Discusses several ways in which computers are being used in sociology and how they continue to change this discipline. Areas considered include data collection, data analysis, simulations of social processes based on mathematical models, and problem areas (including standardization concerns, training, and the financing of computing facilities).…

  3. 7 CFR 1739.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... terrestrial technology having the capacity to provide transmission facilities that enable subscribers of the...) Computer Access Points and wireless access, that is used for the purposes of providing free access to and..., and after normal working hours and on Saturdays or Sunday. Computer Access Point means a new computer...

  4. Reliable Facility Location Problem with Facility Protection

    PubMed Central

    Tang, Luohao; Zhu, Cheng; Lin, Zaili; Shi, Jianmai; Zhang, Weiming

    2016-01-01

    This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed. PMID:27583542
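
    The sketch below illustrates, under stated assumptions, the expected-cost logic behind protection and backup assignments: protected facilities are treated as never failing, while unprotected primaries fail with site-specific probabilities and their demands fall back to a backup facility. The data and cost rule are invented for illustration and are not the paper's exact Integer Programming model.

```python
# Expected service cost for one candidate solution (protection set + assignments).
# All numbers are illustrative assumptions.
fail_prob = {"F1": 0.15, "F2": 0.05, "F3": 0.30}   # site-specific failure probabilities
protected = {"F3"}                                 # facilities chosen for protection

# demand -> (primary facility, primary cost, backup facility, backup cost)
assignments = {
    "d1": ("F1", 4.0, "F2", 9.0),
    "d2": ("F3", 3.0, "F1", 7.0),
    "d3": ("F2", 5.0, "F3", 8.0),
}

def expected_cost():
    total = 0.0
    for primary, c_primary, backup, c_backup in assignments.values():
        p_fail = 0.0 if primary in protected else fail_prob[primary]
        total += (1.0 - p_fail) * c_primary + p_fail * c_backup
    return total

print(f"expected service cost: {expected_cost():.2f}")
```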

  5. Rapid prototyping facility for flight research in artificial-intelligence-based flight systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Regenie, V. A.; Deets, D. A.

    1986-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  6. A rapid prototyping facility for flight research in advanced systems concepts

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Brumbaugh, Randal W.; Disbrow, James D.

    1989-01-01

    The Dryden Flight Research Facility of the NASA Ames Research Center is developing a rapid prototyping facility for flight research in flight systems concepts that are based on artificial intelligence (AI). The facility will include real-time high-fidelity aircraft simulators, conventional and symbolic processors, and a high-performance research aircraft specially modified to accept commands from the ground-based AI computers. This facility is being developed as part of the NASA-DARPA automated wingman program. This document discusses the need for flight research and for a national flight research facility for the rapid prototyping of AI-based avionics systems and the NASA response to those needs.

  7. Verification of the causal relationship between subchronic exposures to dinotefuran and depression-related phenotype in juvenile mice.

    PubMed

    Takada, Tadashi; Yoneda, Naoki; Hirano, Tetsushi; Yanai, Shogo; Yamamoto, Anzu; Mantani, Youhei; Yokoyama, Toshifumi; Kitagawa, Hiroshi; Tabuchi, Yoshiaki; Hoshi, Nobuhiko

    2018-04-27

    It has been suggested that an increase in the use of pesticides affects neurodevelopment, but there has been no animal experiment showing a causal relation between neonicotinoid pesticides (NNs) and depression. We examined whether dinotefuran (DIN), the most widely used NN in Japan, induces depression. Male mice were administered DIN between 3 and 8 weeks of age, at doses referring to the no-observed-effect level (NOEL). The mice were then subjected to a tail suspension test (TST) and a forced swimming test (FST). After these tests, their brains were dissected for immunohistochemical analyses of serotonin (5-HT). Antidepressant activity in the TST and no decrease in 5-HT-positive cells were observed. Subchronic exposure to DIN alone in juvenile male mice may thus not cause depression-like indications.

  8. [(S)-1-Carbamoylethyl]bis(dimethylglyoximato-kappa2N,N')[(S)-1-phenylethylamine]cobalt(III) and bis(dimethylglyoximato-kappa2N,N')[(R)-1-(N-methylcarbamoyl)ethyl][(R)-1-phenylethylamine]cobalt(III) monohydrate.

    PubMed

    Orisaku, Keiko Komori; Hagiwara, Mieko; Ohgo, Yoshiki; Arai, Yoshifusa; Ohgo, Yoshiaki

    2005-04-01

    The title complexes, [Co(C3H6NO)(C4H7N2O2)2(C8H11N)] and [Co(C4H8NO)(C4H7N2O2)2(C8H11N)].H2O, were resolved from [(RS)-1-carbamoylethyl]bis(dimethylglyoximato)[(S)-1-phenylethylamine]cobalt(III) and bis(dimethylglyoximato)[(RS)-1-(N-methylcarbamoyl)ethyl][(R)-1-phenylethylamine]cobalt(III), respectively, and their crystal structures were determined in order to reveal the absolute configuration of the major enantiomer produced in the photoisomerization of each series of 2-carbamoylethyl and 2-(N-methylcarbamoyl)ethyl cobaloxime complexes.

  9. Cooperative learning neural network output feedback control of uncertain nonlinear multi-agent systems under directed topologies

    NASA Astrophysics Data System (ADS)

    Wang, W.; Wang, D.; Peng, Z. H.

    2017-09-01

    Without assuming that the communication topologies among the neural network (NN) weights are undirected or that the states of each agent are measurable, cooperative learning NN output feedback control is addressed for uncertain nonlinear multi-agent systems with identical structures in strict-feedback form. By establishing directed communication topologies among NN weights to share their learned knowledge, NNs with cooperative learning laws are employed to identify the uncertainties. By designing NN-based κ-filter observers to estimate the unmeasurable states, a new cooperative learning output feedback control scheme is proposed to guarantee that the system outputs can track nonidentical reference signals with bounded tracking errors. A simulation example is given to demonstrate the effectiveness of the theoretical results.
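
    A minimal numerical sketch of the cooperative-learning idea, assuming a simple directed ring topology and a gradient-plus-consensus weight update; the adjacency matrix, gains, and local gradient are illustrative assumptions and not the paper's adaptation laws or κ-filter observer design.

```python
import numpy as np

# Each agent keeps its own NN weight estimate and blends a local gradient step
# with a consensus term over a directed graph (illustrative assumptions only).
rng = np.random.default_rng(0)
n_agents, n_weights = 4, 8
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)        # directed ring topology (assumed)
W = rng.normal(size=(n_agents, n_weights))       # per-agent NN weight estimates

def local_gradient(i, W_i):
    # Placeholder for agent i's local identification-error gradient (assumed).
    return 0.1 * W_i

eta, gamma = 0.05, 0.2                           # learning rate and consensus gain (assumed)
for _ in range(200):
    W_new = W.copy()
    for i in range(n_agents):
        consensus = sum(A[i, j] * (W[j] - W[i]) for j in range(n_agents))
        W_new[i] = W[i] - eta * local_gradient(i, W[i]) + gamma * consensus
    W = W_new

print("max spread of weight estimates across agents:", np.max(np.ptp(W, axis=0)))
```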

  10. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.

  11. Soviet Cybernetics Review. Volume 2, Number 5,

    DTIC Science & Technology

    prize; Aeroflot’s sirena system turned on; Computer system controls 2500 construction sites; Automation of aircraft languages; Diagnosis by teletype; ALGEM-1 and ALGEM-2 languages; Nuclear institute’s computer facilities.

  12. INTERIOR; VIEW OF ENTRY HALL, LOOKING SOUTH. Naval Computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR; VIEW OF ENTRY HALL, LOOKING SOUTH. - Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Marine Barracks, Intersection of Tower Drive & Morse Street, Makaha, Honolulu County, HI

  13. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    PubMed

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  14. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

  15. Simplifying Facility and Event Scheduling: Saving Time and Money.

    ERIC Educational Resources Information Center

    Raasch, Kevin

    2003-01-01

    Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides examples of school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)

  16. Designing Communication and Learning Environments.

    ERIC Educational Resources Information Center

    Gayeski, Diane M., Ed.

    Designing and remodeling educational facilities are becoming more complex with options that include computer-based collaboration, classrooms with multimedia podiums, conference centers, and workplaces with desktop communication systems. This book provides a collection of articles that address educational facility design categorized in the…

  17. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  18. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  19. 45 CFR 1614.3 - Range of activities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... assistance, research, advice and counsel, or the use of recipient facilities, libraries, computer assisted... bono basis through the provision of community legal education, training, technical assistance, research, advice and counsel; co-counseling arrangements; or the use of private law firm facilities, libraries...

  20. KSC-06pd1204

    NASA Image and Video Library

    2006-06-23

    KENNEDY SPACE CENTER, FLA. - An overview of the new Firing Room 4 shows the expanse of computer stations and the various operations the facility will be able to manage. FR4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis

  1. Administration of Computer Resources.

    ERIC Educational Resources Information Center

    Franklin, Gene F.

    Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…

  2. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.

  3. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...

  4. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

  5. How You Can Protect Public Access Computers "and" Their Users

    ERIC Educational Resources Information Center

    Huang, Phil

    2007-01-01

    By providing the public with online computing facilities, librarians make available a world of information resources beyond their traditional print materials. Internet-connected computers in libraries greatly enhance the opportunity for patrons to enjoy the benefits of the digital age. Unfortunately, as hackers become more sophisticated and…

  6. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the local allocated resources and utilizing the local AWS S3 storage to optimize data handling operations and costs. NOvA was using the same familiar services used for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption. This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.

  7. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  8. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  9. Performance Predictions for Proposed ILS Facilities at St. Louis Municipal Airport

    DOT National Transportation Integrated Search

    1978-01-01

    The results of computer simulations of performance of proposed ILS facilities on Runway 12L/30R at St. Louis Municipal Airport (Lambert Field) are reported. These simulations indicate that an existing industrial complex located near the runway is com...

  10. User Facilities

    Science.gov Websites

    Lists Los Alamos National Laboratory user facilities, including the Los Alamos Collaboration for Explosives Detection (LACED), SensorNexus, the Exascale Computing Project (ECP), the Center for Integrated Nanotechnologies (CINT), and the Los Alamos Neutron …

  11. Biotechnology Facility (BTF) for ISS

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Engineering mockup shows the general arrangement of the planned Biotechnology Facility inside an EXPRESS rack aboard the International Space Station. This layout includes a gas supply module (bottom left), control computer and laptop interface (bottom right), two rotating wall vessels (top right), and support systems.

  12. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
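
    A minimal sketch of the evaluation criterion, assuming scikit-learn: leave-one-out cross-validated error of a k-NN classifier on an (already normalized) expression matrix; the random data below stand in for a real cDNA microarray data set.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a normalized expression matrix (40 arrays x 200 genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)          # two sample classes
X[y == 1, :10] += 1.5              # a weak class signal in the first 10 genes

knn = KNeighborsClassifier(n_neighbors=3)
acc = cross_val_score(knn, X, y, cv=LeaveOneOut())   # one fold per array
print(f"LOOCV error rate: {1 - acc.mean():.3f}")

# Comparing this error rate across data sets produced by different
# normalization strategies is the study's yardstick for ranking them.
```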

  13. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.

  14. High-Performance Computing Data Center Efficiency Dashboard | Computational

    Science.gov Websites

    Cooling infrastructure includes an energy recovery water (ERW) loop, a heat exchanger for energy recovery, a thermosyphon heat exchanger between the ERW loop and the cooling tower loop, and evaporative cooling towers.

  15. Guide to computing at ANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peavler, J.

    1979-06-01

    This publication gives details about the hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. The languages, compilers, libraries, and applications packages available are described. 17 tables. (RWR)

  16. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1994-01-01

    The research conducted supported two facilities at NASA Ames Research Center: the Hypervelocity Free-Flight Aerodynamic Facility and the 16-Inch Shock Tunnel. During the grant period, a computerized film-reading system was developed, and five- and six-degree-of-freedom parameter-identification routines were written and successfully implemented. Studies of flow separation were conducted, and methods to extract phase shift information from finite-fringe interferograms were developed. Methods for constructing optical images from Computational Fluid Dynamics solutions were also developed, and these methods were used for one-to-one comparisons of experiment and computations.

  17. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using a forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for range computation of sanitary zones based on the ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
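
    A minimal sketch of the interpolation step, assuming SciPy's radial basis function interpolator with a thin-plate kernel: sound pressure levels measured on a zone's perimeter are interpolated onto a grid inside the zone. The coordinates and levels are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import Rbf

# Measured points (x, y in metres) on a zone perimeter and their sound levels (dB).
x = np.array([0.0, 50.0, 100.0, 100.0, 50.0, 0.0])
y = np.array([0.0, 0.0, 0.0, 60.0, 60.0, 60.0])
spl = np.array([72.0, 78.0, 75.0, 70.0, 74.0, 69.0])

tps = Rbf(x, y, spl, function="thin_plate")   # thin-plate spline interpolant

# Evaluate on a coarse grid covering the zone to build the noise map.
xi, yi = np.meshgrid(np.linspace(0, 100, 5), np.linspace(0, 60, 4))
print(np.round(tps(xi, yi), 1))
```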

  18. Matrix computations in MACSYMA

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1977-01-01

    Facilities built into MACSYMA for manipulating matrices with numeric or symbolic entries are described. Computations will be done exactly, keeping symbols as symbols. Topics discussed include how to form a matrix and create other matrices by transforming existing matrices within MACSYMA; arithmetic and other computation with matrices; and user control of computational processes through the use of optional variables. Two algorithms designed for sparse matrices are given. The computing times of several different ways to compute the determinant of a matrix are compared.
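
    MACSYMA itself is not shown here; as a rough present-day analogue of exact symbolic matrix computation, the SymPy sketch below forms a symbolic matrix, computes its determinant and inverse exactly, and performs matrix arithmetic while keeping symbols as symbols.

```python
import sympy as sp

# Build a matrix with symbolic entries and compute with it exactly.
a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[a, b],
               [c, d]])

print(M.det())    # a*d - b*c, computed exactly with symbols kept as symbols
print(M.inv())    # symbolic inverse, entries divided by the determinant
print(M * M.T)    # matrix arithmetic, e.g. M times its transpose
```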

  19. Flow Characterization Studies of the 10-MW TP3 Arc-Jet Facility: Probe Sweeps

    NASA Technical Reports Server (NTRS)

    Goekcen, Tahir; Alunni, Antonella I.

    2016-01-01

    This paper reports computational simulations and analysis in support of calibration and flow characterization tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted in the NASA Ames 10-MW TP3 facility using flat-faced stagnation calorimeters at six conditions corresponding to the steps of a simulated flight heating profile. Data were obtained using a conical nozzle test configuration in which the models were placed in a free jet downstream of the nozzle. Experimental surveys of arc-jet test flow with pitot pressure and heat flux probes were also performed at these arc-heater conditions, providing assessment of the flow uniformity and valuable data for the flow characterization. Two different sets of pitot pressure and heat probes were used: 9.1-mm sphere-cone probes (nose radius of 4.57 mm or 0.18 in) with null-point heat flux gages, and 15.9-mm (0.625 in) diameter hemisphere probes with Gardon gages. The probe survey data clearly show that the test flow in the TP3 facility is not uniform at most conditions (not even axisymmetric at some conditions), and the extent of non-uniformity is highly dependent on various arc-jet parameters such as arc current, mass flow rate, and the amount of cold-gas injection at the arc-heater plenum. The present analysis comprises computational fluid dynamics simulations of the nonequilibrium flowfield in the facility nozzle and test box, including the models tested. Comparisons of computations with the experimental measurements show reasonably good agreement except at the extreme low pressure conditions of the facility envelope.

  20. Automation of electromagnetic compatability (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
