DOT National Transportation Integrated Search
2010-03-01
Transportation corridor-planning processes are well understood, and consensus exists among practitioners about common practices for stages and tasks included in traditional EIS approaches. However, traditional approaches do not typically employ f...
A Profile Approach to Self-Determination Theory Motivations at Work
ERIC Educational Resources Information Center
Moran, Christina M.; Diefendorff, James M.; Kim, Tae-Yeol; Liu, Zhi-Qiang
2012-01-01
Self-determination theory (SDT) posits the existence of distinct types of motivation (i.e., external, introjected, identified, integrated, and intrinsic). Research on these different types of motivation has typically adopted a variable-centered approach that seeks to understand how each motivation in isolation relates to employee outcomes. We…
A conditional probability approach using monitoring data to develop geographic-specific water quality criteria for protection of aquatic life is presented. Typical methods to develop criteria using existing monitoring data are limited by two issues: (1) how to extrapolate to an...
From empirical data to time-inhomogeneous continuous Markov processes.
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion of the bridge between the rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective for more than 60% of the tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly discussed.
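As a rough illustration of the time-homogeneous special case that this work generalizes, the sketch below (not the authors' code; the matrix values and tolerances are assumed) takes the principal matrix logarithm of an empirical transition matrix and checks the standard generator conditions: non-negative off-diagonal entries and zero row sums.

```python
# Sketch: does a discrete transition matrix admit a time-homogeneous generator?
import numpy as np
from scipy.linalg import logm

def is_valid_generator(Q, tol=1e-8):
    """Generator test: non-negative off-diagonals and (numerically) zero row sums."""
    off_diag = Q - np.diag(np.diag(Q))
    return bool((off_diag >= -tol).all() and np.allclose(Q.sum(axis=1), 0.0, atol=1e-6))

# Assumed empirical 3-state transition matrix (rows sum to one).
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

Q = np.real(logm(P))            # candidate generator from the principal logarithm
print("embeddable in the homogeneous case:", is_valid_generator(Q))
```

The time-inhomogeneous extension described in the abstract replaces this single test with conditions applied along a sequence of empirical transition matrices.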
ERIC Educational Resources Information Center
Anderson, Cynthia M.; Smith, Tristram; Iovannone, Rose
2018-01-01
There is a large gap between research-based interventions for supporting children with autism spectrum disorder (ASD) and current practices implemented by educators to meet the needs of these children in typical school settings. Myriad reasons for this gap exist including the external validity of existing research, the complexity of ASD, and…
EULER-PCR: finishing experiments for repeat resolution.
Mulyukov, Zufar; Pevzner, Pavel A
2002-01-01
Genomic sequencing typically generates a large collection of unordered contigs or scaffolds. Contig ordering (also known as gap closure) is a non-trivial algorithmic and experimental problem, since even relatively simple-to-assemble bacterial genomes typically result in a large set of contigs. Neighboring contigs may be separated either by gaps in read coverage or by repeats. In the latter case we say that the contigs are separated by pseudogaps, and we emphasize the important difference between gap closure and pseudogap closure. Existing gap closure approaches do not distinguish between gaps and pseudogaps and treat them in the same way. We describe a new fast strategy for closing pseudogaps (repeat resolution). Since in highly repetitive genomes the number of pseudogaps may exceed the number of gaps by an order of magnitude, this approach provides a significant advantage over existing gap closure methods.
Alternative futures analysis is a scenario-based approach to regional land planning that attempts to synthesize existing scientific information in a format useful to community decision-makers. Typically, this approach attempts to investigate the impacts of several alternative set...
Ground robotic measurement of aeolian processes
USDA-ARS?s Scientific Manuscript database
Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These d...
Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.
1994-01-01
One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
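The relax-until-satisfiable loop described above can be sketched in a few lines; the toy variables, constraints, and utility weights below are hypothetical stand-ins for the scheduler's heuristics, not the JPL system.

```python
# Toy iterative constraint relaxation: drop the lowest-utility constraint until satisfiable.
from itertools import product

DOMAIN = list(product(range(6), repeat=2))   # start hours for two tasks, 0..5

# (name, utility, predicate) triples; utilities are assumed importance weights.
constraints = [
    ("task1_before_task2", 10, lambda t1, t2: t1 < t2),
    ("task2_early",         5, lambda t1, t2: t2 <= 1),
    ("task1_late",          3, lambda t1, t2: t1 >= 4),
]

def find_solution(active):
    for t1, t2 in DOMAIN:
        if all(pred(t1, t2) for _, _, pred in active):
            return t1, t2
    return None

active = sorted(constraints, key=lambda c: c[1], reverse=True)
while (solution := find_solution(active)) is None:
    name, _, _ = active.pop()                # relax the least important constraint
    print("relaxing:", name)
print("satisficing schedule:", solution)
```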
Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Felix; Quach, Tu-Thach; Wheeler, Jason
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used as a supplement to existing hand-engineered features.
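A minimal sketch of this kind of pipeline, using synthetic byte fragments and scikit-learn stand-ins (DictionaryLearning plus a linear SVM) rather than the authors' implementation; the fragment sizes, n-gram length, and dictionary parameters are all assumptions.

```python
# Sketch under stated assumptions: sparse-code n-gram windows of byte fragments,
# pool the codes per fragment, and train an SVM on the pooled features.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_GRAM, FRAG_LEN = 4, 256

def synthetic_fragment(kind):
    # "text-like" fragments use a narrow byte range, "binary-like" the full range
    hi = 128 if kind == 0 else 256
    return rng.integers(0, hi, FRAG_LEN, dtype=np.uint8)

def ngram_windows(frag):
    # all contiguous n-grams of the fragment, scaled to [0, 1]
    w = np.lib.stride_tricks.sliding_window_view(frag, N_GRAM)
    return w.astype(float) / 255.0

fragments = [synthetic_fragment(k) for k in (0, 1) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)

# Learn a sparse dictionary over n-gram windows pooled from all fragments.
windows = np.vstack([ngram_windows(f) for f in fragments])
dico = DictionaryLearning(n_components=16, alpha=0.5, max_iter=20,
                          transform_algorithm="lasso_lars", random_state=0)
dico.fit(windows[rng.choice(len(windows), 2000, replace=False)])

# One feature vector per fragment: mean absolute sparse code of its windows.
feats = np.array([np.abs(dico.transform(ngram_windows(f))).mean(axis=0)
                  for f in fragments])

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```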
What Good Are Statistics that Don't Generalize?
ERIC Educational Resources Information Center
Shaffer, David Williamson; Serlin, Ronald C.
2004-01-01
Quantitative and qualitative inquiry are sometimes portrayed as distinct and incompatible paradigms for research in education. Approaches to combining qualitative and quantitative research typically "integrate" the two methods by letting them co-exist independently within a single research study. Here we describe intra-sample statistical analysis…
Icing detection from geostationary satellite data using machine learning approaches
NASA Astrophysics Data System (ADS)
Lee, J.; Ha, S.; Sim, S.; Im, J.
2015-12-01
Icing can cause significant structural damage to aircraft during flight, resulting in various aviation accidents. Icing studies have typically been performed using two approaches: one is a numerical model-based approach and the other is a remote sensing-based approach. The model-based approach diagnoses aircraft icing using numerical atmospheric parameters such as temperature, relative humidity, and vertical thermodynamic structure. This approach tends to over-estimate icing according to the literature. The remote sensing-based approach typically uses meteorological satellite/ground sensor data such as Geostationary Operational Environmental Satellite (GOES) and Dual-Polarization radar data. This approach detects icing areas by applying thresholds to parameters such as liquid water path and cloud optical thickness derived from remote sensing data. In this study, we propose an aircraft icing detection approach which optimizes thresholds for L1B bands and/or Cloud Optical Thickness (COT) from the Communication, Ocean and Meteorological Satellite-Meteorological Imager (COMS MI) and the newly launched Himawari-8 Advanced Himawari Imager (AHI) over East Asia. The proposed approach uses machine learning algorithms including decision trees (DT) and random forest (RF) for optimizing thresholds of L1B data and/or COT. Pilot Reports (PIREPs) from South Korea and Japan were used as icing reference data. Results show that RF produced a lower false alarm rate (1.5%) and a higher overall accuracy (98.8%) than DT (8.5% and 75.3%, respectively). The RF-based approach was also compared with the existing COMS MI and GOES-R icing mask algorithms. The agreements of the proposed approach with the existing two algorithms were 89.2% and 45.5%, respectively. The lower agreement with the GOES-R algorithm was possibly due to the high uncertainty of the cloud phase product from COMS MI.
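A hedged sketch of the supervised step (synthetic stand-in data, not COMS MI/AHI imagery or PIREPs): train a random forest on satellite-derived predictors to flag probable icing, then report accuracy and false alarm rate as in the study.

```python
# Sketch: random-forest icing detection on assumed satellite predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 5000
# Assumed predictors: brightness temperature (K), BT difference (K), cloud optical thickness.
bt_11um = rng.normal(255, 15, n)
btd = rng.normal(0, 2, n)
cot = rng.gamma(2.0, 5.0, n)
X = np.column_stack([bt_11um, btd, cot])

# Synthetic "PIREP" label: icing more likely for cold, optically thick cloud.
p_icing = 1 / (1 + np.exp(-(0.15 * (250 - bt_11um) + 0.08 * cot - 2)))
y = rng.random(n) < p_icing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
print("overall accuracy:", (tp + tn) / len(y_te))
print("false alarm rate:", fp / (fp + tn))
```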
Van de Walle, P; Hallemans, A; Schwartz, M; Truijen, S; Gosselink, R; Desloovere, K
2012-02-01
Gait efficiency in children with cerebral palsy is usually quantified by metabolic energy expenditure. Mechanical energy estimations, however, can be a valuable supplement, as they can be assessed during gait analysis and plotted over the gait cycle, thus revealing information on the timing and sources of increases in energy expenditure. Unfortunately, little information on validity and sensitivity exists. Three mechanical estimation approaches, (1) the centre of mass (CoM) approach, (2) the sum of segmental energies (SSE) approach and (3) the integrated joint power approach, were validated against oxygen consumption and each other. Sensitivity was assessed in typical gait and in children with diplegia. The CoM approach underestimated total energy expenditure and showed poor sensitivity. The SSE approach overestimated energy expenditure and showed acceptable sensitivity. Validity and sensitivity were best for the integrated joint power approach. This method is therefore preferred for mechanical energy estimation in children with diplegia. However, mechanical energy should supplement, not replace, metabolic energy, as total energy expended is not captured by any mechanical approach. Copyright © 2011 Elsevier B.V. All rights reserved.
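The integrated-joint-power idea can be illustrated in a few lines; the signals below are invented sinusoids, not gait-lab data, and the single joint stands in for the full set of joints summed in practice.

```python
# Minimal sketch: joint power = moment x angular velocity, work = time integral of |power|.
import numpy as np

t = np.linspace(0.0, 1.0, 101)                 # one gait cycle, normalized to 1 s (assumed)
omega = 2.0 * np.sin(2 * np.pi * t)            # joint angular velocity (rad/s), assumed
moment = 40.0 * np.sin(2 * np.pi * t + 0.4)    # joint moment (N*m), assumed

power = moment * omega                         # instantaneous joint power (W)
dt = t[1] - t[0]
# Total mechanical work per cycle: trapezoidal integral of |power| (generation + absorption).
work = np.sum(0.5 * (np.abs(power[1:]) + np.abs(power[:-1]))) * dt
print(f"mechanical work over the cycle: {work:.1f} J")
```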
Is Education Getting Lost in University Mergers?
ERIC Educational Resources Information Center
Ursin, Jani; Aittola, Helena; Henderson, Charles; Valimaa, Jussi
2010-01-01
Mergers are common phenomena in higher education institutions. Improving educational quality is typically one of the stated goals of university mergers. Yet, little information exists about how merging institutions approach this goal. This paper presents results from a study of planning documents created prior to four mergers in the Finnish higher…
A Preliminary Study Exploring the Use of Fictional Narrative in Robotics Activities
ERIC Educational Resources Information Center
Williams, Douglas; Ma, Yuxin; Prejean, Louise
2010-01-01
Educational robotics activities are gaining in popularity. Though some research data suggest that educational robotics can be an effective approach in teaching mathematics, science, and engineering, research is needed to generate the best practices and strategies for designing these learning environments. Existing robotics activities typically do…
ERIC Educational Resources Information Center
Parlade, Meaghan V.; Iverson, Jana M.
2011-01-01
From a dynamic systems perspective, transition points in development are times of increased instability, during which behavioral patterns are susceptible to temporary decoupling. This study investigated the impact of the vocabulary spurt on existing patterns of communicative coordination. Eighteen typically developing infants were videotaped at…
Model-based Compositional Design of Networked Control Systems
2013-12-01
communication network. As described in Section 1.2, although there are several advantages in using NCS, there exist some drawbacks due to the presence of the... pendulum was used to demonstrate the approach, the results showed desirable performance in the presence of time delays. The passivity of the...certain level of performance under a wide range of failures, the designed controller is typically conservative. The drawback of this approach is that one
Web Image Search Re-ranking with Click-based Similarity and Typicality.
Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi
2016-07-20
In image search re-ranking, besides the well-known semantic gap, the intent gap, which is the gap between the representation of users' query/demand and the real intent of the users, is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as "implicit feedback" from users, to help overcome the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list and the strategy that images with higher relevance should be ranked higher than others are widely accepted. Image similarity and the level of relevance typicality are therefore the determining factors for obtaining satisfying search results. However, when measuring image similarity and typicality, conventional re-ranking approaches only consider visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by calculating click-based cluster typicality and within-cluster click-based image typicality in descending order. Our experiments conducted on two real-world query-image datasets with diverse representative queries show that our proposed re-ranking approach can significantly improve initial search results, and outperforms several existing re-ranking approaches.
Neal, S; Rice, F; Ng-Knight, T; Riglin, L; Frederickson, N
2016-07-01
School transition at around 11-years of age can be anxiety-provoking for children, particularly those with special educational needs (SEN). The present study adopted a longitudinal design to consider how existing transition strategies, categorized into cognitive, behavioral or systemic approaches, were associated with post-transition anxiety amongst 532 typically developing children and 89 children with SEN. Multiple regression analysis indicated that amongst typically developing pupils, systemic interventions were associated with lower school anxiety but not generalized anxiety, when controlling for prior anxiety. Results for children with SEN differed significantly, as illustrated by a Group × Intervention type interaction. Specifically, systemic strategies were associated with lower school anxiety amongst typically developing children and higher school anxiety amongst children with SEN. These findings highlight strategies that schools may find useful in supporting typically developing children over the transition period, whilst suggesting that children with SEN might need a more personalized approach. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Efficient Translation of LTL Formulae into Buchi Automata
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Lerda, Flavio
2001-01-01
Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit state model checker under development at the NASA Ames Research Center.
A Goal Oriented Approach for Modeling and Analyzing Security Trade-Offs
NASA Astrophysics Data System (ADS)
Elahi, Golnaz; Yu, Eric
In designing software systems, security is typically only one design objective among many. It may compete with other objectives such as functionality, usability, and performance. Too often, security mechanisms such as firewalls, access control, or encryption are adopted without explicit recognition of competing design objectives and their origins in stakeholder interests. Recently, there has been increasing acknowledgement that security is ultimately about trade-offs. One can only aim for "good enough" security, given the competing demands from many parties. In this paper, we examine how conceptual modeling can provide explicit and systematic support for analyzing security trade-offs. After considering the desirable criteria for conceptual modeling methods, we examine several existing approaches for dealing with security trade-offs. From analyzing the limitations of existing methods, we propose an extension to the i* framework for security trade-off analysis, taking advantage of its multi-agent and goal orientation. The method was applied to several case studies used to exemplify existing approaches.
A Data Augmentation Approach to Short Text Classification
ERIC Educational Resources Information Center
Rosario, Ryan Robert
2017-01-01
Text classification typically performs best with large training sets, but short texts are very common on the World Wide Web. Can we use resampling and data augmentation to construct larger texts using similar terms? Several current methods exist for working with short text that rely on using external data and contexts, or workarounds. Our focus is…
Development of Phonological Awareness in down Syndrome: A Meta-Analysis and Empirical Study
ERIC Educational Resources Information Center
Naess, Kari-Anne B.
2016-01-01
Phonological awareness (PA) is the knowledge and understanding of the sound structure of language and is believed to be an important skill for the development of reading. This study explored PA skills in children with Down syndrome and matched typically developing (TD) controls using a dual approach: a meta-analysis of the existing international…
CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Crist, Eric P.; Thelen, Brian J.; Carrara, David A.
1998-10-01
Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
Exacerbation of pre-existing diabetes insipidus during pregnancy, mechanisms and management.
Tack, Lloyd J W; T'Sjoen, Guy; Lapauw, Bruno
2017-06-01
During pregnancy, physiological changes in osmotic homeostasis cause water retention. If excessive, this can cause gestational diabetes insipidus (DI), particularly in patients with already impaired vasopressin secretion. We present the case of a 34-year-old patient with pre-existing hypopituitarism who experienced a transient exacerbation of her DI during a twin pregnancy. In contrast to typical gestational DI, polyuria and polydipsia occurred during the first trimester and remained stable thereafter. This case highlights a challenging clinical entity, whose pathophysiology, diagnostic approach and treatment will be discussed.
An Approach for Implementation of Project Management Information Systems
NASA Astrophysics Data System (ADS)
Běrziša, Solvita; Grabis, Jānis
Project management is governed by project management methodologies, standards, and other regulatory requirements. This chapter proposes an approach for implementing and configuring project management information systems according to requirements defined by these methodologies. The approach uses a project management specification framework to describe project management methodologies in a standardized manner. This specification is used to automatically configure the project management information system by applying appropriate transformation mechanisms. Development of the standardized framework is based on analysis of typical project management concepts and processes and existing XML-based representations of project management. A demonstration example of a project management information system's configuration is provided.
2002-12-19
High-Density Polyethylene HFCS High Fructose Corn Syrup HRC Hydrogen Release Compound HAS Hollow Stem... subsurface injection of a soluble electron donor solution (typically comprised of a carbohydrate such as molasses, whey, high fructose corn syrup (HFCS... whey, high fructose corn syrup (HFCS), glucose, lactate, butyrate, benzoate). Other approaches to enhanced anaerobic bioremediation exist, but
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
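The prototype-based membership calculation at the core of such approaches can be sketched simply; the Gaussian similarity form, the attribute choices, and the prototype values below are assumptions for illustration, not the paper's automatically derived rule set.

```python
# Sketch: fuzzy membership of a location to a slope-position prototype in attribute space.
import numpy as np

def membership(attrs, prototype, widths):
    """Gaussian similarity between a location's attributes and a prototype."""
    z = (np.asarray(attrs, float) - np.asarray(prototype, float)) / np.asarray(widths, float)
    return float(np.exp(-0.5 * np.sum(z ** 2)))

# Attributes: [slope gradient (deg), relative position index in 0..1] -- assumed.
ridge_prototype, ridge_widths = [5.0, 0.95], [5.0, 0.10]
location = [8.0, 0.80]
print("membership to 'ridge':", round(membership(location, ridge_prototype, ridge_widths), 3))
```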
Application of zonal model on indoor air sensor network design
NASA Astrophysics Data System (ADS)
Chen, Y. Lisa; Wen, Jin
2007-04-01
Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor locations and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.
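A toy sketch of the genetic-algorithm placement step (the zone count, detection matrix, and GA settings are invented, not the paper's multi-zone/zonal simulation outputs): choose sensor zones that maximize the expected fraction of release scenarios detected.

```python
# Toy GA for sensor placement over assumed zone-level detection probabilities.
import numpy as np

rng = np.random.default_rng(2)
N_ZONES, N_SENSORS, N_SCENARIOS = 12, 3, 30

# detect[z, s] = probability a sensor in zone z detects release scenario s (assumed).
detect = rng.random((N_ZONES, N_SCENARIOS)) ** 2

def fitness(placement):
    # Expected fraction of scenarios detected by at least one sensor.
    miss = np.prod(1.0 - detect[list(placement)], axis=0)
    return float(np.mean(1.0 - miss))

def random_placement():
    return tuple(sorted(rng.choice(N_ZONES, N_SENSORS, replace=False)))

def mutate(placement):
    out = set(placement)
    out.discard(rng.choice(list(out)))            # drop one zone at random
    while len(out) < N_SENSORS:
        out.add(int(rng.integers(N_ZONES)))       # add a random replacement zone
    return tuple(sorted(out))

pop = [random_placement() for _ in range(30)]
for _ in range(40):                               # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(parents[int(rng.integers(len(parents)))]) for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
print("best sensor zones:", best, "expected detection:", round(fitness(best), 3))
```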
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Tiessen, Alex
Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojczyk, C.
The External Thermal and Moisture Management System (ETMMS), typically seen in deep energy retrofits, is a valuable approach for the roof-only portions of existing homes, particularly the 1 ½-story home. It is effective in reducing energy loss through the building envelope, improving building durability, reducing ice dams, and providing opportunities to improve occupant comfort and health.
Curveslam: Utilizing Higher Level Structure In Stereo Vision-Based Navigation
2012-01-01
consider their application to SLAM. The work of [31][32] develops a spline-based SLAM framework, but this is only for application to LIDAR-based SLAM... Existing approaches to visual Simultaneous Localization and Mapping (SLAM) typically utilize points as visual feature primitives to represent landmarks... regions of interest. Further, previous SLAM techniques that propose the use of higher level structures often place constraints on the environment, such as
Treatment of category generation and retrieval in aphasia: Effect of typicality of category items.
Kiran, Swathi; Sandberg, Chaleece; Sebastian, Rajani
2011-01-01
Purpose: Kiran and colleagues (Kiran, 2007, 2008; Kiran & Johnson, 2008; Kiran & Thompson, 2003) have previously suggested that training atypical examples within a semantic category is a more efficient treatment approach to facilitating generalization within the category than training typical examples. The present study extended our previous work examining the notion of semantic complexity within goal-derived (ad-hoc) categories in individuals with aphasia. Methods: Six individuals with fluent aphasia (range = 39-84 years) and varying degrees of naming deficits and semantic impairments were involved. Thirty typical and atypical items each from two categories were selected after an extensive stimulus norming task. Generative naming for the two categories was tested during baseline and treatment. Results: As predicted, training atypical examples in the category resulted in generalization to untrained typical examples in five out of the five patient-treatment conditions. In contrast, training typical examples (which was examined in three conditions) produced mixed results. One patient showed generalization to untrained atypical examples, whereas two patients did not show generalization to untrained atypical examples. Conclusions: Results of the present study supplement our existing data on the effect of a semantically based treatment for lexical retrieval by manipulating the typicality of category exemplars. PMID:21173393
Cazon, Aitor; Kelly, Sarah; Paterson, Abby M; Bibb, Richard J; Campbell, R Ian
2017-09-01
Rheumatoid arthritis is a chronic disease affecting the joints. Treatment can include immobilisation of the affected joint with a custom-fitting splint, which is typically fabricated by hand from low temperature thermoplastic, but the approach poses several limitations. This study focused on the evaluation, by finite element analysis, of additive manufacturing techniques for wrist splints in order to improve upon the typical splinting approach. An additive manufactured/3D printed splint, specifically designed to be built using Objet Connex multi-material technology, and a virtual model of a typical splint, digitised from a real patient-specific splint using three-dimensional scanning, were modelled in computer-aided design software. Forty finite element analysis simulations were performed in flexion-extension and radial-ulnar wrist movements to compare the displacements and the stresses. Simulations have shown that for low severity loads, the additive manufacturing splint has 25%, 76% and 27% less displacement in the main loading direction than the typical splint in the flexion, extension and radial movements, respectively, while ulnar values were 75% lower in the traditional splint. For higher severity loads, the flexion and extension movements resulted in deflections that were 24% and 60% lower, respectively, in the additive manufacturing splint. However, for higher severity loading, the radial deflection values were very similar in both splints and the ulnar movement deflection was higher in the additive manufacturing splint. A physical prototype of the additive manufacturing splint was also manufactured and was tested under normal conditions to validate the finite element analysis data. Results from static tests showed maximum displacements of 3.46, 0.97, 3.53 and 2.51 mm in the flexion, extension, radial and ulnar directions, respectively. According to these results, the present research argues that, from a technical point of view, the additive manufacturing splint design stands at the same or even a better level of performance in displacement and stress values in comparison to the typical low temperature thermoplastic approach and is therefore a feasible approach to splint design and manufacture.
Body Fat Percentage Prediction Using Intelligent Hybrid Approaches
Shao, Yuehjen E.
2014-01-01
Excess body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing one's body fat percentage is extremely important since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
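A hedged sketch of the two-stage idea with scikit-learn stand-ins (Lasso screening in place of MR/MARS, then SVR); the data are synthetic, not the study's body measurements.

```python
# Sketch: stage 1 screens explanatory variables, stage 2 fits the forecasting model.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n, p = 300, 10
X = rng.normal(size=(n, p))                       # standardized body measurements (assumed)
bfp = 25 + 4 * X[:, 0] - 3 * X[:, 2] + 2 * X[:, 5] + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, bfp, random_state=0)

# Stage 1: keep variables with non-zero Lasso coefficients.
lasso = LassoCV(cv=5).fit(X_tr, y_tr)
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print("selected variables:", keep)

# Stage 2: fit the forecasting model on the reduced variable set.
svr = SVR(C=10.0).fit(X_tr[:, keep], y_tr)
print("MAE on held-out data:",
      round(mean_absolute_error(y_te, svr.predict(X_te[:, keep])), 2))
```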
Toward Dietary Assessment via Mobile Phone Video Cameras.
Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce
2010-11-13
Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.
Effective behavioral modeling and prediction even when few exemplars are available
NASA Astrophysics Data System (ADS)
Goan, Terrance; Kartha, Neelakantan; Kaneshiro, Ryan
2006-05-01
While great progress has been made in the lowest levels of data fusion, practical advances in behavior modeling and prediction remain elusive. The most critical limitation of existing approaches is their inability to support the required knowledge modeling and continuing refinement under realistic constraints (e.g., few historic exemplars, the lack of knowledge engineering support, and the need for rapid system deployment). This paper reports on our ongoing efforts to develop Propheteer, a system which will address these shortcomings through two primary techniques. First, with Propheteer we abandon the typical consensus-driven modeling approaches that involve infrequent group decision making sessions in favor of an approach that solicits asynchronous knowledge contributions (in the form of alternative future scenarios and indicators) without burdening the user with endless certainty or probability estimates. Second, we enable knowledge contributions by personnel beyond the typical core decision making group, thereby casting light on blind spots, mitigating human biases, and helping maintain the currency of the developed behavior models. We conclude with a discussion of the many lessons learned in the development of our prototype Propheteer system.
2010-04-21
paraffins, olefins, and aromatics. Although the sulfur concentration specification for RP-1 was set at 500 ppm (mass/mass), the typical as-delivered...
An update of commercial infrared sensing and imaging instruments
NASA Technical Reports Server (NTRS)
Kaplan, Herbert
1989-01-01
A classification of infrared sensing instruments by type and application is given, listing commercially available instruments from single-point thermal probes to on-line control sensors to high-speed, high-resolution imaging systems. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques, and newly introduced features of existing instruments is also given.
Multilingual Sentiment Analysis: State of the Art and Independent Comparison of Techniques.
Dashtipour, Kia; Poria, Soujanya; Hussain, Amir; Cambria, Erik; Hawalah, Ahmad Y A; Gelbukh, Alexander; Zhou, Qiang
With the advent of the Internet, people actively express their opinions about products, services, events, political parties, etc., in social media, blogs, and website comments. The amount of research work on sentiment analysis is growing explosively. However, the majority of research efforts are devoted to English-language data, while a great share of information is available in other languages. We present a state-of-the-art review on multilingual sentiment analysis. More importantly, we compare our own implementation of existing approaches on common data. The precision observed in our experiments is typically lower than that reported by the original authors, which we attribute to the lack of detail in the original presentation of those approaches. Thus, we compare the existing works by what they really offer to the reader, including whether they allow for accurate implementation and for reliable reproduction of the reported results.
Connecting Architecture and Implementation
NASA Astrophysics Data System (ADS)
Buchgeher, Georg; Weinreich, Rainer
Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter per year surface movements when sufficient data exists for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
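The post-processing inversion mentioned above can be illustrated on a toy interferogram network (the dates, pairs, and displacement values below are invented): each interferogram constrains the sum of the incremental displacements it spans, and a least-squares solve recovers the cumulative time series.

```python
# Sketch: least-squares inversion of a small interferogram network for a displacement time series.
import numpy as np

dates = [0, 35, 70, 105, 140]                 # acquisition days (assumed)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # interferogram date pairs (assumed)
true_incr = np.array([0.4, 0.3, 0.6, 0.2])    # cm between consecutive dates (assumed)

# Each interferogram measures the sum of the incremental displacements it spans.
A = np.zeros((len(pairs), len(dates) - 1))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0
obs = A @ true_incr + np.random.default_rng(4).normal(0, 0.05, len(pairs))

incr, *_ = np.linalg.lstsq(A, obs, rcond=None)
cumulative = np.concatenate([[0.0], np.cumsum(incr)])
print("cumulative displacement (cm):", np.round(cumulative, 2))
```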
Parker, Andrew M.; Stone, Eric R.
2013-01-01
One of the most common findings in behavioral decision research is that people have unrealistic beliefs about how much they know. However, demonstrating that misplaced confidence exists does not necessarily mean that there are costs to it. This paper contrasts two approaches toward answering whether misplaced confidence is good or bad, which we have labeled the overconfidence and unjustified confidence approach. We first consider conceptual and analytic issues distinguishing these approaches. Then, we provide findings from a set of simulations designed to determine when the approaches produce different conclusions across a range of possible confidence-knowledge-outcome relationships. Finally, we illustrate the main findings from the simulations with three empirical examples drawn from our own data. We conclude that the unjustified confidence approach is typically the preferred approach, both because it is appropriate for testing a larger set of psychological mechanisms as well as for methodological reasons. PMID:25309037
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
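The load-interference idea behind such reliability methods can be illustrated with a Monte Carlo estimate; the non-dimensional capability and load distributions below are assumed for illustration and are not taken from the cited carbon-phenolic study.

```python
# Sketch: reliability = P(thermal capability > aerothermal load), estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
capability = rng.normal(1.00, 0.06, n)   # non-dimensional TPS thermal capability (assumed)
load = rng.normal(0.80, 0.08, n)         # non-dimensional aerothermal load (assumed)

reliability = np.mean(capability > load)
print(f"estimated thermal reliability: {reliability:.4f}")
```

Tightening the capability scatter or lowering the load mean raises the computed reliability, which is the trade the design approach quantifies against added TPS mass.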
Bioinformatics approaches to predict target genes from transcription factor binding data.
Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael
2017-12-01
Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.
Retrieval of all effective susceptibilities in nonlinear metamaterials
NASA Astrophysics Data System (ADS)
Larouche, Stéphane; Radisic, Vesna
2018-04-01
Electromagnetic metamaterials offer a great avenue to engineer and amplify the nonlinear response of materials. Their electric, magnetic, and magnetoelectric linear and nonlinear response are related to their structure, providing unprecedented liberty to control those properties. Both the linear and the nonlinear properties of metamaterials are typically anisotropic. While the methods to retrieve the effective linear properties are well established, existing nonlinear retrieval methods have serious limitations. In this work, we generalize a nonlinear transfer matrix approach to account for all nonlinear susceptibility terms and show how to use this approach to retrieve all effective nonlinear susceptibilities of metamaterial elements. The approach is demonstrated using sum frequency generation, but can be applied to other second-order or higher-order processes.
How to teach artificial organs.
Zapanta, Conrad M; Borovetz, Harvey S; Lysaght, Michael J; Manning, Keefe B
2011-01-01
Artificial organs education is often an overlooked field for many bioengineering and biomedical engineering students. The purpose of this article is to describe three different approaches to teaching artificial organs. This article can serve as a reference for those who wish to offer a similar course at their own institutions or incorporate these ideas into existing courses. Artificial organ classes typically fulfill several ABET (Accreditation Board for Engineering and Technology) criteria, including those specific to bioengineering and biomedical engineering programs.
1989-07-01
July 1989 Copyright © 1989 Carnegie Mellon University 'Visiting Professor, Dpto. Ingeniería Eléctrica, Electrónica y Control, UNED, Ciudad Universitaria...signals typically utilized in existing industrial and research robots are parabolic trajectories of order at least two. This is because the desired...Discretos de Control Multivariables. Ph.D. Thesis, E.T.S.I. Industriales of Universidad Politécnica of Madrid. September 1982. [14] Craig, J.J
Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N; Kim, Yunseok
2016-07-28
Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex when it comes to accurately determining the pure contribution of ferroelectricity to the PFM response. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. Our combined study, using experimental and theoretical approaches, verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.
A variable mixing-length ratio for convection theory
NASA Technical Reports Server (NTRS)
Chan, K. L.; Wolff, C. L.; Sofia, S.
1981-01-01
It is argued that a natural choice for the local mixing length in the mixing-length theory of convection has a value proportional to the local density scale height of the convective bubbles. The resultant variable mixing-length ratio (the ratio between the mixing length and the pressure scale height) of this theory is enhanced in the superadiabatic region and approaches a constant in deeper layers. Numerical tests using the new mixing length successfully eliminate most of the density inversion that typically plagues conventional results. The new approach also seems to indicate the existence of granular motion at the top of the convection zone.
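In symbols (notation mine, not the paper's), the prescription amounts to

```latex
\ell = \alpha\, H_\rho ,
\qquad
H_\rho \equiv -\left(\frac{\mathrm{d}\ln\rho}{\mathrm{d}r}\right)^{-1},
\qquad
\frac{\ell}{H_p} = \alpha\,\frac{H_\rho}{H_p},
```

so the mixing-length ratio ℓ/H_p tracks the ratio of the density scale height to the pressure scale height, which, as described above, is enhanced in the superadiabatic region and approaches a constant in the deeper layers.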
NASA Astrophysics Data System (ADS)
Zhao, Yingru; Chen, Jincan
A theoretical modeling approach is presented, which describes the behavior of a typical fuel cell-heat engine hybrid system in steady-state operating condition based on an existing solid oxide fuel cell model, to provide useful fundamental design characteristics as well as potential critical problems. The different sources of irreversible losses, such as the electrochemical reaction, electric resistances, finite-rate heat transfer between the fuel cell and the heat engine, and heat-leak from the fuel cell to the environment are specified and investigated. Energy and entropy analyses are used to indicate the multi-irreversible losses and to assess the work potentials of the hybrid system. Expressions for the power output and efficiency of the hybrid system are derived and the performance characteristics of the system are presented and discussed in detail. The effects of the design parameters and operating conditions on the system performance are studied numerically. It is found that there exist certain optimum criteria for some important parameters. The results obtained here may provide a theoretical basis for both the optimal design and operation of real fuel cell-heat engine hybrid systems. This new approach can be easily extended to other fuel cell hybrid systems to develop irreversible models suitable for the investigation and optimization of similar energy conversion settings and electrochemistry systems.
Inferring the unobserved chemical state of the atmosphere: idealized data assimilation experiments
NASA Astrophysics Data System (ADS)
Knote, C. J.; Barré, J.; Eckl, M.; Hornbrook, R. S.; Wiedinmyer, C.; Emmons, L. K.; Orlando, J. J.; Tyndall, G. S.; Arellano, A. F.
2015-12-01
Chemical data assimilation in numerical models of the atmosphere is a venture into uncharted territory, into a world populated by a vast zoo of chemical compounds with strongly non-linear interactions. Commonly assimilated observations exist for only a selected few of those key gas phase compounds (CO, O3, NO2), and assimilating those in models assuming linearity raises the question: to what extent can we infer the remainder to create a new state of the atmosphere that is chemically sound and optimal? In our work we present the first systematic investigation of sensitivities that exist between chemical compounds under varying ambient conditions, in order to inform scientists of the potential pitfalls when assimilating single/few chemical compounds into complex 3D chemistry transport models. To do this, we developed a box-modeling tool (BOXMOX) based on the Kinetic PreProcessor (KPP, http://people.cs.vt.edu/~asandu/Software/Kpp/) in which we can conduct simulations with a suite of 'mechanisms', sets of differential equations describing atmospheric photochemistry. The box modeling approach allows us to sample a large variety of atmospheric conditions (urban, rural, biogenically dominated, biomass burning plumes) to capture the range of chemical conditions that typically exist in the atmosphere. Included in our suite are 'lumped' mechanisms typically used in regional and global chemistry transport models (MOZART, RACM, RADM2, SAPRC99, CB05, CBMZ) as well as the Master Chemical Mechanism (MCM, U. Leeds). We will use an Observing System Simulation Experiment approach with the MCM prediction as the 'nature' or 'true' state, assimilating idealized synthetic observations (from MCM) into the different 'lumped' mechanisms under various environments. Two approaches to estimating the sensitivity of the chemical system will be compared: (1) adjoint, using Jacobians computed by KPP, and (2) ensemble, by perturbing emissions, temperature, photolysis rates, entrainment, etc., in order to create gain matrices to infer the unobserved part of the photochemical system.
Gates, Kathleen M.; Molenaar, Peter C. M.; Iyer, Swathi P.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
Clinical investigations of many neuropsychiatric disorders rely on the assumption that diagnostic categories and typical control samples each have within-group homogeneity. However, research using human neuroimaging has revealed that much heterogeneity exists across individuals in both clinical and control samples. This reality necessitates that researchers identify and organize the potentially varied patterns of brain physiology. We introduce an analytical approach for arriving at subgroups of individuals based entirely on their brain physiology. The method begins with Group Iterative Multiple Model Estimation (GIMME) to assess individual directed functional connectivity maps. GIMME is one of the only methods to date that can recover both the direction and presence of directed functional connectivity maps in heterogeneous data, making it an ideal place to start since it addresses the problem of heterogeneity. Individuals are then grouped based on similarities in their connectivity patterns using a modularity approach for community detection. Monte Carlo simulations demonstrate that using GIMME in combination with the modularity algorithm works exceptionally well: on average, over 97% of simulated individuals are placed in the accurate subgroup with no prior information on functional architecture or group identity. Having demonstrated reliability, we examine resting-state data of fronto-parietal regions drawn from a sample (N = 80) of typically developing and attention-deficit/hyperactivity disorder (ADHD)-diagnosed children. Here, we find 5 subgroups. Two subgroups were comprised predominantly of children with ADHD, suggesting that more than one biological marker exists that can be used to identify children with ADHD from their brain physiology. Empirical evidence presented here supports notions that heterogeneity exists in brain physiology within ADHD and control samples. The type of information gained from the approach presented here can assist in better characterizing patients in terms of outcomes, optimal treatment strategies, potential gene-environment interactions, and the use of biological phenomena to assist with mental health. PMID:24642753
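A sketch of the subgrouping step with assumed inputs (random "connectivity maps" with two planted subgroups, not GIMME output): build a similarity graph between individuals and apply modularity-based community detection.

```python
# Sketch: group individuals by similarity of their (simulated) connectivity patterns.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(6)
base = rng.normal(size=(2, 45))                    # one connectivity pattern per planted subgroup
maps = np.vstack([base[0] + 0.5 * rng.normal(size=(10, 45)),
                  base[1] + 0.5 * rng.normal(size=(10, 45))])  # 20 individuals

sim = np.corrcoef(maps)                            # pairwise similarity of connectivity patterns
G = nx.Graph()
G.add_nodes_from(range(len(maps)))
for i in range(len(maps)):
    for j in range(i + 1, len(maps)):
        if sim[i, j] > 0:                          # keep positively similar pairs only
            G.add_edge(i, j, weight=float(sim[i, j]))

subgroups = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in subgroups])              # recovers the two planted subgroups
```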
Existing Whole-House Solutions Case Study: Retrofitting a 1960s Split-Level Cold-Climate Home
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puttagunta, S.
2015-08-01
National programs such as Home Performance with ENERGY STAR® and numerous other utility air-sealing programs have raised homeowners' awareness of the benefits of energy efficiency retrofits. Yet these programs tend to focus on the low-hanging fruit: air-sealing the thermal envelope and ductwork where accessible, switching to efficient lighting, and installing low-flow fixtures. At the other end of the spectrum, deep energy retrofit programs are also being encouraged by various utilities across the country. While deep energy retrofits typically seek 50% energy savings, they are often quite costly and most applicable to gut-rehab projects. A significant potential for lowering energy usage in existing homes lies between the low-hanging-fruit and deep-energy-retrofit approaches: retrofits that save approximately 30% in energy over the existing conditions.
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Kramer, Lynda J.; Arthur, Trey; Parrish, Russell V.; Barry, John S.
2003-01-01
Limited visibility is the single most critical factor affecting the safety and capacity of worldwide aviation operations. Synthetic Vision Systems (SVS) technology offers a direct solution to this visibility problem. These displays employ computer-generated terrain imagery to present 3D, perspective out-the-window scenes with sufficient information and realism to enable operations equivalent to those of a bright, clear day, regardless of weather conditions. To introduce SVS display technology into as many existing aircraft as possible, a retrofit approach was defined that employs existing head-down display (HDD) capabilities for glass cockpits and HUD capabilities for other aircraft. This retrofit approach was evaluated for typical nighttime airline operations at a major international airport. Overall, 6 evaluation pilots performed 75 research approaches, accumulating 18 hours of flight time evaluating SVS display concepts aboard NASA LaRC's Boeing B-757-200 aircraft at Dallas/Fort Worth International Airport. Results from this flight test establish the SVS retrofit concept, regardless of display size, as viable for the tested conditions. Future assessments need to extend evaluation of the approach to operations in an appropriate, terrain-challenged environment with daytime test conditions.
A geometric approach to non-linear correlations with intrinsic scatter
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2017-12-01
We propose a new mathematical model for (n - k)-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known MBH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.
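For readers who want a concrete baseline, the sketch below fits a 2D linear relation with intrinsic scatter by maximum likelihood using the familiar Euclidean formulation, not the Riemannian/Bayesian machinery of the paper; the synthetic data, error bars, and starting values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 5*x - 3 with 0.3 dex intrinsic scatter plus measurement errors.
rng = np.random.default_rng(0)
x_true = rng.uniform(1.8, 2.6, 40)
y_true = 5.0 * x_true - 3.0 + rng.normal(0.0, 0.3, 40)
sx = np.full(40, 0.05)
sy = np.full(40, 0.10)
x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, sy)

def neg_log_like(theta):
    a, b, ln_s = theta
    # Total variance per point: y-error, projected x-error, and intrinsic scatter.
    var = sy**2 + (a * sx)**2 + np.exp(2.0 * ln_s)
    return 0.5 * np.sum((y - a * x - b)**2 / var + np.log(2.0 * np.pi * var))

fit = minimize(neg_log_like, x0=[4.0, 0.0, np.log(0.2)], method="Nelder-Mead")
a_hat, b_hat, s_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"slope={a_hat:.2f}, intercept={b_hat:.2f}, intrinsic scatter={s_hat:.2f}")
```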
Implementation of Instrumental Variable Bounds for Data Missing Not at Random.
Marden, Jessica R; Wang, Linbo; Tchetgen, Eric J Tchetgen; Walter, Stefan; Glymour, M Maria; Wirth, Kathleen E
2018-05-01
Instrumental variables are routinely used to recover a consistent estimator of an exposure causal effect in the presence of unmeasured confounding. Instrumental variable approaches to account for nonignorable missing data also exist but are less familiar to epidemiologists. Like instrumental variables for exposure causal effects, instrumental variables for missing data rely on exclusion restriction and instrumental variable relevance assumptions. Yet these two conditions alone are insufficient for point identification. For estimation, researchers have invoked a third assumption, typically involving fairly restrictive parametric constraints. Inferences can be sensitive to these parametric assumptions, which are typically not empirically testable. The purpose of our article is to discuss another approach for leveraging a valid instrumental variable. Although the approach is insufficient for nonparametric identification, it can nonetheless provide informative inferences about the presence, direction, and magnitude of selection bias, without invoking a third untestable parametric assumption. An important contribution of this article is an Excel spreadsheet tool that can be used to obtain empirical evidence of selection bias and calculate bounds and corresponding Bayesian 95% credible intervals for a nonidentifiable population proportion. For illustrative purposes, we used the spreadsheet tool to analyze HIV prevalence data collected by the 2007 Zambia Demographic and Health Survey (DHS).
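To make the idea of partial identification concrete, the sketch below computes the simplest no-assumption (Manski-type) bounds for a prevalence with nonignorable missingness; these are not the instrumental-variable bounds or the Excel spreadsheet tool described in the article, and the counts are invented for illustration.

```python
def prevalence_bounds(n_pos, n_neg, n_missing):
    """Worst-case bounds on a true prevalence when some outcomes are unobserved."""
    n_total = n_pos + n_neg + n_missing
    lower = n_pos / n_total                  # assume every missing outcome is negative
    upper = (n_pos + n_missing) / n_total    # assume every missing outcome is positive
    return lower, upper

# Illustrative counts (assumed, not taken from the Zambia DHS data):
lo, hi = prevalence_bounds(n_pos=900, n_neg=5100, n_missing=1500)
print(f"prevalence bounded between {lo:.3f} and {hi:.3f}")
```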
Fiber Contraction Approaches for Improving CMC Proportional Limit
NASA Technical Reports Server (NTRS)
DiCarlo, James A.; Yun, Hee Mann
1997-01-01
The fact that the service life of ceramic matrix composites (CMC) decreases dramatically for stresses above the CMC proportional limit has triggered a variety of research activities to develop microstructural approaches that can significantly improve this limit. As discussed in a previous report, both local and global approaches exist for hindering the propagation of cracks through the CMC matrix, the physical source of the proportional limit. Local approaches include: (1) minimizing fiber diameter and matrix modulus; (2) maximizing fiber volume fraction, fiber modulus, and matrix toughness; and (3) optimizing fiber-matrix interfacial shear strength; all of which should reduce the stress concentration at the tip of cracks pre-existing or created in the matrix during CMC service. Global approaches, as with pre-stressed concrete, center on seeking mechanisms for utilizing the reinforcing fiber to subject the matrix to in-situ compressive stresses that remain stable during CMC service. Demonstrated CMC examples of the viability of this residual stress approach are based on strain mismatches between the fiber and matrix in their free states, such as thermal expansion mismatch and creep mismatch. However, these particular mismatch approaches are application-limited in that the residual stresses from expansion mismatch are optimal only at low CMC service temperatures, and the residual stresses from creep mismatch are typically unidirectional and difficult to implement in complex-shaped CMCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich
2017-10-28
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. A large number of hourglass control/suppression strategies exist. In the community of staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause problems in certain types of simulations, for instance simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach to the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare them with the method of sub-zonal pressure forces on selected numerical problems.
Short, Hilary; Stafinski, Tania; Menon, Devidas
2015-05-01
Regardless of the type of health system or payer, coverage decisions on drugs for rare diseases (DRDs) are challenging. While these drugs typically represent the only active treatment option for a progressive and/or life-threatening condition, evidence of clinical benefit is often limited because of small patient populations, and the costs are high. Thus, decisions come with considerable uncertainty and risk. In Canada, there is interest in developing a pan-Canadian decision-making approach informed by international experiences. The objective of this study was to develop an inventory of existing policies and processes for making coverage decisions on DRDs around the world. A systematic review of published and unpublished documents describing current policies and processes in the top 20 gross-domestic-product countries was conducted. Bibliographic databases, the Internet, and government/health technology assessment organization websites in each country were searched. Two researchers independently extracted information and tabulated it to facilitate qualitative comparative analyses. Policy experts from each country were contacted and asked to review the information collected for accuracy and completeness. Almost all countries have multiple mechanisms through which coverage for a DRD may be sought. However, they typically begin with a review that follows the same process as drugs for more common conditions (i.e., the centralized review process), although specific submission requirements may differ (e.g., no need to submit a cost-effectiveness analysis). When drugs fail to receive a positive recommendation/decision, they are reconsidered by "safety net"-type programs. Eligibility criteria vary across countries, as do the decision options, which may be applied to individual patients or patient groups. With few exceptions, countries have not created separate centralized review processes for DRDs. Instead, they have modified components of existing mechanisms and added safety nets. Copyright © 2015 Longwoods Publishing.
Understanding the public's health problems: applications of symbolic interaction to public health.
Maycock, Bruce
2015-01-01
Public health has typically investigated health issues using methods from the positivistic paradigm. Yet these approaches, although able to quantify the problem, may not be able to explain the social reasons why the problem exists or its impact on those affected. This article provides a brief overview of a sociological theory whose methods and theoretical framework have proven useful in understanding public health problems and developing interventions. © 2014 APJPH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.
2016-07-28
Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex in accurately determining the pure contribution of ferroelectricity to the PFM signal. We suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. This combined experimental and theoretical study verifies that the method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.
How Sensory Experiences of Children With and Without Autism Affect Family Occupations
Bagby, Molly Shields; Dickie, Virginia A.; Baranek, Grace T.
2012-01-01
We used a grounded theory approach to data analysis to discover what effect, if any, children's sensory experiences have on family occupations. We chose this approach because the existing literature does not provide a theory to account for the effect of children's sensory experiences on family occupations. Parents of six children who were typically developing and six children who had autism were interviewed. We analyzed the data using open, axial, and selective coding techniques. Children's sensory experiences affect family occupations in three ways: (1) what a family chooses to do or not do; (2) how the family prepares; and (3) the extent to which experiences, meaning, and feelings are shared. PMID:22389942
Modeling evaporation from spent nuclear fuel storage pools: A diffusion approach
NASA Astrophysics Data System (ADS)
Hugo, Bruce Robert
Accurate prediction of evaporative losses from light water reactor nuclear power plant (NPP) spent fuel storage pools (SFPs) is important for activities ranging from sizing of water makeup systems during NPP design to predicting the time available to supply emergency makeup water following severe accidents. Existing correlations for predicting evaporation from water surfaces are only optimized for conditions typical of swimming pools. This new approach modeling evaporation as a diffusion process has yielded an evaporation rate model that provided a better fit of published high temperature evaporation data and measurements from two SFPs than other published evaporation correlations. Insights from treating evaporation as a diffusion process include correcting for the effects of air flow and solutes on evaporation rate. An accurate modeling of the effects of air flow on evaporation rate is required to explain the observed temperature data from the Fukushima Daiichi Unit 4 SFP during the 2011 loss of cooling event; the diffusion model of evaporation provides a significantly better fit to this data than existing evaporation models.
Strategy Guideline: Quality Management in Existing Homes; Cantilever Floor Example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taggart, J.; Sikora, J.; Wiehagen, J.
2011-12-01
This guideline is designed to highlight the QA process that can be applied to any residential building retrofit activity. The cantilevered floor retrofit detailed in this guideline is included only to provide an actual retrofit example to better illustrate the QA activities being presented. The goal of existing home high performing remodeling quality management systems (HPR-QMS) is to establish practices and processes that can be used throughout any remodeling project. The research presented in this document provides a comparison of a selected retrofit activity as typically done versus that same retrofit activity approached from an integrated high performance remodeling and quality management perspective. It highlights some key quality management tools and approaches that can be adopted incrementally by a high performance remodeler for this or any high performance retrofit. This example is intended as a template and establishes a methodology that can be used to develop a portfolio of high performance remodeling strategies.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing approach to composing geographical information processing service chains searches for an optimal solution for each task in what can be deemed a "selfish" way. This leads to conflicts among concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utility of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility in all tasks.
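The following toy sketch illustrates the general mechanics of an iterative best-response procedure converging to a Nash equilibrium for two concurrent tasks competing over shared services; the cost structure is an assumption for illustration and is not the utility model of the paper.

```python
BASE_COST = {"s1": 1.0, "s2": 1.5}     # assumed standalone service-composition costs
CONGESTION = 0.8                       # assumed penalty when both tasks use the same service

def cost(my_choice, other_choice):
    return BASE_COST[my_choice] + (CONGESTION if my_choice == other_choice else 0.0)

def best_response(other_choice):
    # Pick the service that minimizes this task's cost given the other task's strategy.
    return min(BASE_COST, key=lambda s: cost(s, other_choice))

choices = ["s1", "s1"]                 # the "selfish" start: both tasks grab the cheapest service
for _ in range(20):
    updated = False
    for i in (0, 1):                   # sequential (one task at a time) best-response updates
        br = best_response(choices[1 - i])
        if br != choices[i]:
            choices[i] = br
            updated = True
    if not updated:                    # no task wants to deviate: Nash equilibrium reached
        break

print("equilibrium assignment:", choices)   # the tasks end up on different services
```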
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
The Data Fusion Modeling (DFM) approach has been used to develop a groundwater flow and transport model of the Old Burial Grounds (OBG) at the US Department of Energy's Savannah River Site (SRS). The resulting DFM model was compared to an existing model that was calibrated via the typical trial-and-error method. The OBG was chosen because a substantial amount of hydrogeologic information is available, a FACT (derivative of VAM3DCG) flow and transport model of the site exists, and the calibration and numerics were challenging with standard approaches. The DFM flow model developed here is similar to the flow model by Flach et al. This allows comparison of the two flow models and validates the utility of DFM. The contaminant of interest for this study is tritium, because it is a geochemically conservative tracer that has been monitored along the seepline near the F-Area effluent and Fourmile Branch for several years.
Population genetic testing for cancer susceptibility: founder mutations to genomes.
Foulkes, William D; Knoppers, Bartha Maria; Turnbull, Clare
2016-01-01
The current standard model for identifying carriers of high-risk mutations in cancer-susceptibility genes (CSGs) generally involves a process that is not amenable to population-based testing: access to genetic tests is typically regulated by health-care providers on the basis of a labour-intensive assessment of an individual's personal and family history of cancer, with face-to-face genetic counselling performed before mutation testing. Several studies have shown that application of these selection criteria results in a substantial proportion of mutation carriers being missed. Population-based genetic testing has been proposed as an alternative approach to determining cancer susceptibility, and aims for a more-comprehensive detection of mutation carriers. Herein, we review the existing data on population-based genetic testing, and consider some of the barriers, pitfalls, and challenges related to the possible expansion of this approach. We consider mechanisms by which population-based genetic testing for cancer susceptibility could be delivered, and suggest how such genetic testing might be integrated into existing and emerging health-care structures. The existing models of genetic testing (including issues relating to informed consent) will very likely require considerable alteration if the potential benefits of population-based genetic testing are to be fully realized.
HIV prevention and the two faces of partner notification.
Bayer, R; Toomey, K E
1992-01-01
In the cases of medical patients with sexually transmitted diseases (particularly those with the human immunodeficiency virus), two distinct approaches exist to notifying sexual and/or needle-sharing partners of possible risk. Each approach has its own history (including unique practical problems of implementation) and provokes its own ethical dilemmas. The first approach--the moral "duty to warn"--arose out of clinical situations in which a physician knew the identity of a person deemed to be at risk. The second approach--that of contact tracing--emerged from sexually transmitted disease control programs in which the clinician typically did not know the identity of those who might have been exposed. Confusion between the two approaches has led many to mistake processes that are fundamentally voluntary as mandatory and those that respect confidentiality as invasive of privacy. In the context of the AIDS epidemic and the vicissitudes of the two approaches, we describe the complex problems of partner notification and underscore the ethical and political contexts within which policy decisions have been made. PMID:1304728
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
NASA Technical Reports Server (NTRS)
Benardini, James N.; Koukol, Robert C.; Schubert, Wayne W.; Morales, Fabian; Klatte, Marlin F.
2012-01-01
A report describes an adaptation of a filter assembly to enable it to be used to filter out microorganisms from a propulsion system. The filter assembly has previously been used for particulates greater than 2 micrometers. Projects that utilize large volumes of nonmetallic materials of planetary protection concern pose a challenge to their bioburden budget, as a conservative specification value of 30 spores per cubic centimeter is typically used. Helium was collected utilizing an adapted filtration approach employing an existing Millipore filter assembly apparatus used by the propulsion team for particulate analysis. The filter holder on the assembly has a 47-mm diameter, and typically a 1.2-5 micrometer pore-size filter is used for particulate analysis making it compatible with commercially available sterilization filters (0.22 micrometers) that are necessary for biological sampling. This adaptation to an existing technology provides a proof-of-concept and a demonstration of successful use in a ground equipment system. This adaptation has demonstrated that the Millipore filter assembly can be utilized to filter out microorganisms from a propulsion system, whereas in previous uses the filter assembly was utilized for particulates greater than 2 micrometers.
NASA Astrophysics Data System (ADS)
Yin, J. J.; Chang, F.; Li, S. L.; Yao, X. L.; Sun, J. R.; Xiao, Y.
2017-12-01
To clarify the evolution of damage in typical carbon woven fabric/epoxy laminates exposed to lightning strike, artificial lightning tests on carbon woven fabric/epoxy laminates were conducted, and damage was assessed using visual inspection and damage peeling approaches. Relationships between damage size and action integral were also elucidated. Results showed that the damage appearance of a carbon woven fabric/epoxy laminate follows a roughly circular distribution centred approximately at the lightning attachment point, with no dislocation of the projected damage areas between layers, so the visible damage territory represents the maximum damage scope. Visible damage can be categorized into five modes: resin ablation, fiber fracture and sublimation, delamination, ablation scallops, and block-shaped ply-lift; delamination damage due to resin pyrolysis and internal pressure shows obvious distinctions. The projected area of total damage is linear with the action integral for specimens of the same type, and that of resin ablation damage is linear with the action integral but shows no correlation with specimen type; for all specimens, damage depth is linear with the logarithm of the action integral. The coupled thermal-electrical model constructed is capable of simulating the ablation damage of carbon woven fabric/epoxy laminates exposed to simulated lightning current, as confirmed by experimental verification.
Lim, Regine M; Silver, Ari J; Silver, Maxwell J; Borroto, Carlos; Spurrier, Brett; Petrossian, Tanya C; Larson, Jessica L; Silver, Lee M
2016-02-01
Carrier screening for mutations contributing to cystic fibrosis (CF) is typically accomplished with panels composed of variants that are clinically validated primarily in patients of European descent. This approach has created a static genetic and phenotypic profile for CF. An opportunity now exists to reevaluate the disease profile of CFTR at a global population level. CFTR allele and genotype frequencies were obtained from a nonpatient cohort with more than 60,000 unrelated personal genomes collected by the Exome Aggregation Consortium. Likely disease-contributing mutations were identified with the use of public database annotations and computational tools. We identified 131 previously described and likely pathogenic variants and another 210 untested variants with a high probability of causing protein damage. None of the current genetic screening panels or existing CFTR mutation databases covered a majority of deleterious variants in any geographical population outside of Europe. Both clinical annotation and mutation coverage by commercially available targeted screening panels for CF are strongly biased toward detection of reproductive risk in persons of European descent. South and East Asian populations are severely underrepresented, in part because of a definition of disease that preferences the phenotype associated with European-typical CFTR alleles.
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Learning patterns of life from intelligence analyst chat
NASA Astrophysics Data System (ADS)
Schneider, Michael K.; Alford, Mark; Babko-Malaya, Olga; Blasch, Erik; Chen, Lingji; Crespi, Valentino; HandUber, Jason; Haney, Phil; Nagy, Jim; Richman, Mike; Von Pless, Gregory; Zhu, Howie; Rhodes, Bradley J.
2016-05-01
Our Multi-INT Data Association Tool (MIDAT) learns patterns of life (POL) of a geographical area from video analyst observations called out in textual reporting. Typical approaches to learning POLs from video use computer vision algorithms to extract the locations in space and time of various activities, and are therefore subject to the detection and tracking performance of the video processing algorithms. Numerous examples exist of human analysts monitoring live video streams and annotating or "calling out" relevant entities and activities, such as security analysis, crime-scene forensics, news reports, and sports commentary. These descriptions are typically captured as text, such as chat. Although the purpose of these text products is primarily to describe events as they happen, organizations typically archive the reports for extended periods. This archive provides a basis for building POLs. Such POLs are useful for diagnosis, assessing activities in an area against historical context, and for consumers of the products, who gain an understanding of historical patterns. MIDAT combines natural language processing, multi-hypothesis tracking, and Multi-INT Activity Pattern Learning and Exploitation (MAPLE) technologies in an end-to-end lab prototype that processes textual products produced by video analysts, infers POLs, and highlights anomalies relative to those POLs with links to "tracks" of related activities performed by the same entity. MIDAT technologies perform well, achieving, for example, a 90% F1-value on extracting activities from the textual reports.
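As a minimal, purely illustrative sketch of building a pattern of life from analyst call-outs (regex-based activity extraction plus hourly counts, nothing like the full MIDAT NLP/MAPLE pipeline), consider:

```python
import re
from collections import Counter

# Assumed example chat lines in "HH:MM free-text activity" form (invented for illustration).
reports = [
    "08:05 white van arrives at gate 3",
    "08:02 white van arrives at gate 3",
    "08:10 white van arrives at gate 3",
    "23:55 white van arrives at gate 3",
]

pattern = re.compile(r"^(\d{2}):\d{2}\s+(.*\b(?:arrives|enters|departs)\b.*)$")
counts = Counter()
for line in reports:
    m = pattern.match(line)
    if m:
        hour, activity = int(m.group(1)), m.group(2)
        counts[(hour, activity)] += 1      # hour-of-day histogram per activity string

# A crude anomaly rule: call-outs whose (hour, activity) bucket has no historical support.
anomalies = [key for key, n in counts.items() if n == 1]
print("pattern of life:", dict(counts))
print("candidate anomalies:", anomalies)
```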
A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.
Gui, Jinsong; Zhou, Kai; Xiong, Naixue
2016-09-25
Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, making it an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems.
Rodriguez Arreola, Alberto; Balsamo, Domenico; Merrett, Geoff V; Weddell, Alex S
2018-01-10
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor's state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, a flexible middleware that retains the state of multiple external peripherals connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, keeping a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions and imposing a maximum time overhead of 15% when configuring a peripheral. This represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches.
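The general idea of journaling configuration writes so they can be replayed after a power interruption can be sketched as below; this is a conceptual illustration only, not RESTOP's actual interface, and the bus function, register addresses, and journal store are assumptions.

```python
class PeripheralStateKeeper:
    """Record register writes to a peripheral so its configuration can be replayed."""

    def __init__(self, bus_write, journal):
        self.bus_write = bus_write   # e.g. a function performing one SPI/I2C register write
        self.journal = journal       # assumed non-volatile key/value store (dict-like,
                                     # insertion-ordered so replay order matches write order)

    def write_register(self, reg, value):
        self.journal[reg] = value    # remember the last value written to each register
        self.bus_write(reg, value)   # forward the write to the real peripheral

    def restore(self):
        """Replay the journaled configuration after a power cycle."""
        for reg, value in self.journal.items():
            self.bus_write(reg, value)

# Usage with a stand-in bus function that just logs the transfers.
log = []
keeper = PeripheralStateKeeper(lambda reg, val: log.append((reg, val)), journal={})
keeper.write_register(0x20, 0x57)    # hypothetical register/value pair for a sensor setup
keeper.restore()                     # after a brown-out, re-apply the same configuration
print(log)                           # [(32, 87), (32, 87)]
```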
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This case study describes the External Thermal and Moisture Management System developed by the NorthernSTAR Building America Partnership. This system is typically used in deep energy retrofits and is a valuable approach for the roof-only portions of existing homes, particularly the 1 1/2-story home. It is effective in reducing energy loss through the building envelope, improving building durability, reducing ice dams, and providing opportunities to improve occupant comfort and health.
Teleoperator technology and system development, volume 1
NASA Technical Reports Server (NTRS)
1972-01-01
A two phase approach was undertaken to: (1) evaluate the performance of a general-purpose anthropomorphic manipulator with various controllers and display arrangements, (2) identify basic technical limitations of existing teleoperator designs, and associated controls and displays, and (3) identify, through experimentation, the effects that controls and displays have on the performance of an anthropomorphic manipulator. In Phase 1 the NASA-furnished manipulators, controls and displays were integrated with the remote maneuvering unit; in Phase 2 experiments were defined and performed to assess the utility of teleoperators for 6 typical space inspection, maintenance and repair tasks.
A new model-independent approach for finding the arrival direction of an extensive air shower
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedayati, H. Kh., E-mail: hedayati@kntu.ac.ir
2016-11-01
A new, accurate method for reconstructing the arrival direction of an extensive air shower (EAS) is described. Compared to existing methods, it does not rely on minimizing a function and is therefore fast and stable. The method also does not require detailed knowledge of the curvature or thickness structure of an EAS. It can achieve an angular resolution of about 1 degree for a typical surface array in central regions, and it has better angular resolution than other methods in the marginal area of arrays.
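For context, the conventional baseline that such methods are usually compared against is a least-squares fit of a planar shower front to the detector trigger times; the sketch below shows that baseline (it is not the model-independent method of the paper), with detector coordinates and timing invented for a synthetic check.

```python
import numpy as np

C = 299_792_458.0                      # speed of light, m/s

def plane_front_direction(x, y, t):
    # Model: t = t0 + (u*x + v*y)/c, with (u, v) the horizontal direction cosines.
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    u, v = coef[0] * C, coef[1] * C
    theta = np.degrees(np.arcsin(np.clip(np.hypot(u, v), 0.0, 1.0)))   # zenith angle
    phi = np.degrees(np.arctan2(v, u)) % 360.0                          # azimuth
    return theta, phi

# Synthetic check: a shower with theta = 30 deg, phi = 45 deg over a small square array.
theta0, phi0 = np.radians(30.0), np.radians(45.0)
u0, v0 = np.sin(theta0) * np.cos(phi0), np.sin(theta0) * np.sin(phi0)
x = np.array([0.0, 100.0, 0.0, 100.0])
y = np.array([0.0, 0.0, 100.0, 100.0])
t = (u0 * x + v0 * y) / C + 1e-3       # arbitrary reference time t0 = 1 ms
print(plane_front_direction(x, y, t))  # ~ (30.0, 45.0)
```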
Flight evaluation of LORAN-C in the State of Vermont
NASA Technical Reports Server (NTRS)
Mackenzie, F. D.; Lytle, C. D.
1981-01-01
A flight evaluation of LORAN-C as a supplement to existing navigation aids for general aviation aircraft was conducted, with emphasis on mountainous regions of the United States and areas where VOR coverage is limited. Flights, initiated in the summer months, extended through four seasons and practically all weather conditions typical of northeastern U.S. operations. Assessment of all available data indicates that LORAN-C signals are suitable as a means of navigation during enroute, terminal, and nonprecision approach operations, and that performance exceeds the minimum accuracy criteria.
Intacs for early pellucid marginal degeneration.
Kymionis, George D; Aslanides, Ioannis M; Siganos, Charalambos S; Pallikaris, Ioannis G
2004-01-01
A 42-year-old man had Intacs (Addition Technology Inc.) implantation for early pellucid marginal degeneration (PMD). Two Intacs segments (0.45 mm thickness) were inserted uneventfully in the fashion typically used for low myopia correction (nasal-temporal). Eleven months after the procedure, the uncorrected visual acuity was 20/200, compared with counting fingers preoperatively, while the best spectacle-corrected visual acuity improved to 20/25 from 20/50. Corneal topographic pattern also improved. Although the results are encouraging, concern still exists regarding the long-term effect of this approach for the management of patients with PMD.
Initiating an undiagnosed diseases program in the Western Australian public health system.
Baynam, Gareth; Broley, Stephanie; Bauskis, Alicia; Pachter, Nicholas; McKenzie, Fiona; Townshend, Sharron; Slee, Jennie; Kiraly-Borri, Cathy; Vasudevan, Anand; Hawkins, Anne; Schofield, Lyn; Helmholz, Petra; Palmer, Richard; Kung, Stefanie; Walker, Caroline E; Molster, Caron; Lewis, Barry; Mina, Kym; Beilby, John; Pathak, Gargi; Poulton, Cathryn; Groza, Tudor; Zankl, Andreas; Roscioli, Tony; Dinger, Marcel E; Mattick, John S; Gahl, William; Groft, Stephen; Tifft, Cynthia; Taruscio, Domenica; Lasko, Paul; Kosaki, Kenjiro; Wilhelm, Helene; Melegh, Bela; Carapetis, Jonathan; Jana, Sayanta; Chaney, Gervase; Johns, Allison; Owen, Peter Wynn; Daly, Frank; Weeramanthri, Tarun; Dawkins, Hugh; Goldblatt, Jack
2017-05-03
New approaches are required to address the needs of complex undiagnosed diseases patients. These approaches include clinical genomic diagnostic pipelines, utilizing intra- and multi-disciplinary platforms, as well as specialty-specific genomic clinics. Both are advancing diagnostic rates. However, complementary cross-disciplinary approaches are also critical to address those patients with multisystem disorders who traverse the bounds of multiple specialties and remain undiagnosed despite existing intra-specialty and genomic-focused approaches. The diagnostic possibilities of undiagnosed diseases include genetic and non-genetic conditions. The focus on genetic diseases addresses some of these disorders, however a cross-disciplinary approach is needed that also simultaneously addresses other disorder types. Herein, we describe the initiation and summary outcomes of a public health system approach for complex undiagnosed patients - the Undiagnosed Diseases Program-Western Australia (UDP-WA). Briefly the UDP-WA is: i) one of a complementary suite of approaches that is being delivered within health service, and with community engagement, to address the needs of those with severe undiagnosed diseases; ii) delivered within a public health system to support equitable access to health care, including for those from remote and regional areas; iii) providing diagnoses and improved patient care; iv) delivering a platform for in-service and real time genomic and phenomic education for clinicians that traverses a diverse range of specialties; v) retaining and recapturing clinical expertise; vi) supporting the education of junior and more senior medical staff; vii) designed to integrate with clinical translational research; and viii) is supporting greater connectedness for patients, families and medical staff. The UDP-WA has been initiated in the public health system to complement existing clinical genomic approaches; it has been targeted to those with a specific diagnostic need, and initiated by redirecting existing clinical and financial resources. The UDP-WA supports the provision of equitable and sustainable diagnostics and simultaneously supports capacity building in clinical care and translational research, for those with undiagnosed, typically rare, conditions.
Teaching the Mantle Plumes Debate
NASA Astrophysics Data System (ADS)
Foulger, G. R.
2010-12-01
There is an ongoing debate regarding whether or not mantle plumes exist. This debate has highlighted a number of issues regarding how Earth science is currently practised, and how this feeds into approaches toward teaching students. The plume model is an hypothesis, not a proven fact. And yet many researchers assume a priori that plumes exist. This assumption feeds into teaching. That the plume model is unproven, and that many practising researchers are skeptical, may be at best only mentioned in passing to students, with most teachers assuming that plumes are proven to exist. There is typically little emphasis, in particular in undergraduate teaching, that the origin of melting anomalies is currently uncertain and that scientists do not know all the answers. Little encouragement is given to students to become involved in the debate and to consider the pros and cons for themselves. Typically teachers take the approach that “an answer” (or even “the answer”) must be taught to students. Such a pedagogic approach misses an excellent opportunity to allow students to participate in an important ongoing debate in Earth sciences. It also misses the opportunity to illustrate to students several critical aspects regarding correct application of the scientific method. The scientific method involves attempting to disprove hypotheses, not to prove them. A priori assumptions should be kept uppermost in mind and reconsidered at all stages. Multiple working hypotheses should be entertained. The predictions of a hypothesis should be tested, and unpredicted observations taken as weakening the original hypothesis. Hypotheses should not be endlessly adapted to fit unexpected observations. The difficulty with pedagogic treatment of the mantle plumes debate highlights a general uncertainty about how to teach issues in Earth science that are not yet resolved with certainty. It also represents a missed opportunity to let students experience how scientific theories evolve, warts and all. Working with students to enable them to participate in the evolution of the subject and to share in the excitement of major developments is surely the best way to attract them to science.
First-principles calculations of lattice dynamics and thermal properties of polar solids
Wang, Yi; Shang, Shun-Li; Fang, Huazhi; ...
2016-05-13
Although the theory of lattice dynamics was established six decades ago, its accurate implementation for polar solids using the direct (or supercell, small displacement, frozen phonon) approach within the framework of density-functional-theory-based first-principles calculations had been a challenge until recently. The difficulty arises from the fact that the vibration-induced polarization breaks the lattice periodicity, whereas periodic boundary conditions are required by typical first-principles calculations, leading to an artificial macroscopic electric field. The article reviews a mixed-space approach to treating the interactions between lattice vibration and polarization, its applications to accurately predicting phonon and associated thermal properties, and its implementation in a number of existing phonon codes.
An analytic approach to cyber adversarial dynamics
NASA Astrophysics Data System (ADS)
Sweeney, Patrick; Cybenko, George
2012-06-01
To date, cyber security investment by both the government and commercial sectors has been largely driven by the myopic best response of players to the actions of their adversaries and their perception of the adversarial environment. However, current work in applying traditional game theory to cyber operations typically assumes that games exist with prescribed moves, strategies, and payoffs. This paper presents an analytic approach to characterizing the more realistic cyber adversarial metagame that we believe is being played. Examples show that understanding the dynamic metagame provides opportunities to exploit an adversary's anticipated attack strategy. A dynamic version of a graph-based attack-defend game is introduced, and a simulation shows how an optimal strategy can be selected for success in the dynamic environment.
Schaefbauer, Chris L; Campbell, Terrance R; Senteio, Charles; Siek, Katie A; Bakken, Suzanne; Veinot, Tiffany C
2016-01-01
Objective: We compare 5 health informatics research projects that applied community-based participatory research (CBPR) approaches with the goal of extending existing CBPR principles to address issues specific to health informatics research. Materials and methods: We conducted a cross-case analysis of 5 diverse case studies with 1 common element: integration of CBPR approaches into health informatics research. After reviewing publications and other case-related materials, all coauthors engaged in collaborative discussions focused on CBPR. Researchers mapped each case to an existing CBPR framework, examined each case individually for success factors and barriers, and identified common patterns across cases. Results: Benefits of applying CBPR approaches to health informatics research across the cases included the following: developing more relevant research with wider impact, greater engagement with diverse populations, improved internal validity, more rapid translation of research into action, and the development of people. Challenges of applying CBPR to health informatics research included requirements to develop strong, sustainable academic-community partnerships and mismatches related to cultural and temporal factors. Several technology-related challenges, including needs to define ownership of technology outputs and to build technical capacity with community partners, also emerged from our analysis. Finally, we created several principles that extended an existing CBPR framework to specifically address health informatics research requirements. Conclusions: Our cross-case analysis yielded valuable insights regarding CBPR implementation in health informatics research and identified valuable lessons useful for future CBPR-based research. The benefits of applying CBPR approaches can be significant, particularly in engaging populations that are typically underserved by health care and in designing patient-facing technology. PMID:26228766
Detail view illustrating existing (typical) sidewalks and street trees within ...
Detail view illustrating existing (typical) sidewalks and street trees within the Vale Historic District - Vale Commercial Historic District, A Street between Holland & Longfellow Streets, north side of B Street between Holland & Main Streets, Main Street South from A Street through B Street, & Stone House at 283 Main Street South, Vale, Malheur County, OR
Heterogeneous patterns of brain atrophy in Alzheimer's disease.
Poulakis, Konstantinos; Pereira, Joana B; Mecocci, Patrizia; Vellas, Bruno; Tsolaki, Magda; Kłoszewska, Iwona; Soininen, Hilkka; Lovestone, Simon; Simmons, Andrew; Wahlund, Lars-Olof; Westman, Eric
2018-05-01
There is increasing evidence showing that brain atrophy varies between patients with Alzheimer's disease (AD), suggesting that different anatomical patterns might exist within the same disorder. We investigated AD heterogeneity based on cortical and subcortical atrophy patterns in 299 AD subjects from 2 multicenter cohorts. Clusters of patients and important discriminative features were determined using random forest pairwise similarity, multidimensional scaling, and distance-based hierarchical clustering. We discovered 2 typical (72.2%) and 3 atypical (28.8%) subtypes with significantly different demographic, clinical, and cognitive characteristics, and different rates of cognitive decline. In contrast to previous studies, our unsupervised random forest approach based on cortical and subcortical volume measures and their linear and nonlinear interactions revealed more typical AD subtypes with important anatomically discriminative features, while the prevalence of atypical cases was lower. The hippocampal-sparing and typical AD subtypes exhibited worse clinical progression in visuospatial, memory, and executive cognitive functions. Our findings suggest there is substantial heterogeneity in AD that has an impact on how patients function and progress over time. Copyright © 2018 Elsevier Inc. All rights reserved.
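A stripped-down version of the clustering step, using ordinary Euclidean distances and Ward linkage on a synthetic volume matrix rather than the random-forest pairwise similarity of the paper, might look like this (all data are simulated for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Simulated (subjects x regions) matrix of normalised regional brain volumes:
# one diffusely atrophied group and one group with a relatively spared first region.
rng = np.random.default_rng(42)
typical = rng.normal(loc=-1.0, scale=0.3, size=(60, 10))
sparing = rng.normal(loc=0.0, scale=0.3, size=(20, 10))
sparing[:, 0] += 1.0
volumes = np.vstack([typical, sparing])

dist = pdist(volumes, metric="euclidean")      # pairwise subject distances
tree = linkage(dist, method="ward")            # distance-based hierarchical clustering
labels = fcluster(tree, t=2, criterion="maxclust")   # request 2 candidate subtypes
print("subtype sizes:", np.bincount(labels)[1:])
```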
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.
Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.; Alis, C.
2016-06-01
In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
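A minimal sketch of the map-function ingestion pattern, assuming PySpark and the single-threaded laspy reader are available on the worker nodes and that the LAS files sit under a shared path (the path and application name are placeholders), could look like:

```python
from pyspark.sql import SparkSession
import glob
import laspy

def read_points(path):
    """Read one LAS file with a conventional single-threaded reader (laspy 2.x API)."""
    las = laspy.read(path)
    return list(zip(las.x, las.y, las.z))      # plain (x, y, z) tuples

spark = SparkSession.builder.appName("las-ingest").getOrCreate()
files = glob.glob("/data/pointclouds/*.las")    # assumed shared input location

# The existing file-format library runs inside the map function, so ingestion
# scales with the number of executors instead of a single CPU.
rdd = spark.sparkContext.parallelize(files, max(1, len(files))).flatMap(read_points)
print("total points ingested:", rdd.count())
```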
Neuroscience and Global Learning
Ruscio, Michael G.; Korey, Chris; Birck, Anette
2015-01-01
Traditional study abroad experiences take a variety of forms with most incorporating extensive cultural emersion and a focus on global learning skills. Here we ask the question: Can this type of experience co-exist with a quality scientific experience and continued progression through a typically rigorous undergraduate neuroscience curriculum? What are the potential costs and benefits of this approach? How do we increase student awareness of study abroad opportunities and inspire them to participate? We outline programs that have done this with some success and point out ways to cultivate this approach for future programs. These programs represent a variety of approaches in both their duration and role in a given curriculum. We discuss a one-week first year seminar program in Berlin, a summer study abroad course in Munich and Berlin, semester experiences and other options offered through the Danish Institute for Study Abroad in Copenhagen. Each of these experiences offers opportunities for interfacing global learning with neuroscience. PMID:26240528
Direct Maximization of Protein Identifications from Tandem Mass Spectra*
Spivak, Marina; Weston, Jason; Tomazela, Daniela; MacCoss, Michael J.; Noble, William Stafford
2012-01-01
The goal of many shotgun proteomics experiments is to determine the protein complement of a complex biological mixture. For many mixtures, most methodological approaches fall significantly short of this goal. Existing solutions to this problem typically subdivide the task into two stages: first identifying a collection of peptides with a low false discovery rate and then inferring from the peptides a corresponding set of proteins. In contrast, we formulate the protein identification problem as a single optimization problem, which we solve using machine learning methods. This approach is motivated by the observation that the peptide and protein level tasks are cooperative, and the solution to each can be improved by using information about the solution to the other. The resulting algorithm directly controls the relevant error rate, can incorporate a wide variety of evidence and, for complex samples, provides 18–34% more protein identifications than the current state of the art approaches. PMID:22052992
Integration of prior knowledge into dense image matching for video surveillance
NASA Astrophysics Data System (ADS)
Menze, M.; Heipke, C.
2014-08-01
Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups, they do not easily generalize to more challenging camera configurations. In the context of video surveillance, the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.
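To show how a planar approximation can act as a prior on the disparity search, the sketch below adds a deviation penalty to a matching cost volume and picks labels per pixel by winner-take-all; the graph-cut optimisation of the paper is deliberately left out, and the cost volume, plane parameters, and weight are assumed.

```python
import numpy as np

def disparities_with_planar_prior(cost_volume, plane, weight=0.5):
    """
    cost_volume : (H, W, D) array of matching costs for each candidate disparity.
    plane       : (a, b, c) such that the expected disparity at pixel (row, col)
                  is a*col + b*row + c (the planar scene approximation).
    """
    H, W, D = cost_volume.shape
    a, b, c = plane
    cols, rows = np.meshgrid(np.arange(W), np.arange(H))
    d_plane = a * cols + b * rows + c                    # prior disparity per pixel
    d_candidates = np.arange(D)[None, None, :]           # candidate disparity labels
    prior = weight * np.abs(d_candidates - d_plane[..., None])
    return np.argmin(cost_volume + prior, axis=2)        # per-pixel winner-take-all

# Toy usage: random matching costs with a gently sloping ground-plane prior.
rng = np.random.default_rng(1)
cv = rng.random((48, 64, 32))
disp = disparities_with_planar_prior(cv, plane=(0.2, 0.0, 4.0))
print(disp.shape, disp.min(), disp.max())
```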
High-resolution structure of viruses from random diffraction snapshots
Hosseinizadeh, A.; Schwander, P.; Dashti, A.; Fung, R.; D'Souza, R. M.; Ourmazd, A.
2014-01-01
The advent of the X-ray free-electron laser (XFEL) has made it possible to record diffraction snapshots of biological entities injected into the X-ray beam before the onset of radiation damage. Algorithmic means must then be used to determine the snapshot orientations and thence the three-dimensional structure of the object. Existing Bayesian approaches are limited in reconstruction resolution typically to 1/10 of the object diameter, with the computational expense increasing as the eighth power of the ratio of diameter to resolution. We present an approach capable of exploiting object symmetries to recover three-dimensional structure to high resolution, and thus reconstruct the structure of the satellite tobacco necrosis virus to atomic level. Our approach offers the highest reconstruction resolution for XFEL snapshots to date and provides a potentially powerful alternative route for analysis of data from crystalline and nano-crystalline objects. PMID:24914154
Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J
2012-01-01
A single-click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
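The abstract does not specify the exact similarity index used; a minimal sketch follows assuming a Dice-style overlap ratio between two binary segmentation masks, which is a common choice for this kind of comparison.

```python
import numpy as np

def similarity_index(seg_a, seg_b):
    """Dice-style overlap between two binary masks (assumed form of the SI)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Example: two nearly identical toy masks give an SI close to 100%.
a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[21:40, 20:41] = True
print(f"SI = {similarity_index(a, b):.2%}")
```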
Blind channel estimation and deconvolution in colored noise using higher-order cumulants
NASA Astrophysics Data System (ADS)
Tugnait, Jitendra K.; Gummadavelli, Uma
1994-10-01
Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially upon the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the noise higher-order cumulant function vanishes (e.g., Gaussian noise, as is the case for digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.
Data Fusion Based on Optical Technology for Observation of Human Manipulation
NASA Astrophysics Data System (ADS)
Falco, Pietro; De Maria, Giuseppe; Natale, Ciro; Pirozzi, Salvatore
2012-01-01
The adoption of human observation is becoming more and more frequent within imitation learning and programming by demonstration (PbD) approaches to robot programming. For robotic systems equipped with anthropomorphic hands, the observation phase is very challenging and no ultimate solution exists. This work proposes a novel mechatronic approach to the observation of human hand motion during manipulation tasks. The strategy is based on the combined use of an optical motion capture system and a low-cost data glove equipped with novel joint angle sensors based on optoelectronic technology. The combination of the two information sources is obtained through a sensor fusion algorithm based on the extended Kalman filter (EKF), suitably modified to tackle the problem of marker occlusions typical of optical motion capture systems. This approach requires a kinematic model of the human hand. Another key contribution of this work is a new method to calibrate this model.
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
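The sketch below is not the EMSAR code; it only illustrates the joint Poisson idea described above: reads are counted per compatibility class (the set of transcripts a read maps to) and abundances are chosen to maximise the Poisson likelihood of those class counts. The compatibility matrix and counts are synthetic, and the effective-length weighting is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

# A[c, t] = contribution of transcript t to compatibility class c (toy values).
A = np.array([[1.0, 0.0],    # class 0: reads unique to transcript 0
              [0.0, 1.0],    # class 1: reads unique to transcript 1
              [0.5, 0.5]])   # class 2: reads mapping to both transcripts
counts = np.array([120.0, 30.0, 60.0])   # observed reads per class

def neg_log_lik(theta):
    lam = A @ theta + 1e-12              # expected class counts
    return np.sum(lam - counts * np.log(lam))

res = minimize(neg_log_lik, x0=np.full(2, counts.sum() / 2),
               bounds=[(1e-9, None)] * 2, method="L-BFGS-B")
print("estimated transcript read totals:", res.x)
```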
He, Jianjun; Gu, Hong; Liu, Wenqi
2012-01-01
It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them have typically focused on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all existing approaches fail to take into account the correlations among locations caused by proteins with multiple sites, which may be important information for improving the prediction accuracy for such proteins. In this paper, a new algorithm that can effectively exploit the correlations among locations is proposed using a Gaussian process model. The algorithm can also realize an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and achieves better performance than existing approaches.
CaveCAD: a tool for architectural design in immersive virtual environments
NASA Astrophysics Data System (ADS)
Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo
2014-02-01
Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed the 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and appropriate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.
Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca
2016-01-01
Background: The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods: Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results: 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion: Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency of positive deviance must be assessed and compared with other quality improvement approaches. PROSPERO registration number CRD42014009365. PMID:26590198
Incorporating climate change into systematic conservation planning
Groves, Craig R.; Game, Edward T.; Anderson, Mark G.; Cross, Molly; Enquist, Carolyn; Ferdana, Zach; Girvetz, Evan; Gondor, Anne; Hall, Kimberly R.; Higgins, Jonathan; Marshall, Rob; Popper, Ken; Schill, Steve; Shafer, Sarah L.
2012-01-01
The principles of systematic conservation planning are now widely used by governments and non-government organizations alike to develop biodiversity conservation plans for countries, states, regions, and ecoregions. Many of the species and ecosystems these plans were designed to conserve are now being affected by climate change, and there is a critical need to incorporate new and complementary approaches into these plans that will aid species and ecosystems in adjusting to potential climate change impacts. We propose five approaches to climate change adaptation that can be integrated into existing or new biodiversity conservation plans: (1) conserving the geophysical stage, (2) protecting climatic refugia, (3) enhancing regional connectivity, (4) sustaining ecosystem process and function, and (5) capitalizing on opportunities emerging in response to climate change. We discuss both key assumptions behind each approach and the trade-offs involved in using the approach for conservation planning. We also summarize additional data beyond those typically used in systematic conservation plans required to implement these approaches. A major strength of these approaches is that they are largely robust to the uncertainty in how climate impacts may manifest in any given region.
Using a Personal Device to Strengthen Password Authentication from an Untrusted Computer
NASA Astrophysics Data System (ADS)
Mannan, Mohammad; van Oorschot, P. C.
Keylogging and phishing attacks can extract user identity and sensitive account information for unauthorized access to users' financial accounts. Most existing or proposed solutions are vulnerable to session hijacking attacks. We propose a simple approach to counter these attacks, which cryptographically separates a user's long-term secret input from (typically untrusted) client PCs; a client PC performs most computations but has access only to temporary secrets. The user's long-term secret (typically short and low-entropy) is input through an independent personal trusted device such as a cellphone. The personal device provides a user's long-term secrets to a client PC only after encrypting the secrets using a pre-installed, "correct" public key of a remote service (the intended recipient of the secrets). The proposed protocol (
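A hedged sketch of only the core step named above, not the paper's full protocol (which is truncated in this record and also handles freshness and session binding): the personal device encrypts the user's long-term secret under the remote service's pre-installed public key, so the untrusted client PC only ever relays ciphertext. The example uses the `cryptography` package and generates the service key pair locally purely for demonstration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Demo only: generate the "service" key pair here; in practice the public
# key would be pre-installed on the personal trusted device.
service_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
service_pub = service_key.public_key()

long_term_secret = b"correct horse battery staple"   # hypothetical password

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = service_pub.encrypt(long_term_secret, oaep)

# Only the remote service can recover the secret; the client PC sees only
# the ciphertext and temporary session material.
assert service_key.decrypt(ciphertext, oaep) == long_term_secret
```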
Satterthwaite, Theodore D.; Elliott, Mark A.; Gerraty, Raphael T.; Ruparel, Kosha; Loughead, James; Calkins, Monica E.; Eickhoff, Simon B.; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.; Wolf, Daniel H.
2013-01-01
Several recent reports in large, independent samples have demonstrated the influence of motion artifact on resting-state functional connectivity MRI (rsfc-MRI). Standard rsfc-MRI preprocessing typically includes regression of confounding signals and band-pass filtering. However, substantial heterogeneity exists in how these techniques are implemented across studies, and no prior study has examined the effect of differing approaches for the control of motion-induced artifacts. To better understand how in-scanner head motion affects rsfc-MRI data, we describe the spatial, temporal, and spectral characteristics of motion artifacts in a sample of 348 adolescents. Analyses utilize a novel approach for describing head motion on a voxelwise basis. Next, we systematically evaluate the efficacy of a range of confound regression and filtering techniques for the control of motion-induced artifacts. Results reveal that the effectiveness of preprocessing procedures on the control of motion is heterogeneous, and that improved preprocessing provides a substantial benefit beyond typical procedures. These results demonstrate that the effect of motion on rsfc-MRI can be substantially attenuated through improved preprocessing procedures, but not completely removed. PMID:22926292
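A minimal sketch of the two standard preprocessing steps named in this abstract, confound regression followed by band-pass filtering, applied to a single synthetic voxel time series. The TR and the 0.01–0.08 Hz band are common choices used here as assumptions, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
n_vols, tr = 240, 3.0
ts = rng.standard_normal(n_vols)               # synthetic voxel time series
motion = rng.standard_normal((n_vols, 6))      # 6 rigid-body motion parameters

# 1) Confound regression: project out motion regressors (plus intercept).
X = np.column_stack([np.ones(n_vols), motion])
beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
residual = ts - X @ beta

# 2) Band-pass filter the residual (zero-phase Butterworth).
nyquist = 0.5 / tr
b, a = butter(2, [0.01 / nyquist, 0.08 / nyquist], btype="band")
clean = filtfilt(b, a, residual)
print(clean[:5])
```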
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
Fais, Stefano; Venturi, Giulietta; Gatenby, Bob
2014-12-01
Much effort is currently devoted to developing patient-specific cancer therapy based on molecular characterization of tumors. In particular, this approach seeks to identify driver mutations that can be blocked through small molecular inhibitors. However, this approach is limited by extensive intratumoral genetic heterogeneity, and, not surprisingly, even dramatic initial responses are typically of limited duration as resistant tumor clones rapidly emerge and proliferate. We propose an alternative approach based on observations that while tumor evolution produces genetic divergence, it is also associated with striking phenotypic convergence that loosely corresponds to the well-known cancer "hallmarks". These convergent properties can be described as driver phenotypes and may be more consistently and robustly expressed than genetic targets. To this end, it is necessary to identify strategies that are critical for cancer progression and metastases, and it is likely that these driver phenotypes will be closely related to the cancer "hallmarks". It appears that an antiacidic approach, by targeting a driver phenotype in tumors, may be thought of as a future strategy against tumors, either preventing the occurrence of cancer or treating tumor patients with multiple aims, including improving the efficacy of existing therapies, possibly reducing their systemic side effects, and controlling tumor growth, progression, and metastasis. This may be achieved with existing molecules such as proton pump inhibitors (PPIs) and buffers such as sodium bicarbonate, citrate, or TRIS.
Delamination detection using methods of computational intelligence
NASA Astrophysics Data System (ADS)
Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata
2012-11-01
A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
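A sketch of the surrogate idea with scikit-learn stand-ins: K-means condenses a training set of natural-frequency shifts and an MLP maps frequency changes to delamination parameters. `MLPRegressor` here replaces the Bayesian-regularized ANN of the paper, and the data are synthetic, purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical dataset: 2000 FE runs, 5 frequency shifts -> (location, size).
X = rng.uniform(-1.0, 0.0, size=(2000, 5))
y = np.column_stack([X[:, :3].mean(axis=1), X[:, 3:].mean(axis=1)])

# Reduce the training set to 200 representative points (cluster centres,
# with targets averaged within each cluster).
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X)
X_red = km.cluster_centers_
y_red = np.vstack([y[km.labels_ == k].mean(axis=0) for k in range(200)])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                         max_iter=2000, random_state=0).fit(X_red, y_red)
print(surrogate.predict(X[:3]))      # fast approximation of the FE model
```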
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
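A toy illustration of the parameter-estimation idea, not the paper's algorithm: fit a secondary-line resistance and reactance from paired voltage and power measurements using the linearised drop |ΔV| ≈ (RP + XQ)/V. All measurement values below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, V = 500, 240.0
P = rng.uniform(0.5e3, 5e3, n)        # W, e.g. from AMI / PV micro-inverters
Q = rng.uniform(0.1e3, 1e3, n)        # var
R_true, X_true = 0.08, 0.03           # ohms (assumed ground truth)
dV = (R_true * P + X_true * Q) / V + rng.normal(0, 0.05, n)  # noisy drops

# Least-squares fit of the secondary-circuit parameters from the big dataset.
A = np.column_stack([P / V, Q / V])
(R_est, X_est), *_ = np.linalg.lstsq(A, dV, rcond=None)
print(f"R ~ {R_est:.3f} ohm, X ~ {X_est:.3f} ohm")
```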
On the interaction of Tollmien-Schlichting waves in axisymmetric supersonic flows
NASA Technical Reports Server (NTRS)
Duck, P. W.; Hall, P.
1988-01-01
Two-dimensional lower branch Tollmien-Schlichting waves described by triple-deck theory are always stable for planar supersonic flows. The possible occurrence of axisymmetric unstable modes in the supersonic flow around an axisymmetric body is investigated. In particular, flows around bodies with typical radii comparable with the thickness of the upper deck are considered. It is shown that such unstable modes exist below a critical nondimensional radius of the body a_0. At values of the radius above a_0 all the modes are stable, while if unstable modes exist they are found to occur in pairs. The interaction of these modes in the nonlinear regime is investigated using a weakly nonlinear approach and it is found that, dependent on the frequencies of the imposed Tollmien-Schlichting waves, either of the modes can be set up.
Image-guided filtering for improving photoacoustic tomographic image reconstruction.
Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K
2018-06-01
Several algorithms exist to solve the photoacoustic image reconstruction problem, depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as being smooth or sharp, in the output image. Combining these features using a guided filtering approach was attempted in this work, which requires an input and a guiding image. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image to improve these results. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve the signal-to-noise ratio of the reconstructed images (by as much as 11.23 dB), with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
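A compact guided filter (in the style of He et al.) as an illustration of the postprocessing step: the backprojection result stands in as the guide and a regularised reconstruction as the input. The window radius and epsilon are illustrative choices, and the images below are random placeholders rather than photoacoustic data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear model q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

guide = np.random.rand(128, 128)                  # stands in for backprojection
recon = guide + 0.1 * np.random.randn(128, 128)   # stands in for Tikhonov/TV
print(guided_filter(guide, recon).shape)
```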
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and number of possible system configurations make the number of design parameters that must be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally rely on estimated ratios and correlations rather than mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches, efficiently replacing the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, is achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
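A generic genetic-algorithm loop of the kind used for such population searches. The objective below is a placeholder, not the thesis model: it rewards a hypothetical delivered-impulse term and penalises a mass term over two normalised design variables.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):                        # placeholder performance estimate only
    impulse = 10 * x[0] - 4 * x[0] ** 2
    mass_penalty = 3 * x[1] ** 2
    return impulse - mass_penalty

pop = rng.uniform(0, 1, size=(40, 2))              # 40 candidate designs
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]        # keep the best half
    kids = parents[rng.integers(0, 20, 40)].copy()
    cross = parents[rng.integers(0, 20, 40)]
    mask = rng.random((40, 2)) < 0.5               # uniform crossover
    kids[mask] = cross[mask]
    kids += rng.normal(0, 0.05, kids.shape)        # mutation
    pop = np.clip(kids, 0, 1)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best design variables:", best)
```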
Considering the Lives of Microbes in Microbial Communities.
Shank, Elizabeth A
2018-01-01
Over the last decades, sequencing technologies have transformed our ability to investigate the composition and functional capacity of microbial communities. Even so, critical questions remain about these complex systems that cannot be addressed by the bulk, community-averaged data typically provided by sequencing methods. In this Perspective, I propose that future advances in microbiome research will emerge from considering "the lives of microbes": we need to create methods to explicitly interrogate how microbes exist and interact in native-setting-like microenvironments. This includes developing approaches that expose the phenotypic heterogeneity of microbes; exploring the effects of coculture cues on cellular differentiation and metabolite production; and designing visualization systems that capture features of native microbial environments while permitting the nondestructive observation of microbial interactions over space and time with single-cell resolution.
Improvements in current treatments and emerging therapies for adult obstructive sleep apnea
2014-01-01
Obstructive sleep apnea (OSA) is common and is associated with a number of adverse outcomes, including an increased risk for cardiovascular disease. Typical treatment approaches, including positive airway pressure, oral appliances, various upper airway surgeries, and/or weight loss, can improve symptoms and reduce the severity of disease in select patient groups. However, these approaches have several potential limitations, including suboptimal adherence, lack of suitability for all patient groups, and/or absence of adequate outcomes data. Emerging potential therapeutic options, including nasal expiratory positive airway pressure (PAP), oral negative pressure, upper airway muscle stimulation, and bariatric surgery, as well as improvements in existing treatments and the utilization of improving technologies are moving the field forward and should offer effective therapies to a wider group of patients with OSA. PMID:24860658
Choice Rules and Accumulator Networks
2015-01-01
This article presents a preference accumulation model that can be used to implement a number of different multi-attribute heuristic choice rules, including the lexicographic rule, the majority of confirming dimensions (tallying) rule and the equal weights rule. The proposed model differs from existing accumulators in terms of attribute representation: Leakage and competition, typically applied only to preference accumulation, are also assumed to be involved in processing attribute values. This allows the model to perform a range of sophisticated attribute-wise comparisons, including comparisons that compute relative rank. The ability of a preference accumulation model composed of leaky competitive networks to mimic symbolic models of heuristic choice suggests that these 2 approaches are not incompatible, and that a unitary cognitive model of preferential choice, based on insights from both these approaches, may be feasible. PMID:28670592
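A minimal leaky, competitive accumulator of the kind described above: each alternative accumulates its momentary input while leaking and receiving inhibition from the other accumulators. The parameter values and inputs are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(4)
inputs = np.array([0.6, 0.5, 0.3])      # momentary evidence per alternative
leak, inhibition, dt, noise_sd = 0.2, 0.3, 0.01, 0.05
x = np.zeros(3)                         # accumulator activations

for _ in range(2000):
    lateral = inhibition * (x.sum() - x)            # competition from others
    dx = (inputs - leak * x - lateral) * dt \
         + noise_sd * np.sqrt(dt) * rng.standard_normal(3)
    x = np.maximum(x + dx, 0.0)                     # activations stay positive
    if x.max() > 1.0:                               # response threshold reached
        break

print("winning alternative:", int(np.argmax(x)), "activations:", x.round(3))
```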
Dataflow computing approach in high-speed digital simulation
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Karplus, W. J.
1984-01-01
New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and data flow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing data flow languages and develop an experimental data flow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of data flow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
Kellie, John F; Kehler, Jonathan R; Karlinsey, Molly Z; Summerfield, Scott G
2017-12-01
Typically, quantitation of biotherapeutics from biological matrices by LC-MS is based on a surrogate peptide approach to determine molecule concentration. Recent efforts have focused on quantitation of the intact protein molecules or larger mass subunits of monoclonal antibodies. To date, there has been limited guidance for large or intact protein mass quantitation in quantitative bioanalysis. Intact- and subunit-level analyses of biotherapeutics from biological matrices are performed in the 12-25 kDa mass range, with quantitation data presented. Linearity, bias and other metrics are presented, along with recommendations on the viability of existing quantitation approaches. This communication is intended to start a discussion around intact protein data analysis and processing, recognizing that other published contributions will be required.
Phenomenological approach to mechanical damage growth analysis.
Pugno, Nicola; Bosia, Federico; Gliozzi, Antonio S; Delsanto, Pier Paolo; Carpinteri, Alberto
2008-10-01
The problem of characterizing damage evolution in a generic material is addressed with the aim of tracing it back to existing growth models in other fields of research. Based on energetic considerations, a system evolution equation is derived for a generic damage indicator describing a material system subjected to an increasing external stress. The latter is found to fit into the framework of a recently developed phenomenological universality (PUN) approach and, more specifically, the so-called U2 class. Analytical results are confirmed by numerical simulations based on a fiber-bundle model and statistically assigned local strengths at the microscale. The fits with numerical data prove, with an excellent degree of reliability, that the typical evolution of the damage indicator belongs to the aforementioned PUN class. Applications of this result are briefly discussed and suggested.
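A minimal equal-load-sharing fibre-bundle simulation of the kind referenced above: fibres with statistically assigned strengths break as the external stress increases, and the fraction of broken fibres serves as the damage indicator. The strength distribution and loading schedule are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
n_fibres = 100_000
strengths = rng.weibull(2.0, n_fibres)     # random local strengths (microscale)
alive = np.ones(n_fibres, dtype=bool)

for sigma in np.linspace(0.0, 1.2, 121):   # quasi-statically increasing stress
    while True:                            # avalanche: redistribute until stable
        load_per_fibre = sigma * n_fibres / max(alive.sum(), 1)
        newly_broken = alive & (strengths < load_per_fibre)
        if not newly_broken.any():
            break
        alive[newly_broken] = False
    damage = 1.0 - alive.sum() / n_fibres  # damage indicator at this stress

print(f"final damage indicator: {damage:.3f}")
```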
[Psychiatric emergencies in the elderly].
Zinetti, Jacqueline; Daraux, Jacques; Ploskas, Fabienne
2003-06-01
Although it is commonplace to claim that old age is not a disease, the losses and mourning associated with that stage of life can nevertheless give rise to acute psychopathological disorders. Some of these present as emergencies and require the physician to have a good command of their existence and their specific features: atypical depressive disorders compounded by the risk of suicide, delusional episodes with themes typical of senescence, and the agitation and aggressiveness inherent in dementia. Ill-treatment of the elderly, which is often under-diagnosed, also calls for urgent intervention, both clinical and even legal.
RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems
Rodriguez Arreola, Alberto; Balsamo, Domenico
2018-01-01
Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor’s state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches. PMID:29320441
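A conceptual sketch, in Python purely for illustration, of the recording-and-replay idea described above (the real middleware runs in C on a microcontroller): a thin wrapper logs configuration writes sent to a peripheral so they can be replayed after a power interruption. `FakeBus.write` stands in for an SPI or I2C transfer, and the log would live in non-volatile memory on a real node.

```python
class StatefulPeripheral:
    def __init__(self, bus, address):
        self.bus, self.address = bus, address
        self.config_log = []                 # would be non-volatile in practice

    def write_config(self, register, value):
        self.config_log.append((register, value))   # record before sending
        self.bus.write(self.address, register, value)

    def restore(self):
        """Replay logged configuration writes after a power cycle."""
        for register, value in self.config_log:
            self.bus.write(self.address, register, value)

class FakeBus:                               # stand-in transport for the demo
    def write(self, address, register, value):
        print(f"dev 0x{address:02X}: reg 0x{register:02X} <- 0x{value:02X}")

radio = StatefulPeripheral(FakeBus(), address=0x40)
radio.write_config(0x01, 0x8F)               # hypothetical register writes
radio.write_config(0x06, 0x72)
print("-- power interruption --")
radio.restore()                              # peripheral state is re-established
```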
A mass spectrometry proteomics data management platform.
Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael
2012-09-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
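A hypothetical, much-simplified illustration of the core-plus-extension idea (this is not the actual platform schema): unified core tables hold what all pipelines share, and a pipeline-specific table extends them via a foreign key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE experiment (
    id INTEGER PRIMARY KEY,
    description TEXT,
    date_run TEXT
);
CREATE TABLE search_result (          -- unified core table
    id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES experiment(id),
    peptide TEXT,
    charge INTEGER,
    score REAL
);
CREATE TABLE sequest_result (         -- hypothetical pipeline-specific extension
    search_result_id INTEGER PRIMARY KEY REFERENCES search_result(id),
    xcorr REAL,
    delta_cn REAL
);
""")
con.execute("INSERT INTO experiment VALUES (1, 'demo run', '2012-09-01')")
con.execute("INSERT INTO search_result VALUES (1, 1, 'PEPTIDEK', 2, 3.4)")
con.execute("INSERT INTO sequest_result VALUES (1, 3.4, 0.12)")
row = con.execute("""
    SELECT peptide, xcorr FROM search_result
    JOIN sequest_result ON sequest_result.search_result_id = search_result.id
""").fetchone()
print(row)   # unified data queried together with the pipeline-specific fields
```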
Meacham, J. Mark; Durvasula, Kiranmai; Degertekin, F. Levent; Fedorov, Andrei G.
2015-01-01
Effective intracellular delivery is a significant impediment to research and therapeutic applications at all processing scales. Physical delivery methods have long demonstrated the ability to deliver cargo molecules directly to the cytoplasm or nucleus, and the mechanisms underlying the most common approaches (microinjection, electroporation, and sonoporation) have been extensively investigated. In this review, we discuss established approaches, as well as emerging techniques (magnetofection, optoinjection, and combined modalities). In addition to operating principles and implementation strategies, we address applicability and limitations of various in vitro, ex vivo, and in vivo platforms. Importantly, we perform critical assessments regarding (1) treatment efficacy with diverse cell types and delivered cargo molecules, (2) suitability to different processing scales (from single cell to large populations), (3) suitability for automation/integration with existing workflows, and (4) multiplexing potential and flexibility/adaptability to enable rapid changeover between treatments of varied cell types. Existing techniques typically fall short in one or more of these criteria; however, introduction of micro-/nanotechnology concepts, as well as synergistic coupling of complementary method(s), can improve performance and applicability of a particular approach, overcoming barriers to practical implementation. For this reason, we emphasize these strategies in examining recent advances in development of delivery systems. PMID:23813915
Reactive navigation in extremely dense and highly intricate environments
2017-01-01
Reactive navigation is a well-known paradigm for controlling an autonomous mobile robot, which suggests making all control decisions through some light processing of the current/recent sensor data. Among the many advantages of this paradigm are: 1) the possibility to apply it to robots with limited and low-priced hardware resources, and 2) the fact of being able to safely navigate a robot in completely unknown environments containing unpredictable moving obstacles. As a major disadvantage, nevertheless, the reactive paradigm may occasionally cause robots to get trapped in certain areas of the environment—typically, these conflicting areas have a large concave shape and/or are full of closely-spaced obstacles. In this last respect, an enormous effort has been devoted to overcome such a serious drawback during the last two decades. As a result of this effort, a substantial number of new approaches for reactive navigation have been put forward. Some of these approaches have clearly improved the way how a reactively-controlled robot can move among densely cluttered obstacles; some other approaches have essentially focused on increasing the variety of obstacle shapes and sizes that could be successfully circumnavigated; etc. In this paper, as a starting point, we choose the best existing reactive approach to move in densely cluttered environments, and we also choose the existing reactive approach with the greatest ability to circumvent large intricate-shaped obstacles. Then, we combine these two approaches in a way that makes the most of them. From the experimental point of view, we use both simulated and real scenarios of challenging complexity for testing purposes. In such scenarios, we demonstrate that the combined approach herein proposed clearly outperforms the two individual approaches on which it is built. PMID:29287078
Spectra of conditionalization and typicality in the multiverse
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2016-02-01
An approach to testing theories describing a multiverse, that has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, will assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories, is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.
RoboPIV: how robotics enable PIV on a large industrial scale
NASA Astrophysics Data System (ADS)
Michaux, F.; Mattern, P.; Kallweit, S.
2018-07-01
This work demonstrates how the interaction between particle image velocimetry (PIV) and robotics can massively increase measurement efficiency. The interdisciplinary approach is shown using the complex example of an automated, large scale, industrial environment: a typical automotive wind tunnel application. Both the high degree of flexibility in choosing the measurement region and the complete automation of stereo PIV measurements are presented. The setup consists of a combination of three robots, individually used as a 6D traversing unit for the laser illumination system as well as for each of the two cameras. Synchronised movements in the same reference frame are realised through a master-slave setup with a single interface to the user. By integrating the interface into the standard wind tunnel management system, a single measurement plane or a predefined sequence of several planes can be requested through a single trigger event, providing the resulting vector fields within minutes. In this paper, a brief overview on the demands of large scale industrial PIV and the existing solutions is given. Afterwards, the concept of RoboPIV is introduced as a new approach. In a first step, the usability of a selection of commercially available robot arms is analysed. The challenges of pose uncertainty and the importance of absolute accuracy are demonstrated through comparative measurements, explaining the individual pros and cons of the analysed systems. Subsequently, the advantage of integrating RoboPIV directly into the existing wind tunnel management system is shown on the basis of a typical measurement sequence. In a final step, a practical measurement procedure, including post-processing, is given by using real data and results. Ultimately, the benefits of high automation are demonstrated, leading to a drastic reduction in necessary measurement time compared to non-automated systems, thus massively increasing the efficiency of PIV measurements.
User interfaces in space science instrumentation
NASA Astrophysics Data System (ADS)
McCalden, Alec John
This thesis examines user interaction with instrumentation in the specific context of space science. It gathers together existing practice in machine interfaces with a look at potential future usage and recommends a new approach to space science projects with the intention of maximising their science return. It first takes a historical perspective on user interfaces and ways of defining and measuring the science return of a space instrument. Choices of research methodology are considered. Implementation details such as the concepts of usability, mental models, affordance and presentation of information are described, and examples of existing interfaces in space science are given. A set of parameters for use in analysing and synthesizing a user interface is derived by using a set of case studies of diverse failures and from previous work. A general space science user analysis is made by looking at typical practice, and an interview plus persona technique is used to group users with interface designs. An examination is made of designs in the field of astronomical instrumentation interfaces, showing the evolution of current concepts and including ideas capable of sustaining progress in the future. The parameters developed earlier are then tested against several established interfaces in the space science context to give a degree of confidence in their use. The concept of a simulator that is used to guide the development of an instrument over the whole lifecycle is described, and the idea is proposed that better instrumentation would result from more efficient use of the resources available. The previous ideas in this thesis are then brought together to describe a proposed new approach to a typical development programme, with an emphasis on user interaction. The conclusion shows that there is significant room for improvement in the science return from space instrumentation by attention to the user interface.
Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2007-12-01
The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, and we outline typical problems with CDSs and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items), this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.
Principles for urban stormwater management to protect stream ecosystems
Walsh, Christopher J.; Booth, Derek B.; Burns, Matthew J.; Fletcher, Tim D.; Hale, Rebecca L.; Hoang, Lan N.; Livingston, Grant; Rippy, Megan A.; Roy, Allison; Scoggins, Mateo; Wallace, Angela
2016-01-01
Urban stormwater runoff is a critical source of degradation to stream ecosystems globally. Despite broad appreciation by stream ecologists of negative effects of stormwater runoff, stormwater management objectives still typically center on flood and pollution mitigation without an explicit focus on altered hydrology. Resulting management approaches are unlikely to protect the ecological structure and function of streams adequately. We present critical elements of stormwater management necessary for protecting stream ecosystems through 5 principles intended to be broadly applicable to all urban landscapes that drain to a receiving stream: 1) the ecosystems to be protected and a target ecological state should be explicitly identified; 2) the postdevelopment balance of evapotranspiration, stream flow, and infiltration should mimic the predevelopment balance, which typically requires keeping significant runoff volume from reaching the stream; 3) stormwater control measures (SCMs) should deliver flow regimes that mimic the predevelopment regime in quality and quantity; 4) SCMs should have capacity to store rain events for all storms that would not have produced widespread surface runoff in a predevelopment state, thereby avoiding increased frequency of disturbance to biota; and 5) SCMs should be applied to all impervious surfaces in the catchment of the target stream. These principles present a range of technical and social challenges. Existing infrastructural, institutional, or governance contexts often prevent application of the principles to the degree necessary to achieve effective protection or restoration, but significant potential exists for multiple co-benefits from SCM technologies (e.g., water supply and climate-change adaptation) that may remove barriers to implementation. Our set of ideal principles for stream protection is intended as a guide for innovators who seek to develop new approaches to stormwater management rather than accept seemingly insurmountable historical constraints, which guarantee future, ongoing degradation.
Hassanpour, Saeed; O'Connor, Martin J; Das, Amar K
2013-08-12
A variety of informatics approaches have been developed that use information retrieval, NLP and text-mining techniques to identify biomedical concepts and relations within scientific publications or their sentences. These approaches have not typically addressed the challenge of extracting more complex knowledge such as biomedical definitions. In our efforts to facilitate knowledge acquisition of rule-based definitions of autism phenotypes, we have developed a novel semantic-based text-mining approach that can automatically identify such definitions within text. Using an existing knowledge base of 156 autism phenotype definitions and an annotated corpus of 26 source articles containing such definitions, we evaluated and compared the average rank of the correctly identified rule definition or corresponding rule template using both our semantic-based approach and a standard term-based approach. We examined three separate scenarios: (1) the snippet of text contained a definition already in the knowledge base; (2) the snippet contained an alternative definition for a concept in the knowledge base; and (3) the snippet contained a definition not in the knowledge base. Our semantic-based approach had a better (lower) average rank than the term-based approach for each of the three scenarios (scenario 1: 3.8 vs. 5.0; scenario 2: 2.8 vs. 4.9; and scenario 3: 4.5 vs. 6.2), with each comparison significant at the 0.05 level using the Wilcoxon signed-rank test. Our work shows that leveraging existing domain knowledge in the information extraction of biomedical definitions significantly improves the correct identification of such knowledge within sentences. Our method can thus help researchers rapidly acquire knowledge about biomedical definitions that are specified and evolving within an ever-growing corpus of scientific publications.
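The paired rank comparison reported above can be tested with a Wilcoxon signed-rank test as sketched below. The per-snippet ranks are synthetic, not the study data; the point is only to show the form of the statistical comparison.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(6)
semantic_ranks = rng.integers(1, 8, size=26)               # per-snippet ranks
term_ranks = semantic_ranks + rng.integers(0, 4, size=26)  # usually worse

stat, p = wilcoxon(semantic_ranks, term_ranks)
print(f"mean ranks: {semantic_ranks.mean():.1f} vs {term_ranks.mean():.1f}, "
      f"p = {p:.4f}")
```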
Alkali-Activated Geopolymers: A Literature Review
2010-07-01
Only fragments of this record were recovered: Table 2, Typical Fly Ash Composition (%) [32]; Table 3, Typical Geopolymer [...]. Fly ash composition is generally measured and evaluated by the percentage of existing elements present in the by-product; the properties of the pozzolan bear great significance to the strength potential of a given geopolymer specimen.
A seesaw-type approach for enhancing nonlinear energy harvesting
NASA Astrophysics Data System (ADS)
Deng, Huaxia; Wang, Zhemin; Du, Yu; Zhang, Jin; Ma, Mengchao; Zhong, Xiang
2018-05-01
Harvesting sustainable mechanical energy is the ultimate objective of nonlinear energy harvesters. However, overcoming potential barriers, especially without the use of extra excitations, poses a great challenge for the development of nonlinear generators. In contrast to the existing methods, which typically modify the barrier height or utilize additional excitations, this letter proposes a seesaw-type approach to facilitate escape from potential wells by transfer of internal energy, even under low-intensity excitation. This approach is adopted in the design of a seesaw-type nonlinear piezoelectric energy harvester and the energy transfer process is analyzed by deriving expressions for the energy to reveal the working mechanism. Comparison experiments demonstrate that this approach improves energy harvesting in terms of an increase in the working frequency bandwidth by a factor of 60.14 and an increase in the maximum output voltage by a factor of 5.1. Moreover, the output power is increased by a factor of 51.3, which indicates that this approach significantly improves energy collection efficiency. This seesaw-type approach provides a welcome boost to the development of renewable energy collection methods by improving the efficiency of harvesting of low-intensity ambient mechanical energy.
Computing the multifractal spectrum from time series: an algorithmic approach.
Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E
2009-12-01
We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.
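As a rough illustration of the fitting step, the sketch below fits a smooth concave profile with four free parameters to a handful of (alpha, f) points; the functional form and the data are hypothetical, since the paper's own parameterization is not given in the abstract.

```python
# Illustrative sketch: fitting a smooth concave f(alpha) profile with a
# four-parameter analytic form. The functional form here is hypothetical;
# the paper's own parameterization is not given in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def f_alpha(alpha, alpha0, d0, a, b):
    # Concave profile peaking at (alpha0, d0); 'a' and 'b' shape the
    # left/right branches. Purely illustrative.
    x = alpha - alpha0
    return d0 - a * x**2 - b * x**3

# Hypothetical (alpha, f) points, e.g. from an estimate of the accessible
# top portion of the spectrum.
alpha = np.array([0.75, 0.85, 0.95, 1.05, 1.15, 1.25])
f_obs = np.array([0.62, 0.85, 0.97, 0.99, 0.90, 0.70])

popt, _ = curve_fit(f_alpha, alpha, f_obs, p0=[1.0, 1.0, 1.0, 0.0])
alpha0, d0, a, b = popt
print(f"peak at alpha0={alpha0:.3f}, f(alpha0)={d0:.3f}")

# The fitted analytic form can then be evaluated beyond the measured range
# to complete the spectrum.
alpha_grid = np.linspace(0.6, 1.4, 9)
print(np.round(f_alpha(alpha_grid, *popt), 3))
```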
Differences among Major Taxa in the Extent of Ecological Knowledge across Four Major Ecosystems
Fisher, Rebecca; Knowlton, Nancy; Brainard, Russell E.; Caley, M. Julian
2011-01-01
Existing knowledge shapes our understanding of ecosystems and is critical for ecosystem-based management of the world's natural resources. Typically, this knowledge is biased among taxa, with some taxa far better studied than others, but the extent of this bias is poorly known. In conjunction with the publicly available World Register of Marine Species database (WoRMS) and one of the world's premier electronic scientific literature databases (Web of Science®), a text-mining approach is used to examine the distribution of existing ecological knowledge among taxa in coral reef, mangrove, seagrass and kelp bed ecosystems. We found that for each of these ecosystems, most research has been limited to a few groups of organisms. While this bias clearly reflects the perceived importance of some taxa as commercially or ecologically valuable, the relative lack of research on other taxonomic groups highlights the problem that some key taxa, and the ecosystem processes they affect, may be poorly understood or completely ignored. The approach outlined here could be applied to any type of ecosystem for analyzing previous research effort and identifying knowledge gaps in order to improve ecosystem-based conservation and management. PMID:22073172
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
Enhanced nitrogen removal in trickling filter plants.
Dai, Y; Constantinou, A; Griffiths, P
2013-01-01
The Beaudesert Sewage Treatment Plant (STP), originally built in 1966 and augmented in 1977, is a typical biological trickling filter (TF) STP comprising primary sedimentation tanks (PSTs), TFs and humus tanks. The plant, despite not originally being designed for nitrogen removal, has been consistently achieving over 60% total nitrogen reduction and low effluent ammonium concentration of less than 5 mg NH3-N/L. Through the return of a NO3(-)-rich stream from the humus tanks to the PSTs and maintaining an adequate sludge age within the PSTs, the current plant is achieving a substantial degree of denitrification. Further enhanced denitrification has been achieved by raising the recycle flows and maintaining an adequate solids retention time (SRT) within the PSTs. This paper describes the approach to operating a TF plant to achieve a high degree of nitrification and denitrification. The effectiveness of this approach is demonstrated through the pilot plant trial. The results from the pilot trial demonstrate a significant improvement in nitrogen removal performance whilst maximising the asset life of the existing infrastructure. This shows great potential as a retrofit option for small and rural communities with pre-existing TFs that require improvements in terms of nitrogen removal.
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-08-01
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
ΛCDM model with dissipative nonextensive viscous dark matter
NASA Astrophysics Data System (ADS)
Gimenes, H. S.; Viswanathan, G. M.; Silva, R.
2018-03-01
Many models in cosmology typically assume the standard bulk viscosity. We study an alternative interpretation for the origin of the bulk viscosity. Using nonadditive statistics proposed by Tsallis, we propose a bulk viscosity component that can only exist by a nonextensive effect through the nonextensive/dissipative correspondence (NexDC). In this paper, we consider a ΛCDM model for a flat universe with a dissipative nonextensive viscous dark matter component, following the Eckart theory of bulk viscosity, without any perturbative approach. In order to analyze cosmological constraints, we use one of the most recent observations of Type Ia Supernova, baryon acoustic oscillations and cosmic microwave background data.
Blignaut, P J; McDonald, T; Tolmie, C J
2001-05-01
A prototyping approach was used to determine the essential system requirements of a computerised patient record information system for a typical township primary health care clinic. A pilot clinic was identified, and the existing manual system and business processes in this clinic were studied intensively before the first prototype was implemented. Interviews with users, incidental observations and analysis of actual data entered were used as primary techniques to refine the prototype system iteratively until a system with an acceptable data set and adequate functionality was in place. Several non-functional and user-related requirements were also discovered during the prototyping period.
Oil, gas field growth projections: Wishful thinking or reality?
Attanasi, E.D.; Mast, R.F.; Root, D.H.
1999-01-01
The observed 'field growth' for the period from 1992 through 1996 is compared with the US Geological Survey's (USGS) predicted field growth for the same period. Known field recovery, or field size, is defined as the sum of past cumulative field production and the field's proved reserves. Proved reserves are estimated quantities of hydrocarbons which geologic and engineering data demonstrate with reasonable certainty to be recoverable from known fields under existing economic and operating conditions. Proved reserve estimates calculated with this definition are typically conservative. The modeling approach used by the USGS to characterize 'field growth' phenomena is statistical rather than geologic in nature.
A new seamless, smooth, interior, absorptive finishing system
NASA Astrophysics Data System (ADS)
D'Antonio, Peter
2003-10-01
Government architecture typically employs classic forms of vaults, domes and other focusing or reflective shapes, usually created with hard materials like concrete and plaster. The use of conventional porous absorption is typically rejected as an acoustical surface material for aesthetic reasons. Hence, many of these new and existing facilities have compromised speech intelligibility and music quality. Since the dawn of architectural acoustics, acousticians have sought a field-applied, absorptive finishing system that resembles a smooth plaster or painted drywall surface. Some success has been achieved using sprayed cellulose or cementitious materials, but surface smoothness has been a challenge. A new approach utilizing a thin microporous layer of mineral particles applied over a mineral wool panel will be described. This material can be applied to almost any shape of surface, internally pigmented to match almost any color, and renovated. Because of these unique characteristics, the new seamless, absorptive finishing system is being specified for many new and renovated spaces. Application examples will be presented.
Automated discovery and construction of surface phase diagrams using machine learning
Ulissi, Zachary W.; Singh, Aayush R.; Tsai, Charlie; ...
2016-08-24
Surface phase diagrams are necessary for understanding surface chemistry in electrochemical catalysis, where a range of adsorbates and coverages exist at varying applied potentials. These diagrams are typically constructed using intuition, which risks missing complex coverages and configurations at potentials of interest. More accurate cluster expansion methods are often difficult to implement quickly for new surfaces. We adopt a machine learning approach to rectify both issues. Using a Gaussian process regression model, the free energy of all possible adsorbate coverages for surfaces is predicted for a finite number of adsorption sites. Our results demonstrate a rational, simple, and systematic approach for generating accurate free-energy diagrams with reduced computational resources. Finally, the Pourbaix diagram for the IrO2(110) surface (with nine coverages from fully hydrogenated to fully oxygenated surfaces) is reconstructed using just 20 electronic structure relaxations, compared to approximately 90 using typical search methods. Similar efficiency is demonstrated for the MoS2 surface.
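A toy sketch of the regression step is given below using scikit-learn's GaussianProcessRegressor; the coverage fingerprints and free energies are synthetic placeholders, not the paper's DFT data.

```python
# Toy sketch of the surrogate-model step: Gaussian process regression over
# coverage "fingerprints" predicting free energies. Features and energies
# below are synthetic placeholders, not the paper's DFT data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Each row encodes a hypothetical coverage: fractions of three adsorbates
# on a fixed set of adsorption sites.
X_train = rng.uniform(0.0, 1.0, size=(20, 3))
# Synthetic free energies with a smooth dependence on coverage plus noise.
y_train = (1.5 * X_train[:, 0] - 0.8 * X_train[:, 1] + 0.3 * X_train[:, 2]
           + 0.05 * rng.standard_normal(20))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-3),
                               normalize_y=True)
gpr.fit(X_train, y_train)

# Predict free energy (with uncertainty) for candidate coverages; in an
# active-learning loop the most uncertain candidates would go to DFT next.
X_candidates = rng.uniform(0.0, 1.0, size=(5, 3))
mean, std = gpr.predict(X_candidates, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted dG = {m:+.2f} +/- {s:.2f} eV")
```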
NASA Astrophysics Data System (ADS)
Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe
2017-12-01
Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains at dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.
NASA Astrophysics Data System (ADS)
Hakulinen, T.; Klein, J.
2016-03-01
Two-photon (2P) microscopy based on tunable Ti:sapphire lasers has become a widespread tool for 3D imaging with sub-cellular resolution in living tissues. In recent years, multi-photon microscopy with simpler fixed-wavelength femtosecond oscillators using Yb-doped tungstates as the gain material has raised increasing interest in the life sciences, because these lasers offer one order of magnitude more average power than Ti:sapphire lasers in the wavelength range around 1040 nm. Simultaneous two-photon (2P) excitation of mainly red or yellow fluorescent dyes and proteins (e.g., YFP, the mFruit series) has been demonstrated with a single IR laser wavelength. A new approach is to extend the usability of existing tunable Ti:sapphire lasers by adding a fixed IR wavelength from an Yb femtosecond oscillator. By that means, a multitude of applications in multimodal imaging and optogenetics can be supported. Furthermore, fs Yb lasers are available with a repetition rate of typically 10 MHz and an average power of typically 5 W, resulting in a pulse energy of typically 500 nJ (average power divided by repetition rate), which is comparatively high for fs oscillators. This makes them an ideal tool for two-photon spinning-disk laser scanning microscopy and for holographic patterning for simultaneous photoactivation of large cell populations. With this work we demonstrate that economical, small-footprint Yb fixed-wavelength lasers can be an interesting add-on to the tunable lasers commonly used in multiphoton microscopy. The Yb fs lasers offer higher power for imaging of red fluorescent dyes and proteins, ideally complement existing Ti:sapphire lasers with more power in the IR, and support pulse-energy- and power-hungry applications such as spinning-disk microscopy and holographic patterning.
A Mass Spectrometry Proteomics Data Management Platform*
Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael
2012-01-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296
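The "unified core tables extended by pipeline-specific tables" idea can be illustrated with a minimal, hypothetical SQLite schema; this is a sketch of the design pattern only, not the actual MSDaPl schema.

```python
# Minimal, hypothetical illustration of the "unified core tables extended by
# pipeline-specific tables" idea. This is NOT the actual MSDaPl schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Core tables shared by all pipelines.
CREATE TABLE experiment (
    id          INTEGER PRIMARY KEY,
    description TEXT,
    run_date    TEXT
);
CREATE TABLE search_result (          -- one row per peptide-spectrum match
    id            INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES experiment(id),
    peptide       TEXT,
    charge        INTEGER,
    score         REAL                -- generic, pipeline-agnostic score
);
-- Pipeline-specific extension table keyed to the core row.
CREATE TABLE sequest_result_ext (
    search_result_id INTEGER PRIMARY KEY REFERENCES search_result(id),
    xcorr            REAL,
    delta_cn         REAL
);
""")

cur.execute("INSERT INTO experiment (description, run_date) VALUES (?, ?)",
            ("demo run", "2012-01-01"))
conn.commit()
print(cur.execute("SELECT * FROM experiment").fetchall())
```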
Moro, Erik A; Todd, Michael D; Puckett, Anthony D
2012-09-20
In static tests, low-power (<5 mW) white light extrinsic Fabry-Perot interferometric position sensors offer high-accuracy (μm) absolute measurements of a target's position over large (cm) axial-position ranges, and since position is demodulated directly from phase in the interferogram, these sensors are robust to fluctuations in measured power levels. However, target surface dynamics distort the interferogram via Doppler shifting, introducing a bias in the demodulation process. With typical commercial off-the-shelf hardware, a broadband source centered near 1550 nm, and an otherwise typical setup, the bias may be as large as 50-100 μm for target surface velocities as low as 0.1 mm/s. In this paper, the authors derive a model for this Doppler-induced position bias, relating its magnitude to three swept-filter tuning parameters. Target velocity (magnitude and direction) is calculated using this relationship in conjunction with a phase-diversity approach, and knowledge of the target's velocity is then used to compensate exactly for the position bias. The phase-diversity approach exploits side-by-side measurement signals, transmitted through separate swept filters with distinct tuning parameters, and permits simultaneous measurement of target velocity and target position, thereby mitigating the most fundamental performance limitation that exists on dynamic white light interferometric position sensors.
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
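A toy version of the alignment idea is sketched below: a single global detector shift is retrieved by minimizing the reprojection residual of a filtered back-projection reconstruction. It is a simplified stand-in (a one-parameter grid search) rather than the authors' algorithmic framework.

```python
# Toy illustration of alignment-parameter retrieval (not the authors' full
# framework): a global detector shift is estimated by minimizing the
# reprojection residual of a filtered back-projection reconstruction.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)           # small square test object
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
clean_sinogram = radon(image, theta=angles)

true_shift = 2.5                                        # unknown detector offset (pixels)
measured = nd_shift(clean_sinogram, (true_shift, 0.0), order=1)

def residual(candidate_shift):
    # Correct the sinogram by the candidate shift, reconstruct, reproject,
    # and measure how inconsistent the corrected data still are.
    corrected = nd_shift(measured, (-candidate_shift, 0.0), order=1)
    recon = iradon(corrected, theta=angles)
    return np.linalg.norm(radon(recon, theta=angles) - corrected)

candidates = np.linspace(0.0, 5.0, 21)
best = candidates[np.argmin([residual(c) for c in candidates])]
print(f"estimated detector shift: {best:.2f} px (true {true_shift:.2f})")
```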
Gladysz, Rafaela; Cleenewerck, Matthias; Joossens, Jurgen; Lambeir, Anne-Marie; Augustyns, Koen; Van der Veken, Pieter
2014-10-13
Fragment-based drug discovery (FBDD) has evolved into an established approach for "hit" identification. Typically, most applications of FBDD depend on specialised cost- and time-intensive biophysical techniques. The substrate activity screening (SAS) approach has been proposed as a relatively cheap and straightforward alternative for identification of fragments for enzyme inhibitors. We have investigated SAS for the discovery of inhibitors of oncology target urokinase (uPA). Although our results support the key hypotheses of SAS, we also encountered a number of unreported limitations. In response, we propose an efficient modified methodology: "MSAS" (modified substrate activity screening). MSAS circumvents the limitations of SAS and broadens its scope by providing additional fragments and more coherent SAR data. As well as presenting and validating MSAS, this study expands existing SAR knowledge for the S1 pocket of uPA and reports new reversible and irreversible uPA inhibitor scaffolds. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
How pleasant sounds promote and annoying sounds impede health: a cognitive approach.
Andringa, Tjeerd C; Lanser, J Jolie L
2013-04-08
This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research.
Contact detection for nanomanipulation in a scanning electron microscope.
Ru, Changhai; To, Steve
2012-07-01
Nanomanipulation systems require accurate knowledge of the end-effector position in all three spatial coordinates, XYZ, for reliable manipulation of nanostructures. Although the images acquired by a scanning electron microscope (SEM) provide high resolution XY information, the lack of depth information in the Z-direction makes 3D nanomanipulation time-consuming. Existing approaches for contact detection of end-effectors inside SEM typically utilize fragile touch sensors that are difficult to integrate into a nanomanipulation system. This paper presents a method for determining the contact between an end-effector and a target surface during nanomanipulation inside SEM, purely based on the processing of SEM images. A depth-from-focus method is used in the fast approach of the end-effector to the substrate, followed by fine contact detection. Experimental results demonstrate that the contact detection approach is capable of achieving an accuracy of 21.5 nm at 50,000× magnification while inducing little end-effector damage. Copyright © 2012 Elsevier B.V. All rights reserved.
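The depth-from-focus step can be illustrated with a focus measure such as the variance of the Laplacian; the image stack below is synthetic, standing in for SEM frames acquired while sweeping focus.

```python
# Minimal sketch of the depth-from-focus idea used in the coarse approach
# phase: the frame with the highest focus measure (variance of the Laplacian)
# indicates where the end-effector is in focus. The image stack is synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(1)
sharp = rng.random((128, 128))          # stand-in for an in-focus SEM image

# Build a synthetic focus stack: frame 7 is sharpest, the rest are blurred.
stack = [gaussian_filter(sharp, sigma=0.3 + 0.6 * abs(i - 7)) for i in range(15)]

def focus_measure(img):
    return laplace(img).var()

scores = [focus_measure(frame) for frame in stack]
best_index = int(np.argmax(scores))
print(f"sharpest frame index: {best_index} (expected 7)")
```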
Bioinspired Methodology for Artificial Olfaction
Raman, Baranidharan; Hertz, Joshua L.; Benkstein, Kurt D.; Semancik, Steve
2008-01-01
Artificial olfaction is a potential tool for noninvasive chemical monitoring. Application of “electronic noses” typically involves recognition of “pretrained” chemicals, while long-term operation and generalization of training to allow chemical classification of “unknown” analytes remain challenges. The latter analytical capability is critically important, as it is unfeasible to pre-expose the sensor to every analyte it might encounter. Here, we demonstrate a biologically inspired approach where the recognition and generalization problems are decoupled and resolved in a hierarchical fashion. Analyte composition is refined in a progression from general (e.g., target is a hydrocarbon) to precise (e.g., target is ethane), using highly optimized response features for each step. We validate this approach using a MEMS-based chemiresistive microsensor array. We show that this approach, a unique departure from existing methodologies in artificial olfaction, allows the recognition module to better mitigate sensor-aging effects and to better classify unknowns, enhancing the utility of chemical sensors for real-world applications. PMID:18855409
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
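The copula sampling step can be sketched as follows (in Python rather than the paper's R/WinBUGS workflow); the marginal distributions, failure rates, and copula correlation are assumed values for demonstration only.

```python
# Illustrative Python sketch (the paper itself uses R and WinBUGS) of sampling
# dependent failure times of two redundant components via a Gaussian copula.
# Marginals, failure rates, and the copula correlation are assumed values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000
rho = 0.6                                   # copula correlation (assumed)
rates = np.array([1e-3, 1e-3])              # exponential failure rates (/hour)

# 1. Correlated standard normals.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

# 2. Transform to uniforms via the normal CDF (the copula step).
u = stats.norm.cdf(z)

# 3. Apply inverse marginal CDFs to get dependent exponential failure times.
t = stats.expon.ppf(u, scale=1.0 / rates)

# Probability that both components fail within a 1000-hour mission,
# compared with the independence assumption.
both_fail = np.mean((t[:, 0] < 1000) & (t[:, 1] < 1000))
indep = (1 - np.exp(-rates[0] * 1000)) * (1 - np.exp(-rates[1] * 1000))
print(f"P(both fail) with copula: {both_fail:.4f}  vs independent: {indep:.4f}")
```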
Fatigue assessment of an existing steel bridge by finite element modelling and field measurements
NASA Astrophysics Data System (ADS)
Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.
2017-05-01
The evaluation of the fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed through the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner's rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.
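The damage-summation step can be sketched as below; the S-N curve constants and the cycle counts (which in the paper come from rainflow counting of measured stress histories) are placeholders, not values from a design standard.

```python
# Sketch of the damage-summation step only. The stress ranges and cycle counts
# would come from rainflow counting of the measured stress histories; the S-N
# curve constants below are placeholders, not values from a design standard.
import numpy as np

# Basquin-type S-N curve: N_allow = C * S^(-m), with S in MPa.
C, m = 2.0e12, 3.0                       # assumed detail-category constants

# Hypothetical rainflow output: (stress range in MPa, counted cycles per year).
cycles = [(80.0, 1.2e5), (60.0, 4.0e5), (40.0, 1.5e6), (20.0, 6.0e6)]

def miner_damage(cycles, C, m):
    damage = 0.0
    for s_range, n_counted in cycles:
        n_allow = C * s_range ** (-m)    # cycles to failure at this stress range
        damage += n_counted / n_allow    # Miner's linear damage fraction
    return damage

d_per_year = miner_damage(cycles, C, m)
print(f"annual damage = {d_per_year:.3f}, estimated life = {1.0/d_per_year:.1f} years")
```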
ERIC Educational Resources Information Center
Wilson, Marian L.
A study was conducted to determine if a disparity exists between the familial and occupational attitudes of women in typical and atypical careers. Questionnaire responses of 225 undergraduate women in three typical careers (home economics, nursing, and elementary education) and three atypical careers (engineering, pharmacy, and agriculture)…
31. INTERIOR VIEW OF TYPICAL HALLWAY AND STAIRWAY IN CENTER ...
31. INTERIOR VIEW OF TYPICAL HALLWAY AND STAIRWAY IN CENTER WING OF TECHWOOD DORMITORY. EXISTING DOORS ARE REPLACEMENTS OF ORIGINAL PANEL DOORS. - Techwood Homes, McDaniel Dormitory, 581-587 Techwood Drive, Atlanta, Fulton County, GA
Energy Savings Measure Packages. Existing Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casey, Sean; Booten, Chuck
2011-11-01
This document presents the most cost-effective Energy Savings Measure Packages (ESMP) for existing mixed-fuel and all-electric homes to achieve 15% and 30% savings for each BetterBuildings grantee location across the United States. These packages are optimized for minimum cost to homeowners for source energy savings, given the local climate and prevalent building characteristics (i.e., foundation types). Maximum cost savings are typically found between 30% and 50% energy savings over the reference home; this typically amounts to $300 - $700/year.
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
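A greatly simplified stand-in for the model-based clustering idea is sketched below: an EM algorithm for a mixture of multinomials over simulated UMI counts, with a Dirichlet-style pseudo-count for smoothing. It is not the DIMM-SC implementation, but it shows how posterior responsibilities yield per-cell clustering uncertainty.

```python
# Simplified stand-in for the model-based clustering idea (NOT the DIMM-SC
# implementation): EM for a mixture of multinomials over UMI counts, with a
# small Dirichlet-style pseudo-count for smoothing. Counts are simulated;
# posterior responsibilities give per-cell clustering uncertainty.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_cells, n_genes, K = 300, 50, 3

# Simulate three clusters with different gene-expression profiles.
true_profiles = rng.dirichlet(np.ones(n_genes) * 0.5, size=K)
labels = rng.integers(0, K, size=n_cells)
counts = np.vstack([rng.multinomial(1000, true_profiles[k]) for k in labels])

# EM for a K-component multinomial mixture.
pi = np.full(K, 1.0 / K)
profiles = rng.dirichlet(np.ones(n_genes), size=K)
for _ in range(100):
    # E-step: log responsibilities (the multinomial coefficient is constant per cell).
    log_r = np.log(pi) + counts @ np.log(profiles).T          # shape (cells, K)
    log_r -= logsumexp(log_r, axis=1, keepdims=True)
    r = np.exp(log_r)
    # M-step with a Dirichlet-style pseudo-count on the profiles.
    pi = r.mean(axis=0)
    weighted = r.T @ counts + 0.1                              # shape (K, genes)
    profiles = weighted / weighted.sum(axis=1, keepdims=True)

hard = r.argmax(axis=1)
uncertainty = 1.0 - r.max(axis=1)          # per-cell clustering uncertainty
print("cluster sizes:", np.bincount(hard, minlength=K))
print("mean per-cell uncertainty:", round(float(uncertainty.mean()), 4))
```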
Environmental degradation and remediation: is economics part of the problem?
Dore, Mohammed H I; Burton, Ian
2003-01-01
It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge. This is because the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer utility measured in dollar terms can only trivialize the problem of global climate change.
An Ontology-based Architecture for Integration of Clinical Trials Management Applications
Shankar, Ravi D.; Martins, Susana B.; O’Connor, Martin; Parrish, David B.; Das, Amar K.
2007-01-01
Management of complex clinical trials involves the coordinated use of a myriad of software applications by trial personnel. The applications typically use distinct knowledge representations and generate an enormous amount of information during the course of a trial. It becomes vital that the applications exchange trial semantics to enable efficient management of the trials and subsequent analysis of clinical trial data. Existing model-based frameworks do not address the requirements of semantic integration of heterogeneous applications. We have built an ontology-based architecture to support interoperation of clinical trial software applications. Central to our approach is a suite of clinical trial ontologies, which we call Epoch, that define the vocabulary and semantics necessary to represent information on clinical trials. We are continuing to demonstrate and validate our approach with different clinical trials management applications and with a growing number of clinical trials. PMID:18693919
NASA Astrophysics Data System (ADS)
Hogri, Roni; Bamford, Simeon A.; Taub, Aryeh H.; Magal, Ari; Giudice, Paolo Del; Mintz, Matti
2015-02-01
Neuroprostheses could potentially recover functions lost due to neural damage. Typical neuroprostheses connect an intact brain with the external environment, thus replacing damaged sensory or motor pathways. Recently, closed-loop neuroprostheses, bidirectionally interfaced with the brain, have begun to emerge, offering an opportunity to substitute malfunctioning brain structures. In this proof-of-concept study, we demonstrate a neuro-inspired model-based approach to neuroprostheses. A VLSI chip was designed to implement essential cerebellar synaptic plasticity rules, and was interfaced with cerebellar input and output nuclei in real time, thus reproducing cerebellum-dependent learning in anesthetized rats. Such a model-based approach does not require prior system identification, allowing for de novo experience-based learning in the brain-chip hybrid, with potential clinical advantages and limitations when compared to existing parametric ``black box'' models.
Chemical alternatives assessment: the case of flame retardants.
Howard, Gregory J
2014-12-01
Decisions on chemical substitution are made rapidly and by many stakeholders; these decisions may have a direct impact on consumer exposures and, when a hazard exists, on consumer risks. Flame retardants (FRs) present particular challenges, including very high production volumes, designed-in persistence, and often direct consumer exposure. Newer FR products, as with other industrial chemicals, typically lack data on hazard and exposure, and in many cases even basic information on structure and use in products is unknown. Chemical alternatives assessment (CAA) provides a hazard-focused approach to distinguishing between possible substitutions; variations on this process are used by several government and numerous corporate entities. By grouping chemicals according to functional use, some information on exposure potential can be inferred, allowing for decisions based on those hazard properties that are most distinguishing. This approach can help prevent the "regrettable substitution" of one chemical with another of equal, or even higher, risk. Copyright © 2014 Elsevier Ltd. All rights reserved.
Play therapy: a case-based example of a nondirective approach.
Lawver, Timothy; Blankenship, Kelly
2008-10-01
Play therapy is a treatment modality in which the therapist engages in play with the child. Its use has been documented in a variety of settings and with a variety of diagnoses. Treating within the context of play brings the therapist and the therapy to the level of the child. By way of an introduction to this approach, a case is presented of a six-year-old boy with oppositional defiant disorder. The presentation focuses on the events and interactions of a typical session with an established patient. The primary issues of the session are aggression, self-worth, and self-efficacy. These themes manifest themselves through the content of the child's play and his narration of his actions. The therapist then reflects these back to the child while gently encouraging the child toward more positive play. Though the example is one of nondirective play therapy, a wide range of variation exists under the heading of play therapy.
Climate change and sustainable development: realizing the opportunity.
Robinson, John; Bradley, Mike; Busby, Peter; Connor, Denis; Murray, Anne; Sampson, Bruce; Soper, Wayne
2006-02-01
Manifold linkages exist between climate change and sustainable development. Although these are starting to receive attention in the climate change literature, the focus has typically been on examining sustainable development through a climate change lens, rather than vice versa. And there has been little systematic examination of how these linkages may be fostered in practice. This paper examines climate change through a sustainable development lens. To illustrate how this might change the approach to climate change issues, it reports on the findings of a panel of business, local government, and academic representatives in British Columbia, Canada, who were appointed to advise the provincial government on climate change policy. The panel found that sustainable development may offer a significantly more fruitful way to pursue climate policy goals than climate policy itself. The paper discusses subsequent climate change developments in the province and makes suggestions as to how best to pursue such a sustainability approach in British Columbia and other jurisdictions.
A Social-Interactive Neuroscience Approach to Understanding the Developing Brain.
Redcay, Elizabeth; Warnell, Katherine Rice
2018-01-01
From birth onward, social interaction is central to our everyday lives. Our ability to seek out social partners, flexibly navigate and learn from social interactions, and develop social relationships is critically important for our social and cognitive development and for our mental and physical health. Despite the importance of our social interactions, the neurodevelopmental bases of such interactions are underexplored, as most research examines social processing in noninteractive contexts. We begin this chapter with evidence from behavioral work and adult neuroimaging studies demonstrating how social-interactive context fundamentally alters cognitive and neural processing. We then highlight four brain networks that play key roles in social interaction and, drawing on existing developmental neuroscience literature, posit the functional roles these networks may play in social-interactive development. We conclude by discussing how a social-interactive neuroscience approach holds great promise for advancing our understanding of both typical and atypical social development. © 2018 Elsevier Inc. All rights reserved.
Creation of digital contours that approach the characteristics of cartographic contours
Tyler, Dean J.; Greenlee, Susan K.
2012-01-01
The capability to easily create digital contours using commercial off-the-shelf (COTS) software has existed for decades. Out-of-the-box raw contours are suitable for many scientific applications without pre- or post-processing; however, cartographic applications typically require additional improvements. For example, raw contours generally require smoothing before placement on a map. Cartographic contours must also conform to certain spatial/logical rules; for example, contours may not cross waterbodies. The objective was to create contours that match as closely as possible the cartographic contours produced by manual methods on the 1:24,000-scale, 7.5-minute Topographic Map series. This report outlines the basic approach, describes a variety of problems that were encountered, and discusses solutions. Many of the challenges described herein were the result of imperfect input raster elevation data and the requirement to have the contours integrated with hydrographic features from the National Hydrography Dataset (NHD).
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
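The bag-of-features branch can be sketched compactly: cluster spatio-temporal descriptors into a visual vocabulary, represent each clip as a word histogram, and train an SVM. The descriptors below are random placeholders standing in for real spatio-temporal features.

```python
# Compact sketch of the bag-of-features branch only: cluster spatio-temporal
# descriptors into a visual vocabulary, represent each clip as a word
# histogram, and classify with an SVM. The descriptors are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_words = 60, 20

# Each "clip" contributes a variable number of 32-d descriptors; gesture
# classes 0 and 1 are drawn from slightly shifted distributions.
clips, labels = [], []
for i in range(n_clips):
    label = i % 2
    desc = rng.normal(loc=0.5 * label, scale=1.0, size=(rng.integers(80, 120), 32))
    clips.append(desc)
    labels.append(label)

# 1. Learn the vocabulary on descriptors pooled from training clips.
train_idx = np.arange(0, n_clips, 2)
test_idx = np.arange(1, n_clips, 2)
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
kmeans.fit(np.vstack([clips[i] for i in train_idx]))

# 2. Represent each clip as a normalized visual-word histogram.
def bof_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bof_histogram(c) for c in clips])
y = np.array(labels)

# 3. Train and evaluate a linear SVM on the histograms.
clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
print("test accuracy:", clf.score(X[test_idx], y[test_idx]))
```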
Dalenberg, Jelle R.; Nanetti, Luca; Renken, Remco J.; de Wijk, René A.; ter Horst, Gert J.
2014-01-01
Consumers show high interindividual variability in food liking during repeated exposure. To investigate consumer liking during repeated exposure, data is often interpreted on a product level by averaging results over all consumers. However, a single product may elicit inconsistent behaviors in consumers; averaging will mix and hide possible subgroups of consumer behaviors, leading to a misinterpretation of the results. To deal with the variability in consumer liking, we propose to use clustering on data from consumer-product combinations to investigate the nature of the behavioral differences within the complete dataset. The resulting behavioral clusters can then be used to describe product acceptance. To test this approach we used two independent data sets in which young adults were repeatedly exposed to drinks and snacks, respectively. We found that five typical consumer behaviors existed in both datasets. These behaviors differed both in the average level of liking as well as its temporal dynamics. By investigating the distribution of a single product across typical consumer behaviors, we provide more precise insight in how consumers divide in subgroups based on their product liking (i.e. product modality). This work shows that taking into account and using interindividual differences can unveil information about product acceptance that would otherwise be ignored. PMID:24667832
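The clustering idea can be sketched as follows: each consumer-product combination is treated as a liking trajectory over repeated exposures, and the trajectories themselves (rather than product averages) are clustered. The trajectories below are simulated, and k-means stands in for whatever clustering method the study used.

```python
# Sketch of the clustering idea: each consumer-product combination is a liking
# trajectory over repeated exposures, and the trajectories (not product
# averages) are clustered into typical behaviors. The data are simulated and
# k-means is a stand-in for the study's clustering method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_combinations, n_exposures = 400, 8
t = np.arange(n_exposures)

# Three illustrative behaviors: stably high liking, increasing, decreasing.
templates = np.vstack([
    6.5 + 0.0 * t,            # stably high
    4.0 + 0.4 * t,            # increasing liking (mere-exposure-like)
    7.0 - 0.5 * t,            # decreasing liking (boredom-like)
])
behavior = rng.integers(0, 3, size=n_combinations)
trajectories = templates[behavior] + rng.normal(0, 0.5, (n_combinations, n_exposures))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trajectories)
for k in range(3):
    mean_curve = trajectories[km.labels_ == k].mean(axis=0)
    print(f"cluster {k}: start {mean_curve[0]:.1f} -> end {mean_curve[-1]:.1f}")
```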
14. Detail, typical approach span fixed bearing atop stone masonry ...
14. Detail, typical approach span fixed bearing atop stone masonry pier, view to northwest, 210mm lens. - Southern Pacific Railroad Shasta Route, Bridge No. 210.52, Milepost 210.52, Tehama, Tehama County, CA
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In existing studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been extensively explored using the quantization approach. Following a similar method, here several game models of opinion formation are quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can strikingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.
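A minimal numerical sketch of the Marinatto-Weber scheme is given below for a generic 2x2 coordination-style "opinion" game; the payoff matrices and the entangled initial state are illustrative assumptions, not the specific models quantized in the paper.

```python
# Minimal numerical sketch of the Marinatto-Weber scheme for a generic 2x2
# "opinion" game. The payoff matrices and the entangled initial state are
# illustrative assumptions, not the specific models quantized in the paper.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])      # flip operator

def mw_payoffs(payoff_A, payoff_B, a, p, q):
    """Expected payoffs in the Marinatto-Weber scheme.

    payoff_A/B : 2x2 payoff matrices indexed by the two classical choices.
    a          : amplitude of |00> in the initial state a|00> + sqrt(1-a^2)|11>.
    p, q       : probabilities that player A / player B applies the identity.
    """
    b = np.sqrt(1.0 - a ** 2)
    psi = a * np.kron([1.0, 0.0], [1.0, 0.0]) + b * np.kron([0.0, 1.0], [0.0, 1.0])
    rho = np.outer(psi, psi)

    branches = [(p * q, np.kron(I2, I2)), (p * (1 - q), np.kron(I2, X)),
                ((1 - p) * q, np.kron(X, I2)), ((1 - p) * (1 - q), np.kron(X, X))]
    rho_f = sum(w * U @ rho @ U.T for w, U in branches)   # all operators are real

    basis = np.eye(4)                                      # |00>, |01>, |10>, |11>
    pay_A = pay_B = 0.0
    for i in range(2):
        for j in range(2):
            proj = np.outer(basis[2 * i + j], basis[2 * i + j])
            pay_A += payoff_A[i, j] * np.trace(proj @ rho_f)
            pay_B += payoff_B[i, j] * np.trace(proj @ rho_f)
    return pay_A, pay_B

# Illustrative coordination-style payoffs: agreeing on an opinion pays off.
A = np.array([[3.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])
print(mw_payoffs(A, B, a=1.0 / np.sqrt(2.0), p=1.0, q=1.0))   # -> (2.5, 2.5)
```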
Research of communication quality assessment algorithm according to the standard G3-PLC
NASA Astrophysics Data System (ADS)
Chebotayev, Pavel; Klimenko, Aleksey; Myakochin, Yuri; Polyakov, Igor; Shelupanov, Alexander; Urazayev, Damir; Zykov, Dmitriy
2017-11-01
The present paper deals with the quality assessment of a PLC channel which is part of a fault-tolerant, self-organizing heterogeneous communication system. The PLC implementation allows operating costs to be reduced when constructing new info-communication networks. PLC is used for transmitting information between various devices over the alternating-current mains. Different approaches exist for transferring information over power lines; their differences result from the requirements of the typical applications that use PLC as a data transmission channel. In the research described in this paper, a signal in the AC mains was simulated with regard to different kinds of noise caused by power-line loads.
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
Design and Analysis of a Stiffened Composite Fuselage Panel
NASA Technical Reports Server (NTRS)
Dickson, J. N.; Biggers, S. B.
1980-01-01
A stiffened composite panel has been designed that is representative of the fuselage structure of existing wide bodied aircraft. The panel is a minimum weight design, based on the current level of technology and realistic loads and criteria. Several different stiffener configurations were investigated in the optimization process. The final configuration is an all graphite epoxy J-stiffened design in which the skin between adjacent stiffeners is permitted to buckle under design loads. Fail-safe concepts typically employed in metallic fuselage structure have been incorporated in the design. A conservative approach has been used with regard to structural details such as skin frame and stringer frame attachments and other areas where sufficient design data was not available.
A note on adding viscoelasticity to earthquake simulators
Pollitz, Fred
2017-01-01
Here, I describe how time‐dependent quasi‐static stress transfer can be implemented in an earthquake simulator code that is used to generate long synthetic seismicity catalogs. Most existing seismicity simulators use precomputed static stress interaction coefficients to rapidly implement static stress transfer in fault networks with typically tens of thousands of fault patches. The extension to quasi‐static deformation, which accounts for viscoelasticity of Earth’s ductile lower crust and mantle, involves the precomputation of additional interaction coefficients that represent time‐dependent stress transfer among the model fault patches, combined with defining and evolving additional state variables that track this stress transfer. The new approach is illustrated with application to a California‐wide synthetic fault network.
Multiscale corner detection and classification using local properties and semantic patterns
NASA Astrophysics Data System (ADS)
Gallo, Giovanni; Giuoco, Alessandro L.
2002-05-01
A new technique to detect, localize, and classify corners in digital closed curves is proposed. The technique is based on correct estimation of the support region of each point. We compute multiscale curvature to detect and to localize corners. As a further step, with the aid of some local features, it is possible to classify corners into seven distinct types. Classification is performed using a set of rules which describe corners according to preset semantic patterns. Compared with existing techniques, the proposed approach belongs to the family of algorithms that try to explain the curve rather than simply label it. Moreover, our technique works in a manner similar to what are believed to be typical mechanisms of human perception.
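A simplified sketch of the multiscale curvature idea is given below (not the paper's exact detector or rule set): the turning angle between chords to the k-th neighbours serves as a curvature measure at scale k, and points that remain salient across scales are kept as corners.

```python
# Simplified sketch of multiscale curvature estimation on a closed digital
# curve (not the paper's exact detector or classification rules): at scale k,
# the turning angle between chords to the k-th neighbours measures curvature,
# and points that are salient at every scale are kept as corners.
import numpy as np

def turning_angle(curve, k):
    """Angle-based curvature at scale k for a closed curve of shape (N, 2)."""
    back = np.roll(curve, k, axis=0) - curve
    ahead = np.roll(curve, -k, axis=0) - curve
    cos_a = np.sum(back * ahead, axis=1) / (
        np.linalg.norm(back, axis=1) * np.linalg.norm(ahead, axis=1))
    return np.pi - np.arccos(np.clip(cos_a, -1.0, 1.0))   # 0 = straight line

# Test shape: a unit square sampled with 25 points per side (true corners
# fall at indices 0, 25, 50 and 75).
t = np.linspace(0.0, 1.0, 25, endpoint=False)
zeros, ones = np.zeros_like(t), np.ones_like(t)
square = np.vstack([
    np.column_stack([t, zeros]),          # bottom edge
    np.column_stack([ones, t]),           # right edge
    np.column_stack([1.0 - t, ones]),     # top edge
    np.column_stack([zeros, 1.0 - t]),    # left edge
])

scales = (2, 4, 8)
strength = np.min([turning_angle(square, k) for k in scales], axis=0)

# Corner candidates: salient at every scale and a local maximum of strength.
threshold = 0.5                            # radians; assumed value
is_max = (strength >= np.roll(strength, 1)) & (strength >= np.roll(strength, -1))
print("detected corner indices:", np.flatnonzero((strength > threshold) & is_max))
```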
Tourism climate and thermal comfort in Sun Moon Lake, Taiwan.
Lin, Tzu-Ping; Matzarakis, Andreas
2008-03-01
Bioclimate conditions at Sun Moon Lake, one of Taiwan's most popular tourist destinations, are presented. Existing tourism-related climate information is typically based on mean monthly conditions of air temperature and precipitation and excludes the thermal perception of tourists. This study presents a relatively more detailed analysis of tourism climate by using a modified thermal comfort range for both Taiwan and Western/Middle European conditions, presented through frequency analysis of 10-day intervals. Furthermore, an integrated approach (climate tourism information scheme) is applied to present the frequencies of each facet under particular criteria for each 10-day interval, generating a time series of climate data with a temporal resolution suitable for tourists and tourism authorities.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box", and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to the specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattari, Sulimon, E-mail: ssattari2@ucmerced.edu; Chen, Qianting, E-mail: qchen2@ucmerced.edu; Mitchell, Kevin A., E-mail: kmitchell@ucmerced.edu
Topological approaches to mixing are important tools to understand chaotic fluid flows, ranging from oceanic transport to the design of micro-mixers. Typically, topological entropy, the exponential growth rate of material lines, is used to quantify topological mixing. Computing topological entropy from the direct stretching rate is computationally expensive and sheds little light on the source of the mixing. Earlier approaches emphasized that topological entropy could be viewed as generated by the braiding of virtual, or “ghost,” rods stirring the fluid in a periodic manner. Here, we demonstrate that topological entropy can also be viewed as generated by the braiding of ghost rods following heteroclinic orbits instead. We use the machinery of homotopic lobe dynamics, which extracts symbolic dynamics from finite-length pieces of stable and unstable manifolds attached to fixed points of the fluid flow. As an example, we focus on the topological entropy of a bounded, chaotic, two-dimensional, double-vortex cavity flow. Over a certain parameter range, the topological entropy is primarily due to the braiding of a period-three orbit. However, this orbit does not explain the topological entropy for parameter values where it does not exist, nor does it explain the excess of topological entropy for the entire range of its existence. We show that braiding by heteroclinic orbits provides an accurate computation of topological entropy when the period-three orbit does not exist, and that it provides an explanation for some of the excess topological entropy when the period-three orbit does exist. Furthermore, the computation of symbolic dynamics using heteroclinic orbits has been automated and can be used to compute topological entropy for a general 2D fluid flow.
Using heteroclinic orbits to quantify topological entropy in fluid flows
NASA Astrophysics Data System (ADS)
Sattari, Sulimon; Chen, Qianting; Mitchell, Kevin A.
2016-03-01
Topological approaches to mixing are important tools to understand chaotic fluid flows, ranging from oceanic transport to the design of micro-mixers. Typically, topological entropy, the exponential growth rate of material lines, is used to quantify topological mixing. Computing topological entropy from the direct stretching rate is computationally expensive and sheds little light on the source of the mixing. Earlier approaches emphasized that topological entropy could be viewed as generated by the braiding of virtual, or "ghost," rods stirring the fluid in a periodic manner. Here, we demonstrate that topological entropy can also be viewed as generated by the braiding of ghost rods following heteroclinic orbits instead. We use the machinery of homotopic lobe dynamics, which extracts symbolic dynamics from finite-length pieces of stable and unstable manifolds attached to fixed points of the fluid flow. As an example, we focus on the topological entropy of a bounded, chaotic, two-dimensional, double-vortex cavity flow. Over a certain parameter range, the topological entropy is primarily due to the braiding of a period-three orbit. However, this orbit does not explain the topological entropy for parameter values where it does not exist, nor does it explain the excess of topological entropy for the entire range of its existence. We show that braiding by heteroclinic orbits provides an accurate computation of topological entropy when the period-three orbit does not exist, and that it provides an explanation for some of the excess topological entropy when the period-three orbit does exist. Furthermore, the computation of symbolic dynamics using heteroclinic orbits has been automated and can be used to compute topological entropy for a general 2D fluid flow.
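As a rough illustration of the quantity being discussed (not the authors' homotopic-lobe-dynamics machinery), the sketch below estimates the exponential stretching rate of a material line in a standard time-periodic toy flow, the "double gyre"; the flow, parameters, and integration scheme are assumptions chosen only for the example.

```python
import numpy as np

A, EPS, OMEGA = 0.1, 0.25, 2.0 * np.pi / 10.0  # made-up double-gyre parameters

def velocity(x, y, t):
    s = EPS * np.sin(OMEGA * t)
    f = s * x**2 + (1.0 - 2.0 * s) * x
    dfdx = 2.0 * s * x + (1.0 - 2.0 * s)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(px, py, t0, t1, dt=0.01):
    # Forward Euler is adequate for a sketch; a real analysis would use a
    # higher-order integrator and adaptively refine the polyline as it stretches.
    t = t0
    while t < t1 - 1e-12:
        u, v = velocity(px, py, t)
        px, py = px + dt * u, py + dt * v
        t += dt
    return px, py

def line_length(px, py):
    return np.hypot(np.diff(px), np.diff(py)).sum()

# Seed a short material line and record how its length grows over time.
px = np.linspace(0.9, 1.1, 5000)
py = np.full_like(px, 0.5)
times, lengths = [0.0], [line_length(px, py)]
for k in range(1, 5):
    px, py = advect(px, py, (k - 1) * 10.0, k * 10.0)
    times.append(k * 10.0)
    lengths.append(line_length(px, py))

# The slope of log(length) versus time is the line-stretching exponent, a
# standard proxy (lower bound) for the topological entropy of a 2D flow.
slope = np.polyfit(times, np.log(lengths), 1)[0]
print(f"estimated stretching exponent: {slope:.3f} per unit time")
```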
Weakly supervised visual dictionary learning by harnessing image attributes.
Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang
2014-12-01
Bag-of-features (BoFs) representation has been extensively applied to deal with various computer vision applications. To extract discriminative and descriptive BoF, one important step is to learn a good dictionary to minimize the quantization loss between local features and codewords. While most existing visual dictionary learning approaches are engaged with unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping the observed states, where the supervised information is stemmed from the hidden states of the HMRF. In such a way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms the state-of-the-art unsupervised dictionary learning approaches.
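For orientation, here is a minimal sketch of the unsupervised baseline referred to in the abstract: plain k-means quantization of local descriptors into a bag-of-features histogram. The descriptors, codebook size, and data are made-up stand-ins, and the paper's attribute-supervised HMRF learning is not reproduced.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain Lloyd's k-means; the resulting centers serve as the visual codebook."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(0)
    return centers

def bof_histogram(features, centers):
    """Assign each local feature to its nearest codeword and normalize the counts."""
    d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
local_descriptors = rng.normal(size=(500, 32))   # stand-in for SIFT-like features
codebook = kmeans(local_descriptors, k=16)
print(bof_histogram(local_descriptors[:100], codebook))
```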
OPTIMIZATION OF MODERN DISPERSIVE RAMAN SPECTROMETERS FOR MOLECULAR SPECIATION OF ORGANICS IN WATER
Pesticides and industrial chemicals are typically complex organic molecules with multiple heteroatoms that can ionize in water. However, models for understanding the behavior of these chemicals in the environment typically assume that they exist exclusively as neutral species --...
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a potential fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
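To make the semi-Markov idea concrete, here is a minimal Monte Carlo sketch with made-up states, transition probabilities, and sojourn-time distributions; it illustrates state-dependent (non-memoryless) sojourn times only, not the NASA-JSC event-tree study itself.

```python
import random

# Hypothetical three-state reliability model: nominal -> degraded -> failed,
# with a possible repair back to nominal. Transition probabilities are made up.
TRANSITIONS = {
    "nominal":  [("degraded", 1.0)],
    "degraded": [("failed", 0.4), ("nominal", 0.6)],
}

def sojourn(state):
    # A semi-Markov model allows arbitrary sojourn-time distributions per state;
    # Weibull draws with assumed scales stand in for real data here.
    scale = {"nominal": 100.0, "degraded": 20.0}[state]
    return random.weibullvariate(scale, 1.5)

def time_to_failure():
    state, t = "nominal", 0.0
    while state != "failed":
        t += sojourn(state)
        r, acc = random.random(), 0.0
        for target, p in TRANSITIONS[state]:
            acc += p
            if r <= acc:
                state = target
                break
    return t

samples = [time_to_failure() for _ in range(10000)]
print("estimated mean time to failure:", round(sum(samples) / len(samples), 1))
```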
NASA Astrophysics Data System (ADS)
Siddhanta, Soumik; Wróbel, Maciej S.; Barman, Ishan
2017-02-01
A quick, cost-effective method for detection of drugs of abuse in biological fluids would be of great value in healthcare, law enforcement, and home testing applications. The alarming rise in narcotics abuse has led to considerable focus on developing potent and versatile analytical tools that can address this societal problem. While laboratory testing plays a key role in the current detection of drug misuse and the evaluation of patients with drug induced intoxication, these typically require expensive reagents and trained personnel, and may take hours to complete. Thus, a significant unmet need is to engineer a facile method that can rapidly detect drugs with little sample preparation, especially the bound fraction that is typically dominant in the blood stream. Here we report an approach that combines the exquisite sensitivity of surface enhanced Raman spectroscopy (SERS) and a facile protein tethering mechanism to reliably detect four different classes of drugs, barbiturate, benzodiazepine, amphetamine and benzoylecgonine. The proposed approach harnesses the reliable and specific attachment of proteins to both drugs and nanoparticle to facilitate the enhancement of spectral markers that are sensitive to the presence of the drugs. In conjunction with chemometric tools, we have shown the ability to quantify these drugs lower than levels achievable by existing clinical immunoassays. Through molecular docking simulations, we also probe the mechanistic underpinnings of the protein tethering approach, opening the door to detection of a broad class of narcotics in biological fluids within a few minutes as well as for groundwater analysis and toxin detection.
ERIC Educational Resources Information Center
Graeber, Mary
The typical approach to the teaching of an elementary school science methods course for undergraduate students was compared with an experimental approach based upon activities appearing in the Conceptually Oriented Program in Elementary Science (COPES) teacher's guides. The typical approach was characterized by a coverage of many topics and a…
Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain
NASA Astrophysics Data System (ADS)
Krauß, Thomas; Fischer, Peter
2016-08-01
In this paper we present a new method for fully automatic detection and derivation of areas endangered by heavy rainfall based only on digital elevation models. News reports show that the majority of occurring natural hazards are flood events, so many flood prediction systems have already been developed. However, most of these existing systems for deriving areas endangered by flooding events are based only on horizontal and vertical distances to existing rivers and lakes. Typically such systems do not take into account dangers arising directly from heavy rain events. In a study conducted by us together with a German insurance company, a new approach for detection of areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for classification of digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and analyze the results using the available insurance data.
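As a toy illustration of what a DEM-only indicator can look like (not one of the three classification methods evaluated in the paper), the sketch below flags local depressions where runoff from intense rainfall would pond; the elevation values are made up.

```python
import numpy as np

def local_depressions(dem):
    """Flag cells lower than all eight neighbours (edge cells use replicated borders)."""
    pad = np.pad(dem, 1, mode="edge")
    neighbours = np.stack([
        pad[1 + dr:dem.shape[0] + 1 + dr, 1 + dc:dem.shape[1] + 1 + dc]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if not (dr == 0 and dc == 0)
    ])
    return dem < neighbours.min(axis=0)

# Tiny synthetic elevation grid with one depression in the centre.
dem = np.array([[5.0, 5.2, 5.4],
                [5.1, 4.6, 5.3],
                [5.3, 5.2, 5.5]])
print(local_depressions(dem).astype(int))
```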
Developing a quality assurance program for online services.
Humphries, A W; Naisawald, G V
1991-01-01
A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas. PMID:1909197
Developing a quality assurance program for online services.
Humphries, A W; Naisawald, G V
1991-07-01
A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas.
Han, Bomie; Higgs, Richard E
2008-09-01
High-throughput HPLC-mass spectrometry (HPLC-MS) is routinely used to profile biological samples for potential protein markers of disease, drug efficacy and toxicity. The discovery technology has advanced to the point where translating hypotheses from proteomic profiling studies into clinical use is the bottleneck to realizing the full potential of these approaches. The first step in this translation is the development and analytical validation of a higher throughput assay with improved sensitivity and selectivity relative to typical profiling assays. Multiple reaction monitoring (MRM) assays are an attractive approach for this stage of biomarker development given their improved sensitivity and specificity, the speed at which the assays can be developed and the quantitative nature of the assay. While the profiling assays are performed with ion trap mass spectrometers, MRM assays are traditionally developed in quadrupole-based mass spectrometers. Development of MRM assays from the same instrument used in the profiling analysis enables a seamless and rapid transition from hypothesis generation to validation. This report provides guidelines for rapidly developing an MRM assay using the same mass spectrometry platform used for profiling experiments (typically ion traps) and reviews methodological and analytical validation considerations. The analytical validation guidelines presented are drawn from existing practices on immunological assays and are applicable to any mass spectrometry platform technology.
Feasibility of video codec algorithms for software-only playback
NASA Astrophysics Data System (ADS)
Rodriguez, Arturo A.; Morse, Ken
1994-05-01
Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease-of-decoding (i.e., playback performance), memory consumption, compression to decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
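As a concrete illustration of the frame-differencing idea mentioned above (a generic sketch, not the codecs evaluated in the paper), only pixels that changed beyond a threshold are encoded and the rest are copied from the previous decoded frame.

```python
import numpy as np

def encode_frame(prev, curr, threshold=8):
    """Return (mask, values): which pixels changed and their new values."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > threshold
    return mask, curr[mask]

def decode_frame(prev, mask, values):
    """Reconstruct the current frame from the previous one plus the changed pixels."""
    out = prev.copy()
    out[mask] = values
    return out

# Usage with two synthetic 8-bit grayscale frames.
prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 150:200] += 40            # a small moving region (values may wrap)
mask, values = encode_frame(prev, curr)
reconstructed = decode_frame(prev, mask, values)
print("changed pixels:", int(mask.sum()),
      "exact reconstruction:", np.array_equal(curr, reconstructed))
```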
LINEBACKER: LINE-speed Bio-inspired Analysis and Characterization for Event Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oehmen, Christopher S.; Bruillard, Paul J.; Matzke, Brett D.
2016-08-04
The cyber world is a complex domain, with digital systems mediating a wide spectrum of human and machine behaviors. While this is enabling a revolution in the way humans interact with each other and data, it also is exposing previously unreachable infrastructure to a worldwide set of actors. Existing solutions for intrusion detection and prevention that are signature-focused typically seek to detect anomalous and/or malicious activity for the sake of preventing or mitigating negative impacts. But a growing interest in behavior-based detection is driving new forms of analysis that move the emphasis from static indicators (e.g. rule-based alarms or tripwires) to behavioral indicators that accommodate a wider contextual perspective. Similar to cyber systems, biosystems have always existed in resource-constrained hostile environments where behaviors are tuned by context. So we look to biosystems as an inspiration for addressing behavior-based cyber challenges. In this paper, we introduce LINEBACKER, a behavior-model based approach to recognizing anomalous events in network traffic and present the design of this approach of bio-inspired and statistical models working in tandem to produce individualized alerting for a collection of systems. Preliminary results of these models operating on historic data are presented along with a plugin to support real-world cyber operations.
Integrating wetland connectivity into models for watershed ...
Geographically isolated wetlands (GIW), or wetlands embedded in uplands, exist along a spatial and temporal hydrologic connectivity continuum to downstream waters. Via these connections and disconnections, GIWs provide numerous hydrological, biogeochemical, and biological functions linked to human health and watershed-scale ecosystem services. Often, a clear demonstration of these functions and the individual and cumulative effects of GIWs on downstream waters is required for their protection or restoration. Measurements alone are typically too resource intensive to do this. In this presentation, we discuss the use of various modeling approaches to quantify the hydrologic connectivity of GIWs and their associated watershed-scale cumulative effects. Our goal is to improve the science behind understanding the functions and connectivity of GIWs via models that are complemented with various types of novel data. We synthesize what is meant by GIW connectivity and its broad significance to science and decision-making. We further discuss case studies that provide insights to diverse modeling approaches, with varying levels of complexity, for how to estimate GIW connectivity and associated watershed-scale impacts to hydrology. We finally provide insights to the key opportunities and priorities for integrating GIW connectivity into the next generation of models.
New algorithms to represent complex pseudoknotted RNA structures in dot-bracket notation.
Antczak, Maciej; Popenda, Mariusz; Zok, Tomasz; Zurkowski, Michal; Adamiak, Ryszard W; Szachniuk, Marta
2018-04-15
Understanding the formation, architecture and roles of pseudoknots in RNA structures is one of the most difficult challenges in RNA computational biology and structural bioinformatics. Methods predicting pseudoknots typically do so with poor accuracy, often despite experimental data incorporation. Existing bioinformatic approaches differ in how pseudoknots are recognized and how their nature is revealed. A few ways of classifying pseudoknots exist; the most common ones refer to genus or order. Following the latter, we propose new algorithms that identify pseudoknots in an RNA structure provided in BPSEQ format, determine their order and encode them in dot-bracket-letter notation. The proposed encoding aims to illustrate the hierarchy of RNA folding. The new algorithms are based on dynamic programming and hybrid (combining exhaustive search and random walk) approaches. They evolved from an elementary algorithm implemented within the workflow of RNA FRABASE 1.0, our database of RNA structure fragments. They use different scoring functions to rank dissimilar dot-bracket representations of RNA structure. Computational experiments show an advantage of the new methods over the others, especially for large RNA structures. The presented algorithms have been implemented as new functionality of the RNApdbee webserver and are ready to use at http://rnapdbee.cs.put.poznan.pl. mszachniuk@cs.put.poznan.pl. Supplementary data are available at Bioinformatics online.
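To illustrate what order assignment and dot-bracket-letter encoding mean in practice, here is a simple greedy sketch; it is not the dynamic-programming or hybrid algorithm of RNApdbee, and the example base pairs are hypothetical.

```python
# Each base pair is assigned the lowest "order" such that it does not cross any
# pair already placed at that order; order k is rendered with the k-th bracket type.
BRACKETS = ["()", "[]", "{}", "<>"] + [c + c.lower() for c in "ABCDEFGH"]

def crossing(p, q):
    (i, j), (k, l) = p, q
    return (i < k < j < l) or (k < i < l < j)

def dot_bracket(length, pairs):
    orders, assignment = [], {}
    for p in sorted(pairs):
        for k, placed in enumerate(orders):
            if not any(crossing(p, q) for q in placed):
                placed.append(p)
                assignment[p] = k
                break
        else:
            orders.append([p])
            assignment[p] = len(orders) - 1
    out = ["."] * length
    for (i, j), k in assignment.items():
        out[i], out[j] = BRACKETS[k][0], BRACKETS[k][1]
    return "".join(out)

# Example: a small H-type pseudoknot given as 0-based pair indices (made up).
print(dot_bracket(12, [(0, 6), (1, 5), (3, 9), (4, 8)]))   # -> "(([[))..]].."
```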
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of only selecting those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
Estimating psycho-physiological state of a human by speech analysis
NASA Astrophysics Data System (ADS)
Ronzhin, A. L.
2005-05-01
Adverse effects of intoxication, fatigue and boredom can degrade the performance of highly trained operators of complex technical systems, with potentially catastrophic consequences. Existing physiological fitness-for-duty tests are time consuming, costly, invasive, and highly unpopular. Known non-physiological tests constitute a secondary task and interfere with the busy workload of the tested operator. Various attempts to assess the current status of the operator by processing "normal operational data" often lead to an excessive amount of computation, poorly justified metrics, and ambiguous results. At the same time, speech analysis presents a natural, non-invasive approach based upon well-established, efficient data processing. In addition, it supports both behavioral and physiological biometrics. This paper presents an approach that facilitates a robust speech analysis/understanding process in spite of natural speech variability and background noise. Automatic speech recognition is suggested as a technique for the detection of changes in the psycho-physiological state of a human that typically manifest themselves as changes in the characteristics of the vocal tract and the semantic-syntactic connectivity of conversation. Preliminary tests have confirmed that a statistically significant correlation between the error rate of automatic speech recognition and the extent of alcohol intoxication does exist. In addition, the obtained data allowed exploring some interesting correlations and establishing some quantitative models. It is proposed to use this approach as part of a fitness-for-duty test and to compare its efficiency with analyses of iris, face geometry, thermography and other popular non-invasive biometric techniques.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.
2003-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition obtained from a minor modification of the eigenvectors of the non-conservative MHD equations to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems revealed the applicability of the newly developed schemes for the MHD equations.
Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.
Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung
2010-11-01
Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: They can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion, which is typical of Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
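To give a flavor of the kernel-basis idea, the sketch below fits a continuous surface to sparse height samples with Gaussian kernels and a regularized linear solve; handling gradient inputs, as the paper does, is omitted, and all data and parameters are synthetic.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.15):
    """Gaussian kernel matrix between two sets of 2D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, (200, 2))                         # sparse sample locations
heights = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])       # observed heights

# Kernel ridge regression: the surface is a weighted sum of kernels at the samples.
K = gaussian_kernel(pts, pts)
weights = np.linalg.solve(K + 1e-6 * np.eye(len(pts)), heights)

# Evaluate the resulting continuous surface on a dense grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = gaussian_kernel(grid, pts) @ weights
print("reconstructed grid shape:", surface.reshape(64, 64).shape)
```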
Identifying Causal Variants at Loci with Multiple Signals of Association
Hormozdiari, Farhad; Kostem, Emrah; Kang, Eun Yong; Pasaniuc, Bogdan; Eskin, Eleazar
2014-01-01
Although genome-wide association studies have successfully identified thousands of risk loci for complex traits, only a handful of the biologically causal variants, responsible for association at these loci, have been successfully identified. Current statistical methods for identifying causal variants at risk loci either use the strength of the association signal in an iterative conditioning framework or estimate probabilities for variants to be causal. A main drawback of existing methods is that they rely on the simplifying assumption of a single causal variant at each risk locus, which is typically invalid at many risk loci. In this work, we propose a new statistical framework that allows for the possibility of an arbitrary number of causal variants when estimating the posterior probability of a variant being causal. A direct benefit of our approach is that we predict a set of variants for each locus that under reasonable assumptions will contain all of the true causal variants with a high confidence level (e.g., 95%) even when the locus contains multiple causal variants. We use simulations to show that our approach provides 20–50% improvement in our ability to identify the causal variants compared to the existing methods at loci harboring multiple causal variants. We validate our approach using empirical data from an expression QTL study of CHI3L2 to identify new causal variants that affect gene expression at this locus. CAVIAR is publicly available online at http://genetics.cs.ucla.edu/caviar/. PMID:25104515
Identifying causal variants at loci with multiple signals of association.
Hormozdiari, Farhad; Kostem, Emrah; Kang, Eun Yong; Pasaniuc, Bogdan; Eskin, Eleazar
2014-10-01
Although genome-wide association studies have successfully identified thousands of risk loci for complex traits, only a handful of the biologically causal variants, responsible for association at these loci, have been successfully identified. Current statistical methods for identifying causal variants at risk loci either use the strength of the association signal in an iterative conditioning framework or estimate probabilities for variants to be causal. A main drawback of existing methods is that they rely on the simplifying assumption of a single causal variant at each risk locus, which is typically invalid at many risk loci. In this work, we propose a new statistical framework that allows for the possibility of an arbitrary number of causal variants when estimating the posterior probability of a variant being causal. A direct benefit of our approach is that we predict a set of variants for each locus that under reasonable assumptions will contain all of the true causal variants with a high confidence level (e.g., 95%) even when the locus contains multiple causal variants. We use simulations to show that our approach provides 20-50% improvement in our ability to identify the causal variants compared to the existing methods at loci harboring multiple causal variants. We validate our approach using empirical data from an expression QTL study of CHI3L2 to identify new causal variants that affect gene expression at this locus. CAVIAR is publicly available online at http://genetics.cs.ucla.edu/caviar/. Copyright © 2014 by the Genetics Society of America.
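A toy numerical sketch of the core idea above, enumerating causal configurations rather than assuming a single causal variant, is given below. The Gaussian likelihood form, the independent causal prior, and all parameter values are simplifying assumptions for illustration and do not reproduce the released CAVIAR software.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def config_posteriors(z, S, sigma2=25.0, gamma=0.01, max_causal=2):
    """Posterior over causal configurations, assuming z ~ N(0, S + sigma2*S*diag(c)*S)."""
    m = len(z)
    log_scores = {}
    for k in range(max_causal + 1):
        for idx in itertools.combinations(range(m), k):
            c = np.zeros(m)
            c[list(idx)] = 1.0
            cov = S + sigma2 * S @ np.diag(c) @ S
            loglik = multivariate_normal.logpdf(z, mean=np.zeros(m), cov=cov)
            logprior = k * np.log(gamma) + (m - k) * np.log(1.0 - gamma)
            log_scores[idx] = loglik + logprior
    logs = np.array(list(log_scores.values()))
    probs = np.exp(logs - logs.max())
    probs /= probs.sum()
    return dict(zip(log_scores.keys(), probs))

# Hypothetical 3-variant locus: LD matrix S and marginal z-scores (made up).
S = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
z = np.array([4.5, 4.0, 1.0])
post = config_posteriors(z, S)
pips = [sum(p for cfg, p in post.items() if i in cfg) for i in range(3)]
print("per-variant posterior inclusion probabilities:", np.round(pips, 3))
```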
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
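For readers unfamiliar with how saliency maps are scored against eye tracking data, the generic sketch below computes an ROC-style AUC: the probability that a fixated pixel receives a higher saliency value than a randomly sampled pixel. The map, fixations, and sampling scheme are synthetic illustrations, not the evaluation protocol of the DVS study.

```python
import numpy as np

def saliency_auc(saliency, fixations, n_nonfix=10000, seed=0):
    """saliency: 2D array; fixations: list of (row, col) fixation locations."""
    rng = np.random.default_rng(seed)
    pos = np.array([saliency[r, c] for r, c in fixations], dtype=float)
    rows = rng.integers(0, saliency.shape[0], n_nonfix)
    cols = rng.integers(0, saliency.shape[1], n_nonfix)
    neg = saliency[rows, cols].astype(float)
    # Mann-Whitney formulation of AUC: P(fixated value > random value), ties split.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Example with a synthetic centre-biased saliency map and a few made-up fixations.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
sal = np.exp(-(((yy - h / 2) ** 2) + ((xx - w / 2) ** 2)) / (2 * 30.0 ** 2))
fix = [(h // 2 + dy, w // 2 + dx) for dy, dx in [(0, 0), (5, -3), (-8, 10), (2, 20)]]
print("AUC:", round(saliency_auc(sal, fix), 3))
```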
Logistics Reduction Technologies for Exploration Missions
NASA Technical Reports Server (NTRS)
Broyan, James L., Jr.; Ewert, Michael K.; Fink, Patrick W.
2014-01-01
Human exploration missions under study are very limited by the launch mass capacity of existing and planned vehicles. The logistical mass of crew items is typically considered separate from the vehicle structure, habitat outfitting, and life support systems. Consequently, crew item logistical mass is typically competing with vehicle systems for mass allocation. NASA's Advanced Exploration Systems (AES) Logistics Reduction and Repurposing (LRR) Project is developing five logistics technologies guided by a systems engineering cradle-to-grave approach to enable used crew items to augment vehicle systems. Specifically, AES LRR is investigating the direct reduction of clothing mass, the repurposing of logistical packaging, the use of autonomous logistics management technologies, the processing of spent crew items to benefit radiation shielding and water recovery, and the conversion of trash to propulsion gases. The systematic implementation of these types of technologies will increase launch mass efficiency by enabling items to be used for secondary purposes and improve the habitability of the vehicle as the mission duration increases. This paper provides a description and the challenges of the five technologies under development and the estimated overall mission benefits of each technology.
Athletes and blood clots: individualized, intermittent anticoagulation management.
Berkowitz, J N; Moll, S
2017-06-01
Essentials: Athletes on anticoagulants are typically prohibited from participation in contact sports. Short-acting anticoagulants allow for reconsideration of this precedent. An individualized pharmacokinetic/pharmacodynamic study can aid patient-specific management. Many challenges and unresolved issues exist regarding such tailored intermittent dosing. Athletes with venous thromboembolism (VTE) are typically prohibited from participating in contact sports during anticoagulation therapy, but such mandatory removal from competition can cause psychological and financial detriments for athletes and overlooks patient autonomy. The precedent of compulsory removal developed when options for anticoagulation therapy were more limited, but medical advances now allow for rethinking of the management of athletes with VTE. We propose a novel therapeutic approach to the treatment of athletes who participate in contact sports and require anticoagulation. A personalized pharmacokinetic/pharmacodynamic study of a direct oral anticoagulant can be performed for an athlete, which can inform the timing of medication dosing. Managed carefully, this can allow athletic participation when plasma drug concentration is minimal (minimizing bleeding risk) and prompt resumption of treatment after the risk of bleeding sufficiently normalizes (maximizing therapeutic time). © 2017 International Society on Thrombosis and Haemostasis.
Force analysis of magnetic bearings with power-saving controls
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1992-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
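As background for the bias linearization discussed above (standard magnetic-bearing theory stated in the usual simplified form, not equations taken from this paper), the net force of an opposed electromagnet pair with bias current $i_b$, control current $i_c$, nominal gap $g$ and rotor displacement $x$ is often modeled as

```latex
F = k\left[\frac{(i_b + i_c)^2}{(g - x)^2} - \frac{(i_b - i_c)^2}{(g + x)^2}\right]
\;\approx\;
\underbrace{\frac{4 k\, i_b}{g^{2}}}_{\text{current gain}}\, i_c
\;+\;
\underbrace{\frac{4 k\, i_b^{2}}{g^{3}}}_{\text{negative stiffness}}\, x ,
```

so the force is linear in $i_c$ only because of the bias: reducing or removing $i_b$ saves resistive power but also removes this linearization, which is one reason the power-saving schemes examined above lose force amplitude and develop phase lag at lower command frequencies under a voltage-limited amplifier.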
A programmable display layer for virtual reality system architectures.
Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd
2010-01-01
Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities; it requires only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
Robust feedback zoom tracking for digital video surveillance.
Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong
2012-01-01
Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional-integral-derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, in this paper we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects, which is the key challenge in video surveillance.
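As background on the control element (a generic textbook PID, not the paper's FZT implementation, whose trace-curve estimator is not reproduced), a minimal discrete controller might look like this; the plant model and gains are made-up values.

```python
class PID:
    """Minimal discrete PID controller (generic sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Usage: drive a simulated focus motor (modeled as a pure integrator) toward the
# in-focus position predicted by a trace curve; 500 steps is a hypothetical target.
pid = PID(kp=2.0, ki=0.1, kd=0.01, dt=0.02)
position = 0.0
for _ in range(500):
    position += pid.update(setpoint=500.0, measurement=position) * pid.dt
print("final focus motor position:", round(position, 1))
```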
Aggregation of LoD 1 building models as an optimization problem
NASA Astrophysics Data System (ADS)
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousand or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on a region-growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
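For contrast with the MIP formulation, a minimal sketch of the kind of region-growing heuristic mentioned above could look as follows; the adjacency structure, height tolerance, and merge criterion are assumptions for illustration only, not the authors' heuristics.

```python
def aggregate(buildings, adjacency, height_tol=2.0):
    """buildings: {id: height}; adjacency: {id: set of neighbouring ids}.
    Grow clusters of adjacent buildings whose heights stay within height_tol of
    the seed, then assign each cluster its average height."""
    unassigned = set(buildings)
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            current = frontier.pop()
            for nb in adjacency.get(current, ()):
                if nb in unassigned and abs(buildings[nb] - buildings[seed]) <= height_tol:
                    unassigned.remove(nb)
                    cluster.add(nb)
                    frontier.append(nb)
        avg = sum(buildings[b] for b in cluster) / len(cluster)
        clusters.append((cluster, avg))
    return clusters

# Hypothetical toy block: A-B-C in a row with similar heights, D adjacent but taller.
heights = {"A": 9.0, "B": 10.0, "C": 10.5, "D": 25.0}
neighbours = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
for members, avg in aggregate(heights, neighbours):
    print(sorted(members), round(avg, 1))
```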
How Pleasant Sounds Promote and Annoying Sounds Impede Health: A Cognitive Approach
Andringa, Tjeerd C.; Lanser, J. Jolie L.
2013-01-01
This theoretical paper addresses the cognitive functions via which quiet and in general pleasurable sounds promote and annoying sounds impede health. The article comprises a literature analysis and an interpretation of how the bidirectional influence of appraising the environment and the feelings of the perceiver can be understood in terms of core affect and motivation. This conceptual basis allows the formulation of a detailed cognitive model describing how sonic content, related to indicators of safety and danger, either allows full freedom over mind-states or forces the activation of a vigilance function with associated arousal. The model leads to a number of detailed predictions that can be used to provide existing soundscape approaches with a solid cognitive science foundation that may lead to novel approaches to soundscape design. These will take into account that louder sounds typically contribute to distal situational awareness while subtle environmental sounds provide proximal situational awareness. The role of safety indicators, mediated by proximal situational awareness and subtle sounds, should become more important in future soundscape research. PMID:23567255
Corr, Philip J; Cooper, Andrew J
2016-11-01
We report the development and validation of a questionnaire measure of the revised reinforcement sensitivity theory (rRST) of personality. Starting with qualitative responses to defensive and approach scenarios modeled on typical rodent ethoexperimental situations, exploratory and confirmatory factor analyses (CFAs) revealed a robust 6-factor structure: 2 unitary defensive factors, fight-flight-freeze system (FFFS; related to fear) and the behavioral inhibition system (BIS; related to anxiety); and 4 behavioral approach system (BAS) factors (Reward Interest, Goal-Drive Persistence, Reward Reactivity, and Impulsivity). Theoretically motivated thematic facets were employed to sample the breadth of defensive space, comprising FFFS (Flight, Freeze, and Active Avoidance) and BIS (Motor Planning Interruption, Worry, Obsessive Thoughts, and Behavioral Disengagement). Based on theoretical considerations, and statistically confirmed, a separate scale for Defensive Fight was developed. Validation evidence for the 6-factor structure came from convergent and discriminant validity shown by correlations with existing personality scales. We offer the Reinforcement Sensitivity Theory of Personality Questionnaire to facilitate future research specifically on rRST and, more broadly, on approach-avoidance theories of personality. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Tips for daylighting with windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alastair; Selkowitz, Stephen
2013-10-01
These guidelines provide an integrated approach to the cost-effective design of perimeter zones in new commercial buildings and existing building retrofits. They function as a quick reference for building designers, through a set of easy steps and rules-of-thumb, emphasizing “how-to” practical details. References are given to more detailed sources of information, should the reader wish to go further. The design method used in this document emphasizes that building decisions should be made within the context of the whole building as a single functioning system rather than as an assembly of distinct parts. This integrated design approach looks at the ramifications of each individual system decision on the whole building. For example, the decision on glazing selection will have an effect on lighting, mechanical systems, and interior design. Therefore, the entire design team should participate and influence this glazing decision—which typically rests with the architect alone. The benefit of an integrated design approach is a greater chance of success towards long-term comfort and sustained energy savings in the building.
Fast approach for toner saving
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Kurilin, Ilya V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sangho; Choi, Donchul
2011-01-01
Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner saving approaches have two main drawbacks: the appearance of the hardcopy in toner saving mode is worse than in normal mode, and processing the whole rendered page bitmap incurs significant computational cost. We propose to add small holes of various shapes and sizes at random places inside the character bitmaps stored in the font cache. This random perforation scheme is built into the RIP processing pipeline of the standard printer languages PostScript and PCL. Processing text characters only, and moreover processing each character only once for a given font and size, is an extremely fast procedure. The approach does not degrade halftoned bitmaps or business graphics and provides toner savings of up to 15-20% for typical office documents. The rate of toner saving is adjustable. The alteration of the resulting characters' appearance is almost indistinguishable from solid black text due to the random placement of small holes inside the character regions. The suggested method automatically skips small fonts to preserve their quality. The readability of text processed by the proposed method remains good. OCR programs also process the scanned hardcopy successfully.
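A minimal sketch of the perforation step, assuming a boolean glyph bitmap and square holes (hole shapes, sizes, and the RIP integration described in the abstract are simplified away):

```python
import numpy as np

def perforate_glyph(glyph, hole_fraction=0.15, hole_size=2, seed=None):
    """glyph: 2D boolean array (True = printed pixel). Returns a perforated copy."""
    rng = np.random.default_rng(seed)
    out = glyph.copy()
    ink = np.argwhere(glyph)
    n_holes = int(hole_fraction * len(ink) / (hole_size * hole_size))
    picks = rng.choice(len(ink), size=min(n_holes, len(ink)), replace=False)
    for r, c in ink[picks]:
        out[r:r + hole_size, c:c + hole_size] = False   # punch a small square hole
    return out

# Toy "glyph": a filled 20x12 rectangle standing in for a cached character bitmap.
glyph = np.ones((20, 12), dtype=bool)
saved = 1.0 - perforate_glyph(glyph, seed=1).sum() / glyph.sum()
print(f"toner saved on this glyph: {saved:.1%}")
```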
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
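For context, the Satorra-Bentler adjustment referred to above is commonly written as a mean-scaling of the normal-theory statistic (standard background stated in the usual notation, not taken from the abstract):

```latex
T_{\mathrm{SB}} = \frac{T_{\mathrm{ML}}}{\hat{c}}, \qquad
\hat{c} = \frac{\operatorname{tr}\!\bigl(\hat{U}\hat{\Gamma}\bigr)}{d},
```

where $T_{\mathrm{ML}}$ is the normal-theory test statistic, $d$ its degrees of freedom, $\hat{\Gamma}$ the estimated asymptotic covariance matrix of the sample moments, and $\hat{U}$ the residual weight matrix; mean-and-variance corrected and other robust adjustments modify this scaling or the reference distribution instead.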
Variability extraction and modeling for product variants.
Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander
2017-01-01
Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.
Cohan, Sharon L; Chavira, Denise A; Stein, Murray B
2006-11-01
There have been several reports of successful psychosocial interventions for children with selective mutism (SM), a disorder in which a child consistently fails to speak in one or more social settings (e.g., school) despite speaking normally in other settings (e.g., home). The present literature review was undertaken in order to provide an up-to-date summary and critique of the SM treatment literature published in the past fifteen years. PubMed, PsycINFO, and Web of Science databases were searched to identify SM treatment studies published in peer-reviewed journals between 1990 and 2005. A total of 23 studies were included in the present review. Of these, ten used a behavioral/cognitive behavioral approach, one used a behavioral language training approach, one used a family systems approach, five used a psychodynamic approach, and six used multimodal approaches to SM treatment. Although much of this literature is limited by methodological weaknesses, the existing research provides support for the use of behavioral and cognitive-behavioral interventions. Multimodal treatments also appear promising, but the essential components of these interventions have yet to be established. An outline of a cognitive-behavioral treatment package for a typical SM child is provided and the review concludes with suggestions for future research.
HemoVision: An automated and virtual approach to bloodstain pattern analysis.
Joris, Philip; Develter, Wim; Jenar, Els; Suetens, Paul; Vandermeulen, Dirk; Van de Voorde, Wim; Claes, Peter
2015-06-01
Bloodstain pattern analysis (BPA) is a subspecialty of forensic sciences, dealing with the analysis and interpretation of bloodstain patterns in crime scenes. The aim of BPA is uncovering new information about the actions that took place in a crime scene, potentially leading to a confirmation or refutation of a suspect's statement. A typical goal of BPA is to estimate the flight paths for a set of stains, followed by a directional analysis in order to estimate the area of origin for the stains. The traditional approach, referred to as stringing, consists of attaching a piece of string to each stain, and letting the string represent an approximation of the stain's flight path. Even though stringing has been used extensively, many (practical) downsides exist. We propose an automated and virtual approach, employing fiducial markers and digital images. By automatically reconstructing a single coordinate frame from several images, limited user input is required. Synthetic crime scenes were created and analysed in order to evaluate the approach. Results demonstrate the correct operation and practical advantages, suggesting that the proposed approach may become a valuable asset for practically analysing bloodstain spatter patterns. Accompanying software called HemoVision is currently provided as a demonstrator and will be further developed for practical use in forensic investigations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
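To make the directional-analysis step concrete, here is a small sketch (our illustration, not the HemoVision implementation) that treats each virtual string as a straight line, the same approximation a physical string makes, and estimates the area of origin as the least-squares intersection point of those lines; the point and direction inputs are generic placeholders.

```python
import numpy as np

def area_of_origin(points, directions):
    """points: (n, 3) stain locations; directions: (n, 3) estimated flight directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the line
        A += P                           # accumulate normal equations for the
        b += P @ p                       # least-squares intersection of all lines
    return np.linalg.solve(A, b)
```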
Jenkins, Daniel P; Salmon, Paul M; Stanton, Neville A; Walker, Guy H; Rafferty, Laura
2011-02-01
Understanding why an individual acted in a certain way is of fundamental importance to the human factors community, especially when the choice of action results in an undesirable outcome. This challenge is typically tackled by applying retrospective interview techniques to generate models of what happened, recording deviations from a 'correct procedure'. While such approaches may have great utility in tightly constrained procedural environments, they are less applicable in complex sociotechnical systems that require individuals to modify procedures in real time to respond to a changing environment. For complex sociotechnical systems, a formative approach is required that maps the information available to the individual and considers its impact on performance and action. A context-specific, activity-independent, constraint-based model forms the basis of this approach. To illustrate, an example of the Stockwell shooting is used, where an innocent man, mistaken for a suicide bomber, was shot dead. Transferable findings are then presented. STATEMENT OF RELEVANCE: This paper presents a new approach that can be applied proactively to consider how sociotechnical system design, and the information available to an individual, can affect their performance. The approach is proposed to be complementary to the existing tools in the mental models phase of the cognitive work analysis framework.
A Novel Group Decision-Making Method Based on Sensor Data and Fuzzy Information.
Bai, Yu-Ting; Zhang, Bai-Hai; Wang, Xiao-Yi; Jin, Xue-Bo; Xu, Ji-Ping; Su, Ting-Li; Wang, Zhao-Yang
2016-10-28
Algal bloom is a typical phenomenon of the eutrophication of rivers and lakes and makes the water dirty and smelly. It is a serious threat to water security and public health. Most scholars addressing this pollution have studied the principles of remediation approaches, but few have studied the decision-making and selection of those approaches. Existing research uses simple decision-making information, which is highly subjective, and makes little use of the data from water quality sensors. To utilize these data and solve the rational decision-making problem, a novel group decision-making method is proposed using the sensor data with fuzzy evaluation information. Firstly, the optimal similarity aggregation model of group opinions is built based on the modified similarity measurement of Vague values. Secondly, the approaches' ability to improve the water quality indexes is expressed using Vague evaluation methods. Thirdly, the water quality sensor data are analyzed to match the features of the alternative approaches with grey relational degrees. This allows the best remediation approach to be selected to meet the current water status. Finally, the selection model is applied to the remediation of algal bloom in lakes. The results show the method's rationality and feasibility when using different data from different sources.
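The third step can be sketched with the standard grey relational degree (a generic reconstruction; the paper's normalisation and weighting may differ), which scores how closely each candidate approach's characteristic water-quality indexes match the current sensor readings.

```python
import numpy as np

def grey_relational_degree(reference, candidates, rho=0.5):
    """reference: (k,) normalised sensor-derived indexes;
    candidates: (n, k) characteristic indexes of the n remediation approaches."""
    delta = np.abs(candidates - reference)               # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # grey relational coefficients
    return xi.mean(axis=1)                               # one degree per approach

# The approach with the largest degree is the best match to the current water status.
```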
Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian
2018-05-01
Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review, see Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential-Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
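For readers unfamiliar with the ex-Gaussian decomposition, the parameters can be recovered with SciPy as sketched below (illustrative only, not the authors' fitting code; SciPy's exponnorm uses the shape parameter K = τ/σ).

```python
import numpy as np
from scipy.stats import exponnorm

# Simulated reaction times (seconds) from an ex-Gaussian with mu=0.45, sigma=0.05, tau=0.10
rts = exponnorm.rvs(2.0, loc=0.45, scale=0.05, size=2000, random_state=0)

K, mu, sigma = exponnorm.fit(rts)   # maximum-likelihood fit
tau = K * sigma                     # convert the shape parameter back to tau
print(f"mu={mu:.3f} s, sigma={sigma:.3f} s, tau={tau:.3f} s")
```

Comparing condition effects on mu and sigma (Gaussian) versus tau (exponential) is what allows the retrieval-related and inhibition-related contributions to be separated.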
Identifying novel drug indications through automated reasoning.
Tari, Luis; Vo, Nguyen; Liang, Shanshan; Patel, Jagruti; Baral, Chitta; Cai, James
2012-01-01
With the large amount of pharmacological and biological knowledge available in the literature, finding novel drug indications for existing drugs using in silico approaches has become increasingly feasible. Typical literature-based approaches generate new hypotheses in the form of protein-protein interaction networks by linking concepts based on their co-occurrences within abstracts. However, this kind of approach tends to generate too many hypotheses, and identifying new drug indications from large networks can be a time-consuming process. In this work, we developed a method that acquires the necessary facts from literature and knowledge bases, and identifies new drug indications through automated reasoning. This is achieved by encoding the molecular effects caused by drug-target interactions and links to various diseases and drug mechanisms as domain knowledge in AnsProlog, a declarative language that is useful for automated reasoning, including reasoning with incomplete information. Unlike other literature-based approaches, our approach is more fine-grained, especially in identifying indirect relationships for drug indications. To evaluate the capability of our approach in inferring novel drug indications, we applied our method to 943 drugs from DrugBank and asked if any of these drugs have potential anti-cancer activities based on information on their targets and molecular interaction types alone. A total of 507 drugs were found to have the potential to be used for cancer treatments. Among the potential anti-cancer drugs, 67 out of 81 drugs (a recall of 82.7%) are indeed known cancer drugs. In addition, 144 out of 289 drugs (a recall of 49.8%) are non-cancer drugs that are currently tested in clinical trials for cancer treatments. These results suggest that our method is able to infer drug indications (original or alternative) based on their molecular targets and interactions alone and has the potential to discover novel drug indications for existing drugs.
Identifying Vulnerabilities and Hardening Attack Graphs for Networked Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Sudip; Vullinati, Anil K.; Halappanavar, Mahantesh
We investigate efficient security control methods for protecting against vulnerabilities in networked systems. A large number of interdependent vulnerabilities typically exist in the computing nodes of a cyber-system; as vulnerabilities get exploited, starting from low level ones, they open up the doors to more critical vulnerabilities. These cannot be understood just by a topological analysis of the network, and we use the attack graph abstraction of Dewri et al. to study these problems. In contrast to earlier approaches based on heuristics and evolutionary algorithms, we study rigorous methods for quantifying the inherent vulnerability and hardening cost for the system. We develop algorithms with provable approximation guarantees, and evaluate them for real and synthetic attack graphs.
Design and analysis of a stiffened composite fuselage panel
NASA Technical Reports Server (NTRS)
Dickson, J. N.; Biggers, S. B.
1980-01-01
The design and analysis of a stiffened composite panel that is representative of the fuselage structure of existing wide-bodied aircraft are discussed. The panel is a minimum-weight design, based on the current level of technology and realistic loads and criteria. Several different stiffener configurations were investigated in the optimization process. The final configuration is an all graphite/epoxy J-stiffened design in which the skin between adjacent stiffeners is permitted to buckle under design loads. Fail-safe concepts typically employed in metallic fuselage structure have been incorporated in the design. A conservative approach has been used with regard to structural details such as skin/frame and stringer/frame attachments and other areas where sufficient design data were not available.
Osman, Magda; Wiegmann, Alex
2017-03-01
In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument, we show that a simple value-based decision model can capture a range of core moral behaviors. Crucially, we argue that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.
An Approach for Peptide Identification by De Novo Sequencing of Mixture Spectra.
Liu, Yi; Ma, Bin; Zhang, Kaizhong; Lajoie, Gilles
2017-01-01
Mixture spectra, which result from the concurrent fragmentation of multiple precursors, occur quite frequently in a typical wet-lab mass spectrometry experiment. The ability to efficiently and confidently identify mixture spectra is essential to alleviate the existing bottleneck of low mass spectra identification rates. However, most traditional computational methods are not suitable for interpreting mixture spectra, because they still assume that the acquired spectra come from the fragmentation of a single precursor. In this manuscript, we formulate the mixture spectra de novo sequencing problem mathematically and propose a dynamic programming algorithm for it. Additionally, we use both simulated and real mixture spectra data sets to verify the merits of the proposed algorithm.
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to a theoretical imperfection, total variation produces blocky effects in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information.
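For reference, a commonly used form of the nonlocal total variation norm and the associated reconstruction problem are shown below (a sketch of the standard weighted-graph definition; the paper's exact weights, neighbourhood and constraints may differ).

```latex
\mathrm{TV}(u)=\sum_{x}\lVert\nabla u(x)\rVert_2,
\qquad
J_{\mathrm{NLTV}}(u)=\sum_{x}\sqrt{\sum_{y\in\Omega(x)} w(x,y)\,\bigl(u(y)-u(x)\bigr)^{2}},
\qquad
\min_{u}\; J_{\mathrm{NLTV}}(u)+\frac{\lambda}{2}\lVert Au-b\rVert_2^{2},
```

where w(x,y) are patch-similarity weights, Ω(x) is the nonlocal neighbourhood of pixel x, A is the projection operator and b the measured projection data.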
McAuliffe, G A; Takahashi, T; Orr, R J; Harris, P; Lee, M R F
2018-01-10
Life Cycle Assessment (LCA) of livestock production systems is often based on inventory data for farms typical of a study region. As information on individual animals is often unavailable, livestock data may already be aggregated at the time of inventory analysis, both across individual animals and across seasons. Even though various computational tools exist to consider the effect of genetic and seasonal variabilities in livestock-originated emissions intensity, the degree to which these methods can address the bias suffered by representative animal approaches is not well understood. Using detailed on-farm data collected on the North Wyke Farm Platform (NWFP) in Devon, UK, this paper proposes a novel approach to life cycle impact assessment that complements the existing LCA methodology. Field data, such as forage quality and animal performance, were measured at high spatial and temporal resolutions and directly transferred into LCA processes. This approach has enabled derivation of emissions intensity for each individual animal and, by extension, its intra-farm distribution, providing a step towards reducing the uncertainty related to agricultural production inherent in LCA studies for food. Depending on pasture management strategies, the total emissions intensity estimated by the proposed method was higher than the equivalent value recalculated using a representative animal approach by 0.9-1.7 kg CO2-eq/kg liveweight gain, or up to 10% of system-wide emissions. This finding suggests that emissions intensity values derived by the latter technique may be underestimated due to insufficient consideration given to poorly performing animals, whose emissions become exponentially greater as average daily gain decreases. Strategies to mitigate life-cycle environmental impacts of pasture-based beef production systems are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bomberg, Mark; Gibson, Michael; Zhang, Jian
This article highlights the need for an active role for building physics in the development of near-zero energy buildings while analyzing an example of an integrated system for the upgrade of existing buildings. The science, called either Building Physics in Europe or Building Science in North America, has so far played a passive role in explaining observed failures in construction practice. In its new role, it would integrate modeling and testing to provide the predictive capability so much needed in the development of near-zero energy buildings. The authors attempt to create a compact package, applicable to different climates with small modifications of some hygrothermal properties of materials. This universal solution is based on a systems approach that is routine for building physics but in contrast to the separately conceived sub-systems that are typical for the design of buildings today. One knows that the building structure, energy efficiency, indoor environmental quality, and moisture management all need to be considered to ensure durability of materials and control the cost of near-zero energy buildings. These factors must be addressed through contributions of the whole design team. The same approach must be used for the retrofit of buildings. As this integrated design paradigm resulted from the demands of a sustainable built environment approach, building physics must drop its passive role and improve two critical domains of analysis: (i) linked, real-time hygrothermal and energy models capable of predicting the performance of existing buildings after renovation and (ii) basic methods of indoor environment and moisture management when the exterior of the building cannot be modified.
Memory for Sequences of Events Impaired in Typical Aging
ERIC Educational Resources Information Center
Allen, Timothy A.; Morris, Andrea M.; Stark, Shauna M.; Fortin, Norbert J.; Stark, Craig E. L.
2015-01-01
Typical aging is associated with diminished episodic memory performance. To improve our understanding of the fundamental mechanisms underlying this age-related memory deficit, we previously developed an integrated, cross-species approach to link converging evidence from human and animal research. This novel approach focuses on the ability to…
An experimental study of gully sidewall expansion
USDA-ARS?s Scientific Manuscript database
Soil erosion, in its myriad forms, devastates arable land and infrastructure and strains the balance between economic stability and viability. Gullies may form in existing channels or where no previous channel drainage existed. Typically, gullies are a result of a disequilibrium between the eroding ...
Flexible approach to vibrational sum-frequency generation using shaped near-infrared light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, Azhad U.; Liu, Fangjie; Watson, Brianna R.
We describe a new approach that expands the utility of vibrational sum-frequency generation (vSFG) spectroscopy using shaped near-infrared (NIR) laser pulses. Here, we demonstrate that arbitrary pulse shapes can be specified to match experimental requirements without the need for changes to the optical alignment. In this way, narrowband NIR pulses as long as 5.75 ps are readily generated, with a spectral resolution of about 2.5 cm⁻¹, an improvement of approximately a factor of 3 compared to a typical vSFG system. Moreover, the utility of having complete control over the NIR pulse characteristics is demonstrated through nonresonant background suppression from a metallic substrate by generating an etalon waveform in the pulse shaper. The flexibility afforded by switching between arbitrary NIR waveforms at the sample position with the same instrument geometry expands the type of samples that can be studied without extensive modifications to existing apparatuses or large investments in specialty optics.
Visual texture for automated characterisation of geological features in borehole televiewer imagery
NASA Astrophysics Data System (ADS)
Al-Sit, Waleed; Al-Nuaimy, Waleed; Marelli, Matteo; Al-Ataby, Ali
2015-08-01
Detailed characterisation of the structure of subsurface fractures is greatly facilitated by digital borehole logging instruments, the interpretation of which is typically time-consuming and labour-intensive. Despite recent advances towards autonomy and automation, the final interpretation remains heavily dependent on the skill, experience, alertness and consistency of a human operator. Existing computational tools fail to detect layers between rocks that do not exhibit distinct fracture boundaries, and often struggle to characterise cross-cutting layers and partial fractures. This paper presents a novel approach to the characterisation of planar rock discontinuities from digital images of borehole logs. Multi-resolution texture segmentation and pattern recognition techniques utilising Gabor filters are combined with an iterative adaptation of the Hough transform to enable non-distinct, partial, distorted and steep fractures and layers to be accurately identified and characterised in a fully automated fashion. This approach has successfully detected fractures and layers with high accuracy and at a relatively low computational cost.
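A minimal sketch of the kind of multi-resolution Gabor texture features that feed such a segmentation stage (an illustration with scikit-image, not the authors' code; the frequencies and number of orientations are placeholders):

```python
import numpy as np
from skimage.filters import gabor

def gabor_energy_features(image, frequencies=(0.1, 0.2, 0.4), n_theta=4):
    """Per-pixel filter-bank energy features for a 2-D (unwrapped) borehole image."""
    feats = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_theta)
            feats.append(np.hypot(real, imag))   # magnitude of the complex response
    return np.stack(feats, axis=-1)              # shape (H, W, n_frequencies * n_theta)
```

Each pixel's feature vector can then be clustered or classified before the Hough-style fitting of sinusoidal fracture traces.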
Embedding Human Expert Cognition Into Autonomous UAS Trajectory Planning.
Narayan, Pritesh; Meyer, Patrick; Campbell, Duncan
2013-04-01
This paper presents a new approach for the inclusion of human expert cognition into autonomous trajectory planning for unmanned aerial systems (UASs) operating in low-altitude environments. During typical UAS operations, multiple objectives may exist; therefore, the use of multicriteria decision aid techniques can potentially allow for convergence to trajectory solutions which better reflect overall mission requirements. In that context, additive multiattribute value theory has been applied to optimize trajectories with respect to multiple objectives. A graphical user interface was developed to allow for knowledge capture from a human decision maker (HDM) through simulated decision scenarios. The expert decision data gathered are converted into value functions and corresponding criteria weightings using utility additive theory. The inclusion of preferences elicited from HDM data within an automated decision system allows for the generation of trajectories which more closely represent the candidate HDM decision preferences. This approach has been demonstrated in this paper through simulation using a fixed-wing UAS operating in low-altitude environments.
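The additive aggregation at the core of multiattribute value theory is compact enough to sketch; the criteria names, weights and value functions below are hypothetical stand-ins for those elicited from the human decision maker.

```python
def additive_value(scores, value_functions, weights):
    """scores: dict criterion -> raw attribute of a candidate trajectory."""
    return sum(w * value_functions[c](scores[c]) for c, w in weights.items())

# Hypothetical single-attribute value functions (normalised to [0, 1], higher is better)
value_functions = {"risk": lambda x: 1 - x, "fuel": lambda x: 1 - x, "time": lambda x: 1 - x}
weights = {"risk": 0.5, "fuel": 0.2, "time": 0.3}          # elicited importance weights

print(additive_value({"risk": 0.2, "fuel": 0.4, "time": 0.3}, value_functions, weights))
# 0.73; candidate trajectories are ranked by this aggregate value
```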
DNA-Templated Pd Conductive Metallic Nanowires
NASA Astrophysics Data System (ADS)
Nguyen, K.; Monteverde, M.; Lyonnais, S.; Campidelli, S.; Bourgoin, J.-Ph.; Filoramo, A.
2008-10-01
Because of its unique recognition properties, its size and its sub-nanometric resolution, DNA is of particular interest for positioning and organizing nanomaterials. However, in DNA-directed nanoelectronics DNA can be envisioned not only as a positioning scaffold, but also as a support for the conducting element. To ensure this function, a metallization process is necessary, and among the various DNA metallization methods the Pd-based ones are of particular interest for carbon nanotube transistor connections. In this field, the major drawback of the existing methods is the fast kinetics of the process, which leads to stochastic growth. Here, we present a novel approach to DNA Pd metallization in which the DNA molecule is first deposited on a dry substrate in a typical nanodevice configuration. In our approach, the progressive growth of nanowires is achieved by the slow and selective precipitation of PdO, followed by a subsequent reduction step. Thanks to this strategy, we fabricated homogeneous, continuous and conductive Pd nanowires of very thin diameter (20-25 nm) on the DNA scaffolds.
A comparative study of sensor fault diagnosis methods based on observer for ECAS system
NASA Astrophysics Data System (ADS)
Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli
2017-03-01
The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF quarter-vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and a cubature Kalman filter (CKF) with high numerical precision. We use the three filters (EKF, STF, and CKF) to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite environmental noise, although their FDI time delays and fault sensitivities differ; compared with the EKF and STF, the CKF delivers the best FDI performance for sensor faults in the ECAS system.
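Whatever filter provides the state estimate, the detection logic commonly reduces to an innovation test per sensor; the sketch below is a generic illustration (not the paper's implementation), with a 3-sigma gate as an assumed threshold.

```python
import numpy as np

def detect_sensor_faults(y_meas, y_pred, S, gate=3.0):
    """y_meas, y_pred: (n_sensors,) measured and filter-predicted outputs;
    S: innovation covariance from the EKF/STF/CKF."""
    innovation = y_meas - y_pred
    normalised = np.abs(innovation) / np.sqrt(np.diag(S))   # innovation in sigma units
    return normalised > gate                                 # boolean flag per sensor
```

Isolation then follows by noting which sensor's flag persists over consecutive samples.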
Network-driven design principles for neuromorphic systems.
Partzsch, Johannes; Schüffny, Rene
2015-01-01
Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify the usability of the connectivity resources in these systems.
Space and Time Partitioning with Hardware Support for Space Applications
NASA Astrophysics Data System (ADS)
Pinto, S.; Tavares, A.; Montenegro, S.
2016-08-01
Complex and critical systems like airplanes and spacecraft implement a rapidly growing number of functions. Typically, those systems were implemented with fully federated architectures, but the number and complexity of functions desired in today's systems has led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach for consolidation, combining several applications on one single generic computing resource. The current approach goes towards the higher integration provided by space and time partitioning (STP) through system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety, and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.
NASA Astrophysics Data System (ADS)
Leemann, S. C.; Wurtz, W. A.
2018-03-01
The MAX IV 3 GeV storage ring is presently being commissioned and crucial parameters such as machine functions, emittance, and stored current have either already been reached or are approaching their design specifications. Once the baseline performance has been achieved, a campaign will be launched to further improve the brightness and coherence of this storage ring for typical X-ray users. During recent years, several such improvements have been designed. Common to these approaches is that they attempt to improve the storage ring performance using existing hardware provided for the baseline design. Such improvements therefore present more short-term upgrades. In this paper, however, we investigate medium-term improvements assuming power supplies can be exchanged in an attempt to push the brightness and coherence of the storage ring to the limit of what can be achieved without exchanging the magnetic lattice itself. We outline optics requirements, the optics optimization process, and summarize achievable parameters and expected performance.
Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
Hunter, Louise; Magill-Cuerden, Julia; McCourt, Christine
2015-08-01
To identify elements in the environment of a postnatal ward which impacted on the introduction of a breast-feeding support intervention. A concurrent, realist evaluation including practice observations and semi-structured interviews. A typical British maternity ward. Five midwives and two maternity support workers were observed. Seven midwives and three maternity support workers were interviewed. Informed consent was obtained from all participants. Ethical approval was granted by the relevant authorities. A high level of non-compliance with the intervention was driven by a lack of time and staff, and the ward staff's lack of control over the organisation of their time and space. This was compounded by a propensity towards task orientation, workload reduction and resistance to change - all of which supported the existing medical approach to care. Limited support for the intervention was underpinned by staff willingness to reconsider their views and a widespread frustration with current ways of working. This small, local study suggests that the environment and working conditions on a typical British postnatal ward present significant barriers to the introduction of breast-feeding support interventions requiring a relational approach to care. Midwives and maternity support workers need to be able to control their time and space, and feel able to provide the relational care they perceive that women need, before breast-feeding support interventions can be successfully implemented in practice. Frustration with current ways of working, and a willingness to consider other approaches, could be harnessed to initiate change that would benefit health professionals and the women and families in their care. However, without appropriate leadership or facilitation for change, this could alternatively encourage learned helplessness and passive resistance. Copyright © 2015 Elsevier Ltd. All rights reserved.
Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty
NASA Astrophysics Data System (ADS)
Brown, C.; Lall, U.; Siegfried, T.
2005-12-01
Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game-theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game-theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach that combines the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework, allowing the distinct properties of each player to be expressed and the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of decisions made both before (a priori) and after (a posteriori) hydrologic realizations on alternative allocation mechanisms. Outcomes are evaluated in terms of water productivity, net social benefit and equity. The performance of hydro-climate prediction modeling in each allocation mechanism will be assessed. Finally, year-to-year system performance and feedback pathways are explored. In this way, the system can be adaptively managed toward equitable and efficient water use.
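As a toy illustration of the game-theoretic framing (this is not the proposed agent-based simulator; the payoff numbers are hypothetical), even a two-user withdrawal game exhibits the familiar commons outcome in which the only pure-strategy Nash equilibrium is mutual over-withdrawal, although both users would prefer joint restraint.

```python
import numpy as np

# Rows: user 1 chooses {restrain, over-withdraw}; columns: the same for user 2.
A = np.array([[4, 1], [5, 2]])   # hypothetical payoffs to user 1
B = np.array([[4, 5], [1, 2]])   # hypothetical payoffs to user 2

nash = [(i, j) for i in range(2) for j in range(2)
        if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]   # mutual best responses
print(nash)   # [(1, 1)]: both over-withdraw, even though (0, 0) pays more to each
```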
Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger
2011-12-01
Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical system analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e. Artificial Neural Networks; ANNs) are becoming increasingly popular to investigate the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes, however, limited information concerning the processes required to apply ANNs exists. This review investigates the relative value of various ANN learning approaches used in sports performance analysis of team sports focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. Particularly, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach to understand the dynamical attributes of association football players.
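To make the KFM step concrete, here is a minimal self-organising map in NumPy (a generic sketch, not code from any reviewed study; the grid size and decay schedules are arbitrary choices). High-dimensional snapshots of player positions can be mapped to their best-matching units on the 2-D grid to reveal recurring team configurations.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """data: (n_samples, n_features) array, e.g. flattened player-position snapshots."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    order = rng.permutation(len(data) * epochs) % len(data)   # shuffled presentation order
    n_steps = len(order)
    for t, x in enumerate(data[order]):
        lr = lr0 * np.exp(-t / n_steps)                        # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_steps)                  # shrinking neighbourhood
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]     # neighbourhood kernel
        weights += lr * nbh * (x - weights)                    # pull nodes towards x
    return weights
```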
Enhancing Health-Care Services with Mixed Reality Systems
NASA Astrophysics Data System (ADS)
Stantchev, Vladimir
This work presents a development approach for mixed reality systems in health care. Although health-care service costs account for 5-15% of GDP in developed countries the sector has been remarkably resistant to the introduction of technology-supported optimizations. Digitalization of data storing and processing in the form of electronic patient records (EPR) and hospital information systems (HIS) is a first necessary step. Contrary to typical business functions (e.g., accounting or CRM) a health-care service is characterized by a knowledge intensive decision process and usage of specialized devices ranging from stethoscopes to complex surgical systems. Mixed reality systems can help fill the gap between highly patient-specific health-care services that need a variety of technical resources on the one side and the streamlined process flow that typical process supporting information systems expect on the other side. To achieve this task, we present a development approach that includes an evaluation of existing tasks and processes within the health-care service and the information systems that currently support the service, as well as identification of decision paths and actions that can benefit from mixed reality systems. The result is a mixed reality system that allows a clinician to monitor the elements of the physical world and to blend them with virtual information provided by the systems. He or she can also plan and schedule treatments and operations in the digital world depending on status information from this mixed reality.
Robust power spectral estimation for EEG data
Melman, Tamar; Victor, Jonathan D.
2016-01-01
Background: Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method: Using the multitaper method [1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results: Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison to existing method: The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion: In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041
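A compressed sketch of the central idea (illustrative only; the published MATLAB modules extending Chronux use a quantile-based estimator with bias correction and Bayesian confidence intervals, which this Python snippet omits): replace the mean across taper estimates with a median so that large intermittent outliers have limited influence.

```python
import numpy as np
from scipy.signal.windows import dpss

def robust_multitaper_psd(x, fs, nw=4.0, n_tapers=7):
    """x: 1-D signal; returns (frequencies, median-across-tapers power spectrum)."""
    x = np.asarray(x, dtype=float)
    tapers = dpss(len(x), nw, n_tapers)                  # (n_tapers, len(x)) DPSS windows
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, np.median(spectra, axis=0)             # robust to outlying taper estimates
```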
ERIC Educational Resources Information Center
Arteche, Adriane; Chamorro-Premuzic, Tomas; Ackerman, Phillip; Furnham, Adrian
2009-01-01
Students (n = 328) from US and UK universities completed four self-report measures related to intellectual competence: typical intellectual engagement (TIE), openness to experience, self-assessed intelligence (SAI), and learning approaches. Confirmatory data reduction was used to examine the structure of TIE and supported five major factors:…
Kako, Mayumi; Hammad, Karen; Mitani, Satoko; Arbon, Paul
2018-04-01
This review was conducted to explore the literature to determine the availability, content, and evaluation of existing chemical, biological, radiological, and nuclear (CBRN) education programs for health professionals. An integrative review of the international literature describing disaster education for CBRN (2004-2016) was conducted. The following relevant databases were searched: Proquest, Pubmed, Science Direct, Scopus, Journals @ OVID, Google Scholar, Medline, and Ichuschi ver. 5 (Japanese database for health professionals). The search terms used were: "disaster," "chemical," "biological," "radiological," "nuclear," "CBRN," "health professional education," and "method." The following Medical Subject Headings (MeSH) terms, "education," "nursing," "continuing," "disasters," "disaster planning," and "bioterrorism," were used wherever possible and appropriate. The retrieved articles were narratively analyzed according to availability, content, and method. The content was thematically analyzed to provide an overview of the core content of the training. The literature search identified 619 potentially relevant articles for this study. Duplicates (n=104) were removed and 87 articles were identified for title review. In total, 67 articles were discarded, yielding 20 articles for full-text review, after which 11 studies were retained for analysis, including one Japanese study. All articles published in English were from the USA, apart from the two studies located in Japan and Sweden. The most typical content in the selected literature was CBRN theory (n=11), followed by studies based on incident command (n=8), decontamination (n=7), disaster management (n=7), triage (n=7), personal protective equipment (PPE) use (n=5), and post-training briefing (n=3). While the CBRN training course requires the participants to gain specific skills and knowledge, proposed training courses should be effectively constructed to include approaches such as scenario-based simulations, depending on the participants' needs. Kako M, Hammad K, Mitani S, Arbon P. Existing approaches to chemical, biological, radiological, and nuclear (CBRN) education and training for health professionals: findings from an integrative literature review. Prehosp Disaster Med. 2018;33(2):182-190.
Deguchi, K.; Hall, P.
2017-01-01
The present work is based on our recent discovery of a new class of exact coherent structures generated near the edge of quite general boundary layer flows. The structures are referred to as free-stream coherent structures and were found using a large Reynolds number asymptotic approach to describe equilibrium solutions of the Navier–Stokes equations. In this paper, first we present results for a new family of free-stream coherent structures existing at relatively large wavenumbers. The new results are consistent with our earlier theoretical result that such structures can generate larger amplitude wall streaks if and only if the local spanwise wavenumber is sufficiently small. In a Blasius boundary layer, the local wavenumber increases in the streamwise direction so the wall streaks can typically exist only over a finite interval. However, here it is shown that they can interact with wall curvature to produce exponentially growing Görtler vortices through the receptivity process by a novel nonparallel mechanism. The theoretical predictions found are confirmed by a hybrid numerical approach. In contrast with previous receptivity investigations, it is shown that the amplitude of the induced vortex is larger than the structures in the free-stream which generate it. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167574
In Vitro Toxicity Assessment Technique for Volatile ...
The U.S. Environmental Protection Agency is tasked with evaluating the human health, environmental, and wildlife effects of over 80,000 chemicals registered for use in the environment and commerce. The challenge is that sparse chemical data exists; traditional toxicity testing methods are slow, costly, involve animal studies, and cannot keep up with a chemical registry that typically grows by at least 1000 chemicals every year. In recent years, High Throughput Screening (HTS) has been used in order to prioritize chemicals for traditional toxicity screening or to complement traditional toxicity studies. HTS is an in vitro approach for rapidly assaying a large number of chemicals for biochemical activity using robotics and automation. However, no method currently exists for screening volatile chemicals such as air pollutants in an HTS fashion. Additionally, significant uncertainty regarding in vitro to in vivo extrapolation (IVIVE) remains. An approach to bridge the IVIVE gap and the current lack of ability to screen volatile chemicals in an HTS fashion is to use a probe molecule (PrM) technique. The proposed technique uses chemicals with empirical human pharmacokinetic data as PrMs to study toxicity of molecules with no known data for gas-phase analysis. We are currently studying the xenobiotic-metabolizing enzyme CYP2A6 using a transfected BEAS-2B bronchial epithelial cell line. The CYP2A6 pathway activity is studied by the formation of cotinine from nicot
Reliability based design of the primary structure of oil tankers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, G.; Dogliani, M.; Guedes Soares, C.
1996-12-31
The present paper describes the reliability analysis carried out for two oil tanker-ships having comparable dimensions but different designs. The scope of the analysis was to derive indications on the value of the reliability index obtained for existing, typical and well-designed oil tankers, as well as to apply the tentative rule checking formulation developed within the CEC-funded SHIPREL Project. The checking formula was adopted to redesign the midships section of one of the considered ships, upgrading her in order to meet the target failure probability considered in the rule development process. The resulting structure, in view of an upgrading of the steel grade in the central part of the deck, led to a convenient reliability level. The results of the analysis clearly showed that a large scatter exists presently in the design safety levels of ships, even when the Classification Societies' unified requirements are satisfied. A reliability-based approach for the calibration of the rules for the global strength of ships is therefore proposed, in order to assist designers and Classification Societies in the process of producing ships which are more optimized with respect to ensured safety levels. Based on the work reported in the paper, the feasibility and usefulness of a reliability-based approach in the development of ship longitudinal strength requirements has been demonstrated.
A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoemmen, Mark
2010-11-01
Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
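The reduction idea behind TSQR fits in a few lines of serial NumPy (a sketch only; the actual Trilinos implementation is communication-avoiding, parallel and numerically more careful): factor the row blocks independently, then factor the stacked R factors.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """QR of a tall-skinny matrix via local QRs of row blocks plus one small QR."""
    blocks = np.array_split(A, n_blocks, axis=0)
    qs, rs = zip(*(np.linalg.qr(b) for b in blocks))          # local QR of each block
    Q2, R = np.linalg.qr(np.vstack(rs))                       # QR of the stacked R factors
    splits = np.cumsum([r.shape[0] for r in rs])[:-1]
    Q2_blocks = np.split(Q2, splits, axis=0)
    Q = np.vstack([q @ q2 for q, q2 in zip(qs, Q2_blocks)])   # assemble the global Q
    return Q, R

A = np.random.default_rng(1).standard_normal((10000, 20))
Q, R = tsqr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(20)))   # True True
```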
Polymer/Silicate Nanocomposites Developed for Improved Thermal Stability and Barrier Properties
NASA Technical Reports Server (NTRS)
Campbell, Sandi G.
2001-01-01
The nanoscale reinforcement of polymers is becoming an attractive means of improving the properties and stability of polymers. Polymer-silicate nanocomposites are a relatively new class of materials with phase dimensions typically on the order of a few nanometers. Because of their nanometer-size features, nanocomposites possess unique properties typically not shared by more conventional composites. Polymer-layered silicate nanocomposites can attain a certain degree of stiffness, strength, and barrier properties with far less ceramic content than comparable glass- or mineral-reinforced polymers. Reinforcement of existing and new polyimides by this method offers an opportunity to greatly improve existing polymer properties without altering current synthetic or processing procedures.
2011-01-01
Background: Existing methods of predicting DNA-binding proteins used valuable features of physicochemical properties to design support vector machine (SVM) based classifiers. Generally, selection of physicochemical properties and determination of their corresponding feature vectors rely mainly on known properties of the binding mechanism and the experience of designers. However, a troublesome problem exists for designers: some different physicochemical properties have similar vectors representing the 20 amino acids, and some closely related physicochemical properties have dissimilar vectors. Results: This study proposes a systematic approach (named Auto-IDPCPs) to automatically identify a set of physicochemical and biochemical properties in the AAindex database to design SVM-based classifiers for predicting and analyzing DNA-binding domains/proteins. Auto-IDPCPs consists of 1) clustering 531 amino acid indices in AAindex into 20 clusters using a fuzzy c-means algorithm, 2) utilizing an efficient genetic algorithm based optimization method, IBCGA, to select an informative feature set of size m to represent sequences, and 3) analyzing the selected features to identify related physicochemical properties which may affect the binding mechanism of DNA-binding domains/proteins. The proposed Auto-IDPCPs identified m=22 features of properties belonging to five clusters for predicting DNA-binding domains with a five-fold cross-validation accuracy of 87.12%, which is promising compared with the accuracy of 86.62% of the existing method PSSM-400. For predicting DNA-binding sequences, an accuracy of 75.50% was obtained using m=28 features, where PSSM-400 has an accuracy of 74.22%. Auto-IDPCPs and PSSM-400 have accuracies of 80.73% and 82.81%, respectively, when applied to an independent test data set of DNA-binding domains. Some typical physicochemical properties discovered are hydrophobicity, secondary structure, charge, solvent accessibility, polarity, flexibility, normalized Van Der Waals volume, pK (pK-C, pK-N, pK-COOH and pK-a(RCOOH)), etc. Conclusions: The proposed approach Auto-IDPCPs would help designers to investigate informative physicochemical and biochemical properties by considering both prediction accuracy and analysis of the binding mechanism simultaneously. The approach Auto-IDPCPs is also applicable to predicting and analyzing other protein functions from sequences. PMID:21342579
DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.
Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing
2018-02-01
Growing concerns about increasing rates of antibiotic resistance call for expanded and comprehensive global monitoring. Advancing methods for monitoring of environmental media (e.g., wastewater, agricultural waste, food, and water) is especially needed for identifying potential sources of novel antibiotic resistance genes (ARGs), hot spots for gene exchange, and pathways for the spread of ARGs and human exposure. Next-generation sequencing now enables direct access and profiling of the total metagenomic DNA pool, where ARGs are typically identified or predicted based on the "best hits" of sequence searches against existing databases. Unfortunately, this approach produces a high rate of false negatives. To address such limitations, we propose here a deep learning approach, taking into account a dissimilarity matrix created using all known categories of ARGs. Two deep learning models, DeepARG-SS and DeepARG-LS, were constructed for short read sequences and full gene length sequences, respectively. Evaluation of the deep learning models over 30 antibiotic resistance categories demonstrates that the DeepARG models can predict ARGs with both high precision (> 0.97) and recall (> 0.90). The models displayed an advantage over the typical best hit approach, yielding consistently lower false negative rates and thus higher overall recall (> 0.9). As more data become available for under-represented ARG categories, the DeepARG models' performance can be expected to be further enhanced due to the nature of the underlying neural networks. Our newly developed ARG database, DeepARG-DB, encompasses ARGs predicted with a high degree of confidence and extensive manual inspection, greatly expanding current ARG repositories. The deep learning models developed here offer more accurate antimicrobial resistance annotation relative to current bioinformatics practice. DeepARG does not require strict cutoffs, which enables identification of a much broader diversity of ARGs. The DeepARG models and database are available as a command line version and as a Web service at http://bench.cs.vt.edu/deeparg .
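A hedged, toy-scale sketch of the underlying idea follows: represent each query sequence by a vector of (dis)similarity scores against known ARG references and classify it into a resistance category with a neural network. It uses scikit-learn's MLPClassifier as a stand-in for the DeepARG architecture, and the data are random placeholders, so the reported accuracy will be near chance; the real models are trained on alignment-derived features against the curated DeepARG-DB.

```python
# Toy stand-in for the DeepARG idea: similarity-feature vectors -> category.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_refs, n_categories, n_samples = 200, 30, 1500
X = rng.random((n_samples, n_refs))            # placeholder similarity features
y = rng.integers(0, n_categories, n_samples)   # placeholder resistance categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
# With random placeholder labels this is near chance (~1/30); shown only to
# illustrate the feature-vector -> multi-class prediction workflow.
print("held-out accuracy:", clf.score(X_te, y_te))
```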
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puttagunta, Srikanth
National programs such as Home Performance with ENERGY STAR® and numerous other utility air sealing programs have brought awareness to homeowners of the benefits of energy efficiency retrofits. Yet, these programs tend to focus on the low-hanging fruit: air sealing the thermal envelope and ductwork where accessible, switching to efficient lighting, and installing low-flow fixtures. At the other end of the spectrum, deep-energy retrofit programs are also being encouraged by various utilities across the country. While deep energy retrofits typically seek 50% energy savings, they are often quite costly and most applicable to gut-rehab projects. A significant potential for lowering energy usage in existing homes lies between the low-hanging fruit and deep energy retrofit approaches - retrofits that save approximately 30% in energy over the existing conditions. A key is to be non-intrusive with the efficiency measures so the retrofit projects can be accomplished in occupied homes. This cold climate retrofit project involved the design and optimization of a home in Connecticut that sought to improve energy savings by at least 30% (excluding solar PV) over the existing home's performance. This report documents the successful implementation of a cost-effective solution package that achieved performance greater than 30% over the pre-retrofit - what worked, what did not, and what improvements could be made.
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masahi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
MATE: Machine Learning for Adaptive Calibration Template Detection
Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried
2016-01-01
The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementational complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2016-04-29
We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.
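For readers unfamiliar with the solver structure being discussed, the sketch below shows the generic pattern of a preconditioned Picard iteration on a 1-D nonlinear diffusion model problem: each linear solve is accelerated by a factorization frozen at an earlier iterate. This is not the authors' scheme (their preconditioner is built from a stable discretization of the continuum Jacobian on unstructured 3-D meshes); the conductivity k(u) = 1 + u^2, grid size, and sweep count are assumptions for illustration.

```python
# Picard iteration with a frozen preconditioner on a toy nonlinear diffusion problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)
b = h**2 * np.ones(n)                       # right-hand side for f = 1

def assemble(u):
    """Finite-difference matrix for -(k(u) u')' with k(u) = 1 + u**2, u = 0 at both ends."""
    k = 1.0 + u**2
    kf = np.concatenate([[k[0]], 0.5 * (k[:-1] + k[1:]), [k[-1]]])   # face conductivities
    main = kf[:-1] + kf[1:]
    return sp.diags([-kf[1:-1], main, -kf[1:-1]], [-1, 0, 1], format="csc")

u = np.zeros(n)
M = spla.splu(assemble(u))                  # preconditioner frozen at the initial iterate
P = spla.LinearOperator((n, n), matvec=M.solve)
for _ in range(25):                         # fixed number of Picard sweeps for illustration
    A = assemble(u)                         # lag the nonlinearity at the current iterate
    u, info = spla.gmres(A, b, x0=u, M=P)   # preconditioned Krylov solve
print("nonlinear residual:", np.linalg.norm(assemble(u) @ u - b), "max u:", u.max())
```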
NASA Astrophysics Data System (ADS)
Nadimpalli, Venkata K.; Nagy, Peter B.
2018-04-01
Ultrasonic Additive Manufacturing (UAM) is a solid-state layer by layer manufacturing process that utilizes vibration induced plastic deformation to form a metallurgical bond between a thin layer and an existing base structure. Due to the vibration based bonding mechanism, the quality of components at each layer depends on the geometry of the structure. In-situ monitoring during and between UAM manufacturing steps offers the potential for closed-loop control to optimize process parameters and to repair existing defects. One interface that is most prone to delamination is the base/build interface and often UAM component height and quality are limited by failure at the base/build interface. Low manufacturing temperatures and favorable orientation of typical interface defects in UAM make ultrasonic NDE an attractive candidate for online monitoring. Two approaches for in-situ NDE are discussed and the design of the monitoring system optimized so that the quality of UAM components is not affected by the addition of the NDE setup. Preliminary results from in-situ ultrasonic NDE indicate the potential to be utilized for online qualification, closed-loop control and offline certification of UAM components.
Three-dimensional hybrid grid generation using advancing front techniques
NASA Technical Reports Server (NTRS)
Steinbrenner, John P.; Noack, Ralph W.
1995-01-01
A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids, and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which will significantly reduce the generation time. Secondly, grid points near the body (presumably with high aspect ratio) will still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.
Development of high-speed balancing technology
NASA Technical Reports Server (NTRS)
Demuth, R.; Zorzi, E.
1981-01-01
An investigation into laser material removal showed that laser burns act in a manner typical of mechanical stress raisers causing a reduction in fatigue strength; the fatigue strength is lowered relative to the smooth specimen fatigue strength. Laser-burn zones were studied for four materials: Alloy Steel 4340, Stainless Steel 17-4 PH, Inconel 718, and Aluminum Alloy 6061-T6. Calculations were made of the stress concentration factor, Kt, for laser-burn grooves of each material type. A comparison was then made with the experimentally determined fatigue strength reduction factor, Kf. These calculations and comparisons indicated that, except for the 17-4 PH material, good agreement (a ratio of close to 1.0) existed between Kt and Kf. The performance of the 17-4 PH material has been attributed to early crack initiation due to the lower fatigue resistance of the soft, unaged laser-affected zone. Also covered in this report is the development, implementation, and testing of an influence coefficient approach to balancing a long, slender shaft under applied-torque conditions. Excellent correlation existed between the analytically predicted results and those data obtained from testing.
Nonblocking Clos networks of multiple ROADM rings for mega data centers.
Zhao, Li; Ye, Tong; Hu, Weisheng
2015-11-02
Optical networks have been introduced to meet the bandwidth requirement of mega data centers (DC). Most existing approaches are neither scalable to face the massive growth of DCs, nor contention-free enough to provide full bisection bandwidth. To solve this problem, we propose two symmetric network structures: ring-MEMS-ring (RMR) network and MEMS-ring-MEMS (MRM) network based on classical Clos theory. New strategies are introduced to overcome the additional wavelength constraints that did not exist in the traditional Clos network. Both structures, following these strategies, achieve high scalability and the nonblocking property simultaneously. The one-to-one correspondence of the RMR and MRM structures to a Clos network is verified and the nonblocking conditions are given along with the routing algorithms. Compared to a typical folded-Clos network, both structures are more readily scalable to future mega data centers with 51200 racks while significantly reducing the number of long cables. We show that the MRM network is more cost-effective than the RMR network, since the MRM network does not need tunable lasers to achieve nonblocking routing.
Identification and ranking of environmental threats with ecosystem vulnerability distributions.
Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo
2017-08-24
Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
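As a simplified stand-in for this idea, the sketch below stabilises a small-sample covariance estimate by shrinking it toward a block-diagonal target. The block partition, the shrinkage weight alpha, and the data are all assumptions for illustration; the paper's method chooses its estimator within an empirical Bayes framework rather than with a fixed weight.

```python
# Linear shrinkage of a sample covariance toward an assumed block-diagonal target.
import numpy as np

def block_shrunk_cov(X, blocks, alpha=0.5):
    """X: n_samples x n_genes data matrix; blocks: list of index arrays defining
    the assumed block structure; alpha: weight kept on the raw sample covariance."""
    S = np.cov(X, rowvar=False)
    target = np.zeros_like(S)
    for idx in blocks:
        ix = np.ix_(idx, idx)
        target[ix] = S[ix]                 # keep within-block covariance only
    return alpha * S + (1 - alpha) * target

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 40))          # n = 10 samples, 40 genes (toy data)
blocks = [np.arange(i, i + 10) for i in range(0, 40, 10)]
Sigma = block_shrunk_cov(X, blocks, alpha=0.3)
print(Sigma.shape, np.allclose(Sigma, Sigma.T))
```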
ERIC Educational Resources Information Center
Tomeny, Theodore S.; Barry, Tammy D.; Bader, Stephanie H.
2012-01-01
Existing literature regarding the adjustment of siblings of children with an autism spectrum disorder (ASD) remains inconclusive, with some studies showing positive adjustment, others showing negative adjustment, and others showing no difference when compared to siblings of typically-developing children. For the current study, 42 parents of a…
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
Attention and Word Learning in Toddlers Who Are Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Montemarano, Elizabeth A.
2016-01-01
The purpose of this study was to examine attention allocation in toddlers who were late talkers and toddlers with typical language development while they were engaged in a word-learning task in order to determine if differences exist. Two-year-olds who were late talkers (11) and typically developing toddlers (11) were taught twelve novel…
Applying ecological concepts to the management of widespread grass invasions [Chapter 7
Carla M. D' Antonio; Jeanne C. Chambers; Rhonda Loh; J. Tim Tunison
2009-01-01
The management of plant invasions has typically focused on the removal of invading populations or control of existing widespread species to unspecified but lower levels. Invasive plant management typically has not involved active restoration of background vegetation to reduce the likelihood of invader reestablishment. Here, we argue that land managers could benefit...
NASA Astrophysics Data System (ADS)
Blutner, Reinhard
2009-03-01
Recently, Gerd Niestegge developed a new approach to quantum mechanics via conditional probabilities, extending the well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization. I will apply his powerful and rigorous approach to the treatment of concepts using a geometrical model of meaning. In this model, instances are treated as vectors of a Hilbert space H. In the present approach there are at least two possibilities to form categories. The first possibility sees a category as a mixture of its instances (described by a density matrix). In the simplest case we get the classical probability theory including the Bayesian formula. The second possibility sees categories formed by a distinctive prototype which is the superposition of the (weighted) instances. The construction of prototypes can be seen as transferring a mixed quantum state into a pure quantum state, freezing the probabilistic characteristics of the superposed instances into the structure of the formed prototype. Closely related to the idea of forming concepts by prototypes is the existence of interference effects. Such interference effects are typically found in macroscopic quantum systems and I will discuss them in connection with several puzzles of bounded rationality. The present approach nicely generalizes earlier proposals made by authors such as Diederik Aerts, Andrei Khrennikov, Ricardo Franco, and Jerome Busemeyer. Concluding, I will suggest that an active dialogue between cognitive approaches to logic and semantics and the modern approach of quantum information science is mandatory.
Noise Modeling From Conductive Shields Using Kirchhoff Equations.
Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J
2010-10-09
Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability materials and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and additionally SQUID systems frequently have radio frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries while common experimental setups today consist of multiple sensor systems with complex shield geometries. With complex sensor arrays used in, for example, MEG and Ultra Low Field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself. This is crucial for incorporating efficient noise cancelation schemes for the system. We developed an approach that allows us to calculate the Johnson noise for arbitrary shaped shields and multiple sensor systems. The approach is efficient enough to be able to run on a single PC system and return results on a minute scale. With a multiple sensor system our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we will show how the algorithm can be implemented.
Robust Feedback Zoom Tracking for Digital Video Surveillance
Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong
2012-01-01
Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called “trace curve”, which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered to be the best controller in the absence of knowledge of the underlying process and its high-quality performance in motor control, in this paper, we propose a novel feedback zoom tracking (FZT) approach based on the geometric trace curve estimation and PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects which is the key challenge in video surveillance. PMID:22969388
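Since the FZT approach builds on a PID feedback loop, a generic discrete PID sketch is given below. It is not the authors' controller: the first-order "motor" model, the gains, the step count, and the step units are assumptions; the setpoint stands in for the in-focus position estimated from the trace curve at the current zoom position.

```python
# Generic discrete PID loop driving a toy focus-motor model toward a setpoint.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt               # accumulate error (I term)
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=1.0)             # assumed gains, one step per frame
focus_pos, target = 0.0, 120.0                         # motor steps (illustrative units)
for _ in range(200):
    command = pid.step(target, focus_pos)
    focus_pos += command                               # toy motor: moves by the command
print(round(focus_pos, 2))                             # approaches the 120-step target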
BioTAP: A Systematic Approach to Teaching Scientific Writing and Evaluating Undergraduate Theses
ERIC Educational Resources Information Center
Reynolds, Julie; Smith, Robin; Moskovitz, Cary; Sayle, Amy
2009-01-01
Undergraduate theses and other capstone research projects are standard features of many science curricula, but participation has typically been limited to only the most advanced and highly motivated students. With the recent push to engage more undergraduates in research, some faculty are finding that their typical approach to working with thesis…
Efficient and effective pruning strategies for health data de-identification.
Prasser, Fabian; Kohlmayer, Florian; Kuhn, Klaus A
2016-04-30
Privacy must be protected when sensitive biomedical data is shared, e.g. for research purposes. Data de-identification is an important safeguard, where datasets are transformed to meet two conflicting objectives: minimizing re-identification risks while maximizing data quality. Typically, de-identification methods search a solution space of possible data transformations to find a good solution to a given de-identification problem. In this process, parts of the search space must be excluded to maintain scalability. The set of transformations which are solution candidates is typically narrowed down by storing the results obtained during the search process and then using them to predict properties of the output of other transformations in terms of privacy (first objective) and data quality (second objective). However, due to the exponential growth of the size of the search space, previous implementations of this method are not well-suited when datasets contain many attributes which need to be protected. As this is often the case with biomedical research data, e.g. as a result of longitudinal collection, we have developed a novel method. Our approach combines the mathematical concept of antichains with a data structure inspired by prefix trees to represent properties of a large number of data transformations while requiring only a minimal amount of information to be stored. To analyze the improvements which can be achieved by adopting our method, we have integrated it into an existing algorithm and we have also implemented a simple best-first branch and bound search (BFS) algorithm as a first step towards methods which fully exploit our approach. We have evaluated these implementations with several real-world datasets and the k-anonymity privacy model. When integrated into existing de-identification algorithms for low-dimensional data, our approach reduced memory requirements by up to one order of magnitude and execution times by up to 25 %. This allowed us to increase the size of solution spaces which could be processed by almost a factor of 10. When using the simple BFS method, we were able to further increase the size of the solution space by a factor of three. When used as a heuristic strategy for high-dimensional data, the BFS approach outperformed a state-of-the-art algorithm by up to 12 % in terms of the quality of output data. This work shows that implementing methods of data de-identification for real-world applications is a challenging task. Our approach solves a problem often faced by data custodians: a lack of scalability of de-identification software when used with datasets having realistic schemas and volumes. The method described in this article has been implemented into ARX, an open source de-identification software for biomedical data.
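A deliberately simplified sketch of the search problem described above follows: each quasi-identifier has a small generalization hierarchy, every combination of generalization levels is a node in a lattice, and we look for a low-level node whose output satisfies k-anonymity. ARX's antichain/prefix-tree pruning and its BFS heuristic are far more sophisticated; the toy dataset, the hierarchies, and the sum-of-levels loss proxy here are assumptions that only illustrate the setting.

```python
# Toy generalization-lattice search for a k-anonymous transformation.
from collections import Counter
from itertools import product

# Toy dataset with (age, zip code) quasi-identifiers.
records = [(34, "81925"), (35, "81925"), (36, "81931"),
           (52, "81931"), (55, "81931"), (58, "81925")]

# Generalization hierarchies: level -> transformation function.
age_levels = [lambda a: a, lambda a: f"{10*(a//10)}-{10*(a//10)+9}", lambda a: "*"]
zip_levels = [lambda z: z, lambda z: z[:3] + "**", lambda z: "*"]

def generalize(recs, node):
    ia, iz = node
    return [(age_levels[ia](a), zip_levels[iz](z)) for a, z in recs]

def is_k_anonymous(recs, k):
    return min(Counter(recs).values()) >= k

def minimal_k_anonymous(recs, k):
    """Scan lattice nodes in order of total generalization (a crude information-loss proxy)."""
    nodes = sorted(product(range(len(age_levels)), range(len(zip_levels))), key=sum)
    for node in nodes:
        out = generalize(recs, node)
        if is_k_anonymous(out, k):
            return node, out
    return None, None

node, out = minimal_k_anonymous(records, k=3)
print("chosen generalization levels:", node)
print(out)
```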
The Clinician Perspective on Sex Differences in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jamison, Rene; Bishop, Somer L.; Huerta, Marisela; Halladay, Alycia K.
2017-01-01
Research studies using existing samples of individuals with autism spectrum disorders have identified differences in symptoms between males and females. Differences are typically reported in school age and adolescence, with similarities in symptom presentation at earlier ages. However, existing studies on sex differences are significantly limited,…
SPATIALLY-BALANCED SURVEY DESIGN FOR GROUNDWATER USING EXISTING WELLS
Many states have a monitoring program to evaluate the water quality of groundwater across the state. These programs rely on existing wells for access to the groundwater, due to the high cost of drilling new wells. Typically, a state maintains a database of all well locations, in...
Techno-ecological synergy as a path toward sustainability of a North American residential system.
Urban, Robert A; Bakshi, Bhavik R
2013-02-19
For any human-designed system to be sustainable, ecosystem services that support it must be readily available. This work explicitly accounts for this dependence by designing synergies between technological and ecological systems. The resulting techno-ecological network mimics nature at the systems level, can stay within ecological constraints, and can identify novel designs that are economically and environmentally attractive that may not be found by the traditional design focus on technological options. This approach is showcased by designing synergies for a typical American suburban home at local and life cycle scales. The objectives considered are carbon emissions, water withdrawal, and cost savings. Systems included in the design optimization include typical ecosystems in suburban yards: lawn, trees, water reservoirs, and a vegetable garden; technological systems: heating, air conditioning, faucets, solar panels, etc.; and behavioral variables: heating and cooling set points. The ecological and behavioral design variables are found to have a significant effect on the three objectives, in some cases rivaling and exceeding the effect of traditional technological options. These results indicate the importance and benefits of explicitly including ecosystems in the design of sustainable systems, something that is rarely done in existing methods.
Fatigue of restorative materials.
Baran, G; Boberick, K; McCool, J
2001-01-01
Failure due to fatigue manifests itself in dental prostheses and restorations as wear, fractured margins, delaminated coatings, and bulk fracture. Mechanisms responsible for fatigue-induced failure depend on material ductility: Brittle materials are susceptible to catastrophic failure, while ductile materials utilize their plasticity to reduce stress concentrations at the crack tip. Because of the expense associated with the replacement of failed restorations, there is a strong desire on the part of basic scientists and clinicians to evaluate the resistance of materials to fatigue in laboratory tests. Test variables include fatigue-loading mode and test environment, such as soaking in water. The outcome variable is typically fracture strength, and these data typically fit the Weibull distribution. Analysis of fatigue data permits predictive inferences to be made concerning the survival of structures fabricated from restorative materials under specified loading conditions. Although many dental-restorative materials are routinely evaluated, only limited use has been made of fatigue data collected in vitro: Wear of materials and the survival of porcelain restorations has been modeled by both fracture mechanics and probabilistic approaches. A need still exists for a clinical failure database and for the development of valid test methods for the evaluation of composite materials.
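The Weibull analysis mentioned in the abstract can be sketched in a few lines of SciPy. The simulated strengths, the decision to fix the location parameter at zero, and the service stress value are assumptions standing in for real fracture-strength test data.

```python
# Minimal Weibull analysis of (simulated) fracture-strength data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strengths = stats.weibull_min.rvs(c=8.0, scale=120.0, size=60, random_state=rng)  # MPa

# Fit the shape (Weibull modulus) and scale; location fixed at zero as is customary.
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
service_stress = 80.0                                    # MPa, illustrative
survival = stats.weibull_min.sf(service_stress, shape, loc=loc, scale=scale)
print(f"modulus ~ {shape:.1f}, characteristic strength ~ {scale:.1f} MPa")
print(f"P(survive {service_stress} MPa) ~ {survival:.3f}")
```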
Orsolini, Laura; Francesconi, Giulia; Papanti, Duccio; Giorgetti, Arianna; Schifano, Fabrizio
2015-07-01
Internet and social networking sites play a significant role in the marketing and distribution of recreational/prescription drugs without restrictions. We aimed here at reviewing data relating to the profile of the online drug customer and at describing drug vending websites. The PubMed, Google Scholar, and Scopus databases were searched here in order to elicit data on the socio-demographic characteristics of the recreational marketplaces/online pharmacies' customers and the determinants relating to online drug purchasing activities. Typical online recreational drugs' customers seem to be Caucasian, men, in their 20s, highly educated, and using the web to impact as minimally as possible on their existing work/professional status. Conversely, people without any health insurance seemed to look at the web as a source of more affordable prescription medicines. Drug vending websites are typically presented here with a "no prescription required" approach, together with aggressive marketing strategies. The online availability of recreational/prescriptions drugs remains a public health concern. A more precise understanding of online vending sites' customers may well facilitate the drafting and implementation of proper prevention campaigns aimed at counteracting the increasing levels of online drug acquisition and hence intake activities. Copyright © 2015 John Wiley & Sons, Ltd.
Well test mathematical model for fractures network in tight oil reservoirs
NASA Astrophysics Data System (ADS)
Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming
2018-02-01
Well testing, especially build-up testing, has been applied widely in the development of tight oil reservoirs, since it is the only available low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually show significant deviation in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed; at large scale it exhibits composite-model behavior. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform and virtual-argument Bessel functions, and the method is verified by comparison with an existing analytical solution. Matching against typical type curves generated from the semi-analytical solution indicates that the proposed physical and mathematical model can describe the characteristic type curves of typical tight oil reservoirs, which show late-time upwarping rather than parallel lines of slope 1/2 or 1/4. This means the composite model can be used for pressure interpretation of artificially fractured wells in tight oil reservoirs.
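Semi-analytical well-test solutions of this kind are written in Laplace space and then inverted numerically to produce time-domain type curves. Below is a generic Gaver-Stehfest inversion routine, not the authors' solution; it is checked on a known transform pair, F(s) = 1/(s + 1), whose inverse is exp(-t). The number of terms N = 12 is an assumed, commonly used choice.

```python
# Generic Gaver-Stehfest numerical inversion of a Laplace-space function.
import math
import numpy as np

def stehfest_coefficients(N=12):
    assert N % 2 == 0
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V[i - 1] = (-1) ** (i + N // 2) * s
    return V

def invert(F, t, N=12):
    """Approximate the inverse Laplace transform of F at time t > 0."""
    V = stehfest_coefficients(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[i] * F((i + 1) * ln2_t) for i in range(N))

for t in (0.5, 1.0, 2.0):
    print(t, invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))  # should agree closely
```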
Narkhede, Rajvilas Anil; Bada, Vijaykumar C; Kona, Lakshmi Kumari
2017-02-01
Gallstone ileus is a rare diagnosis, and a proximal site of obstruction in a young patient is rarer still. Of the three cases in our experience, two cases of gallstone ileus (GSI) had typical epidemiology and presentation, while one had a combination of multiple rare associations. We report such a case, suspected of gallstone ileus on ultrasound, with the diagnosis confirmed on computed tomography. The presence of a biliary-enteric fistula, old age, and obstructive features aids diagnosis in typical cases, but it was difficult to entertain a diagnosis of GSI in a young girl in the absence of a demonstrable biliary-enteric fistula, with the uncommon association of a choledochal cyst and sickle cell disease. A very surprising finding, a dilated major papilla, could however explain the pathogenesis, which has also been reported in the past. Although differing opinions regarding management exist, we decided to follow two-stage surgery as our institutional protocol. A minimal-access approach has been immensely helpful for accurate diagnosis and expeditious management with early recovery, as shown in past studies and consistent with our experience.
The Diabetic Foot Attack: "'Tis Too Late to Retreat!"
Vas, Prashanth R J; Edmonds, Michael; Kavarthapu, Venu; Rashid, Hisham; Ahluwalia, Raju; Pankhurst, Christian; Papanas, Nikolaos
2018-03-01
The "diabetic foot attack" is one of the most devastating presentations of diabetic foot disease, typically presenting as an acutely inflamed foot with rapidly progressive skin and tissue necrosis, at times associated with significant systemic symptoms. Without intervention, it may escalate over hours to limb-threatening proportions and poses a high amputation risk. There are only best practice approaches but no international protocols to guide management. Immediate recognition of a typical infected diabetic foot attack, predominated by severe infection, with prompt surgical intervention to debride all infected tissue alongside broad-spectrum antibiotic therapy is vital to ensure both limb and patient survival. Postoperative access to multidisciplinary and advanced wound care therapies is also necessary. More subtle forms exist: these include the ischemic diabetic foot attack and, possibly, in a contemporary categorization, acute Charcot neuroarthropathy. To emphasize the importance of timely action especially in the infected and ischemic diabetic foot attack, we revisit the concept of "time is tissue" and draw parallels with advances in acute myocardial infarction and stroke care. At the moment, international protocols to guide management of severe diabetic foot presentations do not specifically use the term. However, we believe that it may help increase awareness of the urgent actions required in some situations.
Perceptual interaction of local motion signals
Nitzany, Eyal I.; Loe, Maren E.; Palmer, Stephanie E.; Victor, Jonathan D.
2016-01-01
Motion signals are a rich source of information used in many everyday tasks, such as segregation of objects from background and navigation. Motion analysis by biological systems is generally considered to consist of two stages: extraction of local motion signals followed by spatial integration. Studies using synthetic stimuli show that there are many kinds and subtypes of local motion signals. When presented in isolation, these stimuli elicit behavioral and neurophysiological responses in a wide range of species, from insects to mammals. However, these mathematically-distinct varieties of local motion signals typically co-exist in natural scenes. This study focuses on interactions between two kinds of local motion signals: Fourier and glider. Fourier signals are typically associated with translation, while glider signals occur when an object approaches or recedes. Here, using a novel class of synthetic stimuli, we ask how distinct kinds of local motion signals interact and whether context influences sensitivity to Fourier motion. We report that local motion signals of different types interact at the perceptual level, and that this interaction can include subthreshold summation and, in some subjects, subtle context-dependent changes in sensitivity. We discuss the implications of these observations, and the factors that may underlie them. PMID:27902829
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Morton, A.
2017-10-01
Identifying erratic or unstable time-series is an area of interest to many fields. Recently, there have been successful developments towards this goal. These newly developed methodologies, however, come from domains where it is typical to have several thousand or more temporal observations. This creates a challenge when attempting to apply these methodologies to time-series with much fewer temporal observations, such as in socio-cultural understanding, a domain where a typical time series of interest might consist of only 20-30 annual observations. Most existing methodologies simply cannot say anything interesting with so few data points, yet researchers are still tasked to work within the confines of the data. Recently a method for characterizing instability in a time series with limited temporal observations was published. This method, the Attribute Stability Index (ASI), uses an approximate entropy based method to characterize a time series' instability. In this paper we propose an explicitly spatially weighted extension of the Attribute Stability Index. By including a mechanism to account for spatial autocorrelation, this work represents a novel approach for the characterization of space-time instability. As a case study we explore national youth male unemployment across the world from 1991-2014.
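The measure underlying the ASI is approximate entropy (ApEn). Below is a standard ApEn implementation applied to two short 24-point series, one stable trend and one erratic series; it is not the published ASI code and omits the spatial weighting proposed in this paper, and the embedding dimension m, tolerance factor r, and toy series are assumed values.

```python
# Standard approximate entropy (ApEn) on short series.
import numpy as np

def approximate_entropy(u, m=2, r_factor=0.2):
    u = np.asarray(u, dtype=float)
    N, r = len(u), r_factor * np.std(u)

    def phi(m):
        # All length-m templates and their pairwise Chebyshev distances.
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        C = (d <= r).sum(axis=1) / (N - m + 1)         # fraction of matching templates
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
years = np.arange(24)
stable = 10 + 0.3 * years + rng.normal(0, 0.2, 24)     # smooth trend
erratic = 10 + rng.normal(0, 3.0, 24)                  # volatile series
print("ApEn stable :", round(approximate_entropy(stable), 3))
print("ApEn erratic:", round(approximate_entropy(erratic), 3))
```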
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
Evaluating interventions in health: a reconciliatory approach.
Wolff, Jonathan; Edwards, Sarah; Richmond, Sarah; Orr, Shepley; Rees, Geraint
2012-11-01
Health-related Quality of Life measures have recently been attacked from two directions, both of which criticize the preference-based method of evaluating health states they typically incorporate. One attack, based on work by Daniel Kahneman and others, argues that 'experience' is a better basis for evaluation. The other, inspired by Amartya Sen, argues that 'capability' should be the guiding concept. In addition, opinion differs as to whether health evaluation measures are best derived from consultations with the general public, with patients, or with health professionals. And there is disagreement about whether these opinions should be solicited individually and aggregated, or derived instead from a process of collective deliberation. These distinctions yield a wide variety of possible approaches, with potentially differing policy implications. We consider some areas of disagreement between some of these approaches. We show that many of the perspectives seem to capture something important, such that it may be a mistake to reject any of them. Instead we suggest that some of the existing 'instruments' designed to measure HR QoLs may in fact successfully already combine these attributes, and with further refinement such instruments may be able to provide a reasonable reconciliation between the perspectives. © 2011 Blackwell Publishing Ltd.
A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management
Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng
2013-01-01
In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance. PMID:23645112
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
Nanotechnology regulation: a study in claims making.
Malloy, Timothy F
2011-01-25
There appears to be consensus on the notion that the hazards of nanotechnology are a social problem in need of resolution, but much dispute remains over what that resolution should be. There are a variety of potential policy tools for tackling this challenge, including conventional direct regulation, self-regulation, tort liability, financial guarantees, and more. The literature in this area is replete with proposals embracing one or more of these tools, typically using conventional regulation as a foil in which its inadequacy is presented as justification for a new proposed approach. At its core, the existing literature raises a critical question: What is the most effective role of government as regulator in these circumstances? This article explores that question by focusing upon two policy approaches in particular: conventional regulation and self-regulation, often described as hard law and soft law, respectively. Drawing from the sociology of social problems, the article examines the soft law construction of the nanotechnology problem and the associated solutions, with emphasis on the claims-making strategies used. In particular, it critically examines the rhetoric and underlying grounds for the soft law approach. It also sets out the grounds and framework for an alternative construction and solution-the concept of iterative regulation.
Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping
2017-01-01
The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches have the drawbacks of inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representation (PSR). With the proposed approach, operating modes of MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable in the battlefield. It helps to reduce the dependence of MFR on prior information. And MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity. PMID:28335492
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
Cockpit Technology for Prevention of General Aviation Runway Incursions
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Jones, Denise R.
2007-01-01
General aviation accounted for 74 percent of runway incursions but only 57 percent of the operations during the four-year period from fiscal year (FY) 2001 through FY2004. Elements of the NASA Runway Incursion Prevention System were adapted and tested for general aviation aircraft. Sixteen General Aviation pilots, of varying levels of certification and amount of experience, participated in a piloted simulation study to evaluate the system for prevention of general aviation runway incursions compared to existing moving map displays. Pilots flew numerous complex, high workload approaches under varying weather and visibility conditions. A rare-event runway incursion scenario was presented, unbeknownst to the pilots, which represented a typical runway incursion situation. The results validated the efficacy and safety need for a runway incursion prevention system for general aviation aircraft.
Obesity Prevention for Children with Developmental Disabilities
Curtin, Carol; Hubbard, Kristie; Sikich, Linmarie; Bedford, James; Bandini, Linda
2014-01-01
The prevention of obesity in children with DD is a pressing public health issue, with implications for health status, independent living, and quality of life. Substantial evidence suggests that children with developmental disabilities (DD), including those with intellectual disabilities (ID) and autism spectrum disorder (ASD), have a prevalence of obesity at least as high if not higher than their typically developing peers. The paper reviews what is known about the classic and unique risk factors for childhood obesity in these groups of children, including dietary, physical activity, sedentary behavior, and family factors, as well as medication use. We use evidence from the literature to make the case that primary prevention at the individual/family, school and community levels will require tailoring of strategies and adapting existing intervention approaches. PMID:25530916
Towards a Methodology for Identifying Program Constraints During Requirements Analysis
NASA Technical Reports Server (NTRS)
Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo
1997-01-01
Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Some of the problems that exist in defining requirements in large-scale software projects include synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provides for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.
An algorithm for encryption of secret images into meaningful images
NASA Astrophysics Data System (ADS)
Kanso, A.; Ghebleh, M.
2017-03-01
Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.
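As a hedged toy illustration of the general idea, hiding cipher data inside an ordinary-looking cover image, the sketch below uses a simple keyed XOR stream as the "cipher" and plain least-significant-bit replacement as the embedding. It is emphatically not the lossless scheme proposed by the authors, nor Bao and Zhou's algorithm; the image sizes, key, and seed are assumptions.

```python
# Toy demonstration of embedding encrypted bytes into a cover image via LSBs.
import numpy as np

def xor_cipher(data, key, seed):
    """Toy keyed XOR stream 'cipher' (illustration only, not secure)."""
    stream = np.random.default_rng(seed).integers(0, 256, data.size, dtype=np.uint8)
    return data ^ stream ^ key

def embed_lsb(cover, payload_bits):
    """Overwrite the least significant bits of the cover pixels with the payload."""
    stego = cover.copy().reshape(-1)
    stego[:payload_bits.size] = (stego[:payload_bits.size] & 0xFE) | payload_bits
    return stego.reshape(cover.shape)

rng = np.random.default_rng(1)
secret = rng.integers(0, 256, (16, 16), dtype=np.uint8)       # toy "plain image"
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)        # toy "cover image"

cipher = xor_cipher(secret.reshape(-1), key=np.uint8(0x5A), seed=42)
bits = np.unpackbits(cipher)                                   # 16*16*8 = 2048 bits
stego = embed_lsb(cover, bits)

# Recover: read the LSBs back, repack into bytes, and undo the keyed XOR.
recovered_bits = (stego.reshape(-1)[:bits.size] & 1).astype(np.uint8)
recovered = xor_cipher(np.packbits(recovered_bits), key=np.uint8(0x5A), seed=42)
print(np.array_equal(recovered.reshape(16, 16), secret))
```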
APPLIED ORIGAMI. Origami of thick panels.
Chen, Yan; Peng, Rui; You, Zhong
2015-07-24
Origami patterns, including the rigid origami patterns in which flat inflexible sheets are joined by creases, are primarily created for zero-thickness sheets. In order to apply them to fold structures such as roofs, solar panels, and space mirrors, for which thickness cannot be disregarded, various methods have been suggested. However, they generally involve adding materials to or offsetting panels away from the idealized sheet without altering the kinematic model used to simulate folding. We develop a comprehensive kinematic synthesis for rigid origami of thick panels that differs from the existing kinematic model but is capable of reproducing motions identical to those of zero-thickness origami. The approach, proven to be effective for typical origami, can be readily applied to fold real engineering structures. Copyright © 2015, American Association for the Advancement of Science.
Adding Concrete Syntax to a Prolog-Based Program Synthesis System
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Visser, Eelco
2003-01-01
Program generation and transformation systems manipulate large, parameterized object language fragments. Support for user-definable concrete syntax makes this easier but is typically restricted to certain object and meta languages. We show how Prolog can be retrofitted with concrete syntax and describe how a seamless interaction of concrete syntax fragments with an existing legacy meta-programming system based on abstract syntax is achieved. We apply the approach to gradually migrate the schemas of the AUTOBAYES program synthesis system to concrete syntax. First experiences show that this can result in a considerable reduction of the code size and an improved readability of the code. In particular, abstracting out fresh-variable generation and second-order term construction allows the formulation of larger continuous fragments and improves the locality in the schemas.
Kamala, K A; Sankethguddad, S; Sujith, S G; Tantradi, Praveena
2016-01-01
Burning mouth syndrome (BMS) is multifactorial in origin and is typically characterized by a burning and painful sensation in an oral cavity with clinically normal mucosa. Although the cause of BMS is not known, a complex association of biological and psychological factors has been identified, suggesting the existence of a multifactorial etiology. As the symptom of oral burning is seen in various pathological conditions, it is essential for a clinician to be aware of how to differentiate between the symptom of oral burning and BMS. An interdisciplinary and systematic approach is required for better patient management. The purpose of this study was to provide the practitioner with an understanding of the local, systemic, and psychosocial factors which may be responsible for oral burning associated with BMS, and to review treatment modalities, thereby providing a foundation for diagnosis and treatment of BMS.
FAST: A multi-processed environment for visualization of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin
1991-01-01
Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.
Modeling for Integrated Science Management and Resilient Systems Development
NASA Technical Reports Server (NTRS)
Shelhamer, M.; Mindock, J.; Lumpkins, S.
2014-01-01
Many physiological, environmental, and operational risks exist for crewmembers during spaceflight. An understanding of these risks from an integrated perspective is required to provide effective and efficient mitigations during future exploration missions that typically have stringent limitations on resources available, such as mass, power, and crew time. The Human Research Program (HRP) is in the early stages of developing collaborative modeling approaches for the purposes of managing its science portfolio in an integrated manner to support cross-disciplinary risk mitigation strategies and to enable resilient human and engineered systems in the spaceflight environment. In this talk, we will share ideas being explored from fields such as network science, complexity theory, and system-of-systems modeling. Initial work on tools to support these explorations will be discussed briefly, along with ideas for future efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1981-01-01
One of the most important responsibilities of any company's management is to establish and maintain, via periodic evaluations, adequate internal accounting controls. Arthur Andersen and Co. approaches this problem as it relates to the oil and gas industry by logically dividing the economic events that affect a given company into four groups of activity termed "business cycles": treasury, expenditure, conversion (exploration, development, and production), and revenue activities. Independent public accountants can evaluate a company's existing internal controls much more thoroughly by studying only one category of business transactions at a time. Arthur Andersen's guide to reviewing internal controls covers each step of this "transaction-flow" method as applied to a typical oil and gas company.
Schizophrenia and the corpus callosum: developmental, structural and functional relationships.
David, A S
1994-10-20
Several empirical and theoretical connections exist between schizophrenia and the corpus callosum: (1) disconnection symptoms resemble certain psychotic phenomena; (2) abnormal interhemispheric transmission could explain typically schizophrenic phenomena; (3) cases of psychosis have been found in association with complete and partial agenesis of the callosum; (4) experimental neuropsychology with schizophrenic patients has revealed abnormal patterns of interhemispheric transfer; (5) studies using magnetic resonance imaging have shown abnormal callosal dimensions in schizophrenic patients. The evidence in support of these links is discussed critically. Novel neuropsychological approaches to the study of visual information transfer between the cerebral hemispheres, whose results are consistent with callosal hyperconnectivity in schizophrenic patients but not in matched psychiatric controls, are highlighted. Some suggestions for further work, including integrating functional and structural measures, are offered.
Therapeutic approaches for survivors of disaster.
Austin, L S; Godleski, L S
1999-12-01
Common psychiatric responses to disasters include depression, PTSD, generalized anxiety disorder, substance-abuse disorder, and somatization disorder. These symptom complexes may arise because of the various types of trauma experienced, including terror or horror, bereavement, and disruption of lifestyle. Because different types of disaster produce different patterns of trauma, clinical response should address the special characteristics of those affected. Traumatized individuals are typically resistant to seeking treatment, so treatment must be taken to the survivors, at locations within their communities. Most helpful is to train and support mental health workers from the affected communities. Interventions in groups have been found to be effective to promote catharsis, support, and a sense of identification with the group. Special groups to be considered include children, injured victims, people with pre-existing psychiatric histories, and relief workers.
Memory for sequences of events impaired in typical aging.
Allen, Timothy A; Morris, Andrea M; Stark, Shauna M; Fortin, Norbert J; Stark, Craig E L
2015-03-01
Typical aging is associated with diminished episodic memory performance. To improve our understanding of the fundamental mechanisms underlying this age-related memory deficit, we previously developed an integrated, cross-species approach to link converging evidence from human and animal research. This novel approach focuses on the ability to remember sequences of events, an important feature of episodic memory. Unlike existing paradigms, this task is nonspatial, nonverbal, and can be used to isolate different cognitive processes that may be differentially affected in aging. Here, we used this task to make a comprehensive comparison of sequence memory performance between younger (18-22 yr) and older adults (62-86 yr). Specifically, participants viewed repeated sequences of six colored, fractal images and indicated whether each item was presented "in sequence" or "out of sequence." Several out of sequence probe trials were used to provide a detailed assessment of sequence memory, including: (i) repeating an item from earlier in the sequence ("Repeats"; e.g., ABADEF), (ii) skipping ahead in the sequence ("Skips"; e.g., ABDDEF), and (iii) inserting an item from a different sequence into the same ordinal position ("Ordinal Transfers"; e.g., AB3DEF). We found that older adults performed as well as younger controls when tested on well-known and predictable sequences, but were severely impaired when tested using novel sequences. Importantly, overall sequence memory performance in older adults steadily declined with age, a decline not detected with other measures (RAVLT or BPS-O). We further characterized this deficit by showing that performance of older adults was severely impaired on specific probe trials that required detailed knowledge of the sequence (Skips and Ordinal Transfers), and was associated with a shift in their underlying mnemonic representation of the sequences. Collectively, these findings provide unambiguous evidence that the capacity to remember sequences of events is fundamentally affected by typical aging. © 2015 Allen et al.; Published by Cold Spring Harbor Laboratory Press.
Memory for sequences of events impaired in typical aging
Allen, Timothy A.; Morris, Andrea M.; Stark, Shauna M.; Fortin, Norbert J.
2015-01-01
Typical aging is associated with diminished episodic memory performance. To improve our understanding of the fundamental mechanisms underlying this age-related memory deficit, we previously developed an integrated, cross-species approach to link converging evidence from human and animal research. This novel approach focuses on the ability to remember sequences of events, an important feature of episodic memory. Unlike existing paradigms, this task is nonspatial, nonverbal, and can be used to isolate different cognitive processes that may be differentially affected in aging. Here, we used this task to make a comprehensive comparison of sequence memory performance between younger (18–22 yr) and older adults (62–86 yr). Specifically, participants viewed repeated sequences of six colored, fractal images and indicated whether each item was presented “in sequence” or “out of sequence.” Several out of sequence probe trials were used to provide a detailed assessment of sequence memory, including: (i) repeating an item from earlier in the sequence (“Repeats”; e.g., ABADEF), (ii) skipping ahead in the sequence (“Skips”; e.g., ABDDEF), and (iii) inserting an item from a different sequence into the same ordinal position (“Ordinal Transfers”; e.g., AB3DEF). We found that older adults performed as well as younger controls when tested on well-known and predictable sequences, but were severely impaired when tested using novel sequences. Importantly, overall sequence memory performance in older adults steadily declined with age, a decline not detected with other measures (RAVLT or BPS-O). We further characterized this deficit by showing that performance of older adults was severely impaired on specific probe trials that required detailed knowledge of the sequence (Skips and Ordinal Transfers), and was associated with a shift in their underlying mnemonic representation of the sequences. Collectively, these findings provide unambiguous evidence that the capacity to remember sequences of events is fundamentally affected by typical aging. PMID:25691514
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
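The authors' 6-parameter model is not reproduced in the abstract, so the following is only a hedged sketch of what a "simple analytical range model" can look like: the classic Bragg-Kleeman power law R = a*E^p fitted to a few (energy, range) pairs. The calibration points and fitted constants below are placeholders, not the paper's evaluated data.

# Hedged illustration of a simple analytical range model of the kind the
# abstract describes (NOT the authors' 6-parameter model): fit the classic
# Bragg-Kleeman power law R = a * E**p to tabulated (energy, range) pairs.
# The calibration values below are rough placeholders for demonstration.
import numpy as np

def fit_bragg_kleeman(energies_mev, ranges_cm):
    """Least-squares fit of log R = log a + p * log E."""
    p, log_a = np.polyfit(np.log(energies_mev), np.log(ranges_cm), 1)
    return np.exp(log_a), p

def analytic_range(energy_mev, a, p):
    """Range predicted by the fitted power law."""
    return a * energy_mev ** p

if __name__ == "__main__":
    # Placeholder calibration points (illustrative, not evaluated stopping-power data).
    E = np.array([10.0, 50.0, 100.0, 200.0])    # MeV
    R = np.array([0.12, 2.2, 7.7, 26.0])        # cm
    a, p = fit_bragg_kleeman(E, R)
    print(f"fitted a={a:.4f}, p={p:.3f}")
    print("predicted range at 150 MeV ~", round(analytic_range(150.0, a, p), 2), "cm")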
Parallel fabrication of macroporous scaffolds.
Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal
2018-07-01
Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel and at higher throughput, and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.
Donahue, William; Newhauser, Wayne D; Ziegler, James F
2016-09-07
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling-based approaches can control upload frequency directly; however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason for the high performance of the proposed approach.
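The ASIC algorithms themselves are not detailed in the abstract; the sketch below only illustrates the general idea of information-aware adaptive sampling: rarer (more informative) sensed values are kept with higher probability, scaled so the expected upload rate respects the frequency cap. Bin counts, the cap, and all names are illustrative assumptions.

# Minimal sketch (not the paper's ASIC algorithms) of information-aware
# adaptive sampling: rarer sensed values get a higher keep probability,
# and probabilities are scaled so the expected upload rate stays within
# an allowed fraction of the sensing rate. Parameters are illustrative.
import numpy as np

def sampling_probabilities(values, n_bins=10, max_upload_fraction=0.3):
    counts, edges = np.histogram(values, bins=n_bins)
    bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    freq = counts[bin_idx] / len(values)      # empirical probability of each value's bin
    raw = 1.0 / np.maximum(freq, 1e-9)        # rarer value -> larger weight (more information)
    p = raw / raw.sum() * (max_upload_fraction * len(values))
    return np.clip(p, 0.0, 1.0)               # per-sample keep probability

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sensed = rng.normal(70, 5, size=1000)     # e.g. heart-rate-like readings
    p = sampling_probabilities(sensed)
    keep = rng.random(len(sensed)) < p
    print("kept", int(keep.sum()), "of", len(sensed), "samples")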
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-01-01
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling-based approaches can control upload frequency directly; however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason for the high performance of the proposed approach. PMID:27589758
Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation
NASA Technical Reports Server (NTRS)
Tang, Adrian J.
2013-01-01
Aerial refueling technology for both manned and unmanned aircraft is critical for operations where extended aircraft flight time is required. Existing refueling assets are typically manned aircraft, which couple to a second aircraft through the use of a refueling boom. Alignment and mating of the two aircraft continue to rely on human control with the use of high-resolution cameras. With the recent advances in unmanned aircraft, it would be highly advantageous to remove/reduce human control from the refueling process, simplifying the amount of remote mission management and enabling new operational scenarios. Existing aerial refueling uses a camera, making it non-autonomous and prone to human error. Existing commercial localizer technology has proven robust and reliable, but it is not suited for aircraft-to-aircraft approaches such as aerial refueling scenarios since the resolution is too coarse (approximately one meter). A localizer approach system for aircraft-to-aircraft docking can be constructed using the same modulation with a millimeter-wave carrier to provide high resolution. One technology used to remotely align commercial aircraft on approach to a runway is the instrument landing system (ILS). The ILS has been in service within the U.S. for almost 50 years. In a commercial ILS, two partially overlapping beams of VHF (109 to 126 MHz) are broadcast from an antenna array so that their overlapping region defines the centerline of the runway. This is called a localizer system and is responsible for horizontal alignment of the approach. One beam is modulated with a 150-Hz tone, and the other with a 90-Hz tone. Through comparison of the modulation depths of both tones, an autopilot system aligns the approaching aircraft with the runway centerline. A similar system called a glide-slope (GS) exists in the 320-to-330 MHz band for vertical alignment of the approach. While this technology has been proven reliable for millions of commercial flights annually, its VHF/UHF nature limits it to the 1-to-2-meter precision associated with commercial runway width. A prototype ILS-type system operates at millimeter-wave frequencies to provide automatic and robust approach control for aerial refueling. The system allows for the coupling process to remain completely autonomous, as a boom operator is no longer required. Operating beyond 100 GHz provides enough resolution and a narrow enough beamwidth that an approach corridor of centimeter scales can be maintained. Two modules were used to accomplish this task. The first module is a localizer/glide-slope module that can be fitted on a refueling aircraft. This module provides the navigation beams for aligning the approaching aircraft. The second module is a navigational receiver, fitted onto the approaching aircraft to be refueled, that detects the approach beams. Since unmanned aircraft have a limited payload size and limited electrical power, the receiver portion was implemented in CMOS (complementary metal oxide semiconductor) technology based on a super-regenerative receiver (SRR) architecture. The SRR achieves mW-level power consumption and chip sizes less than 1 mm². While super-regenerative techniques have small bandwidths that limit use in communication systems, their advantages of high sensitivity, low complexity, and low power make them ideal in this situation where modulating tones of less than 1 kHz are used.
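The localizer principle described above (comparing the depth of modulation of the 90 Hz and 150 Hz tones) lends itself to a short worked sketch. The demodulation shortcut, gains, and sign convention below are assumptions for illustration, not the flight hardware's implementation.

# Illustrative sketch of the localizer principle: compare the depths of
# modulation of the 90 Hz and 150 Hz navigation tones and steer toward the
# null. The single-bin Fourier demodulation, the steering gain, and the
# sign convention are assumptions for demonstration only.
import numpy as np

def tone_amplitude(signal, fs, tone_hz):
    """Amplitude of one navigation tone via single-bin Fourier projection."""
    t = np.arange(len(signal)) / fs
    return 2.0 * abs(np.mean(signal * np.exp(-2j * np.pi * tone_hz * t)))

def ddm(signal, fs, carrier_amp=1.0):
    """Difference in depth of modulation between the 90 Hz and 150 Hz tones."""
    return (tone_amplitude(signal, fs, 90.0) - tone_amplitude(signal, fs, 150.0)) / carrier_amp

if __name__ == "__main__":
    fs, dur = 8000.0, 1.0
    t = np.arange(int(fs * dur)) / fs
    # Simulated baseband: slightly off-centerline, so the 90 Hz tone dominates.
    baseband = 0.22 * np.cos(2 * np.pi * 90 * t) + 0.18 * np.cos(2 * np.pi * 150 * t)
    error = ddm(baseband, fs)
    steer = -5.0 * error   # hypothetical proportional steering gain (deg per unit DDM)
    print(f"DDM = {error:+.3f}, steering command = {steer:+.2f} deg")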
Temporary brittle bone disease: fractures in medical care.
Paterson, Colin R
2009-12-01
Temporary brittle bone disease is the name given to a syndrome first reported in 1990, in which fractures occur in infants in the first year of life. The fractures include rib fractures and metaphyseal fractures which are mostly asymptomatic. The radiological features of this disorder mimic those often ascribed to typical non-accidental injury. The subject has been controversial, some authors suggesting that the disorder does not exist. This study reports five infants with typical features of temporary brittle bone disease in whom all or most of the fractures took place while in hospital. A non-accidental cause can be eliminated with some confidence, and these cases provide evidence in support of the existence of temporary brittle bone disease.
ERIC Educational Resources Information Center
Volpe, Robert J.; Gadow, Kenneth D.
2010-01-01
Rating scales developed to measure child emotional and behavioral problems typically are so long as to make their use in progress monitoring impractical in typical school settings. This study examined two methods of selecting items from existing rating scales to create shorter instruments for use in assessing response to intervention. The…
Potential land use adjustment for future climate change adaptation in revegetated regions.
Peng, Shouzhang; Li, Zhi
2018-05-22
To adapt to future climate change, appropriate land use patterns are desired. Potential natural vegetation (PNV), which emphasizes the dominant role of climate, can provide a useful baseline to guide potential land use adjustment. This work is particularly important for revegetated regions with intensive human perturbation, yet it has received little attention. This study chose China's Loess Plateau, a typical revegetated region, as an example study area to generate PNV patterns with high spatial resolution over 2071-2100 with a process-based dynamic vegetation model (LPJ-GUESS), and further investigated the potential land use adjustment through comparing the simulated and observed land use patterns. Compared with 1981-2010, the projected PNV over 2071-2100 would have less forest and more steppe because of a drier climate. Consequently, 25.3-55.0% of the observed forests and 79.3-91.9% of the observed grasslands in 2010 could be kept over 2071-2100, while the rest of the existing forested area and grassland would be more suitable for steppe and forest, respectively. To meet the request of China's Grain for Green Project, 60.9-84.8% of the existing steep farmland could be converted to grassland and the remainder to forest. Our results highlight the importance of adjusting the existing vegetation pattern to adapt to climate change. The research approach is extendable and provides a framework to evaluate the sustainability of the existing land use pattern under future climate. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
National programs such as Home Performance with ENERGY STAR(R) and numerous other utility air sealing programs have brought awareness to homeowners of the benefits of energy efficiency retrofits. Yet, these programs tend to focus on the low-hanging fruit: air sealing the thermal envelope and ductwork where accessible, switching to efficient lighting, and installing low-flow fixtures. At the other end of the spectrum, deep-energy retrofit programs are also being encouraged by various utilities across the country. While deep energy retrofits typically seek 50% energy savings, they are often quite costly and most applicable to gut-rehab projects. A significant potential for lowering energy usage in existing homes lies between the low-hanging fruit and deep energy retrofit approaches - retrofits that save approximately 30% in energy over the existing conditions. A key is to be non-intrusive with the efficiency measures so the retrofit projects can be accomplished in occupied homes. This cold climate retrofit project involved the design and optimization of a home in Connecticut that sought to improve energy savings by at least 30% (excluding solar PV) over the existing home's performance. This report documents the successful implementation of a cost-effective solution package that achieved performance greater than 30% over the pre-retrofit - what worked, what did not, and what improvements could be made. Achievement of 30% source energy savings over the pre-existing conditions was confirmed through energy modeling and comparison of the utility bills pre- and post-retrofit.
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.
1994-01-01
A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. Median difference for analyte concentration and deposition was typically 1.5-2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
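The precision measures described above can be made concrete with a short sketch: for each collocated pair, take the absolute difference and a relative difference (here, difference divided by the pair mean, one reasonable definition), then summarize by their medians. The example concentrations are invented, not NADP/NTN data.

# Sketch of the collocated precision measures: median absolute difference
# and median relative difference of paired samples. The denominator choice
# (pair mean) and the example numbers are assumptions for illustration.
import numpy as np

def collocated_precision(primary, duplicate):
    primary = np.asarray(primary, dtype=float)
    duplicate = np.asarray(duplicate, dtype=float)
    abs_diff = np.abs(primary - duplicate)
    pair_mean = (primary + duplicate) / 2.0
    rel_diff = abs_diff / np.where(pair_mean > 0, pair_mean, np.nan)
    return {
        "median_absolute_difference": float(np.nanmedian(abs_diff)),
        "median_relative_difference_pct": float(np.nanmedian(rel_diff) * 100.0),
    }

if __name__ == "__main__":
    # Hypothetical weekly sulfate concentrations (mg/L) from a collocated sampler pair.
    site = [1.10, 0.85, 2.30, 0.40, 1.75]
    dup = [1.05, 0.90, 2.20, 0.42, 1.80]
    print(collocated_precision(site, dup))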
Zheng, Jie; Gaunt, Tom R; Day, Ian N M
2013-01-01
Genome-Wide Association Studies (GWAS) frequently incorporate meta-analysis within their framework. However, conditional analysis of individual-level data, which is an established approach for fine mapping of causal sites, is often precluded where only group-level summary data are available for analysis. Here, we present a numerical and graphical approach, "sequential sentinel SNP regional association plot" (SSS-RAP), which estimates regression coefficients (beta) with their standard errors using the meta-analysis summary results directly. Under an additive model, typical for genes with small effect, the effect for a sentinel SNP can be transformed to the predicted effect for a possibly dependent SNP through a 2×2 two-SNP haplotype table. The approach assumes Hardy-Weinberg equilibrium for test SNPs. SSS-RAP is available as a Web-tool (http://apps.biocompute.org.uk/sssrap/sssrap.cgi). To develop and illustrate SSS-RAP, we analyzed lipid and ECG traits data from the British Women's Heart and Health Study (BWHHS), evaluated a meta-analysis for an ECG trait and presented several simulations. We compared results with existing approaches such as model selection methods and conditional analysis. Generally, findings were consistent. SSS-RAP represents a tool for testing independence of SNP association signals using meta-analysis data, and is also a convenient approach based on biological principles for fine mapping in group-level summary data. © 2012 Blackwell Publishing Ltd/University College London.
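As a hedged sketch of the kind of transformation the abstract describes (and not necessarily SSS-RAP's exact formula), the textbook linkage-disequilibrium result can be used: under an additive model and Hardy-Weinberg equilibrium, if the trait effect acts only through the sentinel SNP, the expected marginal effect at a correlated test SNP is beta_test = beta_sentinel * D / (p_test * (1 - p_test)), with D the LD coefficient from the two-SNP haplotype table. All haplotype frequencies below are hypothetical.

# Illustration only: predicted marginal effect at a test SNP from the
# sentinel SNP's effect via the standard LD relationship. Haplotype
# frequencies and the effect size are invented placeholders.

def ld_coefficient(haplotype_freqs):
    """Return (D, p_sentinel, p_test) from the four two-SNP haplotype frequencies."""
    f_ab = haplotype_freqs[("A", "B")]
    p_a = f_ab + haplotype_freqs[("A", "b")]   # frequency of allele A at the sentinel SNP
    p_b = f_ab + haplotype_freqs[("a", "B")]   # frequency of allele B at the test SNP
    return f_ab - p_a * p_b, p_a, p_b

def predicted_beta(beta_sentinel, haplotype_freqs):
    d, _p_sentinel, p_test = ld_coefficient(haplotype_freqs)
    return beta_sentinel * d / (p_test * (1.0 - p_test))

if __name__ == "__main__":
    # Hypothetical 2x2 two-SNP haplotype table (allele A at sentinel, B at test SNP).
    haps = {("A", "B"): 0.25, ("A", "b"): 0.05, ("a", "B"): 0.15, ("a", "b"): 0.55}
    print("predicted beta at test SNP:", round(predicted_beta(0.12, haps), 4))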
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
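The core idea (latent feature weights optimized jointly with a classifier) can be sketched in a toy form; the paper's actual formulation with additive kernels, region-level latent variables, and a reduced-gradient solver is richer, so everything below is an illustrative simplification with hypothetical parameters.

# Toy sketch of the core idea only: learn latent non-negative feature
# weights v jointly with linear classifier weights w by subgradient descent
# on a hinge loss over v-weighted features. Not the paper's formulation.
import numpy as np

def joint_feature_selection(X, y, lam=0.1, lr=0.01, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    v = np.ones(d)                        # latent feature weights (importance)
    for _ in range(epochs):
        margins = y * ((X * v) @ w)
        active = margins < 1.0            # samples violating the margin
        grad_w = lam * w - (y[active, None] * (X[active] * v)).sum(axis=0) / n
        grad_v = -(y[active, None] * X[active] * w).sum(axis=0) / n
        w -= lr * grad_w
        v = np.clip(v - lr * grad_v, 0.0, None)   # keep feature weights non-negative
    return w, v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200))  # features 0 and 1 are informative
    w, v = joint_feature_selection(X, y)
    print("learned feature weights v:", np.round(v, 2))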
An historical framework for psychiatric nosology
Kendler, K. S.
2009-01-01
This essay, which seeks to provide an historical framework for our efforts to develop a scientific psychiatric nosology, begins by reviewing the classificatory approaches that arose in the early history of biological taxonomy. Initial attempts at species definition used top-down approaches advocated by experts and based on a few essential features of the organism chosen a priori. This approach was subsequently rejected on both conceptual and practical grounds and replaced by bottom-up approaches making use of a much wider array of features. Multiple parallels exist between the beginnings of biological taxonomy and psychiatric nosology. Like biological taxonomy, psychiatric nosology largely began with ‘expert’ classifications, typically influenced by a few essential features, articulated by one or more great 19th-century diagnosticians. Like biology, psychiatry is struggling toward more soundly based bottom-up approaches using diverse illness characteristics. The underemphasized historically contingent nature of our current psychiatric classification is illustrated by recounting the history of how ‘Schneiderian’ symptoms of schizophrenia entered into DSM-III. Given these historical contingencies, it is vital that our psychiatric nosologic enterprise be cumulative. This can be best achieved through a process of epistemic iteration. If we can develop a stable consensus in our theoretical orientation toward psychiatric illness, we can apply this approach, which has one crucial virtue. Regardless of the starting point, if each iteration (or revision) improves the performance of the nosology, the eventual success of the nosologic process, to optimally reflect the complex reality of psychiatric illness, is assured. PMID:19368761
A Modest Proposal for Improving the Education of Reading Teachers. Technical Report No. 487.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A gap exists between talk about teaching that is featured in most preservice teacher education and the working knowledge and problem-solving expertise that characterize skilled teaching. This gap exists because typical teacher training does not embody the principles of modeling, coaching, scaffolding, articulation, and reflection. Three methods…
Passafiume, Marco; Maddio, Stefano; Cidronali, Alessandro
2017-03-29
Because a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach is to implement localization capabilities as an additional application layer on top of the communication protocol stack. In application scenarios where satellite-based positioning is denied, such as indoor environments, and with data packet arrival-time measurements excluded due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained through the IEEE 802.11 and 802.15.4 data access technologies, are the only data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI-based localization systems have been introduced and experimentally validated; nevertheless, they require periodic calibrations and significant information fusion from different sensors, which dramatically decrease overall system reliability and effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based, calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is the creation of an algorithmic layer, for use with the pre-existing hardware, capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-meter overall mean error and almost 100% site coverage within a 1 m localization error.
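For orientation only, the sketch below shows one generic calibration-free way to turn RSSI readings into a position estimate (a weighted centroid of anchor positions); it is not the paper's switched-beam algorithm, and the anchor layout, readings, and exponent are invented.

# Generic illustration of calibration-free RSSI-based positioning via a
# weighted centroid of anchor positions. This is NOT the switched-beam
# algorithm of the paper; all values below are invented for demonstration.
import numpy as np

def weighted_centroid(anchor_xy, rssi_dbm, exponent=2.0):
    """Estimate position as a centroid of anchors weighted by received power."""
    power_mw = 10.0 ** (np.asarray(rssi_dbm) / 10.0)   # convert dBm to linear scale
    weights = power_mw ** (exponent / 2.0)
    weights = weights / weights.sum()
    return weights @ np.asarray(anchor_xy, dtype=float)

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # anchor positions (m)
    rssi = [-52.0, -60.0, -71.0, -63.0]   # stronger signal -> closer anchor
    x, y = weighted_centroid(anchors, rssi)
    print(f"estimated position: ({x:.2f}, {y:.2f}) m")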
Passafiume, Marco; Maddio, Stefano; Cidronali, Alessandro
2017-01-01
Because a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach is to implement localization capabilities as an additional application layer on top of the communication protocol stack. In application scenarios where satellite-based positioning is denied, such as indoor environments, and with data packet arrival-time measurements excluded due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained through the IEEE 802.11 and 802.15.4 data access technologies, are the only data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI-based localization systems have been introduced and experimentally validated; nevertheless, they require periodic calibrations and significant information fusion from different sensors, which dramatically decrease overall system reliability and effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based, calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is the creation of an algorithmic layer, for use with the pre-existing hardware, capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-meter overall mean error and almost 100% site coverage within a 1 m localization error. PMID:28353676
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
MOSAIC--A Modular Approach to Data Management in Epidemiological Studies.
Bialke, M; Bahls, T; Havemann, C; Piegsa, J; Weitmann, K; Wegner, T; Hoffmann, W
2015-01-01
In the context of an increasing number of multi-centric studies providing data from different sites and sources the necessity for central data management (CDM) becomes undeniable. This is exacerbated by a multiplicity of featured data types, formats and interfaces. In relation to methodological medical research the definition of central data management needs to be broadened beyond the simple storage and archiving of research data. This paper highlights typical requirements of CDM for cohort studies and registries and illustrates how orientation for CDM can be provided by addressing selected data management challenges. Therefore in the first part of this paper a short review summarises technical, organisational and legal challenges for CDM in cohort studies and registries. A deduced set of typical requirements of CDM in epidemiological research follows. In the second part the MOSAIC project is introduced (a modular systematic approach to implement CDM). The modular nature of MOSAIC contributes to manage both technical and organisational challenges efficiently by providing practical tools. A short presentation of a first set of tools, aiming for selected CDM requirements in cohort studies and registries, comprises a template for comprehensive documentation of data protection measures, an interactive reference portal for gaining insights and sharing experiences, supplemented by modular software tools for generation and management of generic pseudonyms, for participant management and for sophisticated consent management. Altogether, work within MOSAIC addresses existing challenges in epidemiological research in the context of CDM and facilitates the standardized collection of data with pre-programmed modules and provided document templates. The necessary effort for in-house programming is reduced, which accelerates the start of data collection.
The circuit architecture of whole brains at the mesoscopic scale.
Mitra, Partha P
2014-09-17
Vertebrate brains of even moderate size are composed of astronomically large numbers of neurons and show a great degree of individual variability at the microscopic scale. This variation is presumably the result of phenotypic plasticity and individual experience. At a larger scale, however, relatively stable species-typical spatial patterns are observed in neuronal architecture, e.g., the spatial distributions of somata and axonal projection patterns, probably the result of a genetically encoded developmental program. The mesoscopic scale of analysis of brain architecture is the transitional point between a microscopic scale where individual variation is prominent and the macroscopic level where a stable, species-typical neural architecture is observed. The empirical existence of this scale, implicit in neuroanatomical atlases, combined with advances in computational resources, makes studying the circuit architecture of entire brains a practical task. A methodology has previously been proposed that employs a shotgun-like grid-based approach to systematically cover entire brain volumes with injections of neuronal tracers. This methodology is being employed to obtain mesoscale circuit maps in mouse and should be applicable to other vertebrate taxa. The resulting large data sets raise issues of data representation, analysis, and interpretation, which must be resolved. Even for data representation the challenges are nontrivial: the conventional approach using regional connectivity matrices fails to capture the collateral branching patterns of projection neurons. Future success of this promising research enterprise depends on the integration of previous neuroanatomical knowledge, partly through the development of suitable computational tools that encapsulate such expertise. Copyright © 2014 Elsevier Inc. All rights reserved.
Ho, Hung Chak; Knudby, Anders; Xu, Yongming; Hodul, Matus; Aminipouri, Mehdi
2016-02-15
Apparent temperature is more closely related to mortality during extreme heat events than other temperature variables, yet spatial epidemiology studies typically use skin temperature (also known as land surface temperature) to quantify heat exposure because it is relatively easy to map from satellite data. An empirical approach to map apparent temperature at the neighborhood scale, which relies on publicly available weather station observations and spatial data layers combined in a random forest regression model, was demonstrated for greater Vancouver, Canada. Model errors were acceptable (cross-validated RMSE=2.04 °C) and the resulting map of apparent temperature, calibrated for a typical hot summer day, corresponded well with past temperature research in the area. A comparison with field measurements as well as similar maps of skin temperature and air temperature revealed that skin temperature was poorly correlated with both air temperature (R²=0.38) and apparent temperature (R²=0.39). While the latter two were more similar (R²=0.87), apparent temperature was predicted to exceed air temperature by more than 5 °C in several urban areas as well as around the confluence of the Pitt and Fraser rivers. We conclude that skin temperature is not a suitable proxy for human heat exposure, and that spatial epidemiology studies could benefit from mapping apparent temperature, using an approach similar to the one reported here, to better quantify differences in heat exposure that exist across an urban landscape. Copyright © 2015 Elsevier B.V. All rights reserved.
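A minimal sketch of the mapping approach described above follows: fit a random forest that predicts station-observed apparent temperature from spatial predictor layers, then apply the fitted model to every grid cell. The feature names and the synthetic data are placeholders, not the study's actual predictors.

# Minimal sketch of the empirical mapping approach: random forest regression
# of apparent temperature on spatial predictors sampled at station locations.
# Predictors and data below are synthetic placeholders for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stations = 120
# Hypothetical predictor layers sampled at station locations.
X = np.column_stack([
    rng.uniform(0, 1, n_stations),      # e.g. vegetation fraction
    rng.uniform(0, 300, n_stations),    # e.g. elevation (m)
    rng.uniform(0, 1, n_stations),      # e.g. impervious surface fraction
])
# Synthetic apparent temperature (deg C) with noise, for demonstration only.
y = 32 - 4 * X[:, 0] - 0.01 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 1, n_stations)

model = RandomForestRegressor(n_estimators=300, random_state=0)
rmse = -cross_val_score(model, X, y, scoring="neg_root_mean_squared_error", cv=5).mean()
print(f"cross-validated RMSE: {rmse:.2f} deg C")
model.fit(X, y)
# model.predict(grid_features) would then be applied to the rasterized predictor layers
# to produce the neighborhood-scale apparent temperature map.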
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory
Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721
Why using genetics to address welfare may not be a good idea.
Thompson, P B
2010-04-01
Welfare of animals in livestock production systems is now widely defined in terms of 3 classes of measures: veterinary health, mental well-being (or feelings), and natural behaviors. Several well-documented points of tension exist among welfare indicators in these 3 classes. Strategies that aim to improve welfare using genetics can increase resistance to disease and may also be able to relieve stress or injury. One strategy is to reduce the genetic proclivity of the bird to engage in behaviors that are frustrated in modern production systems. Another is to develop strains less prone to behaviors hurtful to other hens. Yet another is to make overall temperament a goal for genetic adjustments. These genetic approaches may score well in terms of veterinary and psychological well-being. Yet they also involve changes in behavioral repertoire and tendencies of the resulting bird. Although it has seemed reasonable to argue that such animals are better off than frustrated or injured animals reflecting more species-typical behaviors, there is a point of view that holds that modification of a species-typical trait is ipso facto a decline in the well-being of the animal. Additionally, a significant amount of anecdotal evidence has been accumulated that suggests that many animal advocates and members of the public find manipulation of genetics to be an ethically unacceptable approach to animal welfare, especially when modifications in the environment could also be a response to welfare problems. Hence, though promising from one perspective, genetic strategies to improve welfare may not be acceptable to the public.
Decision aids for multiple-decision disease management as affected by weather input errors.
Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D
2011-06-01
Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.
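The error-analysis idea described above can be sketched generically: compute a decision indicator from high-quality weather inputs and again from inputs perturbed with bias and random error, then count how often the recommended action changes. The risk index, threshold, and weather series below are invented placeholders, not the hop powdery mildew or stem rust models.

# Generic sketch of a weather-input error analysis for a decision aid.
# The risk index and threshold are hypothetical, purely for illustration.
import numpy as np

def risk_index(temp_c, wetness_hours):
    """Hypothetical daily infection-risk index (placeholder formula)."""
    return np.clip((temp_c - 10.0) / 15.0, 0, 1) * np.clip(wetness_hours / 12.0, 0, 1)

def spray_decision(temp_c, wetness_hours, threshold=0.5):
    return risk_index(temp_c, wetness_hours) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    days = 120
    temp = rng.normal(20, 5, days)      # "true" on-site temperatures (deg C)
    wet = rng.uniform(0, 16, days)      # "true" leaf-wetness hours
    base = spray_decision(temp, wet)
    for bias, sd in [(0.0, 1.0), (2.0, 0.0), (2.0, 2.0)]:
        temp_err = temp + bias + rng.normal(0, sd, days)   # biased/noisy weather input
        changed = np.mean(spray_decision(temp_err, wet) != base)
        print(f"bias={bias} C, sd={sd} C -> decisions changed on {changed:.0%} of days")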
Evaluative Conditioning Can Be Modulated by Memory of the CS-US Pairings at the Time of Testing
ERIC Educational Resources Information Center
Gast, Anne; De Houwer, Jan; De Schryver, Maarten
2012-01-01
Evaluative conditioning (EC) is the valence change of a (typically neutral) stimulus (CS) that is due to the previous pairing with another (typically valent) stimulus (US). It has been repeatedly shown that EC effects are stronger or existent only if participants know which US was paired with which CS. Knowledge of the CS-US pairings is usually…
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
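The Pareto-optimal ranking used by PMOGO methods can be illustrated with a short sketch: among candidate models, keep those not dominated in all objectives (for example, misfit to one geophysical data set and misfit to another). The random objective values are placeholders; this is not the authors' inversion code.

# Sketch of Pareto-front extraction over two minimized objectives, as used
# conceptually in PMOGO approaches. Candidate objective values are random
# placeholders for demonstration.
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows (all objectives minimized)."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, row in enumerate(obj):
        # Row i is dominated if some other row is <= in every objective and < in at least one.
        dominated = np.any(np.all(obj <= row, axis=1) & np.any(obj < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.random((50, 2))     # columns: misfit to data set 1, misfit to data set 2
    front = pareto_front(candidates)
    print("non-dominated candidate indices:", front)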
Sub-Grid Modeling of Electrokinetic Effects in Micro Flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
2005-01-01
Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate such microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows pose a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be hundreds of micrometers). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using "slip" wall velocity and an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can still be accounted for. With this approach, the numerical grid size can be much larger than the thickness of the double layer. Presented in this report are a description of the approach, the methodology for implementation, and several validation simulations for micro flows.
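To see why resolving the EDL is so costly, consider the Debye length, a standard estimate of the double-layer thickness. The sketch below is an illustrative calculation, not part of the report's sub-grid model: physical constants are standard values, while the electrolyte concentrations and channel width are assumed.

```python
# Compare an estimated double-layer (Debye) thickness with an assumed channel width.
import numpy as np

EPS0 = 8.854e-12   # F/m, vacuum permittivity
KB = 1.381e-23     # J/K, Boltzmann constant
E = 1.602e-19      # C, elementary charge
NA = 6.022e23      # 1/mol, Avogadro's number

def debye_length(ionic_strength_mM, eps_r=78.5, T=298.0):
    """Debye length (m) for a symmetric 1:1 electrolyte; 1 mM = 1 mol/m^3."""
    I = ionic_strength_mM * 1.0
    return np.sqrt(eps_r * EPS0 * KB * T / (2.0 * NA * E**2 * I))

channel_width = 100e-6  # m, an assumed 100-micrometer channel
for c in (0.1, 1.0, 10.0):  # mM
    lam = debye_length(c)
    print(f"{c:5.1f} mM: lambda_D = {lam*1e9:5.1f} nm, "
          f"width/lambda_D = {channel_width/lam:,.0f}")
```

The ratio of channel width to Debye length runs into the thousands, which is the number of cells a brute-force grid would need across each wall layer.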
Degradation of organic pollutants by Vacuum-Ultraviolet (VUV): Kinetic model and efficiency.
Xie, Pengchao; Yue, Siyang; Ding, Jiaqi; Wan, Ying; Li, Xuchun; Ma, Jun; Wang, Zongping
2018-04-15
Vacuum-Ultraviolet (VUV), an efficient and green method to produce hydroxyl radicals (•OH), is effective in degrading numerous organic contaminants in aqueous solution. Here, we propose an effective and simple kinetic model to describe the degradation of organic pollutants in a VUV system, treating the •OH-scavenging effect of the organic intermediates formed as that of the co-existing organic matter as a whole. Using benzoic acid (BA) as a •OH probe, •OH was shown to be vital for pollutant degradation in the VUV system, and the model thus developed successfully predicted its degradation kinetics under different conditions. Effects of typical influencing factors such as BA concentration and UV intensity were investigated quantitatively with the model. Temperature was found to be an important influencing factor in the VUV system, and the quantum yield of •OH showed a positive linear dependence on temperature. Impacts of humic acid (HA), alkalinity, chloride, and water matrices (realistic waters) on the oxidation efficiency were also examined. BA degradation was significantly inhibited by HA due to its scavenging of •OH, but was influenced much less by alkalinity and chloride; high oxidation efficiency was still obtained in the realistic water. The degradation kinetics of three other typical micropollutants, bisphenol A (BPA), nitrobenzene (NB) and dimethyl phthalate (DMP), and of a mixture of co-existing BA, BPA and DMP were further studied, and the developed model predicted the experimental data well, especially in realistic water. It is expected that this study will provide an effective approach to predict the degradation of organic micropollutants by the promising VUV system and broaden the application of VUV systems in water treatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
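A generic sketch of the kind of •OH competition kinetics such a model rests on is given below. It is not the authors' fitted model: the •OH formation rate and the humic acid scavenging term are assumed values, and only the literature rate constant for BA + •OH is a known quantity.

```python
# Hedged sketch of pseudo-first-order •OH competition kinetics: a steady-state
# •OH concentration set by formation vs. scavenging, then exponential probe decay.
import numpy as np

def steady_state_OH(r_OH, k_probe, c_probe, scavengers):
    """[•OH]ss = (•OH formation rate) / (sum of first-order scavenging terms)."""
    demand = k_probe * c_probe + sum(k_s * c_s for k_s, c_s in scavengers)
    return r_OH / demand

def degrade(c0, k_probe, t, oh_ss):
    """Pseudo-first-order decay: c(t) = c0 * exp(-k_probe * [•OH]ss * t)."""
    return c0 * np.exp(-k_probe * oh_ss * t)

k_BA = 5.9e9                 # M^-1 s^-1, literature rate constant for BA + •OH
c_BA = 5e-6                  # M, assumed probe concentration
r_OH = 1e-9                  # M s^-1, assumed VUV •OH formation rate
scavengers = [(2.5e4, 2.0)]  # assumed: (L mgC^-1 s^-1, mgC/L of humic acid)

oh_ss = steady_state_OH(r_OH, k_BA, c_BA, scavengers)
t = np.linspace(0, 600, 7)   # seconds
print(degrade(c_BA, k_BA, t, oh_ss))
```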
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines. A real-gas model and film-cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver by rendering it able to accurately evaluate the derivatives of the time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two variants are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite-difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
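The complex Taylor's series expansion is easiest to see in a small standalone sketch, independent of the flow solver described above: perturb the input along the imaginary axis and read the derivative from the imaginary part of the output, which avoids the subtractive cancellation that limits finite differences. The test function below is a standard illustration, not a turbine quantity.

```python
# Complex-step (CTSE) derivative vs. forward finite difference.
import numpy as np

def complex_step(f, x, h=1e-30):
    """CTSE derivative: f'(x) ~ Im f(x + i h) / h, no subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

def forward_diff(f, x, h=1e-8):
    """First-order finite difference, limited by subtractive cancellation."""
    return (f(x + h) - f(x)) / h

# A function often used to demonstrate the complex-step method.
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5
print(complex_step(f, x0))   # accurate to machine precision
print(forward_diff(f, x0))   # accurate to roughly half the digits
```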
Kleftogiannis, Dimitrios; Korfiati, Aigli; Theofilatos, Konstantinos; Likothanassis, Spiros; Tsakalidis, Athanasios; Mavroudi, Seferina
2013-06-01
Traditional biology was forced to restate some of its principles when the microRNA (miRNA) genes and their regulatory role were first discovered. Typically, miRNAs are small non-coding RNA molecules which have the ability to bind to the 3' untranslated region (UTR) of their mRNA target genes for cleavage or translational repression. Existing experimental techniques for their identification and for the prediction of their target genes share some important limitations such as low coverage, time-consuming experiments and high-cost reagents. Hence, many computational methods have been proposed for these tasks to overcome these limitations. Recently, many researchers have emphasized the development of computational approaches to predict the participation of miRNA genes in regulatory networks and to analyze their transcription mechanisms. All these approaches have certain advantages and disadvantages which are described in the present survey. Our work is differentiated from existing review papers by updating the list of methodologies and emphasizing the computational issues that arise from miRNA data analysis. Furthermore, in the present survey, the various miRNA data analysis steps are treated as an integrated procedure whose aim is to uncover the regulatory role and mechanisms of the miRNA genes. This integrated view of the miRNA data analysis steps may be extremely useful for all researchers, even if they work on just a single step. Copyright © 2013 Elsevier Inc. All rights reserved.
Outcomes from a pilot study using computer-based rehabilitative tools in a military population.
Sullivan, Katherine W; Quinn, Julia E; Pramuka, Michael; Sharkey, Laura A; French, Louis M
2012-01-01
Novel therapeutic approaches and outcome data are needed for cognitive rehabilitation for patients with a traumatic brain injury; computer-based programs may play a critical role in filling existing knowledge gaps. Brain-fitness computer programs can complement existing therapies, maximize neuroplasticity, provide treatment beyond the clinic, and deliver objective efficacy data. However, these approaches have not been extensively studied in the military and traumatic brain injury population. Walter Reed National Military Medical Center established its Brain Fitness Center (BFC) in 2008 as an adjunct to traditional cognitive therapies for wounded warriors. The BFC offers commercially available "brain-training" products for military Service Members to use in a supportive, structured environment. Over 250 Service Members have utilized this therapeutic intervention. Each patient receives subjective assessments pre and post BFC participation including the Mayo-Portland Adaptability Inventory-4 (MPAI-4), the Neurobehavioral Symptom Inventory (NBSI), and the Satisfaction with Life Scale (SWLS). A review of the first 29 BFC participants, who finished initial and repeat measures, was completed to determine the effectiveness of the BFC program. Two of the three questionnaires of self-reported symptom change completed before and after participation in the BFC revealed a statistically significant reduction in symptom severity based on MPAI and NBSI total scores (p < .05). There were no significant differences in the SWLS score. Despite the typical limitations of a retrospective chart review, such as variation in treatment procedures, preliminary results reveal a trend towards improved self-reported cognitive and functional symptoms.
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry. This is because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while it is still on the production line. However, TNIRS has a narrow spectral range and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties of the tableting process need to be analyzed with a multivariate prediction model, such as Partial Least Squares Regression (PLSR) modeling. One issue is that typical approaches rely on several hundred reference samples as the basis of the method, rather than on a strategically designed calibration set. This means that many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
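As an illustration of the kind of multivariate model referred to above, the sketch below fits a PLSR calibration to synthetic "spectra" with scikit-learn; the spectral shapes, concentration range and component count are assumptions for demonstration, not the authors' TNIRS design.

```python
# Hedged sketch: PLSR calibration of tablet content from synthetic NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavelengths = np.linspace(800, 1100, 200)      # nm, a narrow NIR window (assumed)
concs = rng.uniform(90, 110, size=60)          # % of label claim, designed span

# Synthetic absorbance: one broad API band plus baseline drift and noise.
band = 0.01 * np.exp(-0.5 * ((wavelengths - 960) / 25) ** 2)
X = (np.outer(concs, band)
     + rng.normal(0, 0.005, (60, 200))         # measurement noise
     + rng.normal(0, 0.05, (60, 1)))           # random baseline offsets
y = concs

pls = PLSRegression(n_components=3)
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))  # cross-validated fit quality
```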
Ahmad, Ayesha
2014-02-01
Whilst there have been serious attempts to locate the practice of male circumcision for religious motives in the context of the (respective) religion's narrative and community, the debate, when referring to a clinical context, is often more nuanced. This article will contribute further to the debate by contextualising the Islamic practice of male circumcision within the clinical setting typical of a contemporary hospital. It specifically develops an additional complication; namely, the child has a pre-existing blood disorder. As an approach to contributing to the circumcision debate further, the ethics of a conscientious objection for secular motives towards a religiously-motivated clinical intervention will be explored. Overall, the discussion will provide relevance for such debates within the value-systems of a multi-cultural society. This article replicates several approaches to deconstructing a request for conscientious refusal of non-therapeutic circumcision by a Clinical Ethics Committee (CEC), bringing to light certain contradictions that occur in normatively categorizing motives for performing the circumcision. © 2013 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Eagleson, P. S.
1985-01-01
Research activities conducted from February 1, 1985 to July 31, 1985 and preliminary conclusions regarding research objectives are summarized. The objective is to determine the feasibility of using LANDSAT data to estimate effective hydraulic properties of soils. The general approach is to apply the climatic-climax hypothesis (Eagleson, 1982) to natural water-limited vegetation systems using canopy cover estimated from LANDSAT data. Natural water-limited systems typically consist of inhomogeneous vegetation canopies interspersed with bare soils. The ground resolution associated with one pixel from LANDSAT MSS (or TM) data is generally greater than the scale of the plant canopy or canopy clusters. Thus a method for resolving percent canopy cover at a subpixel level must be established before the Eagleson hypothesis can be tested. Two formulations are proposed which extend existing methods of analyzing mixed pixels to naturally vegetated landscapes. The first method involves use of the normalized vegetation index. The second approach is a physical model based on radiative transfer principles. Both methods are to be analyzed for their feasibility on selected sites.
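A minimal sketch of the first formulation, estimating sub-pixel canopy cover from the normalized vegetation index through a two-end-member linear mixture, is given below; the soil and full-canopy NDVI end members and the band reflectances are assumed values, not results from the study.

```python
# Hedged sketch: fractional canopy cover from NDVI via linear spectral unmixing.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def canopy_fraction(nir, red, ndvi_soil=0.15, ndvi_canopy=0.80):
    """Linear mixture model: pixel NDVI = f*NDVI_canopy + (1 - f)*NDVI_soil."""
    f = (ndvi(nir, red) - ndvi_soil) / (ndvi_canopy - ndvi_soil)
    return np.clip(f, 0.0, 1.0)

# Example MSS-like band reflectances for a partially vegetated pixel (assumed).
print(canopy_fraction(nir=0.32, red=0.14))
```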
Multi-perspective analysis and spatiotemporal mapping of air pollution monitoring data.
Kolovos, Alexander; Skupin, André; Jerrett, Michael; Christakos, George
2010-09-01
Space-time data analysis and assimilation techniques in atmospheric sciences typically consider input from monitoring measurements. The input is often processed in a manner that acknowledges characteristics of the measurements (e.g., underlying patterns, fluctuation features) under conditions of uncertainty; it also leads to the derivation of secondary information that serves study-oriented goals, and provides input to space-time prediction techniques. We present a novel approach that blends a rigorous space-time prediction model (Bayesian maximum entropy, BME) with a cognitively informed visualization of high-dimensional data (spatialization). The combined BME and spatialization approach (BME-S) is used to study monthly averaged NO2 and mean annual SO4 measurements in California over the 15-year period 1988-2002. Using the original scattered measurements of these two pollutants BME generates spatiotemporal predictions on a regular grid across the state. Subsequently, the prediction network undergoes the spatialization transformation into a lower-dimensional geometric representation, aimed at revealing patterns and relationships that exist within the input data. The proposed BME-S provides a powerful spatiotemporal framework to study a variety of air pollution data sources.
Computer-Aided Drug Design Methods.
Yu, Wenbo; MacKerell, Alexander D
2017-01-01
Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.
NASA Astrophysics Data System (ADS)
Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.
2016-09-01
Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD such as probabilistic neural networks and fully informed particle swarms have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models to the observed optical response. We also empirically demonstrate a first-order approach to map household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, which demonstrates measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.
Conceptual and methodological challenges to integrating SEA and cumulative effects assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunn, Jill, E-mail: jill.gunn@usask.c; Noble, Bram F.
The constraints to assessing and managing cumulative environmental effects in the context of project-based environmental assessment are well documented, and the potential benefits of a more strategic approach to cumulative effects assessment (CEA) are well argued; however, such benefits have yet to be clearly demonstrated in practice. While it is widely assumed that cumulative effects are best addressed in a strategic context, there has been little investigation as to whether CEA and strategic environmental assessment (SEA) are a 'good fit' - conceptually or methodologically. This paper identifies a number of conceptual and methodological challenges to the integration of CEA and SEA. Based on results of interviews with international experts and practitioners, this paper demonstrates that: definitions and conceptualizations of CEA are typically weak in practice; approaches to effects aggregation vary widely; a systems perspective is lacking in both SEA and CEA; the multifarious nature of SEA complicates CEA; tiering arrangements between SEA and project-based assessment are limited to non-existent; and the relationship of SEA to regional planning remains unclear.
Racial Inequality in Education in Brazil: A Twins Fixed-Effects Approach.
Marteleto, Letícia J; Dondero, Molly
2016-08-01
Racial disparities in education in Brazil (and elsewhere) are well documented. Because this research typically examines educational variation between individuals in different families, however, it cannot disentangle whether racial differences in education are due to racial discrimination or to structural differences in unobserved neighborhood and family characteristics. To address this common data limitation, we use an innovative within-family twin approach that takes advantage of the large sample of Brazilian adolescent twins classified as different races in the 1982 and 1987-2009 Pesquisa Nacional por Amostra de Domicílios. We first examine the contexts within which adolescent twins in the same family are labeled as different races to determine the characteristics of families crossing racial boundaries. Then, as a way to hold constant shared unobserved and observed neighborhood and family characteristics, we use twins fixed-effects models to assess whether racial disparities in education exist between twins and whether such disparities vary by gender. We find that even under this stringent test of racial inequality, the nonwhite educational disadvantage persists and is especially pronounced for nonwhite adolescent boys.
Handwritten word preprocessing for database adaptation
NASA Astrophysics Data System (ADS)
Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic
2013-01-01
Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level, ...). Since this is not often the case in real-world scenarios, classification performance can be affected when novel data are presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization, are considered. The advantage of such an approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database. We adapt either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Data set adaptation increases accuracy by 2% to 3% in absolute terms over no adaptation.
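A hedged sketch of one of the two preprocessing steps, pixel intensity normalization, is given below; the percentile-based estimates of ink and background levels are an assumption used for illustration, not necessarily the authors' exact procedure.

```python
# Hedged sketch: rescale a grayscale word image so ink and background levels
# match target values, so one data set better resembles the other.
import numpy as np

def normalize_intensity(img, target_bg=255.0, target_ink=0.0):
    """Map the image's background (bright) and ink (dark) levels onto targets."""
    img = img.astype(float)
    bg = np.percentile(img, 95)   # assumed to sample mostly paper background
    ink = np.percentile(img, 5)   # assumed to sample mostly ink strokes
    scale = (target_bg - target_ink) / max(bg - ink, 1e-6)
    out = (img - ink) * scale + target_ink
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: normalized = normalize_intensity(word_image) before feature extraction.
```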
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.
This study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time-dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for the global GMRES smoother) than the DD ILU smoother.
Improvement of information fusion-based audio steganalysis
NASA Astrophysics Data System (ADS)
Kraetzer, Christian; Dittmann, Jana
2010-01-01
In this paper we extend an existing information-fusion-based audio steganalysis approach with three different kinds of evaluations. The first evaluation addresses the so far neglected case of sensor-level fusion. Our results show that this fusion removes content dependency while achieving classification rates similar to those of single classifiers (especially for the considered global features) on the three audio data hiding algorithms tested as examples. The second evaluation extends the observations on fusion from segmental features alone to combinations of segmental and global features, reducing the computational complexity required for testing by about two orders of magnitude while maintaining the same degree of accuracy. The third evaluation aims to build a basis for estimating the plausibility of the introduced steganalysis approach by measuring the sensitivity of the models used in supervised classification of steganographic material to typical signal modification operations such as de-noising or 128 kbit/s MP3 encoding. Our results show that for some of the tested classifiers the probability of false alarms rises dramatically after such modifications.
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
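The key point above, that the CP gradient can be assembled from the same matricized-tensor-times-Khatri-Rao-product (MTTKRP) terms used by ALS, can be sketched for a third-order tensor as follows; the dimensions and the random test tensor are illustrative, and this is a plain restatement of the standard CP gradient rather than the paper's optimized implementation.

```python
# Hedged sketch: CP loss and gradients for a 3-way tensor from MTTKRP + Gram products.
import numpy as np

def cp_loss_and_grads(X, A, B, C):
    """0.5 * ||X - [[A, B, C]]||_F^2 and its gradients w.r.t. the factor matrices."""
    # MTTKRP terms: X_(n) times the Khatri-Rao product of the other two factors.
    mA = np.einsum('ijk,jr,kr->ir', X, B, C)
    mB = np.einsum('ijk,ir,kr->jr', X, A, C)
    mC = np.einsum('ijk,ir,jr->kr', X, A, B)
    GA, GB, GC = A.T @ A, B.T @ B, C.T @ C           # R x R Gram matrices
    loss = 0.5 * (np.einsum('ijk,ijk->', X, X)
                  - 2.0 * np.sum(mA * A)
                  + np.sum(GA * GB * GC))
    gA = A @ (GB * GC) - mA
    gB = B @ (GA * GC) - mB
    gC = C @ (GA * GB) - mC
    return loss, (gA, gB, gC)

rng = np.random.default_rng(1)
I, J, K, R = 8, 7, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C)              # exact rank-R tensor
loss, grads = cp_loss_and_grads(X, A, B, C)
print(loss, [np.linalg.norm(g) for g in grads])      # ~0 at the true factors
```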
Minimal Prospects for Radio Detection of Extensive Air Showers in the Atmosphere of Jupiter
NASA Astrophysics Data System (ADS)
Bray, J. D.; Nelles, A.
2016-07-01
One possible approach for detecting ultra-high-energy cosmic rays and neutrinos is to search for radio emission from extensive air showers created when they interact in the atmosphere of Jupiter, effectively utilizing Jupiter as a particle detector. We investigate the potential of this approach. For searches with current or planned radio telescopes we find that the effective area for detection of cosmic rays is substantial (~3 × 10⁷ km²), but the acceptance angle is so small that the typical geometric aperture (~10³ km² sr) is less than that of existing terrestrial detectors, and cosmic rays also cannot be detected below an extremely high threshold energy (~10²³ eV). The geometric aperture for neutrinos is slightly larger, and greater sensitivity can be achieved with a radio detector on a Jupiter-orbiting satellite, but in neither case is this sufficient to constitute a practical detection technique. Exploitation of the large surface area of Jupiter for detecting ultra-high-energy particles remains a long-term prospect that will require a different technique, such as orbital fluorescence detection.
Long-ranged contributions to solvation free energies from theory and short-ranged models
Remsing, Richard C.; Liu, Shule; Weeks, John D.
2016-01-01
Long-standing problems associated with long-ranged electrostatic interactions have plagued theory and simulation alike. Traditional lattice sum (Ewald-like) treatments of Coulomb interactions add significant overhead to computer simulations and can produce artifacts from spurious interactions between simulation cell images. These subtle issues become particularly apparent when estimating thermodynamic quantities, such as free energies of solvation in charged and polar systems, to which long-ranged Coulomb interactions typically make a large contribution. In this paper, we develop a framework for determining very accurate solvation free energies of systems with long-ranged interactions from models that interact with purely short-ranged potentials. Our approach is generally applicable and can be combined with existing computational and theoretical techniques for estimating solvation thermodynamics. We demonstrate the utility of our approach by examining the hydration thermodynamics of hydrophobic and ionic solutes and the solvation of a large, highly charged colloid that exhibits overcharging, a complex nonlinear electrostatic phenomenon whereby counterions from the solvent effectively overscreen and locally invert the integrated charge of the solvated object. PMID:26929375
Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems
NASA Astrophysics Data System (ADS)
Fry, C.; Dix, J.
2017-12-01
Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS from within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and is found to approach the seafloor within the Outer Thames Estuary. This allows for the direct analysis of PFS surface expressions, via the use of high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Through use of these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it exists approximately 1 km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, an order of magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within the typical PFS structure but are sub-seismic in conventional imaging techniques. These unseen faults could create additional unseen pathways that impact construction in London via water ingress and influence fluid migration within hydrocarbon basins.
NASA Technical Reports Server (NTRS)
Swenson, Paul
2017-01-01
Satellite/payload ground systems are typically highly customized to a specific mission's use cases and utilize hundreds (or thousands) of specialized point-to-point interfaces for data flows and file transfers. Documentation and tracking of these complex interfaces require extensive time to develop and extremely high staffing costs; implementation and testing of these interfaces are even more cost-prohibitive, and documentation often lags behind implementation, resulting in inconsistencies down the road. With expanding threat vectors, IT Security, Information Assurance and Operational Security have become key ground system architecture drivers. New Federal security-related directives are generated on a daily basis, imposing new requirements on current and existing ground systems, and these mandated activities and data calls typically carry little or no additional funding for implementation. As a result, Ground System Sustaining Engineering groups and Information Technology staff continually struggle to keep up with the rolling tide of security. Advancing security concerns and shrinking budgets are pushing these large stove-piped ground systems to begin sharing resources, i.e. operational/SysAdmin staff, IT security baselines, architecture decisions, or even networks and hosting infrastructure. Refactoring these existing ground systems into multi-mission assets proves extremely challenging due to what is typically very tight coupling between legacy components; as a result, many "multi-mission" operations environments end up simply sharing compute resources and networks due to the difficulty of refactoring into true multi-mission systems. Utilizing continuous integration and rapid system deployment technologies in conjunction with an open-architecture messaging approach allows system engineers and architects to worry less about the low-level details of interfaces between components and the configuration of systems. GMSEC messaging is inherently designed to support multi-mission requirements and allows components to aggregate data across multiple homogeneous or heterogeneous satellites or payloads; the highly successful Goddard Science and Planetary Operations Control Center (SPOCC) utilizes GMSEC as the hub for its automation and situational awareness capability. This shifts focus towards getting the ground system to a final configuration-managed baseline, as well as towards multi-mission, big-picture capabilities that help increase situational awareness, promote cross-mission sharing and establish enhanced fleet management capabilities across all levels of the enterprise.
Synchronized Trajectories in a Climate "Supermodel"
NASA Astrophysics Data System (ADS)
Duane, Gregory; Schevenhoven, Francine; Selten, Frank
2017-04-01
Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
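The idea of combining tendencies in run-time can be pictured on a toy system. The sketch below is only a schematic analogue of the SPEEDO experiments: it builds a "supermodel" tendency as a weighted average of two imperfect Lorenz-63 models and compares long-run statistics against a reference "truth"; the parameter perturbations and weights are assumptions chosen so that the combination compensates the biases.

```python
# Hedged toy "supermodel": weighted combination of tendencies from two imperfect models.
import numpy as np

def lorenz(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, tendency, dt=0.01):
    k1 = tendency(state)
    k2 = tendency(state + 0.5 * dt * k1)
    k3 = tendency(state + 0.5 * dt * k2)
    k4 = tendency(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def mean_z(tendency, steps=20000, spinup=2000):
    """Long-run mean of the z coordinate as a crude 'climate' statistic."""
    s, zs = np.array([1.0, 1.0, 1.0]), []
    for i in range(steps):
        s = rk4_step(s, tendency)
        if i >= spinup:
            zs.append(s[2])
    return np.mean(zs)

truth = (10.0, 28.0, 8.0 / 3.0)   # reference "truth" parameters
m1 = (10.0, 26.0, 8.0 / 3.0)      # two imperfect copies with assumed opposite rho biases
m2 = (10.0, 30.0, 8.0 / 3.0)
w = 0.5                           # assumed tendency-combination weight

super_tend = lambda s: w * lorenz(s, *m1) + (1 - w) * lorenz(s, *m2)
for name, tend in [("truth", lambda s: lorenz(s, *truth)),
                   ("model 1", lambda s: lorenz(s, *m1)),
                   ("model 2", lambda s: lorenz(s, *m2)),
                   ("supermodel", super_tend)]:
    print(name, mean_z(tend))
```

With these equal weights the combined tendencies reproduce the reference parameters exactly, so the supermodel statistics track the truth while each imperfect model alone is biased.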
A Method for Evaluating Outcomes of Restoration When No Reference Sites Exist
J. Stephen Brewer; Timothy Menzel
2009-01-01
Ecological restoration typically seeks to shift species composition toward that of existing reference sites. Yet, comparing the assemblages in restored and reference habitats assumes that similarity to the reference habitat is the optimal outcome of restoration and does not provide a perspective on regionally rare off-site species. When no such reference assemblages of...
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
NASA Astrophysics Data System (ADS)
Guo, Lei; Obot, Ime Bassey; Zheng, Xingwen; Shen, Xun; Qiang, Yujie; Kaya, Savaş; Kaya, Cemal
2017-06-01
Steel is an important material in industry. Adding heterocyclic organic compounds has proved to be very efficient for steel protection. There exists an empirical rule that the general trend in the inhibition efficiencies of molecules containing heteroatoms is such that O < N < S. However, an atomic-level insight into the inhibition mechanism is still lacking. Thus, in this work, density functional theory calculations were used to investigate the adsorption of three typical heterocyclic molecules, i.e., pyrrole, furan, and thiophene, on the Fe(110) surface. The approach is illustrated by carrying out geometric optimization of the inhibitors on the stable and most exposed plane of α-Fe. Some salient features such as the charge density difference, changes of the work function, and the density of states are described in detail. The present study is helpful for understanding the aforementioned empirical rule.
Conjunctive programming: An interactive approach to software system synthesis
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.
Experimental and computational investigation of lateral gauge response in polycarbonate
NASA Astrophysics Data System (ADS)
Eliot, Jim; Harris, Ernst; Hazell, Paul; Appleby-Thomas, Gareth; Winter, Ronald; Wood, David; Owen, Gareth
2011-06-01
Polycarbonate's use in personal armour systems means its high strain-rate response has been extensively studied. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges need to be embedded in a central (typically) epoxy interlayer - an inherently invasive approach. Recently, research has suggested that in such metal systems interlayer/target impedance may contribute to observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. This work extends previous efforts such that similar impedance exists between the interlayer and matrix (target) interface. Further, experiments and simulations are presented investigating the effects of a ``dry joint'' in polycarbonate, in which no encapsulating medium is employed.
Toward the Modularization of Decision Support Systems
NASA Astrophysics Data System (ADS)
Raskin, R. G.
2009-12-01
Decision support systems are typically developed entirely from scratch without the use of modular components. This “stovepiped” approach is inefficient and costly because it prevents a developer from leveraging the data, models, tools, and services of other developers. Even when a decision support component is made available, it is difficult to know what problem it solves, how it relates to other components, or even that the component exists. The Spatial Decision Support (SDS) Consortium was formed in 2008 to organize the body of knowledge in SDS within a common portal. The portal identifies the canonical steps in the decision process and enables decision support components to be registered, categorized, and searched. This presentation describes how a decision support system can be assembled from modular models, data, tools and services, based on the needs of the Earth science application.
Kamala, KA; Sankethguddad, S; Sujith, SG; Tantradi, Praveena
2016-01-01
Burning mouth syndrome (BMS) is multifactorial in origin which is typically characterized by burning and painful sensation in an oral cavity demonstrating clinically normal mucosa. Although the cause of BMS is not known, a complex association of biological and psychological factors has been identified, suggesting the existence of a multifactorial etiology. As the symptom of oral burning is seen in various pathological conditions, it is essential for a clinician to be aware of how to differentiate between symptom of oral burning and BMS. An interdisciplinary and systematic approach is required for better patient management. The purpose of this study was to provide the practitioner with an understanding of the local, systemic, and psychosocial factors which may be responsible for oral burning associated with BMS, and review of treatment modalities, therefore providing a foundation for diagnosis and treatment of BMS. PMID:26962284
The Integrated Mission Design Center (IMDC) at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Karpati, Gabriel; Martin, John; Steiner, Mark; Reinhardt, K.
2002-01-01
NASA Goddard has used its Integrated Mission Design Center (IMDC) to perform more than 150 mission concept studies. The IMDC performs rapid development of high-level, end-to-end mission concepts, typically in just 4 days. The approach to the studies varies, depending on whether the proposed mission is near-future using existing technology, mid-future using new technology being actively developed, or far-future using technology which may not yet be clearly defined. The emphasis and level of detail developed during any particular study depends on which timeframe (near-, mid-, or far-future) is involved and the specific needs of the study client. The most effective mission studies are those where mission capabilities required and emerging technology developments can synergistically work together; thus both enhancing mission capabilities and providing impetus for ongoing technology development.
Solving the Software Legacy Problem with RISA
NASA Astrophysics Data System (ADS)
Ibarra, A.; Gabriel, C.
2012-09-01
Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment life time. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple data processing software and infrastructure life-cycles, using JAVA applications and web-services wrappers to existing software. This architecture employs embedded SAS in virtual machines assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission launched in 1983, using the generic RISA approach.
3D printing via ambient reactive extrusion
Rios, Orlando; Carter, William G.; Post, Brian K.; ...
2018-03-14
Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.
PageRank as a method to rank biomedical literature by importance.
Yates, Elliot J; Dixon, Louise C
2015-01-01
Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine, Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available, PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier of entry for resource-limited open access organisations. PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
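The core computation described here, PageRank over a citation graph, can be sketched by power iteration on a toy edge list; the small graph below stands in for the PMC-OAS citation network, and the damping factor follows the conventional 0.85 default rather than anything stated in the article.

```python
# Hedged sketch: PageRank by power iteration on a small citation graph.
import numpy as np

def pagerank(edges, n, d=0.85, tol=1e-10, max_iter=200):
    """edges: list of (citing, cited) node pairs; returns one score per node."""
    out_deg = np.zeros(n)
    for src, _ in edges:
        out_deg[src] += 1
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.full(n, (1.0 - d) / n)
        # redistribute rank of dangling nodes (no outbound citations) uniformly
        new += d * r[out_deg == 0].sum() / n
        for src, dst in edges:
            new[dst] += d * r[src] / out_deg[src]
        if np.abs(new - r).sum() < tol:
            return new
        r = new
    return r

edges = [(0, 1), (0, 2), (1, 2), (3, 2), (4, 2), (2, 1)]
print(pagerank(edges, n=5))   # node 2 (the most cited) gets the highest score
```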
Chacko, Anil; Kofler, Michael; Jarrett, Matthew
2014-01-01
Attention-deficit/hyperactivity disorder (ADHD) is a prevalent and chronic mental health condition that often results in substantial impairments throughout life. Although evidence-based pharmacological and psychosocial treatments exist for ADHD, effects of these treatments are acute, do not typically generalize into non-treated settings, rarely sustain over time, and insufficiently affect key areas of functional impairment (i.e., family, social, and academic functioning) and executive functioning. The limitations of current evidence-based treatments may be due to the inability of these treatments to address underlying neurocognitive deficits that are related to the symptoms of ADHD and associated areas of functional impairment. Although efforts have been made to directly target the underlying neurocognitive deficits of ADHD, extant neurocognitive interventions have shown limited efficacy, possibly due to misspecification of training targets and inadequate potency. We argue herein that despite these limitations, next-generation neurocognitive training programs that more precisely and potently target neurocognitive deficits may lead to optimal outcomes when used in combination with specific skill-based psychosocial treatments for ADHD. We discuss the rationale for such a combined treatment approach, prominent examples of this combined treatment approach for other mental health disorders, and potential combined treatment approaches for pediatric ADHD. Finally, we conclude with directions for future research necessary to develop a combined neurocognitive + skill-based treatment for youth with ADHD. PMID:25120200
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
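One way to picture this kind of observationally constrained weighting, though not necessarily the author's exact scheme, is to weight each model by the likelihood of its transient climate response (TCR) under an observationally derived TCR estimate and then form weighted quantiles of projected warming; all numbers below are invented for illustration.

```python
# Hedged sketch: likelihood-weighting an ensemble by an observational TCR constraint.
import numpy as np

def weighted_quantile(values, weights, q):
    """Crude quantile of `values` under normalized `weights` (q in [0, 1])."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q, cdf, v)

# Invented ensemble: each model has a TCR (K) and a projected warming (K).
model_tcr     = np.array([1.3, 1.6, 1.8, 2.0, 2.2, 2.5])
model_warming = np.array([1.1, 1.5, 1.8, 2.0, 2.3, 2.6])

# Assumed Gaussian observational constraint on TCR (e.g. from detection/attribution).
obs_tcr, obs_sd = 1.6, 0.3
weights = np.exp(-0.5 * ((model_tcr - obs_tcr) / obs_sd) ** 2)

for q in (0.05, 0.50, 0.95):
    print(f"q={q:.2f}: unweighted {np.quantile(model_warming, q):.2f} K, "
          f"weighted {weighted_quantile(model_warming, weights, q):.2f} K")
```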
Photoelectron Effects on the Self-Consistent Potential in the Collisionless Polar Wind
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Liemohn, M. W.; Moore, T. E.
1997-01-01
The presence of unthermalized photoelectrons in the sunlit polar cap leads to an enhanced ambipolar potential drop and enhanced upward ion acceleration. Observations in the topside ionosphere have led to the conclusion that large-scale electrostatic potential drops exist above the spacecraft along polar magnetic field lines connected to regions of photoelectron production. A kinetic approach is used for the O(+), H(+), and photoelectron (p) distributions, while a fluid approach is used to describe the thermal electrons (e) and the self-consistent electric field (E(sub II)). The thermal electrons are allowed to carry a flux that compensates for photoelectron escape, a critical assumption. Collisional processes are excluded, leading to easier escape of polar wind particles and therefore to the formation of the largest potential drop consistent with this general approach. We compute the steady state electric field enhancement and net potential drop expected in the polar wind due to the presence of photoelectrons as a function of the fractional photoelectron content and the thermal plasma characteristics. For a set of low-altitude boundary conditions typical of the polar wind ionosphere, including 0.1% photoelectron content, we found a potential drop from 500 km to 5 R(sub E) of 6.5 V and a maximum thermal electron temperature of 8800 K. The reasonable agreement of our results with the observed polar wind suggests that the assumptions of this approach are valid.
Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.
O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R
2016-02-01
Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
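A minimal sketch of the general idea follows, with simulated data and hypothetical variable names: creatinine is first standardized against covariates that influence it, the biomarker is rescaled accordingly, and creatinine is then also included as a covariate in the outcome model. The linear probability fit is purely illustrative and is not the exact procedure evaluated in the paper.

```python
# Sketch of covariate-adjusted standardization plus creatinine-as-covariate.
# Hypothetical simulated data; not the exact method specification from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.normal(45, 10, n)
bmi = rng.normal(27, 4, n)
creatinine = np.exp(0.1 + 0.005 * age + 0.02 * bmi + rng.normal(0, 0.3, n))
biomarker = rng.lognormal(mean=0.5, sigma=0.6, size=n) * creatinine  # dilution-dependent
case = rng.binomial(1, 0.3, n)

# Step 1: covariate-adjusted standardization of creatinine
X = np.column_stack([np.ones(n), age, bmi])
beta, *_ = np.linalg.lstsq(X, np.log(creatinine), rcond=None)
creat_pred = np.exp(X @ beta)
biomarker_std = biomarker / (creatinine / creat_pred)  # rescale to "typical" dilution

# Step 2: outcome model including creatinine as a covariate
# (shown here as a linear probability fit purely for illustration)
Z = np.column_stack([np.ones(n), np.log(biomarker_std), np.log(creatinine), age, bmi])
coef, *_ = np.linalg.lstsq(Z, case, rcond=None)
print("Exposure coefficient (illustrative):", round(coef[1], 4))
```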
A Search for Giant Planet Companions to T Tauri Stars
2012-12-20
...yielded a spectral resolving power of R ≡ (λ/Δλ) ≈ 60,000. Integration times were typically 1800 s (depending on conditions) and typical seeing was ∼2... wavelength regions. This suggests different physical mechanisms underlying the optical and the K-band variability. Key words: planets and satellites...
Kawakami, Tsuyoshi
2011-12-01
Participatory approaches are increasingly applied to improve safety, health and working conditions of grassroots workplaces in Asia. The core concepts and methods in human ergology research, such as promoting real work life studies, relying on positive efforts of local people (daily life-technology), promoting active participation of local people to identify practical solutions, and learning from local human networks to reach grassroots workplaces, have provided useful viewpoints to devise such participatory training programmes. This study analyzed how human ergology approaches were applied in the actual development and application of three typical participatory training programmes: WISH (Work Improvement for Safe Home) with home workers in Cambodia, WISCON (Work Improvement in Small Construction Sites) with construction workers in Thailand, and WARM (Work Adjustment for Recycling and Managing Waste) with waste collectors in Fiji. The results revealed that all three programmes, in the course of their development, commonly applied direct observation methods of the work of target workers before devising the training programmes, learned from existing local good examples and efforts, and emphasized local human networks for cooperation. These methods and approaches were repeatedly applied in grassroots workplaces, taking advantage of their sustainability and impacts. It was concluded that human ergology approaches largely contributed to the development and expansion of participatory training programmes and could continue to support the self-help initiatives of local people for promoting human-centred work.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for level of performance skill required. In this paper we provide a way to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
In the Beginning-There Is the Introduction-and Your Study Hypothesis.
Vetter, Thomas R; Mascha, Edward J
2017-05-01
Writing a manuscript for a medical journal is akin to writing a newspaper article, albeit a scholarly one. Like any journalist, you have a story to tell. You need to tell your story in a way that is easy to follow and makes a compelling case to the reader. Although recommended since the beginning of the 20th century, the conventional Introduction-Methods-Results-And-Discussion (IMRAD) scientific reporting structure has only been the standard since the 1980s. The Introduction should be focused and succinct in communicating the significance, background, rationale, study aims or objectives, and the primary (and secondary, if appropriate) study hypotheses. Hypothesis testing involves posing both a null and an alternative hypothesis. The null hypothesis proposes that no difference or association exists on the outcome variable of interest between the interventions or groups being compared. The alternative hypothesis is the opposite of the null hypothesis and thus typically proposes that a difference in the population does exist between the groups being compared on the parameter of interest. Most investigators seek to reject the null hypothesis because of their expectation that the studied intervention does result in a difference between the study groups or that the association of interest does exist. Therefore, in most clinical and basic science studies and manuscripts, the alternative hypothesis is stated, not the null hypothesis. Also, in the Introduction, the alternative hypothesis is typically stated in the direction of interest, or the expected direction. However, when assessing the association of interest, researchers typically look in both directions (ie, favoring 1 group or the other) by conducting a 2-tailed statistical test because the true direction of the effect is typically not known, and either direction would be important to report.
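The following small example illustrates this last point with simulated data: the alternative hypothesis may be stated in the expected direction, yet the test is run two-tailed so that an effect in either direction can be detected. All numbers are hypothetical.

```python
# Small illustration: a two-tailed test of the null hypothesis of "no difference",
# even though the alternative hypothesis was stated in one expected direction.
# Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=52.0, scale=10.0, size=40)   # hypothetical outcome scores
control = rng.normal(loc=48.0, scale=10.0, size=40)

t_stat, p_two_sided = stats.ttest_ind(treatment, control)  # two-tailed by default
print(f"t = {t_stat:.2f}, two-sided p = {p_two_sided:.3f}")
# Reject the null at alpha = 0.05 only if p_two_sided < 0.05, in either direction.
```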
NASA Technical Reports Server (NTRS)
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
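A toy NumPy sketch of the multicolor point-implicit idea follows: blocks that share a color have no mutual coupling, so their updates are independent and could run in parallel. The block sizes, coloring, and matrix values are illustrative assumptions, not the actual unstructured-grid solver or its CUDA implementation.

```python
# Illustrative multicolor point-implicit (block Gauss-Seidel) sweep in NumPy.
# Blocks sharing a color have no couplings between them, so all updates within
# a color could be performed in parallel (as on a GPU). Toy data only.
import numpy as np

nb, bs = 4, 2                       # number of blocks, block size
rng = np.random.default_rng(2)
A = np.zeros((nb, nb, bs, bs))      # block matrix: A[i][j] couples block i to j
for i in range(nb):
    A[i, i] = np.eye(bs) * 4.0      # strong diagonal blocks
A[0, 1] = A[1, 0] = rng.normal(0, 0.2, (bs, bs))
A[2, 3] = A[3, 2] = rng.normal(0, 0.2, (bs, bs))
b = rng.normal(size=(nb, bs))
color = [0, 1, 0, 1]                # blocks 0,2 and 1,3 are mutually independent

x = np.zeros((nb, bs))
for sweep in range(50):
    for c in (0, 1):                # process one color at a time
        for i in [k for k in range(nb) if color[k] == c]:
            rhs = b[i] - sum(A[i, j] @ x[j] for j in range(nb) if j != i)
            x[i] = np.linalg.solve(A[i, i], rhs)

# Residual check
res = max(np.linalg.norm(b[i] - sum(A[i, j] @ x[j] for j in range(nb)))
          for i in range(nb))
print("max block residual:", res)
```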
NASA Astrophysics Data System (ADS)
Doerr, Stefan; Santin, Cristina; Reardon, James; Mataix-Solera, Jorge; Stoof, Cathelijne; Bryant, Rob; Miesel, Jessica; Badia, David
2017-04-01
Heat transfer from the combustion of ground fuels and soil organic matter during vegetation fires can cause substantial changes to the physical, chemical and biological characteristics of soils. Numerous studies have investigated the effects of wildfires and prescribed burns on soil properties based either on field samples or on laboratory experiments. Critical thresholds for changes in soil properties, however, have been determined largely based on laboratory heating experimentation. These experimental approaches have been criticized for being inadequate for reflecting the actual heating patterns soils experience in vegetation fires, which remain poorly understood. To address this research gap, this study reviews existing field data and evaluates new field data on key soil heating parameters determined during wildfires and prescribed burns from a wide range of environments. The results highlight the high spatial and temporal variability in soil heating patterns not only between, but also within, fires. Most wildfires and prescribed burns are associated with heat pulses that are much shorter than those typically applied in laboratory studies, which can lead to erroneous conclusions when results from laboratory studies are used to predict fire impacts on soils in the field.
NASA Astrophysics Data System (ADS)
Anton, S. R.; Erturk, A.; Inman, D. J.
2010-04-01
Vibration energy harvesting has received considerable attention in the research community over the past decade. Typical vibration harvesting systems are designed to be added on to existing host structures and capture ambient vibration energy. An interesting application of vibration energy harvesting exists in unmanned aerial vehicles (UAVs), where a multifunctional approach, as opposed to the traditional method, is needed due to weight and aerodynamic considerations. The authors propose a multifunctional design for energy harvesting in UAVs where the piezoelectric harvesting device is integrated into the wing of a UAV and provides energy harvesting, energy storage, and load bearing capability. The brittle piezoceramic layer of the harvester is a critical member in load bearing applications; therefore, it is the goal of this research to investigate the bending strength of various common piezoceramic materials. Three-point bend tests are carried out on several piezoelectric ceramics including monolithic piezoceramics PZT-5A and PZT-5H, single crystal piezoelectric PMN-PZT, and commercially packaged QuickPack devices. Bending strength results are reported and can be used as a design tool in the development of piezoelectric vibration energy harvesting systems in which the active device is subjected to bending loads.
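For reference, the flexural strength of a rectangular specimen in three-point bending is commonly computed as sigma = 3FL/(2bd^2). The sketch below applies this standard formula to hypothetical dimensions and failure load, not to the paper's measured values.

```python
# Flexural (bending) strength of a rectangular specimen in three-point bending:
# sigma = 3 F L / (2 b d^2). Numbers below are hypothetical, not the paper's results.
def three_point_bend_strength(force_n, span_m, width_m, thickness_m):
    """Return flexural strength in Pa for a rectangular cross-section."""
    return 3.0 * force_n * span_m / (2.0 * width_m * thickness_m ** 2)

# Example: a piezoceramic plate 20 mm wide, 0.5 mm thick, on a 30 mm span,
# failing at 12 N (all values illustrative)
sigma = three_point_bend_strength(force_n=12.0, span_m=0.030,
                                  width_m=0.020, thickness_m=0.0005)
print(f"Flexural strength: {sigma / 1e6:.1f} MPa")
```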
SCOUT: simultaneous time segmentation and community detection in dynamic networks
Hulovatyy, Yuriy; Milenković, Tijana
2016-01-01
Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879
Smallpox and live-virus vaccination in transplant recipients.
Fishman, Jay A
2003-07-01
Recent bioterrorism raises the specter of reemergence of smallpox as a clinical entity. The mortality of variola major infection ('typical smallpox') was approximately 30% in past outbreaks. Programs for smallpox immunization for healthcare workers have been proposed. Atypical forms of smallpox presenting with flat or hemorrhagic skin lesions are most common in individuals with immune deficits with historic mortality approaching 100%. Smallpox vaccination, even after exposure, is highly effective. Smallpox vaccine contains a highly immunogenic live virus, vaccinia. Few data exist for the impact of variola or safety of vaccinia in immunocompromised hosts. Both disseminated infection by vaccinia and person-to-person spread after vaccination are uncommon. When it occurs, secondary vaccinia has usually affected individuals with pre-existing skin conditions (atopic dermatitis or eczema) or with other underlying immune deficits. Historically, disseminated vaccinia infection was uncommon but often fatal even in the absence of the most severe form of disease, "progressive vaccinia". Some responded to vaccinia immune globulin. Smallpox exposure would be likely to cause significant mortality among immunocompromised hosts. In the absence of documented smallpox exposures, immunocompromised hosts should not be vaccinated against smallpox. Planning for bioterrorist events must include consideration of uniquely susceptible hosts.
Integrative genetic risk prediction using non-parametric empirical Bayes classification.
Zhao, Sihai Dave
2017-06-01
Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.
Multiobjective optimization of urban water resources: Moving toward more practical solutions
NASA Astrophysics Data System (ADS)
Mortazavi, Mohammad; Kuczera, George; Cui, Lijie
2012-03-01
The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.
A three-dimensional meso-macroscopic model for Li-Ion intercalation batteries
Allu, S.; Kalnaus, S.; Simunovic, S.; ...
2016-06-09
In this study, we present a three-dimensional computational formulation for the electrode-electrolyte-electrode system of Li-ion batteries. The physical consistency between electrical, thermal and chemical equations is enforced at each time increment by driving the residual of the resulting coupled system of nonlinear equations to zero. The formulation utilizes a rigorous volume averaging approach typical of multiphase formulations used in other fields and recently extended to modeling of supercapacitors [1]. Unlike existing battery modeling methods, which use segregated solution of conservation equations and idealized geometries, our unified approach can model arbitrary battery and electrode configurations. The consistency of the multi-physics solution also allows for consideration of a wide array of initial conditions and load cases. The formulation accounts for spatio-temporal variations of material and state properties such as electrode/void volume fractions and anisotropic conductivities. The governing differential equations are discretized using the finite element method and solved using a nonlinearly consistent approach that provides robust stability and convergence. The new formulation was validated for standard Li-ion cells and compared against experiments. Finally, its scope and ability to capture spatio-temporal variations of potential and lithium distribution are demonstrated on a prototypical three-dimensional electrode problem.
In-Situ Transfer Standard and Coincident-View Intercomparisons for Sensor Cross-Calibration
NASA Technical Reports Server (NTRS)
Thome, Kurt; McCorkel, Joel; Czapla-Myers, Jeff
2013-01-01
There exist numerous methods for accomplishing on-orbit calibration. Methods include the reflectance-based approach relying on measurements of surface and atmospheric properties at the time of a sensor overpass as well as invariant scene approaches relying on knowledge of the temporal characteristics of the site. The current work examines typical cross-calibration methods and discusses the expected uncertainties of the methods. Data from the Advanced Land Imager (ALI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), Moderate Resolution Imaging Spectroradiometer (MODIS), and Thematic Mapper (TM) are used to demonstrate the limits of relative sensor-to-sensor calibration as applied to current sensors, while Landsat-5 TM and Landsat-7 ETM+ are used to evaluate the limits of in situ site characterizations for SI-traceable cross calibration. The current work examines the difficulties in trending of results from cross-calibration approaches taking into account sampling issues, site-to-site variability, and accuracy of the method. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The results show that cross calibrations with absolute uncertainties less than 1.5 percent (1 sigma) are currently achievable even for sensors without coincident views.
Taekwondo trainees' satisfaction towards using the virtual taekwondo training environment prototype
NASA Astrophysics Data System (ADS)
Jelani, Nur Ain Mohd; Zulkifli, Abdul Nasir; Ismail, Salina; Yusoff, Mohd Fitri
2017-10-01
Taekwondo is among the most popular martial arts; it has existed for more than 3,000 years and has millions of followers all around the world. The typical taekwondo training session takes place in a hall or a large open space in the presence of a trainer. Even though this is the most widely used approach to taekwondo training, it has some limitations in supporting self-directed training. Self-directed taekwondo training is required for the trainees to improve their skills and performance. A variety of supplementary taekwondo training materials is available; however, most of them are still lacking in terms of three-dimensional visualization. This paper introduces the Virtual Taekwondo Training Environment (VT2E) prototype for self-directed training. The aim of this paper is to determine whether the intervention of the new taekwondo training approach using virtual reality contributes to the trainees' satisfaction in self-directed training. Pearson Correlation and Regression analyses were used to determine the effects of Engaging, Presence, Usefulness and Ease of Use on trainees' satisfaction in using the prototype. The results provide empirical support for a positive and statistically significant relationship between Usefulness and Ease of Use and trainees' satisfaction for taekwondo training. However, Engaging and Presence do not have a positive and significant relationship with trainees' satisfaction for self-directed training.
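The sketch below shows the kind of analysis referred to, Pearson correlations and a multiple regression of satisfaction on the four constructs, using simulated scores; it does not reproduce the study's data or results.

```python
# Illustrative Pearson correlation and multiple regression of satisfaction on
# Engaging, Presence, Usefulness and Ease of Use. Data are simulated; they do
# not reproduce the study's measurements or results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60
engaging = rng.normal(3.8, 0.6, n)
presence = rng.normal(3.5, 0.7, n)
usefulness = rng.normal(4.0, 0.5, n)
ease = rng.normal(4.1, 0.5, n)
satisfaction = 0.5 * usefulness + 0.4 * ease + rng.normal(0, 0.4, n)

for name, x in [("Engaging", engaging), ("Presence", presence),
                ("Usefulness", usefulness), ("Ease of Use", ease)]:
    r, p = stats.pearsonr(x, satisfaction)
    print(f"{name:12s} r = {r:+.2f}, p = {p:.3f}")

# Multiple regression via least squares
X = np.column_stack([np.ones(n), engaging, presence, usefulness, ease])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print("Regression coefficients (intercept first):", np.round(beta, 2))
```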
Surfacing the deep data of taxonomy
Page, Roderic D. M.
2016-01-01
Abstract Taxonomic databases are perpetuating approaches to citing literature that may have been appropriate before the Internet, often being little more than digitised 5 × 3 index cards. Typically the original taxonomic literature is either not cited, or is represented in the form of a (typically abbreviated) text string. Hence much of the “deep data” of taxonomy, such as the original descriptions, revisions, and nomenclatural actions are largely hidden from all but the most resourceful users. At the same time there are burgeoning efforts to digitise the scientific literature, and much of this newly available content has been assigned globally unique identifiers such as Digital Object Identifiers (DOIs), which are also the identifier of choice for most modern publications. This represents an opportunity for taxonomic databases to engage with digitisation efforts. Mapping the taxonomic literature on to globally unique identifiers can be time consuming, but need be done only once. Furthermore, if we reuse existing identifiers, rather than mint our own, we can start to build the links between the diverse data that are needed to support the kinds of inference which biodiversity informatics aspires to support. Until this practice becomes widespread, the taxonomic literature will remain balkanized, and much of the knowledge that it contains will linger in obscurity. PMID:26877663
Niche harmony search algorithm for detecting complex disease associated high-order SNP combinations.
Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; He, Zongzhen; Liu, Yajun; Liu, Zhaowen
2017-09-14
Genome-wide association study is especially challenging in detecting high-order disease-causing models due to model diversity, possible low or even no marginal effect of the model, and extraordinary search and computational demands. In this paper, we propose a niche harmony search algorithm in which joint entropy is utilized as a heuristic factor to guide the search for models with low or no marginal effect, and two computationally lightweight scores are selected to evaluate and adapt to diverse disease models. To obtain all possible suspected pathogenic models, a niche technique is merged with harmony search (HS); the niche serves as a taboo region that prevents HS from becoming trapped in local search. From the resultant set of candidate SNP combinations, we use the G-test statistic to test for true positives. Experiments were performed on twenty typical simulation datasets, of which 12 models have marginal effects and eight have no marginal effect. Our results indicate that the proposed algorithm has very high detection power for searching suspected disease models in the first stage, and it is superior to some typical existing approaches in both detection power and CPU runtime for all these datasets. Application to age-related macular degeneration (AMD) demonstrates that our method is promising in detecting high-order disease-causing models.
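As an illustration of the final testing stage, the sketch below computes a G-test of independence for hypothetical case/control counts across the joint genotype cells of a candidate SNP combination; the counts and table shape are assumptions, not data from the study.

```python
# G-test of independence for a candidate SNP combination: observed case/control
# counts across the joint genotype cells. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2

# Rows: genotype combinations of the SNP set; columns: cases, controls
observed = np.array([[30, 10],
                     [25, 20],
                     [15, 30],
                     [10, 30]], dtype=float)

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()

mask = observed > 0                      # 0 * ln(0) is treated as 0
G = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p = chi2.sf(G, dof)
print(f"G = {G:.2f}, df = {dof}, p = {p:.3g}")
```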
Missing value imputation for gene expression data by tailored nearest neighbors.
Faisal, Shahla; Tutz, Gerhard
2017-04-25
High dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed for genes that are apt to contribute to the accuracy of imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs if local methods such as nearest neighbors are applied in high dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
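A simplified sketch of the weighted nearest-neighbour idea follows: the distance for one missing entry is computed over a subset of informative genes (here selected by correlation with the target gene), and neighbour values are combined with inverse-distance weights. The selection rule, neighbourhood size, and data are illustrative assumptions, not the authors' exact weighting scheme.

```python
# Simplified sketch of weighted nearest-neighbour imputation for one missing
# entry. Illustration only; not the authors' exact scheme.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 50))          # samples x genes, toy expression matrix
X[0, 5] = np.nan                       # one missing value to impute

target_gene = 5
obs_rows = ~np.isnan(X[:, target_gene])

# Select genes most correlated with the target gene (computed on complete rows)
cors = np.array([abs(np.corrcoef(X[obs_rows, g], X[obs_rows, target_gene])[0, 1])
                 if g != target_gene else -1.0 for g in range(X.shape[1])])
informative = np.argsort(cors)[-10:]   # top 10 informative genes

# Distances from sample 0 to all samples with an observed target value
d = np.sqrt(((X[obs_rows][:, informative] - X[0, informative]) ** 2).sum(axis=1))
k = 5
nn = np.argsort(d)[:k]
w = 1.0 / (d[nn] + 1e-9)
imputed = np.sum(w * X[obs_rows][:, target_gene][nn]) / w.sum()
print("Imputed value:", round(imputed, 3))
```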
Condition assessment survey of onsite sewage disposal systems (OSDSs) in Hawaii.
Babcock, Roger W; Lamichhane, Krishna M; Cummings, Michael J; Cheong, Gloria H
2014-01-01
Onsite sewage disposal systems (OSDSs) are the third leading cause of groundwater contamination in the USA. The existing condition of OSDSs in the State of Hawaii was investigated to determine whether a mandatory management program should be implemented. Based on observed conditions, OSDSs were differentiated into four categories: 'pass', 'sludge scum', 'potential failure' and 'fail'. Of all OSDSs inspected, approximately 68% appear to be in good working condition while the remaining 32% are failing or are in danger of failing. Homeowner interviews found that 80% of OSDSs were not being serviced in any way. About 70% of effluent samples had values of total-N and total-P greater than typical values and 40% had total suspended solids (TSS) and 5-day biochemical oxygen demand (BOD5) greater than typical values. The performance of aerobic treatment units (ATUs) was no better than that of septic tanks and cesspools, indicating that the State's approach of requiring but not enforcing maintenance contracts for ATUs is not working. In addition, effluent samples from OSDSs located within the estimated 2-year capture zones of drinking water wells had higher average concentrations of TSS, BOD5, and total-P than units outside of these zones, indicating the potential for contamination. These findings suggest the need to introduce a proactive, life-cycle OSDS management program in the State of Hawaii.
Pan, Haitao; Yuan, Ying; Xia, Jielai
2017-11-01
A biosimilar refers to a follow-on biologic intended to be approved for marketing based on biosimilarity to an existing patented biological product (i.e., the reference product). To develop a biosimilar product, it is essential to demonstrate biosimilarity between the follow-on biologic and the reference product, typically through two-arm randomized trials. We propose a Bayesian adaptive design for trials to evaluate biosimilar products. To take advantage of the abundant historical data on the efficacy of the reference product that is typically available at the time a biosimilar product is developed, we propose the calibrated power prior, which allows our design to adaptively borrow information from the historical data according to the congruence between the historical data and the new data collected from the current trial. We propose a new measure, the Bayesian biosimilarity index, to measure the similarity between the biosimilar and the reference product. During the trial, we evaluate the Bayesian biosimilarity index in a group sequential fashion based on the accumulating interim data, and stop the trial early once there is enough information to conclude or reject similarity. Extensive simulation studies show that the proposed design has higher power than traditional designs. We applied the proposed design to a biosimilar trial for treating rheumatoid arthritis.
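The beta-binomial sketch below conveys only the general mechanics of borrowing historical reference-arm data through a power prior with a discount parameter, and of computing a posterior probability that the response rates are within a similarity margin. The discount value, margin, and counts are hypothetical; the calibration of the discount and the exact Bayesian biosimilarity index are defined in the paper and are not reproduced here.

```python
# Beta-binomial illustration of a power prior with discount parameter delta and
# a posterior similarity probability. Hypothetical numbers; not the paper's
# calibrated power prior or its exact Bayesian biosimilarity index.
import numpy as np

rng = np.random.default_rng(5)

# Historical reference data and current trial data (hypothetical)
x_hist, n_hist = 120, 200          # responders / patients, historical reference
x_ref, n_ref = 55, 100             # current reference arm
x_bio, n_bio = 58, 100             # current biosimilar arm
delta = 0.5                        # discount applied to historical information
margin = 0.15                      # similarity margin on the response-rate scale

# Posterior for the reference arm: Beta(1,1) prior + discounted historical data
a_ref = 1 + delta * x_hist + x_ref
b_ref = 1 + delta * (n_hist - x_hist) + (n_ref - x_ref)
a_bio, b_bio = 1 + x_bio, 1 + (n_bio - x_bio)

p_ref = rng.beta(a_ref, b_ref, 100_000)
p_bio = rng.beta(a_bio, b_bio, 100_000)
prob_similar = np.mean(np.abs(p_bio - p_ref) < margin)
print(f"Posterior P(|p_bio - p_ref| < {margin}) = {prob_similar:.3f}")
```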
NASA Astrophysics Data System (ADS)
Oh, T.
2014-12-01
Typical studies on natural resources from a social science perspective tend to choose one type of resource (water, for example) and ask what factors contribute to the sustainable use or wasteful exploitation of that resource. However, climate change and economic development, which are causing increased pressure on local resources and presenting communities with increased levels of tradeoffs and potential conflicts, force us to consider the trade-offs between options for using a particular resource. Therefore, a transdisciplinary approach that accurately captures the advantages and disadvantages of various possible resource uses is particularly important in complex social-ecological systems, where concerns about inequality with respect to resource use and access have become unavoidable. Needless to say, resource management and policy require sound scientific understanding of the complex interconnections between nature and society. However, in contrast to typical international discussions, I discuss Japan not as an "advanced" case where various dilemmas have been successfully addressed by the government through the optimal use of technology, but rather as a nation seeing an emerging trend that is based on an awareness of the connections between local resources and the environment. Furthermore, from a historical viewpoint, the nexus of local resources is not a brand-new idea in the experience of environmental governance in Japan. Local environmental movements have emphasized the interconnection of local resources and succeeded in prompting government action and policymaking. For this reason, local movements and local knowledge for resource governance warrant attention. This study focuses on historical cases relevant to water resource management, including groundwater, and considers the contexts and conditions needed to holistically address local resource problems, paying particular attention to interactions between science and society. Through an on-going research project focusing on the water-energy-food nexus, I will discuss a research design that enhances the holistic view of local stakeholders on local resources as the key to an effective transdisciplinary approach.
A Monte Carlo approach applied to ultrasonic non-destructive testing
NASA Astrophysics Data System (ADS)
Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.
2012-04-01
Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion often do not exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former helped to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).
A Monte Carlo approach applied to ultrasonic non-destructive testing
NASA Astrophysics Data System (ADS)
Mosca, I.; Bilgili, F.; Meier, T. M.; Sigloch, K.
2011-12-01
Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and engineering structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion often do not exist. The aim of the present study is to analyze ultrasonic waveforms measured at the surface of Plexiglas and rock samples, and to define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former helped to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).
Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J
2014-07-01
High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.
Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J.
2014-01-01
Motivation: High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. Results: We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. Availability and implementation: The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. Contact: fbuettner.phys@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24618470
McClure, James E.; Berrill, Mark A.; Gray, William G.; ...
2016-09-02
Here, multiphase flow in porous medium systems is typically modeled using continuum mechanical representations at the macroscale in terms of averaged quantities. These models require closure relations to produce solvable forms. One of these required closure relations is an expression relating fluid pressures, fluid saturations, and, in some cases, the interfacial area between the fluid phases, and the Euler characteristic. An unresolved question is whether the inclusion of these additional morphological and topological measures can lead to a non-hysteretic closure relation compared to the hysteretic forms that are used in traditional models, which typically do not include interfacial areas or the Euler characteristic. We develop a lattice-Boltzmann (LB) simulation approach to investigate the equilibrium states of a two-fluid-phase porous medium system, which include disconnected non-wetting phase features. The proposed approach is applied to a synthetic medium consisting of 1,964 spheres arranged in a random, non-overlapping, close-packed manner, yielding a total of 42,908 different equilibrium points. This information is evaluated using a generalized additive modeling approach to determine if a unique function from this family exists that can explain the data. The variance of various model estimates is computed, and we conclude that, except for the limiting behavior close to a single fluid regime, capillary pressure can be expressed as a deterministic and non-hysteretic function of fluid saturation, interfacial area between the fluid phases, and the Euler characteristic. This work is unique in the methods employed, the size of the data set, the resolution in space and time, the true equilibrium nature of the data, the parameterizations investigated, and the broad set of functions examined. The conclusion of essentially non-hysteretic behavior provides support for an evolving class of two-fluid-phase flow in porous medium systems models.
Sunyit Visiting Faculty Research
2012-01-01
...deblurring with Gaussian and impulse noise. Improvements in both PSNR and visual quality of IFASDA over a typical existing method are demonstrated... Deblurring Images Corrupted by Mixed Impulse plus Gaussian Noise / Department of Mathematics, Syracuse University. This work studies a problem of image restoration in which observed images are contaminated by Gaussian and impulse noise. Existing methods in the literature are based on minimizing an objective...
Tell, Dina; Davidson, Denise; Camras, Linda A.
2014-01-01
Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and 50% expressions, as more intense than typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently. PMID:24804098
O'Connor, Ben L; Hamada, Yuki; Bowen, Esther E; Grippo, Mark A; Hartmann, Heidi M; Patton, Terri L; Van Lonkhuyzen, Robert A; Carr, Adrianne E
2014-11-01
Large areas of public lands administered by the Bureau of Land Management and located in arid regions of the southwestern United States are being considered for the development of utility-scale solar energy facilities. Land-disturbing activities in these desert, alluvium-filled valleys have the potential to adversely affect the hydrologic and ecologic functions of ephemeral streams. Regulation and management of ephemeral streams typically falls under a spectrum of federal, state, and local programs, but scientifically based guidelines for protecting ephemeral streams with respect to land-development activities are largely nonexistent. This study developed an assessment approach for quantifying the sensitivity to land disturbance of ephemeral stream reaches located in proposed solar energy zones (SEZs). The ephemeral stream assessment approach used publicly-available geospatial data on hydrology, topography, surficial geology, and soil characteristics, as well as high-resolution aerial imagery. These datasets were used to inform a professional judgment-based score index of potential land disturbance impacts on selected critical functions of ephemeral streams, including flow and sediment conveyance, ecological habitat value, and groundwater recharge. The total sensitivity scores (sum of scores for the critical stream functions of flow and sediment conveyance, ecological habitats, and groundwater recharge) were used to identify highly sensitive stream reaches to inform decisions on developable areas in SEZs. Total sensitivity scores typically reflected the scores of the individual stream functions; some exceptions pertain to groundwater recharge and ecological habitats. The primary limitations of this assessment approach were the lack of high-resolution identification of ephemeral stream channels in the existing National Hydrography Dataset, and the lack of mechanistic processes describing potential impacts on ephemeral stream functions at the watershed scale. The primary strength of this assessment approach is that it allows watershed-scale planning for low-impact development in arid ecosystems; the qualitative scoring of potential impacts can also be adjusted to accommodate new geospatial data, and to allow for expert and stakeholder input into decisions regarding the identification and potential avoidance of highly sensitive stream reaches.
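A minimal sketch of the scoring logic described above follows: per-function scores for each reach are summed into a total sensitivity score, and reaches above a threshold are flagged. The scores, scale, and threshold are hypothetical placeholders, not values from the assessment.

```python
# Illustrative total-sensitivity scoring for ephemeral stream reaches: each
# reach gets a judgment-based score per critical function, the scores are
# summed, and reaches above a threshold are flagged. Values are hypothetical.
reaches = {
    # reach_id: (flow/sediment conveyance, ecological habitat, groundwater recharge)
    "R01": (3, 2, 1),
    "R02": (1, 1, 1),
    "R03": (3, 3, 2),
}
FLAG_THRESHOLD = 6  # hypothetical cutoff for "highly sensitive"

for reach_id, scores in reaches.items():
    total = sum(scores)
    label = "highly sensitive" if total >= FLAG_THRESHOLD else "lower sensitivity"
    print(f"{reach_id}: total = {total} -> {label}")
```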
Atypical resource allocation may contribute to many aspects of autism
Goldknopf, Emily J.
2013-01-01
Based on a review of the literature and on reports by people with autism, this paper suggests that atypical resource allocation is a factor that contributes to many aspects of autism spectrum conditions, including difficulties with language and social cognition, atypical sensory and attentional experiences, executive and motor challenges, and perceptual and conceptual strengths and weaknesses. Drawing upon resource theoretical approaches that suggest that perception, cognition, and action draw upon multiple pools of resources, the approach hypothesizes that compared with resources in typical cognition, resources in autism are narrowed or reduced, especially in people with strong sensory symptoms. In narrowed attention, resources are restricted to smaller areas and to fewer modalities, stages of processing, and cognitive processes than in typical cognition; narrowed resources may be more intense than in typical cognition. In reduced attentional capacity, overall resources are reduced; resources may be restricted to fewer modalities, stages of processing, and cognitive processes than in typical cognition, or the amount of resources allocated to each area or process may be reduced. Possible neural bases of the hypothesized atypical resource allocation, relations to other approaches, limitations, and tests of the hypotheses are discussed. PMID:24421760
Idealness and similarity in goal-derived categories: a computational examination.
Voorspoels, Wouter; Storms, Gert; Vanpaemel, Wolf
2013-02-01
The finding that the typicality gradient in goal-derived categories is mainly driven by ideals rather than by exemplar similarity has stood uncontested for nearly three decades. Due to the rather rigid earlier implementations of similarity, a key question has remained--that is, whether a more flexible approach to similarity would alter the conclusions. In the present study, we evaluated whether a similarity-based approach that allows for dimensional weighting could account for findings in goal-derived categories. To this end, we compared a computational model of exemplar similarity (the generalized context model; Nosofsky, Journal of Experimental Psychology. General 115:39-57, 1986) and a computational model of ideal representation (the ideal-dimension model; Voorspoels, Vanpaemel, & Storms, Psychonomic Bulletin & Review 18:1006-114, 2011) in their accounts of exemplar typicality in ten goal-derived categories. In terms of both goodness-of-fit and generalizability, we found strong evidence for an ideal approach in nearly all categories. We conclude that focusing on a limited set of features is necessary but not sufficient to account for the observed typicality gradient. A second aspect of ideal representations--that is, that extreme rather than common, central-tendency values drive typicality--seems to be crucial.
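The sketch below contrasts the two accounts in schematic form: a GCM-style typicality computed as summed, exponentially decaying weighted distance to the other category members, versus an ideal-dimension score in which typicality tracks the value on an ideal dimension. The coordinates, attention weights, and sensitivity parameter are hypothetical, and the distance metric is simplified relative to the fitted models reported in the study.

```python
# Schematic contrast of exemplar-similarity (GCM-style) typicality and
# ideal-dimension typicality. Hypothetical stimuli and parameters only.
import numpy as np

exemplars = np.array([[0.2, 0.9],    # rows: items; columns: feature dimensions
                      [0.3, 0.8],
                      [0.7, 0.4],
                      [0.9, 0.2]])
w = np.array([0.3, 0.7])             # attention weights (sum to 1)
c = 2.0                              # sensitivity parameter

def gcm_typicality(item, others, w, c):
    d = np.sqrt(((others - item) ** 2 * w).sum(axis=1))   # weighted distance
    return np.exp(-c * d).sum()                            # summed similarity

def ideal_typicality(item, ideal_dim=1):
    return item[ideal_dim]           # "more is better" on the ideal dimension

for i, item in enumerate(exemplars):
    others = np.delete(exemplars, i, axis=0)
    print(f"item {i}: GCM = {gcm_typicality(item, others, w, c):.2f}, "
          f"ideal = {ideal_typicality(item):.2f}")
```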
Dyslexic children lack word selectivity gradients in occipito-temporal and inferior frontal cortex.
Olulade, O A; Flowers, D L; Napoliello, E M; Eden, G F
2015-01-01
fMRI studies using a region-of-interest approach have revealed that the ventral portion of the left occipito-temporal cortex, which is specialized for orthographic processing of visually presented words (and includes the so-called "visual word form area", VWFA), is characterized by a posterior-to-anterior gradient of increasing selectivity for words in typically reading adults, adolescents, and children (e.g. Brem et al., 2006, 2009). Similarly, the left inferior frontal cortex (IFC) has been shown to exhibit a medial-to-lateral gradient of print selectivity in typically reading adults (Vinckier et al., 2007). Functional brain imaging studies of dyslexia have reported relative underactivity in left hemisphere occipito-temporal and inferior frontal regions using whole-brain analyses during word processing tasks. Hence, the question arises whether gradient sensitivities in these regions are altered in dyslexia. Indeed, a region-of-interest analysis revealed the gradient-specific functional specialization in the occipito-temporal cortex to be disrupted in dyslexic children (van der Mark et al., 2009). Building on these studies, we here (1) investigate if a word-selective gradient exists in the inferior frontal cortex in addition to the occipito-temporal cortex in normally reading children, (2) compare typically reading with dyslexic children, and (3) examine functional connections between these regions in both groups. We replicated the previously reported posterior-to-anterior gradient of increasing selectivity for words in the left occipito-temporal cortex in typically reading children, and its absence in the dyslexic children. Our novel finding is the detection of a pattern of increasing selectivity for words along the medial-to-lateral axis of the left inferior frontal cortex in typically reading children and evidence of functional connectivity between the most lateral aspect of this area and the anterior aspects of the occipito-temporal cortex. We report the absence of both an IFC gradient and connectivity between the lateral aspect of the IFC and the anterior occipito-temporal cortex in the dyslexic children. Together, our results provide insights into the source of the anomalies reported in previous studies of dyslexia and add to the growing evidence of an orthographic role of IFC in reading.
Cheng, Ryan R.; Uzawa, Takanori; Plaxco, Kevin W.; Makarov, Dmitrii E.
2010-01-01
The problem of determining the rate of end-to-end collisions for polymer chains has attracted the attention of theorists and experimentalists for more than three decades. The typical theoretical approach to this problem has focused on the case where a collision is defined as any instantaneous fluctuation that brings the chain ends to within a specific capture distance. In this paper, we study the more experimentally relevant case, where the end-to-end collision dynamics are probed by measuring the excited state lifetime of a fluorophore (or other lumiphore) attached to one chain end and quenched by a quencher group attached to the other end. Under this regime, a “contact” is defined not by the chain ends' approach to within some sharp cutoff but, instead, typically by an exponentially distance-dependent process. Previous theoretical models predict that, if quenching is sufficiently rapid, a diffusion-controlled limit is attained, where such measurements report on the probe-independent, intrinsic end-to-end collision rate. In contrast, our theoretical considerations, simulations, and an analysis of experimental measurements of loop closure rates in single-stranded DNA molecules all indicate that no such limit exists, and that the measured effective collision rate has a nontrivial, fractional power-law dependence on both the intrinsic quenching rate of the fluorophore and the solvent viscosity. We propose a simple scaling formula describing the effective loop closure rate and its dependence on the viscosity, chain length, and properties of the probes. Previous theoretical results are limiting cases of this more general formula. PMID:19780594
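The abstract gives only the qualitative form of the result; a schematic way to write such a fractional power-law scaling (with placeholder exponents and prefactor, not the paper's fitted values) is:

```latex
k_{\mathrm{eff}} \;\sim\; A(N)\, q^{\alpha}\, \eta^{-\beta},
\qquad 0 < \alpha,\ \beta < 1,
```

where q is the intrinsic quenching rate, \eta the solvent viscosity, N the chain length, and A(N) a chain-length-dependent prefactor; a true diffusion-controlled limit would correspond to the quenching-rate dependence vanishing, which the paper argues is never reached.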
A Life Cycle Based Approach to Multi-Hazard Risk Assessment
NASA Astrophysics Data System (ADS)
Keen, A. S.; Lynett, P. J.
2017-12-01
Small craft harbors are important facets of many coastal communities, providing a transition from land to ocean. Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. However, tsunamis represent only one of many hazards a harbor is likely to see in California. Other natural hazards, including wave attack, storm surge, and sea level rise, can all damage a harbor but are not typically addressed in traditional risk studies. Existing approaches to assess small craft harbor vulnerability typically look at single events, assigning likely damage levels to each event. However, a harbor will likely experience damage from several different types of hazards over its service life with each event contributing proportionally to the total damage state. A new, fully probabilistic risk method will be presented that considers the distribution of return periods for various hazards over a harbor's service life. The likelihood of failure is connected to each hazard via vulnerability curves. By simply tabulating the expected damage levels from each event, the method provides a quantitative measure of a harbor's risk to various types of hazards as well as the likelihood of failure (i.e. cumulative risk) during the service life. Crescent City Harbor in Northern California and Kings Harbor in Southern California have been chosen as case studies. The two harbors are dynamically different and were chosen to highlight the strengths and weaknesses of the method. Findings of each study will focus on assisting the stakeholders and decision makers to better understand the relative risk to each harbor with the goal of providing them with a tool to better plan for the future maritime environment.
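As an illustration of the tabulation idea, the sketch below accumulates expected damage over an assumed 50-year service life from a few hazards, each reduced to a return period and a damage level from a vulnerability curve; the hazard list, the numbers, and the Poisson occurrence model are assumptions for illustration, not the study's calibrated inputs.

```python
import math

# Illustrative tabulation of expected damage over a service life from several
# hazards, each described by a return period and a damage level given
# occurrence. The hazards, numbers, and the simple Poisson occurrence model are
# assumptions for illustration, not the paper's calibrated inputs.

SERVICE_LIFE_YR = 50

hazards = {
    # name: (return period in years, expected damage fraction per event)
    "tsunami":     (100.0, 0.60),
    "wave_attack": (10.0,  0.05),
    "storm_surge": (25.0,  0.15),
}

total_expected_damage = 0.0
for name, (return_period, damage) in hazards.items():
    expected_events = SERVICE_LIFE_YR / return_period        # mean occurrences
    contribution = expected_events * damage                  # expected damage
    prob_at_least_one = 1 - math.exp(-expected_events)       # Poisson model
    total_expected_damage += contribution
    print(f"{name}: E[events]={expected_events:.2f}, "
          f"P(>=1)={prob_at_least_one:.2f}, E[damage]={contribution:.2f}")

print("Cumulative expected damage over service life:",
      round(total_expected_damage, 2))
```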
Middleton, K; Al-Dujaili, S; Mei, X; Günther, A; You, L
2017-07-05
Bone cells exist in a complex environment where they are constantly exposed to numerous dynamic biochemical and mechanical stimuli. These stimuli regulate bone cells that are involved in various bone disorders, such as osteoporosis. Knowledge of how these stimuli affect bone cells has been utilised to develop various treatments, such as pharmaceuticals, hormone therapy, and exercise. To investigate the role that bone loading has on these disorders in vitro, bone cell mechanotransduction studies are typically performed using parallel plate flow chambers (PPFC). However, these chambers do not allow for dynamic cellular interactions among different cell populations to be investigated. We present a microfluidic approach that exposes different cell populations, which are located at physiologically relevant distances within adjacent channels, to different levels of fluid shear stress, and promotes cell-cell communication between the different channels. We employed this microfluidic system to assess mechanically regulated osteocyte-osteoclast communication. Osteoclast precursors (RAW264.7 cells) responded to cytokine gradients (e.g., RANKL, OPG, PGE-2) developed by both mechanically stimulated (fOCY) and unstimulated (nOCY) osteocyte-like MLO-Y4 cells simultaneously. Specifically, we observed increased osteoclast precursor cell densities and osteoclast differentiation towards nOCY. We also used this system to show an increased mechanoresponse of osteocytes when in co-culture with osteoclasts. We envision broad applicability of the presented approach for microfluidic perfusion co-culture of multiple cell types in the presence of fluid flow stimulation, and as a tool to investigate osteocyte mechanotransduction, as well as bone metastasis extravasation. This system could also be applied to any multi-cell population cross-talk studies that are typically performed using PPFCs (e.g. endothelial cells, smooth muscle cells, and fibroblasts).
Taylor, Mark J; Taylor, Natasha
2014-12-01
England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.
Parent-directed approaches to enrich the early language environments of children living in poverty.
Leffel, Kristin; Suskind, Dana
2013-11-01
Children's early language environments are critical for their cognitive development, school readiness, and ultimate educational attainment. Significant disparities exist in these environments, with profound and lasting impacts upon children's ultimate outcomes. Children from backgrounds of low socioeconomic status experience diminished language inputs and enter school at a disadvantage, with disparities persisting throughout their educational careers. Parents are positioned as powerful agents of change in their children's lives, however, and evidence indicates that parent-directed intervention is effective in improving child outcomes. This article explores the efficacy of parent-directed interventions and their potential applicability to the wider educational achievement gap seen in typically developing populations of low socioeconomic status and then describes efforts to develop such interventions with the Thirty Million Words Project and Project ASPIRE (Achieving Superior Parental Involvement for Rehabilitative Excellence) curricula.
Rational noncompliance with prescribed medical treatment.
Stewart, Douglas O; DeMarco, Joseph P
2010-09-01
Despite the attention that patient noncompliance has received from medical researchers, patient noncompliance remains poorly understood and difficult to alter. With a better theory of patient noncompliance, both greater success in achieving compliance and greater respect for patient decision making are likely. The theory presented, which uses a microeconomic approach, bridges a gap in the extant literature that has so far ignored the contributions of this classic perspective on decision making involving the tradeoff of costs and benefits. The model also generates a surprising conclusion: that patients are typically acting rationally when they refuse to comply with certain treatments. However, compliance is predicted to rise with increased benefits and reduced costs. The prediction that noncompliance is rational is especially true in chronic conditions at the point that treatment begins to move closer to the medically ideal treatment level. Although the details of this theory have not been tested empirically, it is well supported by existing prospective and retrospective studies.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
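A heavily simplified sketch of the general idea, joint filtering of a hidden state and an unknown static parameter with a particle filter, is shown below on a toy nonlinear model; it uses a small artificial jitter on the parameter rather than the sufficient-statistics update described in the abstract, and the toy model is not a hemodynamic (balloon) model.

```python
import numpy as np

# Toy bootstrap particle filter that tracks a hidden state and learns an
# unknown static parameter alongside it. Simplified sketch: the parameter is
# given a small artificial jitter rather than a sufficient-statistics update,
# and the model below is not a hemodynamic model.

rng = np.random.default_rng(0)

# Toy nonlinear state-space model: x_t = a*sin(x_{t-1}) + process noise,
# y_t = x_t^2 / 2 + observation noise. The coefficient 'a' is unknown.
TRUE_A, Q, R, T, N = 0.9, 0.1, 0.2, 100, 2000

x, ys = 0.0, []
for _ in range(T):
    x = TRUE_A * np.sin(x) + rng.normal(0, np.sqrt(Q))
    ys.append(x**2 / 2 + rng.normal(0, np.sqrt(R)))

# Particles carry both the state and the parameter estimate.
states = rng.normal(0, 1, N)
params = rng.uniform(0.0, 2.0, N)

for y in ys:
    params = params + rng.normal(0, 0.01, N)              # artificial jitter
    states = params * np.sin(states) + rng.normal(0, np.sqrt(Q), N)
    w = np.exp(-0.5 * (y - states**2 / 2) ** 2 / R) + 1e-300  # avoid underflow
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                      # multinomial resample
    states, params = states[idx], params[idx]

print("posterior mean of a:", params.mean(), "(true:", TRUE_A, ")")
```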
Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.
Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan
2011-11-01
Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
Asymmetry in mesial root number and morphology in mandibular second molars: a case report
Shetty, Shashit; Shekhar, Rhitu
2014-01-01
Ambiguity in the root morphology of the mandibular second molars is quite common. The most common root canal configuration is 2 roots and 3 canals, nonetheless other possibilities may still exist. The presence of accessory roots is an interesting example of anatomic root variation. While the presence of radix entomolaris or radix paramolaris is regarded as a typical clinical finding of a three-rooted mandibular second permanent molar, the occurrence of an additional mesial root is rather uncommon and represents a possibility of deviation from the regular norms. This case report describes successful endodontic management of a three-rooted mandibular second molar presenting with an unusual accessory mesial root, which was identified with the aid of multiangled radiographs and cone-beam computed tomography imaging. This article also discusses the prevalence, etiology, morphological variations, clinical approach to diagnosis, and significance of supernumerary roots in contemporary clinical dentistry. PMID:24516829
Monte-Carlo Tree Search in Settlers of Catan
NASA Astrophysics Data System (ADS)
Szita, István; Chaslot, Guillaume; Spronck, Pieter
Games are considered important benchmark opportunities for artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that achieved successful results with classical, two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches for providing the agent with a limited amount of domain knowledge. Our results show that the agent has a considerable playing strength when compared to a game implementation with existing heuristics. So, we may conclude that MCTS is a suitable tool for achieving a strong Settlers of Catan player.
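As a pointer to the core mechanism, the sketch below applies UCB1-guided random playouts to the toy game of Nim; it is a flat Monte-Carlo move evaluation rather than the full tree search (MCTS/UCT) used in the paper, and the game, exploration constant, and iteration count are illustrative choices only.

```python
import math, random

# Hedged, minimal sketch of Monte-Carlo move evaluation with UCB1 selection,
# applied to Nim (take 1-3 tokens; taking the last token wins). The paper's
# approach builds a full search tree (MCTS/UCT) for a multi-player,
# non-deterministic game; this flat version only illustrates the selection
# formula and random playouts.

def legal_moves(tokens):
    return [m for m in (1, 2, 3) if m <= tokens]

def random_playout(tokens, my_turn):
    # Play random moves to the end; return 1 if the root player wins.
    while tokens > 0:
        tokens -= random.choice(legal_moves(tokens))
        my_turn = not my_turn
    return 0 if my_turn else 1   # the player who just moved took the last token

def choose_move(tokens, iterations=5000, c=1.4):
    moves = legal_moves(tokens)
    wins = {m: 0 for m in moves}
    visits = {m: 0 for m in moves}
    for n in range(1, iterations + 1):
        # UCB1: exploit average reward, explore rarely tried moves.
        m = max(moves, key=lambda m: float("inf") if visits[m] == 0 else
                wins[m] / visits[m] + c * math.sqrt(math.log(n) / visits[m]))
        wins[m] += random_playout(tokens - m, my_turn=False)
        visits[m] += 1
    return max(moves, key=lambda m: visits[m])

print(choose_move(10))   # prints the move judged best by the playouts
```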
Flow Visualization at Cryogenic Conditions Using a Modified Pressure Sensitive Paint Approach
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Goad, William K.; Obara, Clifford J.; Sprinkle, Danny R.; Campbell, Richard L.; Carter, Melissa B.; Pendergraft, Odis C., Jr.; Bell, James H.; Ingram, JoAnne L.; Oglesby, Donald M.
2005-01-01
A modification to the Pressure Sensitive Paint (PSP) method was used to visualize streamlines on a Blended Wing Body (BWB) model at full-scale flight Reynolds numbers. In order to achieve these conditions, the tests were carried out in the National Transonic Facility operating under cryogenic conditions in a nitrogen environment. Oxygen is required for conventional PSP measurements, and several tests have been successfully completed in nitrogen environments by injecting small amounts (typically < 3000 ppm) of oxygen into the flow. A similar technique was employed here, except that air was purged through pressure tap orifices already existing on the model surface, resulting in changes in the PSP wherever oxygen was present. The results agree quite well with predictions obtained through computational fluid dynamics (CFD) analysis, showing this to be a viable technique for visualizing flows without resorting to more invasive procedures such as oil flow or minitufts.
Boss, R D; Lemmon, M E; Arnold, R M; Donohue, P K
2017-11-01
Delivering prognostic information to families requires clinicians to forecast an infant's illness course and future. We lack robust empirical data about how prognosis is shared and how that affects clinician-family concordance regarding infant outcomes. Prospective audiorecording of neonatal intensive care unit family conferences, immediately followed by parent/clinician surveys. Existing qualitative analysis frameworks were applied. We analyzed 19 conferences. Most prognostic discussion targeted predicted infant functional needs, for example, medications or feeding. There was little discussion of how infant prognosis would affect infant/family quality of life. Prognostic framing was typically optimistic. Most parents left the conference believing their infant's prognosis to be more optimistic than did clinicians. Clinician approach to prognostic disclosure in these audiotaped family conferences tended to be broad and optimistic, without detail regarding implications of infant health for infant/family quality of life. Families and clinicians left these conversations with little consensus about infant prognosis.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
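For concreteness, the sketch below calibrates a COCOMO-style effort model, effort = a * size^b * EM, by least squares in log space; the data and the single effort multiplier are fabricated for illustration, and the paper's point is precisely that such fits can be brittle on small, high-variance datasets unless variables and records are pruned and multiple evaluation criteria are used.

```python
import numpy as np

# Illustrative calibration of a COCOMO-style effort model,
# effort = a * size^b * EM, by ordinary least squares in log space.
# The data and the single effort multiplier are made up; they are not from the
# paper's datasets.

size = np.array([10, 25, 40, 60, 90, 120], dtype=float)        # KSLOC
em = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.3])                  # one multiplier
effort = np.array([35, 110, 160, 300, 420, 700], dtype=float)  # person-months

# log(effort) = log(a) + b*log(size) + 1*log(EM)  ->  fit log(a) and b.
y = np.log(effort) - np.log(em)
X = np.column_stack([np.ones_like(size), np.log(size)])
(log_a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print("a =", round(np.exp(log_a), 2), " b =", round(b, 2))
```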
Four-dimensional gravity as an almost-Poisson system
NASA Astrophysics Data System (ADS)
Ita, Eyo Eyo
2015-04-01
In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems.
Pregger, Thomas; Friedrich, Rainer
2009-02-01
Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of what is probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
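To show how the tabulated stack parameters feed into an effective emission height, the sketch below combines a physical stack height with a buoyancy-driven plume rise using one commonly cited Briggs-type formulation for neutral conditions; the abstract does not give the paper's exact equations, so the formulas, ambient conditions, and example stack are assumptions.

```python
# Hedged sketch of estimating an effective emission height from the stack
# parameters the paper provides defaults for (stack height, flue gas
# temperature, exit velocity, diameter). The buoyancy-flux and plume-rise
# equations below are one commonly used Briggs-type formulation for neutral
# conditions; they are not taken from the paper.

G = 9.81  # m/s^2

def effective_height(h_stack, d_stack, v_exit, t_gas, t_air=288.0, u_wind=5.0):
    """Physical stack height plus buoyancy-driven plume rise (metres)."""
    f_b = G * v_exit * d_stack**2 / 4.0 * (t_gas - t_air) / t_gas  # m^4/s^3
    if f_b < 55.0:
        rise = 21.425 * f_b**0.75 / u_wind
    else:
        rise = 38.71 * f_b**0.6 / u_wind
    return h_stack + rise

# Hypothetical power-plant stack: 150 m tall, 6 m diameter, 20 m/s, 420 K gas.
print(round(effective_height(150.0, 6.0, 20.0, 420.0), 1), "m")
```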
Recovery of Dysphagia in Lateral Medullary Stroke
Gupta, Hitesh; Banerjee, Alakananda
2014-01-01
Lateral medullary stroke is typically associated with increased likelihood of occurrence of dysphagia and exhibits the most severe and persistent form. Worldwide, little research exists on dysphagia in brainstem stroke. An estimated 15% of all patients admitted to stroke rehabilitation units experience a brainstem stroke, out of which about 47% suffer from dysphagia. In India, a study showed that 22.3% of posterior circulation stroke patients develop dysphagia. The dearth of literature on dysphagia and its outcome in brainstem stroke, particularly lateral medullary stroke, motivated the author to present an actual case study of a patient who had dysphagia following a lateral medullary infarct. This paper documents the severity and management approach of dysphagia in brainstem stroke, with traditional dysphagia therapy and VitalStim therapy. Despite being diagnosed with a severe form of dysphagia followed by late treatment intervention, the patient had complete recovery of the swallowing function. PMID:25045555
Bozan, Mahir; Akyol, Çağrı; Ince, Orhan; Aydin, Sevcan; Ince, Bahar
2017-09-01
The anaerobic digestion of lignocellulosic wastes is considered an efficient method for managing the world's energy shortages and resolving contemporary environmental problems. However, the recalcitrance of lignocellulosic biomass represents a barrier to maximizing biogas production. The purpose of this review is to examine the extent to which sequencing methods can be employed to monitor such biofuel conversion processes. From a microbial perspective, we present a detailed insight into anaerobic digesters that utilize lignocellulosic biomass and discuss some benefits and disadvantages associated with the microbial sequencing techniques that are typically applied. We further evaluate the extent to which a hybrid approach incorporating a variation of existing methods can be utilized to develop a more in-depth understanding of microbial communities. It is hoped that this deeper knowledge will enhance the reliability and extent of research findings with the end objective of improving the stability of anaerobic digesters that manage lignocellulosic biomass.
Giarola, A; Rolandi, L
1977-01-14
The nosological, clinical, aetiopathogenetic and therapeutic aspects of hyperandrogenic micropolycystic ovary are examined with particular reference to matrimonial sterility. There is no doubt about the existence of a syndrome substantially characterized, clinically, by menstrual trouble, inability to procreate, more or less evident signs of hyperandrogenism and a tendency to obesity and, morphologically, by ovarian micropolycystic alterations of typical pathognomonic aspect: the marked production of androgens on the part of the female gonad possibly accompanied by peripheral alterations interfering with their metabolism. The syndrome is not too frequent and, in personal experience, occurs in less than 1% of the series. The main therapeutic approach remains cuneiform resection of the ovary. Still in personal experience, 21.2% of cases treated led to pregnancy but not more than eight to ten months after the operation. The effect would therefore appear to be transitory and the operation is decisively rejected where unmarried women are involved.
Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging
Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.
2016-01-01
Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635
A multitasking general executive for compound continuous tasks.
Salvucci, Dario D
2005-05-06
As cognitive architectures move to account for increasingly complex real-world tasks, one of the most pressing challenges involves understanding and modeling human multitasking. Although a number of existing models now perform multitasking in real-world scenarios, these models typically employ customized executives that schedule tasks for the particular domain but do not generalize easily to other domains. This article outlines a general executive for the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture that, given independent models of individual tasks, schedules and interleaves the models' behavior into integrated multitasking behavior. To demonstrate the power of the proposed approach, the article describes an application to the domain of driving, showing how the general executive can interleave component subtasks of the driving task (namely, control and monitoring) and interleave driving with in-vehicle secondary tasks (radio tuning and phone dialing).
NASA Astrophysics Data System (ADS)
Hilfinger, Andreas; Chen, Mark; Paulsson, Johan
2012-12-01
Studies of stochastic biological dynamics typically compare observed fluctuations to theoretically predicted variances, sometimes after separating the intrinsic randomness of the system from the enslaving influence of changing environments. But variances have been shown to discriminate surprisingly poorly between alternative mechanisms, while for other system properties no approaches exist that rigorously disentangle environmental influences from intrinsic effects. Here, we apply the theory of generalized random walks in random environments to derive exact rules for decomposing time series and higher statistics, rather than just variances. We show for which properties and for which classes of systems intrinsic fluctuations can be analyzed without accounting for extrinsic stochasticity and vice versa. We derive two independent experimental methods to measure the separate noise contributions and show how to use the additional information in temporal correlations to detect multiplicative effects in dynamical systems.
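As background for readers, the sketch below reproduces the classic dual-reporter variance decomposition into intrinsic and extrinsic noise that this paper's time-series rules generalize; the estimators shown are the standard ones on synthetic data, not the paper's new decomposition of higher statistics.

```python
import numpy as np

# Classic dual-reporter decomposition of variability into intrinsic and
# extrinsic parts, shown only as the variance-level baseline that the paper's
# time-series rules generalize; these are the standard estimators, and the
# simulated data are synthetic.

rng = np.random.default_rng(1)
n = 100_000
extrinsic = rng.gamma(shape=20, scale=5, size=n)      # shared environment
x = rng.poisson(extrinsic)                            # reporter 1
y = rng.poisson(extrinsic)                            # reporter 2 (identical)

mx, my = x.mean(), y.mean()
eta_int2 = np.mean((x - y) ** 2) / (2 * mx * my)      # intrinsic noise^2
eta_ext2 = (np.mean(x * y) - mx * my) / (mx * my)     # extrinsic noise^2
print(eta_int2, eta_ext2)
```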
Murfee, Walter L.; Sweat, Richard S.; Tsubota, Ken-ichi; Gabhann, Feilim Mac; Khismatullin, Damir; Peirce, Shayn M.
2015-01-01
Microvascular network remodelling is a common denominator for multiple pathologies and involves both angiogenesis, defined as the sprouting of new capillaries, and network patterning associated with the organization and connectivity of existing vessels. Much of what we know about microvascular remodelling at the network, cellular and molecular scales has been derived from reductionist biological experiments, yet what happens when the experiments provide incomplete (or only qualitative) information? This review will emphasize the value of applying computational approaches to advance our understanding of the underlying mechanisms and effects of microvascular remodelling. Examples of individual computational models applied to each of the scales will highlight the potential of answering specific questions that cannot be answered using typical biological experimentation alone. Looking into the future, we will also identify the needs and challenges associated with integrating computational models across scales. PMID:25844149
A cochlear-bone wave can yield a hearing sensation as well as otoacoustic emission
Tchumatchenko, Tatjana; Reichenbach, Tobias
2014-01-01
A hearing sensation arises when the elastic basilar membrane inside the cochlea vibrates. The basilar membrane is typically set into motion through airborne sound that displaces the middle ear and induces a pressure difference across the membrane. A second, alternative pathway exists, however: stimulation of the cochlear bone vibrates the basilar membrane as well. This pathway, referred to as bone conduction, is increasingly used in headphones that bypass the ear canal and the middle ear. Furthermore, otoacoustic emissions, sounds generated inside the cochlea and emitted therefrom, may not involve the usual wave on the basilar membrane, suggesting that additional cochlear structures are involved in their propagation. Here we describe a novel propagation mode within the cochlea that emerges through deformation of the cochlear bone. Through a mathematical and computational approach we demonstrate that this propagation mode can explain bone conduction as well as numerous properties of otoacoustic emissions. PMID:24954736
Automated Data Cleansing in Data Harvesting and Data Migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Mark; Vowell, Lance; King, Ian
2011-03-16
In the proposal for this project, we noted how the explosion of digitized information available through corporate databases, data stores and online search systems has resulted in the knowledge worker being bombarded by information. Knowledge workers typically spend more than 20-30% of their time seeking and sorting information, only finding the information 50-60% of the time. This information exists as unstructured, semi-structured and structured data. The problem of information overload is compounded by the production of duplicate or near-duplicate information. In addition, near-duplicate items frequently have different origins, creating a situation in which each item may have unique information of value, but their differences are not significant enough to justify maintaining them as separate entities. Effective tools can be provided to eliminate duplicate and near-duplicate information. The proposed approach was to extract unique information from data sets and consolidate that information into a single comprehensive file.
Lawver, Timothy; Blankenship, Kelly
2008-01-01
Play therapy is a treatment modality in which the therapist engages in play with the child. Its use has been documented in a variety of settings and with a variety of diagnoses. Treating within the context of play brings the therapist and the therapy to the level of the child. By way of an introduction to this approach, a case is presented of a six-year-old boy with oppositional defiant disorder. The presentation focuses on the events and interactions of a typical session with an established patient. The primary issues of the session are aggression, self-worth, and self-efficacy. These themes manifest themselves through the content of the child’s play and narration of his actions. The therapist then reflects these back to the child while gently encouraging the child toward more positive play. Though the example is one of nondirective play therapy, a wide range of variation exists under the heading of play therapy. PMID:19724720
Parladé, Meaghan V.; Iverson, Jana M.
2012-01-01
From a dynamic systems perspective, transition points in development are times of increased instability, during which behavioral patterns are susceptible to temporary decoupling. This study investigated the impact of the vocabulary spurt on existing patterns of communicative coordination. Eighteen typically developing infants were videotaped at home 1 month before, at, and after the vocabulary spurt. Infants were identified as spurters if they underwent a discrete phase transition in vocabulary development (marked by an inflection point), and compared with a group of nonspurters whose word-learning rates followed a trajectory of continuous change. Relative to surrounding sessions, there were significant reductions in overall coordination of communicative behaviors and in words produced in coordination at the vocabulary spurt session for infants who experienced more dramatic vocabulary growth. In contrast, nonspurters demonstrated little change across sessions. Findings underscore the importance of transitions as opportunities for observing processes of developmental change. PMID:21219063
Discovering relevance knowledge in data: a growing cell structures approach.
Azuaje, F; Dubitzky, W; Black, N; Adamson, K
2000-01-01
Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.
[Self-Reflection From Group Dialogue: The Lived Experience of Psychiatric/Mental Health Nurses].
Chiang, Hsien-Hsien
2015-08-01
Self-reflection is an essential element of reflective practice for group facilitators. However, this element typically exists largely at the personal level and is not addressed in group dialogues of nurses. The purpose of this study was to explore the self-reflection of psychiatric nurses in a supervision group. A phenomenological approach was used to investigate the dialogues across 12 sessions in terms of discussion content and the reflective journals of the psychiatric nurse participants. The findings showed two forms of self-reflection: embodied self-reflection, derived from physical sensibility, and discursive self-reflection, derived from the group dialogues. The embodied and discursive self-reflections promote self-awareness in nurses. The embodiment and initiation in the group facilitate the process of self-becoming through the group dialogue, which promotes self-examination and self-direction in healthcare professionals.
Risk and utility in portfolio optimization
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Natoli, Vincent D.
2003-06-01
Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
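The sketch below evaluates the two criteria for one candidate allocation under the Gaussian assumption mentioned in the abstract: risk as the probability of falling short of a goal, and utility as the expectation of a concave function of yield; the log utility, yield distribution, and goal are illustrative assumptions rather than the paper's specific choices.

```python
import math

# Illustrative computation of the two criteria for one allocation: "risk" as
# the probability of ending below an investment goal, "utility" as the
# expectation of a concave utility of yield. A Gaussian terminal yield and a
# log utility are assumptions for illustration, not the paper's exact forms.

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def risk_and_utility(mean_yield, std_yield, goal, n=20001):
    """Risk = P(yield < goal); utility = E[log(1 + yield)] by quadrature."""
    risk = norm_cdf((goal - mean_yield) / std_yield)
    lo, hi = mean_yield - 6 * std_yield, mean_yield + 6 * std_yield
    h = (hi - lo) / n
    utility = 0.0
    for i in range(n):                       # midpoint rule over +/- 6 sigma
        y = lo + (i + 0.5) * h
        pdf = math.exp(-0.5 * ((y - mean_yield) / std_yield) ** 2) / (
            std_yield * math.sqrt(2 * math.pi))
        utility += math.log(max(1.0 + y, 1e-9)) * pdf * h   # crude tail guard
    return risk, utility

print(risk_and_utility(mean_yield=0.8, std_yield=0.5, goal=0.5))
```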
Models and methods of emotional concordance.
Hollenstein, Tom; Lanteigne, Dianna
2014-04-01
Theories of emotion generally posit the synchronized, coordinated, and/or emergent combination of psychophysiological, cognitive, and behavioral components of the emotion system--emotional concordance--as a functional definition of emotion. However, the empirical support for this claim has been weak or inconsistent. As an introduction to this special issue on emotional concordance, we consider three domains of explanations as to why this theory-data gap might exist. First, theory may need to be revised to more accurately reflect past research. Second, there may be moderating factors such as emotion regulation, context, or individual differences that have obscured concordance. Finally, the methods typically used to test theory may be inadequate. In particular, we review a variety of potential issues: intensity of emotions elicited in the laboratory, nonlinearity, between- versus within-subject associations, the relative timing of components, bivariate versus multivariate approaches, and diversity of physiological processes. Copyright © 2013 Elsevier B.V. All rights reserved.
Beyond pairwise strategy updating in the prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Wang, Xiaofeng; Perc, Matjaž; Liu, Yongkui; Chen, Xiaojie; Wang, Long
2012-10-01
In spatial games players typically alter their strategy by imitating the most successful or one randomly selected neighbor. Since a single neighbor is taken as reference, the information stemming from other neighbors is neglected, which begets the consideration of alternative, possibly more realistic approaches. Here we show that strategy changes inspired not only by the performance of individual neighbors but rather by entire neighborhoods introduce a qualitatively different evolutionary dynamics that is able to support the stable existence of very small cooperative clusters. This leads to phase diagrams that differ significantly from those obtained by means of pairwise strategy updating. In particular, the survivability of cooperators is possible even at high temptations to defect and over a much wider uncertainty range. We support the simulation results by means of pair approximations and analysis of spatial patterns, which jointly highlight the importance of local information for the resolution of social dilemmas.
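The sketch below illustrates one plausible neighborhood-level update rule on a small lattice: a focal player compares its payoff with the average payoff of neighbors holding the opposite strategy and switches with a Fermi-like probability; the paper's precise rule, payoff matrix, and parameters may differ, so this is only a schematic of using the whole neighborhood instead of one randomly chosen neighbor.

```python
import math, random

# Hedged sketch of neighborhood-inspired strategy updating in a spatial
# prisoner's dilemma on a periodic lattice. The update rule, lattice size, and
# parameters are illustrative choices, not the paper's exact model.

L, B, K = 20, 1.05, 0.1        # lattice size, temptation to defect, noise
random.seed(0)
grid = [[random.choice("CD") for _ in range(L)] for _ in range(L)]

def neighbors(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(i, j):
    s, total = grid[i][j], 0.0
    for ni, nj in neighbors(i, j):
        total += {"CC": 1.0, "CD": 0.0, "DC": B, "DD": 0.0}[s + grid[ni][nj]]
    return total

for step in range(20000):
    i, j = random.randrange(L), random.randrange(L)
    by_strategy = {"C": [], "D": []}
    for ni, nj in neighbors(i, j):
        by_strategy[grid[ni][nj]].append(payoff(ni, nj))
    other = "D" if grid[i][j] == "C" else "C"
    if by_strategy[other]:                      # neighborhood holds the other strategy
        avg_other = sum(by_strategy[other]) / len(by_strategy[other])
        delta = avg_other - payoff(i, j)
        if random.random() < 1.0 / (1.0 + math.exp(-delta / K)):
            grid[i][j] = other

print("cooperator fraction:",
      sum(row.count("C") for row in grid) / (L * L))
```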
A performance study of live VM migration technologies: VMotion vs XenMotion
NASA Astrophysics Data System (ADS)
Feng, Xiujie; Tang, Jianxiong; Luo, Xuan; Jin, Yaohui
2011-12-01
Due to the growing demand for flexible resource management in cloud computing services, research on live virtual machine migration has attracted more and more attention. Live migration of virtual machines across different hosts has become a powerful tool to facilitate system maintenance, load balancing, fault tolerance and so on. In this paper, we use a measurement-based approach to compare the performance of two major live migration technologies under certain network conditions, i.e., VMotion and XenMotion. The results show that VMotion transfers much less data than XenMotion when migrating identical VMs. However, in networks with moderate packet loss and delay, which are typical in a VPN (virtual private network) scenario used to connect data centers, XenMotion outperforms VMotion in total migration time. We hope that this study can be helpful in choosing suitable virtualization environments for data center administrators and optimizing existing live migration mechanisms.
Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2015-01-01
An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
The Missions of National Commissions: Mapping the Forms and Functions of Bioethics Advisory Bodies.
Schmidt, Harald; Schwartz, Jason L
The findings, conclusions, and recommendations of national ethics commissions (NECs) have received considerable attention throughout the 40-year history of these groups in the United States and worldwide. However, the procedures or types of argument by which these bodies arrive at their decisions have received far less scrutiny. This paper explores how the diversity of ethical principles, concepts, or theories is featured in publications or decisions of these bodies, with particular emphasis on the need for NECs to be inclusive of pluralist positions that typically exist in contemporary democracies. The discussion is centered on the extent to which NECs may focus on providing focal frameworks, primarily framing the ethical issues at stake, or normative frameworks, additionally providing transparent justifications for any conclusions and recommendations that are made. The structure allows for assessments of the relative merits and drawbacks of different approaches in both theory and practice.
Safe teleradiology: information assurance as project planning methodology
NASA Astrophysics Data System (ADS)
Collmann, Jeff R.; Alaoui, Adil; Nguyen, Dan; Lindisch, David
2003-05-01
This project demonstrates use of OCTAVE, an information security risk assessment method, as an approach to the safe design and planning of a teleradiology system. By adopting this approach to project planning, we intended to provide evidence that including information security as an intrinsic component of project planning improves information assurance and that using information assurance as a planning tool produces and improves the general system management plan. Several considerations justify this approach to planning a safe teleradiology system. First, because OCTAVE was designed as a method for retrospectively assessing and proposing enhancements for the security of existing information management systems, it should function well as a guide to prospectively designing and deploying a secure information system such as teleradiology. Second, because OCTAVE provides assessment and planning tools for use primarily by interdisciplinary teams from user organizations, not consultants, it should enhance the ability of such teams at the local level to plan safe information systems. Third, from the perspective of sociological theory, OCTAVE explicitly attempts to enhance organizational conditions identified as necessary to safely manage complex technologies. Approaching information system design from the perspective of information security risk management proactively integrates health information assurance into a project's core. This contrasts with typical approaches that perceive "security" as a secondary attribute to be "added" after designing the system and with approaches that identify information assurance only with security devices and user training. The perspective of health information assurance embraces so many dimensions of a computerized health information system's design that one may successfully deploy a method for retrospectively assessing information security risk as a prospective planning tool. From a sociological perspective, this approach enhances the general conditions as well as establishes specific policies and procedures for reliable performance of health information assurance.
Liu, Dungang; Liu, Regina; Xie, Minge
2014-01-01
Meta-analysis has been widely used to synthesize evidence from multiple studies for common hypotheses or parameters of interest. However, it has not yet been fully developed for incorporating heterogeneous studies, which arise often in applications due to different study designs, populations or outcomes. For heterogeneous studies, the parameter of interest may not be estimable for certain studies, and in such a case, these studies are typically excluded from conventional meta-analysis. The exclusion of part of the studies can lead to a non-negligible loss of information. This paper introduces a meta-analysis for heterogeneous studies by combining the confidence density functions derived from the summary statistics of individual studies, hence referred to as the CD approach. It includes all the studies in the analysis and makes use of all information, direct as well as indirect. Under a general likelihood inference framework, this new approach is shown to have several desirable properties, including: i) it is asymptotically as efficient as the maximum likelihood approach using individual participant data (IPD) from all studies; ii) unlike the IPD analysis, it suffices to use summary statistics to carry out the CD approach. Individual-level data are not required; and iii) it is robust against misspecification of the working covariance structure of the parameter estimates. Besides its own theoretical significance, the last property also substantially broadens the applicability of the CD approach. All the properties of the CD approach are further confirmed by data simulated from a randomized clinical trials setting as well as by real data on aircraft landing performance. Overall, one obtains a unifying approach for combining summary statistics, subsuming many of the existing meta-analysis methods as special cases. PMID:26190875
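In the simplest special case, where each study's confidence density is a normal density centred on its estimate with the reported standard error, combining the densities reduces to the familiar inverse-variance-weighted estimate; the sketch below shows only that special case with made-up numbers, whereas the CD framework also covers heterogeneous, non-normal summaries.

```python
# Special case of combining confidence densities: with normal confidence
# densities, the combined estimate is the inverse-variance-weighted mean.
# The study estimates and standard errors below are made up for illustration.

studies = [  # (estimate, standard error) from each study's summary statistics
    (0.42, 0.10),
    (0.30, 0.08),
    (0.55, 0.20),
]

weights = [1.0 / se**2 for _, se in studies]
combined = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
combined_se = (1.0 / sum(weights)) ** 0.5
print(round(combined, 3), "+/-", round(combined_se, 3))
```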
Doing more with less - The new way of exploring the solar system
NASA Technical Reports Server (NTRS)
Ridenoure, Rex
1992-01-01
Exploration of the solar system is considered in the light of existing economic factors and scientific priorities, and a general blueprint for an exploration strategy is set forth. Attention is given to mission costs, typical schedules, and the scientific findings of typical projects which create the need for collaboration and diversification in mission development. The combined technologies and cooperative efforts of several small organizations can lead to missions with short schedules and low costs.
Dietary Exposure Potential Model
Existing food consumption and contaminant residue databases, typically products of nutrition and regulatory monitoring, contain useful information to characterize dietary intake of environmental chemicals. A PC-based model with resident database system, termed the Die...
Habitability in different Milky Way stellar environments: a stellar interaction dynamical approach.
Jiménez-Torres, Juan J; Pichardo, Bárbara; Lake, George; Segura, Antígona
2013-05-01
Every Galactic environment is characterized by a stellar density and a velocity dispersion. With this information from literature, we simulated flyby encounters for several Galactic regions, numerically calculating stellar trajectories as well as orbits for particles in disks; our aim was to understand the effect of typical stellar flybys on planetary (debris) disks in the Milky Way Galaxy. For the solar neighborhood, we examined nearby stars with known distance, proper motions, and radial velocities. We found occurrence of a disturbing impact to the solar planetary disk within the next 8 Myr to be highly unlikely; perturbations to the Oort cloud seem unlikely as well. Current knowledge of the full phase space of stars in the solar neighborhood, however, is rather poor; thus we cannot rule out the existence of a star that is more likely to approach than those for which we have complete kinematic information. We studied the effect of stellar encounters on planetary orbits within the habitable zones of stars in more crowded stellar environments, such as stellar clusters. We found that in open clusters habitable zones are not readily disrupted; this is true if they evaporate in less than 10^8 yr. For older clusters the results may not be the same. We specifically studied the case of Messier 67, one of the oldest open clusters known, and show the effect of this environment on debris disks. We also considered the conditions in globular clusters, the Galactic nucleus, and the Galactic bulge-bar. We calculated the probability of whether Oort clouds exist in these Galactic environments.
Basic Emotions in Human Neuroscience: Neuroimaging and Beyond.
Celeghin, Alessia; Diano, Matteo; Bagnis, Arianna; Viola, Marco; Tamietto, Marco
2017-01-01
The existence of so-called 'basic emotions' and their defining attributes represents a long lasting and yet unsettled issue in psychology. Recently, neuroimaging evidence, especially related to the advent of neuroimaging meta-analytic methods, has revitalized this debate in the endeavor of systems and human neuroscience. The core theme focuses on the existence of unique neural bases that are specific and characteristic for each instance of basic emotion. Here we review this evidence, outlining contradictory findings, strengths and limits of different approaches. Constructionism dismisses the existence of dedicated neural structures for basic emotions, considering that the assumption of a one-to-one relationship between neural structures and their functions is central to basic emotion theories. While these critiques are useful to pinpoint current limitations of basic emotions theories, we argue that they do not always appear equally generative in fostering new testable accounts on how the brain relates to affective functions. We then consider evidence beyond PET and fMRI, including results concerning the relation between basic emotions and awareness and data from neuropsychology on patients with focal brain damage. Evidence from lesion studies is indeed particularly informative, as such studies are able to bring correlational evidence typical of neuroimaging studies to causation, thereby characterizing which brain structures are necessary for, rather than simply related to, basic emotion processing. These other studies shed light on attributes often ascribed to basic emotions, such as automaticity of perception, quick onset, and brief duration. Overall, we consider that evidence in favor of the neurobiological underpinnings of basic emotions outweighs dismissive approaches. In fact, the concept of basic emotions can still be fruitful, if updated to current neurobiological knowledge that overcomes traditional one-to-one localization of functions in the brain. In particular, we propose that the structure-function relationship between brain and emotions is better described in terms of pluripotentiality, which refers to the fact that one neural structure can fulfill multiple functions, depending on the functional network and pattern of co-activations displayed at any given moment.
ENKI - An Open Source environmental modelling platform
NASA Astrophysics Data System (ADS)
Kolberg, S.; Bruland, O.
2012-04-01
The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model, and to provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration, uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke subroutines that are separately compiled as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface that allows the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, that time series exist for input variables, that states are initialised, that GIS data sets exist for static map data, and that parameter values are calibrated manually or automatically. By using function calls and in-memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25,000 grid cells, 3-4 simulated time steps per second can be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but the aim is to remove these platform and compiler dependencies.
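As an illustration of the plug-in pattern described above, the sketch below shows, in Python rather than ENKI's actual C++/DLL interface, how a routine might declare its inputs, states, and parameters so that a generic framework can allocate the data, wire the variables together, and call every routine once per time step. All class and variable names are hypothetical.

    # Hypothetical sketch of the plug-in pattern: each routine declares its
    # variables so a framework can allocate states, supply forcing maps, and
    # call the routine once per time step. Not ENKI's actual interface.
    class Routine:
        inputs, states, params = [], [], {}
        def step(self, data):            # data: dict of variable name -> per-cell list
            raise NotImplementedError

    class DegreeDaySnow(Routine):
        inputs = ["temp", "precip"]      # forcing maps the framework must supply
        states = ["swe"]                 # state map the framework must allocate
        params = {"ddf": 3.0, "t_melt": 0.0}   # calibratable parameters
        def step(self, data):
            ddf, t0 = self.params["ddf"], self.params["t_melt"]
            melt = [max(0.0, ddf * (t - t0)) for t in data["temp"]]
            data["swe"] = [max(0.0, s + p - m)
                           for s, p, m in zip(data["swe"], data["precip"], melt)]

    def run(routines, forcing_series, n_cells):
        # framework loop: allocate declared states, then call each routine per step
        data = {name: [0.0] * n_cells for r in routines for name in r.states}
        for forcing in forcing_series:   # one dict of input maps per time step
            data.update(forcing)
            for r in routines:
                r.step(data)
        return data

    print(run([DegreeDaySnow()],
              [{"temp": [1.5, -2.0], "precip": [4.0, 4.0]}], n_cells=2))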
Huang, Xinyi; Li, Fan; Chen, Jiakuan
2016-03-01
Although China has established more than 600 wetland nature reserves, conservation gaps still exist for many species, especially for freshwater fishes. Underlying this problem is the fact that top-level planning is missing in the construction of nature reserves. To promote the development of nature reserves for fishes, this study took the middle and lower reaches of the Yangtze River basin (MLYRB) as an example to carry out top-level reserve network planning for fishes using systematic conservation planning approaches. Typical fish species living in freshwater habitats were defined and considered in the planning. Based on sample data collected from a large body of literature, continuous distribution patterns of 142 fishes were obtained with species distribution modeling and subsequent processing, and the distributions of another eleven species were designated manually. With these distribution patterns, Marxan was used to carry out the conservation planning. To obtain ideal solutions with representativeness, persistence, and efficiency, parameters were set with careful consideration of existing wetland reserves, human disturbances, hydrological connectivity, and representation targets of species. Marxan produced the selection frequency of planning units (PUs) and a best solution. Selection frequency indicates the relative protection importance of a PU. The best solution is a representative of ideal fish reserve networks. Both the PUs with high selection frequency and those in the best solution are only poorly covered by existing wetland nature reserves, suggesting that there are significant conservation gaps for fish species in the MLYRB. The best solution could serve as a reference for establishing a fish reserve network in the MLYRB. There is great flexibility for replacing selected PUs in the solution, and this flexibility facilitates implementing the solution in practice when unexpected obstacles arise. Further, we suggest adopting a freshwater management framework in the implementation of such a solution.
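The selection-frequency idea can be illustrated with a toy stand-in for the kind of algorithm Marxan runs (Marxan itself uses simulated annealing with cost and boundary-length penalties, not reproduced here): repeat a randomized greedy selection of planning units until per-species representation targets are met, and record how often each unit is chosen. All data in the sketch are invented.

    # Toy illustration of selection frequency (not Marxan itself): repeat a
    # randomized greedy selection meeting per-species targets, then count how
    # often each planning unit (PU) is chosen across runs.
    import random
    from collections import Counter

    def greedy_reserve(pu_species, targets, rng):
        remaining = dict(targets)                 # species -> occurrences still needed
        chosen, pus = set(), list(pu_species)
        rng.shuffle(pus)                          # randomize tie-breaking between runs
        while any(v > 0 for v in remaining.values()):
            # pick the PU covering the most still-unmet targets
            best = max((p for p in pus if p not in chosen),
                       key=lambda p: sum(1 for s in pu_species[p]
                                         if remaining.get(s, 0) > 0))
            chosen.add(best)
            for s in pu_species[best]:
                if remaining.get(s, 0) > 0:
                    remaining[s] -= 1
        return chosen

    def selection_frequency(pu_species, targets, n_runs=100, seed=0):
        rng, counts = random.Random(seed), Counter()
        for _ in range(n_runs):
            counts.update(greedy_reserve(pu_species, targets, rng))
        return {p: counts[p] / n_runs for p in pu_species}

    pu_species = {"A": {"carp", "bream"}, "B": {"carp"},
                  "C": {"bream", "gudgeon"}, "D": {"gudgeon"}}
    print(selection_frequency(pu_species, {"carp": 1, "bream": 1, "gudgeon": 1}))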
Implementing bicycle improvements at the local level
DOT National Transportation Integrated Search
1998-09-01
This implementation manual is intended for local governments that want to make improvements to existing conditions that affect bicycling. Thirteen of the most typical situations or factors that affect bicycle use are considered.
Achieving Airport Carbon Neutrality
DOT National Transportation Integrated Search
2016-03-01
This report is a guide for airports that wish to reduce or eliminate greenhouse gas (GHG) emissions from existing buildings and operations. Reaching carbon neutrality typically requires the use of multiple mechanisms to first minimize energy consumpt...
Creativity and Entrepreneurship: How Do They Relate?
ERIC Educational Resources Information Center
Whiting, Bruce G.
1988-01-01
Research reports are reviewed which illustrate the close ties between creativity and entrepreneurship. Comparisons of the characteristics and behavior typical of creative individuals and entrepreneurial individuals indicate the existence of striking similarities. (JDD)
NASA Astrophysics Data System (ADS)
Nicgorski, Dana; Avitabile, Peter
2010-07-01
Frequency-based substructuring is a very popular approach for the generation of system models from measured component data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can cause difficulties and lead to problems with the system response prediction. In order to produce good results, extreme care is needed in measuring the drive-point and transfer impedances of the structure, as well as in observing all the conditions for a linear time-invariant system. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models that replicate the problems typically encountered. The use of analytically simulated issues helps to clearly see the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations on the data to be used for this modeling approach.
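The coupling step at the heart of frequency-based substructuring can be written compactly with Lagrange multipliers: for the block-diagonal matrix H of uncoupled component FRFs at a given frequency and a signed Boolean matrix B enforcing interface compatibility, the coupled FRFs are Hc = H - H B^T (B H B^T)^-1 B H. The sketch below is a minimal numerical illustration of that standard formula, not the authors' test procedure; the two single-degree-of-freedom components are invented.

    # Minimal sketch of the standard Lagrange-multiplier FBS coupling step.
    import numpy as np

    def lm_fbs_couple(H, B):
        """Couple component FRFs H (n x n) using compatibility matrix B (m x n)."""
        interface = B @ H @ B.T                      # m x m interface flexibility
        return H - H @ B.T @ np.linalg.solve(interface, B @ H)

    w = 2 * np.pi * 10.0                             # evaluate at 10 Hz
    def frf(m, k, c):                                # receptance of a 1-DOF oscillator
        return 1.0 / (-m * w**2 + 1j * c * w + k)

    H = np.diag([frf(1.0, 1e4, 5.0), frf(2.0, 2e4, 8.0)])   # uncoupled, block-diagonal
    B = np.array([[1.0, -1.0]])                      # enforce u1 - u2 = 0 at interface
    print(lm_fbs_couple(H, B))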
The Structure of Borders in a Small World
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970
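As a rough illustration of the general idea (the paper develops its own border-extraction and link-significance techniques, which are not reproduced here), one can partition a weighted mobility network into modules and flag links running between modules as candidate effective borders, for example with networkx's modularity-based community detection on an invented toy network:

    # Rough illustration only: partition a weighted mobility network into
    # modules and flag inter-module links as candidate "effective borders".
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()                                          # edge weights = trip counts
    G.add_weighted_edges_from([
        ("a", "b", 90), ("b", "c", 80), ("a", "c", 70),     # tightly linked cluster
        ("d", "e", 85), ("e", "f", 75), ("d", "f", 65),     # second cluster
        ("c", "d", 5),                                      # weak inter-cluster flow
    ])

    communities = greedy_modularity_communities(G, weight="weight")
    label = {node: i for i, comm in enumerate(communities) for node in comm}

    border_links = [(u, v) for u, v in G.edges() if label[u] != label[v]]
    print("modules:", [sorted(c) for c in communities])
    print("candidate border links:", border_links)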
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitts, D.R.
1980-09-30
A conceptual design study for district heating of a 30-home subdivision located near the southeast extremity of the city of Elko, Nevada is presented. While a specific residential community was used in the study, the overall approach and methodologies are believed to be generally applicable for a large number of communities where low temperature geothermal fluid is available. The proposed district heating system utilizes moderate temperature, clean domestic water and existing community culinary water supply lines. The culinary water supply is heated by a moderate temperature geothermal source using a single heat exchanger at entry to the subdivision. The heated culinary water is then pumped to the houses in the community where energy is extracted by means of a water supplied heat pump. The use of heat pumps at the individual houses allows economic heating to result from supply of relatively cool water to the community, and this precludes the necessity of supplying objectionably hot water for normal household consumption use. Each heat pump unit is isolated from the consumptive water flow such that contamination of the water supply is avoided. The community water delivery system is modified to allow recirculation within the community, and very little rework of existing water lines is required. The entire system coefficient of performance (COP) for a typical year of heating is 3.36, exclusive of well pumping energy.
ERIC Educational Resources Information Center
Swarlis, Linda L.
2008-01-01
The test scores of spatial ability for women lag behind those of men in many spatial tests. On the Mental Rotations Test (MRT), a significant gender gap has existed for over 20 years and continues to exist. High spatial ability has been linked to efficiencies in typical computing tasks including Web and database searching, text editing, and…
CSAX: Characterizing Systematic Anomalies in eXpression Data.
Noto, Keith; Majidi, Saeed; Edlow, Andrea G; Wick, Heather C; Bianchi, Diana W; Slonim, Donna K
2015-05-01
Methods for translating gene expression signatures into clinically relevant information have typically relied upon having many samples from patients with similar molecular phenotypes. Here, we address the question of what can be done when it is relatively easy to obtain healthy patient samples, but when abnormalities corresponding to disease states may be rare and one-of-a-kind. The associated computational challenge, anomaly detection, is a well-studied machine-learning problem. However, due to the dimensionality and variability of expression data, existing methods based on feature space analysis or individual anomalously expressed genes are insufficient. We present a novel approach, CSAX, that identifies pathways in an individual sample in which the normal expression relationships are disrupted. To evaluate our approach, we have compiled and released a compendium of public expression data sets, reformulated to create a test bed for anomaly detection. We demonstrate the accuracy of CSAX on the data sets in our compendium, compare it to other leading methods, and show that CSAX aids in both identifying anomalies and explaining their underlying biology. We describe an approach to characterizing the difficulty of specific expression anomaly detection tasks. We then illustrate CSAX's value in two developmental case studies. Confirming prior hypotheses, CSAX highlights disruption of platelet activation pathways in a neonate with retinopathy of prematurity and identifies, for the first time, dysregulated oxidative stress response in second trimester amniotic fluid of fetuses with obese mothers. Our approach provides an important step toward identification of individual disease patterns in the era of precision medicine.
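A much-simplified illustration of pathway-level anomaly scoring (this is not the CSAX algorithm, which models disrupted expression relationships within pathways): z-score each gene of the test sample against a healthy reference compendium and summarize the absolute z-scores within each gene set. All data and gene-set names are invented.

    # Simplified pathway-level anomaly scoring against a healthy compendium.
    import numpy as np

    def pathway_anomaly_scores(healthy, sample, gene_sets):
        """healthy: (n_samples, n_genes); sample: (n_genes,); gene_sets: name -> indices."""
        mu = healthy.mean(axis=0)
        sd = healthy.std(axis=0, ddof=1) + 1e-9          # avoid division by zero
        z = (sample - mu) / sd
        return {name: float(np.mean(np.abs(z[idx]))) for name, idx in gene_sets.items()}

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(40, 6))         # 40 healthy samples, 6 genes
    sample = np.array([0.1, -0.2, 0.0, 3.5, 4.0, 3.8])   # genes 3-5 look disrupted
    gene_sets = {"platelet_activation": [3, 4, 5], "housekeeping": [0, 1, 2]}
    print(pathway_anomaly_scores(healthy, sample, gene_sets))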
Schwientek, Marc; Rügner, Hermann; Scherer, Ulrike; Rode, Michael; Grathwohl, Peter
2017-12-01
The contamination of riverine sediments and suspended matter with hydrophobic pollutants is typically associated with urban land use. However, it is rarely related to the sediment supply of the watershed, because sediment yield data are often missing. We show for a suite of watersheds in two regions of Germany with contrasting land use and geology that the contamination of suspended particles with polycyclic aromatic hydrocarbons (PAH) can be explained by the ratio of inhabitants residing within the watershed and the watershed's sediment yield. The modeling of sediment yields is based on the Revised Universal Soil Loss Equation (RUSLE2015, Panagos et al., 2015) and the sediment delivery ratio (SDR). The applicability of this approach is demonstrated for watersheds ranging in size from 1.4 to 3000 km². The approach implies that the loading of particles with PAH can be assumed as time invariant. This is indicated by additional long-term measurements from sub-watersheds of the upper River Neckar basin, Germany. The parsimonious conceptual approach allows for reasonable predictions of the PAH loading of suspended sediments especially at larger scales. Our findings may easily be used to estimate the vulnerability of river systems to particle-associated urban pollutants with similar input pathways as the PAH or to indicate if contaminant point sources such as sites of legacy pollution exist in a river basin. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
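The stated relationship can be turned into a back-of-the-envelope estimate: sediment yield is RUSLE gross soil loss times the sediment delivery ratio, and the PAH loading of suspended particles is taken to be proportional to inhabitants per unit sediment yield. The proportionality constant and all input numbers in the sketch below are hypothetical and would have to be fitted to monitoring data.

    # Back-of-the-envelope sketch of the stated scaling; coefficient is hypothetical.
    def sediment_yield_t_per_yr(rusle_soil_loss_t_ha_yr, area_ha, sdr):
        return rusle_soil_loss_t_ha_yr * area_ha * sdr

    def pah_loading_mg_per_kg(inhabitants, sediment_yield, a=0.5):
        # a: empirical proportionality constant, to be fitted to observed loadings
        return a * inhabitants / sediment_yield

    sy = sediment_yield_t_per_yr(rusle_soil_loss_t_ha_yr=4.0, area_ha=30000, sdr=0.1)
    print(f"sediment yield: {sy:.0f} t/yr,",
          f"predicted loading: {pah_loading_mg_per_kg(20000, sy):.2f} mg/kg")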
Random mutagenesis by error-prone pol plasmid replication in Escherichia coli.
Alexander, David L; Lilly, Joshua; Hernandez, Jaime; Romsdahl, Jillian; Troll, Christopher J; Camps, Manel
2014-01-01
Directed evolution is an approach that mimics natural evolution in the laboratory with the goal of modifying existing enzymatic activities or of generating new ones. The identification of mutants with desired properties involves the generation of genetic diversity coupled with a functional selection or screen. Genetic diversity can be generated using PCR or using in vivo methods such as chemical mutagenesis or error-prone replication of the desired sequence in a mutator strain. In vivo mutagenesis methods facilitate iterative selection because they do not require cloning, but generally produce a low mutation density with mutations not restricted to specific genes or areas within a gene. For this reason, this approach is typically used to generate new biochemical properties when large numbers of mutants can be screened or selected. Here we describe protocols for an advanced in vivo mutagenesis method that is based on error-prone replication of a ColE1 plasmid bearing the gene of interest. Compared to other in vivo mutagenesis methods, this plasmid-targeted approach allows increased mutation loads and facilitates iterative selection approaches. We also describe the mutation spectrum for this mutagenesis methodology in detail, and, using cycle 3 GFP as a target for mutagenesis, we illustrate the phenotypic diversity that can be generated using our method. In sum, error-prone Pol I replication is a mutagenesis method that is ideally suited for the evolution of new biochemical activities when a functional selection is available.
Adaptive skin segmentation via feature-based face detection
NASA Astrophysics Data System (ADS)
Taylor, Michael J.; Morris, Tim
2014-05-01
Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
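A minimal sketch of the pipeline described above, assuming OpenCV's bundled frontal-face Haar cascade as the Viola-Jones detector; the luma-percentile filter and the final threshold are illustrative choices rather than the authors' settings, and the input file name is hypothetical.

    # Minimal sketch: face detection -> luma filtering -> Gaussian model in rg space.
    import cv2
    import numpy as np
    from scipy.stats import multivariate_normal

    def skin_probability_map(bgr, luma_keep=(0.05, 0.95)):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=8)

        samples = []                                   # likely-skin pixels from face boxes
        for (x, y, w, h) in faces:
            patch = bgr[y:y + h, x:x + w].reshape(-1, 3).astype(np.float64)
            luma = 0.114 * patch[:, 0] + 0.587 * patch[:, 1] + 0.299 * patch[:, 2]
            lo, hi = np.quantile(luma, luma_keep)      # drop dark features and highlights
            samples.append(patch[(luma > lo) & (luma < hi)])
        if not samples:
            return None
        samples = np.vstack(samples)

        def to_rg(pixels):                             # normalised rg chromaticity
            s = pixels.sum(axis=1, keepdims=True) + 1e-9
            return pixels[:, [2, 1]] / s               # (r, g); OpenCV stores BGR

        model = multivariate_normal(mean=to_rg(samples).mean(axis=0),
                                    cov=np.cov(to_rg(samples), rowvar=False))
        probs = model.pdf(to_rg(bgr.reshape(-1, 3).astype(np.float64)))
        return probs.reshape(bgr.shape[:2]) / probs.max()

    img = cv2.imread("people.jpg")                     # hypothetical input image
    if img is not None:
        p = skin_probability_map(img)
        if p is not None:
            mask = (p > 0.2).astype(np.uint8) * 255    # simple binary threshold
            cv2.imwrite("skin_mask.png", mask)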
Transdisciplinary approaches enhance the production of translational knowledge.
Ciesielski, Timothy H; Aldrich, Melinda C; Marsit, Carmen J; Hiatt, Robert A; Williams, Scott M
2017-04-01
The primary goal of translational research is to generate and apply knowledge that can improve human health. Although research conducted within the confines of a single discipline has helped us to achieve this goal in many settings, this unidisciplinary approach may not be optimal when disease causation is complex and health decisions are pressing. To address these issues, we suggest that transdisciplinary approaches can facilitate the progress of translational research, and we review publications that demonstrate what these approaches can look like. These examples serve to (1) demonstrate why transdisciplinary research is useful, and (2) stimulate a conversation about how it can be further promoted. While we note that open-minded communication is a prerequisite for germinating any transdisciplinary work and that epidemiologists can play a key role in promoting it, we do not propose a rigid protocol for conducting transdisciplinary research, as one really does not exist. These achievements were developed in settings where typical disciplinary and institutional barriers were surmountable, but they were not accomplished with a single predetermined plan. The benefits of cross-disciplinary communication are hard to predict a priori and a detailed research protocol or process may impede the realization of novel and important insights. Overall, these examples demonstrate that enhanced cross-disciplinary information exchange can serve as a starting point that helps researchers frame better questions, integrate more relevant evidence, and advance translational knowledge more effectively. Specifically, we discuss examples where transdisciplinary approaches are helping us to better explore, assess, and intervene to improve human health. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Qinke; Jung, Seong Jun; Jang, Sung Kyu; Lee, Joohyun; Jeon, Insu; Suh, Hwansoo; Kim, Yong Ho; Lee, Young Hee; Lee, Sungjoo; Song, Young Jae
2015-06-01
We report the selective growth of large-area bilayered graphene film and multilayered graphene film on copper. This growth was achieved by introducing a reciprocal chemical vapor deposition (CVD) process that took advantage of an intermediate h-BN layer as a sacrificial template for graphene growth. A thin h-BN film, initially grown on the copper substrate using CVD methods, was locally etched away during the subsequent graphene growth under residual H2 and CH4 gas flows. Etching of the h-BN layer formed a channel that permitted the growth of additional graphene adlayers below the existing graphene layer. Bilayered graphene typically covers an entire Cu foil with domain sizes of 10-50 μm, whereas multilayered graphene can be epitaxially grown to form islands a few hundreds of microns in size. This new mechanism, in which graphene growth proceeded simultaneously with h-BN etching, suggests a potential approach to control graphene layers for engineering the band structures of large-area graphene for electronic device applications. Electronic supplementary information (ESI) available: The growth conditions, statistical studies of OM images and high-resolution STM/TEM measurements for multi-/bi-layered graphene are discussed in detail. See DOI: 10.1039/c5nr02716k
Euthanasia assessment in ebola virus infected nonhuman primates.
Warren, Travis K; Trefry, John C; Marko, Shannon T; Chance, Taylor B; Wells, Jay B; Pratt, William D; Johnson, Joshua C; Mucker, Eric M; Norris, Sarah L; Chappell, Mark; Dye, John M; Honko, Anna N
2014-11-24
Multiple products are being developed for use against filoviral infections. Efficacy for these products will likely be demonstrated in nonhuman primate models of filoviral disease to satisfy licensure requirements under the Animal Rule, or to supplement human data. Typically, the endpoint for efficacy assessment will be survival following challenge; however, there exists no standardized approach for assessing the health or euthanasia criteria for filovirus-exposed nonhuman primates. Consideration of objective criteria is important to (a) ensure test subjects are euthanized without unnecessary distress; (b) enhance the likelihood that animals exhibiting mild or moderate signs of disease are not prematurely euthanized; (c) minimize the occurrence of spontaneous deaths and loss of end-stage samples; (d) enhance the reproducibility of experiments between different researchers; and (e) provide a defensible rationale for euthanasia decisions that withstands regulatory scrutiny. Historic records were compiled for 58 surviving and non-surviving monkeys exposed to Ebola virus at the US Army Medical Research Institute of Infectious Diseases. Clinical pathology parameters were statistically analyzed and those exhibiting predictive value for survival are reported. These findings may be useful for standardization of objective euthanasia assessments in rhesus monkeys exposed to Ebola virus and may serve as a useful approach for other standardization efforts.
Team knowledge representation: a network perspective.
Espinosa, J Alberto; Clark, Mark A
2014-03-01
We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content. Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed. We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge. Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge. Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures. Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.
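The network representation described above is straightforward to compute with an off-the-shelf graph library; the sketch below uses networkx on an invented who-shares-knowledge-with-whom graph to obtain the kinds of structural quantities mentioned (isolates, centrality, density, cliques). The paper's exact operationalization of these measures may differ.

    # Illustration of team knowledge as a graph of who shares knowledge with whom.
    import networkx as nx

    edges = [("ana", "ben"), ("ben", "cho"), ("ana", "cho"), ("cho", "dee")]
    team = ["ana", "ben", "cho", "dee", "eli"]        # eli shares with nobody

    G = nx.Graph()
    G.add_nodes_from(team)
    G.add_edges_from(edges)

    print("isolates:", list(nx.isolates(G)))          # members holding knowledge alone
    print("degree centrality:", nx.degree_centrality(G))
    print("density:", nx.density(G))
    print("knowledge cliques:", [c for c in nx.find_cliques(G) if len(c) >= 3])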
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
NASA Astrophysics Data System (ADS)
Birrell, Paul J.; Zhang, Xu-Sheng; Pebody, Richard G.; Gay, Nigel J.; de Angelis, Daniela
2016-07-01
Understanding how the geographic distribution of and movements within a population influence the spatial spread of infections is crucial for the design of interventions to curb transmission. Existing knowledge is typically based on results from simulation studies whereas analyses of real data remain sparse. The main difficulty in quantifying the spatial pattern of disease spread is the paucity of available data together with the challenge of incorporating optimally the limited information into models of disease transmission. To address this challenge the role of routine migration on the spatial pattern of infection during the epidemic of 2009 pandemic influenza in England is investigated here through two modelling approaches: parallel-region models, where epidemics in different regions are assumed to occur in isolation with shared characteristics; and meta-region models where inter-region transmission is expressed as a function of the commuter flux between regions. Results highlight that the significantly less computationally demanding parallel-region approach is sufficiently flexible to capture the underlying dynamics. This suggests that inter-region movement is either inaccurately characterized by the available commuting data or insignificant once its initial impact on transmission has subsided.
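A generic commuter-coupled ("meta-region") susceptible-infected-recovered sketch illustrates the kind of coupling described, with the force of infection in each region mixing prevalence across regions through a row-normalized commuting matrix. This is not the authors' fitted model, and the populations, commuting matrix, and epidemiological parameters below are invented.

    # Generic meta-region SIR with commuter coupling (illustrative only).
    import numpy as np

    def simulate(beta, gamma, C, N, I0, days, dt=0.25):
        C = C / C.sum(axis=1, keepdims=True)          # row-normalize commuter flux
        S, I, R = N - I0, I0.astype(float), np.zeros_like(N, dtype=float)
        history = []
        for _ in range(int(days / dt)):
            lam = beta * (C @ (I / N))                # force of infection per region
            new_inf = lam * S * dt
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            history.append(I.copy())
        return np.array(history)

    N = np.array([8e6, 5e6, 2e6])                     # three hypothetical regions
    C = np.array([[0.95, 0.04, 0.01],                 # mostly within-region contact,
                  [0.05, 0.92, 0.03],                 # small commuter exchange
                  [0.02, 0.05, 0.93]])
    I = simulate(beta=0.45, gamma=1/3, C=C, N=N,
                 I0=np.array([100.0, 0.0, 0.0]), days=120)
    print("epidemic peak day per region:", I.argmax(axis=0) * 0.25)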
NASA Astrophysics Data System (ADS)
Bloom, Jeffrey A.; Alonso, Rafael
2003-06-01
There are two primary challenges to monitoring the Web for steganographic media: finding suspect media and examining those found. The challenge that has received a great deal of attention is the second of these, the steganalysis problem. The other challenge, and one that has received much less attention, is the search problem. How does the steganalyzer get the suspect media in the first place? This paper describes an innovative method and architecture to address this search problem. The typical approaches to searching the web for covert communications are often based on the concept of "crawling" the Web via a smart "spider." Such spiders find new pages by following ever-expanding chains of links from one page to many next pages. Rather than seek pages by chasing links from other pages, we find candidate pages by identifying requests to access pages. To do this we monitor traffic on Internet backbones, identify and log HTTP requests, and use this information to guide our process. Our approach has the advantages that we examine pages to which no links exist, we examine pages as soon as they are requested, and we concentrate resources only on active pages, rather than examining pages that are never viewed.
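The request-driven search idea reduces, at its simplest, to consuming a log of HTTP GET requests observed on a backbone (the capture itself is outside the scope of this sketch), keeping only media-like URLs, and ordering them for the steganalyzer by request frequency and recency. The log layout and example URLs below are hypothetical.

    # Toy prioritization of observed HTTP requests for downstream steganalysis.
    from collections import defaultdict

    MEDIA_EXT = (".jpg", ".jpeg", ".png", ".gif", ".bmp", ".wav")

    def prioritize(request_log):
        """request_log: iterable of (timestamp, url) tuples for observed GETs."""
        count, last_seen = defaultdict(int), {}
        for ts, url in request_log:
            if url.lower().endswith(MEDIA_EXT):
                count[url] += 1
                last_seen[url] = max(ts, last_seen.get(url, ts))
        # most-requested first, ties broken by most recent request
        return sorted(count, key=lambda u: (count[u], last_seen[u]), reverse=True)

    log = [(1, "http://example.org/a.jpg"), (2, "http://example.org/page.html"),
           (3, "http://example.org/b.png"), (4, "http://example.org/a.jpg")]
    for url in prioritize(log):
        print("send to steganalyzer:", url)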
Tanglegrams for rooted phylogenetic trees and networks
Scornavacca, Celine; Zickmann, Franziska; Huson, Daniel H.
2011-01-01
Motivation: In systematic biology, one is often faced with the task of comparing different phylogenetic trees, in particular in multi-gene analysis or cospeciation studies. One approach is to use a tanglegram in which two rooted phylogenetic trees are drawn opposite each other, using auxiliary lines to connect matching taxa. There is an increasing interest in using rooted phylogenetic networks to represent evolutionary history, so as to explicitly represent reticulate events, such as horizontal gene transfer, hybridization or reassortment. Thus, the question arises how to define and compute a tanglegram for such networks. Results: In this article, we present the first formal definition of a tanglegram for rooted phylogenetic networks and present a heuristic approach for computing one, called the NN-tanglegram method. We compare the performance of our method with existing tree tanglegram algorithms and also show a typical application to real biological datasets. For maximum usability, the algorithm does not require that the trees or networks are bifurcating or bicombining, or that they are on identical taxon sets. Availability: The algorithm is implemented in our program Dendroscope 3, which is freely available from www.dendroscope.org. Contact: scornava@informatik.uni-tuebingen.de; huson@informatik.uni-tuebingen.de PMID:21685078
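The quantity a tanglegram layout tries to minimize is the number of crossings between the auxiliary lines; for fixed leaf orderings this equals the number of shared taxon pairs that appear in opposite relative order on the two sides. The small sketch below counts crossings for given orderings; it is not the NN-tanglegram heuristic itself.

    # Count crossings between the auxiliary lines of a tanglegram for fixed leaf orders.
    def crossings(left_order, right_order):
        pos = {taxon: i for i, taxon in enumerate(right_order)}
        mapped = [pos[t] for t in left_order if t in pos]   # shared taxa only
        return sum(1 for i in range(len(mapped))
                     for j in range(i + 1, len(mapped)) if mapped[i] > mapped[j])

    print(crossings(["t1", "t2", "t3", "t4"], ["t2", "t1", "t3", "t4"]))  # 1 crossing
    print(crossings(["t1", "t2", "t3", "t4"], ["t4", "t3", "t2", "t1"]))  # 6 crossings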
Implementing and Assessing Inquiry-Based Learning through the CAREER Award
NASA Astrophysics Data System (ADS)
Brudzinski, M. R.
2011-12-01
In order to fully attain the benefits of inquiry-based learning, instructors who typically employ the traditional lecture format need to make many adjustments to their approach. This change in styles can be intimidating and logistically difficult to overcome, both for instructors and students, such that a stepwise approach to this transformation is likely to be more manageable. In this session, I will describe a series of tools to promote inquiry-based learning that I am helping to implement and assess in classroom courses and student research projects. I will demonstrate the importance of integrating with existing institutional initiatives as well as recognizing how student development plays a key role in student engagement. Some of the features I will highlight include: defining both student learning outcomes and student development outcomes, converting content training to be self-directed and asynchronous, utilizing conceptests to help students practice thinking like scientists, and employing both objective pre/post assessment and student self-reflective assessment. Lastly, I will reflect on how the well-defined goal of teaching and research integration in the CAREER award solicitation resonated with me even as an undergraduate and helped inspire my early career.
Intrinsic random functions for mitigation of atmospheric effects in terrestrial radar interferometry
NASA Astrophysics Data System (ADS)
Butt, Jemil; Wieser, Andreas; Conzett, Stefan
2017-06-01
The benefits of terrestrial radar interferometry (TRI) for deformation monitoring are restricted by the influence of changing meteorological conditions contaminating the potentially highly precise measurements with spurious deformations. This is especially the case when the measurement setup includes long distances between instrument and objects of interest and the topography affecting atmospheric refraction is complex. These situations are typically encountered with geo-monitoring in mountainous regions, e.g. with glaciers, landslides or volcanoes. We propose and explain an approach for the mitigation of atmospheric influences based on the theory of intrinsic random functions of order k (IRF-k) generalizing existing approaches based on ordinary least squares estimation of trend functions. This class of random functions retains convenient computational properties allowing for rigorous statistical inference while still permitting to model stochastic spatial phenomena which are non-stationary in mean and variance. We explore the correspondence between the properties of the IRF-k and the properties of the measurement process. In an exemplary case study, we find that our method reduces the time needed to obtain reliable estimates of glacial movements from 12 h down to 0.5 h compared to simple temporal averaging procedures.
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
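The general idea can be sketched with a small feedforward classifier trained on measurement vectors whose spread mimics process variation and meter noise; scikit-learn's MLP is used here purely as a stand-in for the authors' implementation, and all simulated data are invented.

    # Sketch: feedforward classifier for pass/fault decisions on measurement vectors.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n, n_meas = 400, 8
    nominal = rng.normal(1.0, 0.05, size=(n, n_meas))      # good circuits: process spread
    faulty = rng.normal(1.0, 0.05, size=(n, n_meas))
    faulty[:, 2] += rng.normal(0.6, 0.1, size=n)            # one distorted test measurement
    X = np.vstack([nominal, faulty]) + rng.normal(0, 0.01, size=(2 * n, n_meas))
    y = np.array([0] * n + [1] * n)                         # 0 = pass, 1 = fault

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)
    device_under_test = rng.normal(1.0, 0.05, size=(1, n_meas))
    print("fault predicted:", bool(clf.predict(device_under_test)[0]))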
Impact detection and analysis/health monitoring system for composites
NASA Astrophysics Data System (ADS)
Child, James E.; Kumar, Amrita; Beard, Shawn; Qing, Peter; Paslay, Don G.
2006-05-01
This manuscript includes information from test evaluations and development of a smart event detection system for use in monitoring composite rocket motor cases for damaging impacts. The primary purpose of the system as a sentry for case impact event logging is accomplished through the implementation of a passive network of miniaturized piezoelectric sensors, a logger with pre-determined force threshold levels, and analysis software. Empirical approaches to structural characterizations and network calibrations, along with implementation techniques, were successfully evaluated; testing was performed on both unloaded (less propellant) and loaded rocket motors, with the cylindrical areas being of primary focus. The logged test impact data, together with known physical network parameters, provided impact location as well as force determination, typically within 3 inches of the actual impact location using a 4-foot network grid, and force accuracy within 25% of the actual impact force. The simple empirical characterization approach, along with the robust and flexible sensor grids and battery-operated portable logger, shows promise of a system that can increase confidence in composite integrity for both new assets progressing through manufacturing processes as well as existing assets that may be in storage or transportation.
Malone, Matthew; Goeres, Darla M; Gosbell, Iain; Vickery, Karen; Jensen, Slade; Stoodley, Paul
2017-02-01
The concept of biofilms in human health and disease is now widely accepted as a cause of chronic infection. Typically, biofilms show remarkable tolerance to many forms of treatment and to the host immune response. This has led to a vast increase in research to identify new (and sometimes old) anti-biofilm strategies that demonstrate effectiveness against these tolerant phenotypes. Areas covered: Unfortunately, a standardized methodological approach to biofilm models has not been adopted, leading to a large disparity between testing conditions. This has made it almost impossible to compare data across multiple laboratories, leaving large gaps in the evidence. Furthermore, many biofilm models testing anti-biofilm strategies aimed at the medical arena have not considered the matter of relevance to an intended application. This may explain why some in vitro models based on methodological designs that do not consider relevance to an intended application fail when applied in vivo at the clinical level. Expert commentary: This review will explore the issues that need to be considered in developing performance standards for anti-biofilm therapeutics and provide a rationale for the need to standardize models/methods that are clinically relevant. We also provide some rationale as to why no standards currently exist.
Shape Biased Low Power Spin Dependent Tunneling Magnetic Field Sensors
NASA Astrophysics Data System (ADS)
Tondra, Mark; Qian, Zhenghong; Wang, Dexin; Nordman, Cathy; Anderson, John
2001-10-01
Spin Dependent Tunneling (SDT) devices are leading candidates for inclusion in a number of Unattended Ground Sensor applications. Continued progress at NVE has pushed their performance to 100s of pT/rt-Hz at 1 Hz. However, these sensors were designed to use an applied field from an on-chip coil to create an appropriate magnetic sensing configuration. The power required to generate this field (~100 mW) is significantly greater than the power budget (~1 mW) for a magnetic sensor in an Unattended Ground Sensor (UGS) application. Consequently, a new approach to creating an ideal sensing environment is required. One approach being used at NVE is "shape biasing." This means that the physical layout of the SDT sensing elements is such that the magnetization of the sensing film is correct even when no biasing field is applied. Sensors have been fabricated using this technique and show reasonable promise for UGS applications. Some performance trade-offs exist. The power is easily under 1 mW, but the sensitivity is typically lower by a factor of 10. This talk will discuss some of the design details of these sensors as well as their expected ultimate performance.
MINIMAL PROSPECTS FOR RADIO DETECTION OF EXTENSIVE AIR SHOWERS IN THE ATMOSPHERE OF JUPITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bray, J. D.; Nelles, A., E-mail: justin.bray@manchester.ac.uk
One possible approach for detecting ultra-high-energy cosmic rays and neutrinos is to search for radio emission from extensive air showers created when they interact in the atmosphere of Jupiter, effectively utilizing Jupiter as a particle detector. We investigate the potential of this approach. For searches with current or planned radio telescopes we find that the effective area for detection of cosmic rays is substantial (∼3 × 10^7 km^2), but the acceptance angle is so small that the typical geometric aperture (∼10^3 km^2 sr) is less than that of existing terrestrial detectors, and cosmic rays also cannot be detected below an extremely high threshold energy (∼10^23 eV). The geometric aperture for neutrinos is slightly larger, and greater sensitivity can be achieved with a radio detector on a Jupiter-orbiting satellite, but in neither case is this sufficient to constitute a practical detection technique. Exploitation of the large surface area of Jupiter for detecting ultra-high-energy particles remains a long-term prospect that will require a different technique, such as orbital fluorescence detection.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such linear systems is considered challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information that exploits the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
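One standard form of such a limited-memory update, sketched below for a symmetric positive definite system (an illustrative construction, not the authors' exact variant for saddle point sequences), wraps an existing preconditioner M with a few vectors S spanning an approximate invariant subspace: H = (I - S W (A S)^T) M (I - (A S) W S^T) + S W S^T with W = (S^T A S)^{-1}. The test matrix and subspace choice are invented for the demonstration.

    # Limited-memory update of an existing preconditioner (illustrative, dense).
    import numpy as np

    def lmp_update(A, M, S):
        AS = A @ S
        W = np.linalg.inv(S.T @ AS)                     # small k x k matrix
        P = np.eye(A.shape[0]) - S @ W @ AS.T           # dense here only for clarity
        return P @ M @ P.T + S @ W @ S.T

    def spectral_cond(B):
        ev = np.linalg.eigvals(B).real                  # eigenvalues real/positive here
        return ev.max() / ev.min()

    rng = np.random.default_rng(0)
    n, k = 200, 8
    Q = rng.normal(size=(n, n))
    A = Q @ Q.T + n * np.eye(n)                         # SPD test matrix
    M = np.diag(1.0 / np.diag(A))                       # existing (Jacobi) preconditioner

    # use k extreme eigenvectors of M A as the "approximate invariant subspace"
    Mh = np.sqrt(M)
    _, Y = np.linalg.eigh(Mh @ A @ Mh)
    S = Mh @ Y[:, :k]

    H = lmp_update(A, M, S)
    print("eigenvalue spread of M A:", spectral_cond(M @ A))
    print("eigenvalue spread of H A:", spectral_cond(H @ A))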
LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement
NASA Astrophysics Data System (ADS)
Oguz, Ipek; Styner, Martin; Sanchez, Mar; Shi, Yundi; Sonka, Milan
2015-03-01
Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method. Quantitative evaluation of our approach was performed based on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques where labeling by our anatomical experts was used as independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 +/- 0.03mm and an unsigned surface error of 0.42 +/- 0.03mm over the whole brain. Excluding the rather problematic temporal pole region further improves unsigned surface distance to 0.34 +/- 0.03mm. This high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Machine learning properties of materials and molecules with entropy-regularized kernels
NASA Astrophysics Data System (ADS)
Ceriotti, Michele; Bartók, Albert; Csányi, Gábor; De, Sandip
Application of machine-learning methods to physics, chemistry and materials science is gaining traction as a strategy to obtain accurate predictions of the properties of matter at a fraction of the typical cost of quantum mechanical electronic structure calculations. In this endeavor, one can leverage general-purpose frameworks for supervised-learning. It is however very important that the input data - for instance the positions of atoms in a molecule or solid - is processed into a form that reflects all the underlying physical symmetries of the problem, and that possesses the regularity properties that are required by machine-learning algorithms. Here we introduce a general strategy to build a representation of this kind. We will start from existing approaches to compare local environments (basically, groups of atoms), and combine them using techniques borrowed from optimal transport theory, discussing the relation between this idea and additive energy decompositions. We will present a few examples demonstrating the potential of this approach as a tool to predict molecular and materials' properties with an accuracy on par with state-of-the-art electronic structure methods. MARVEL NCCR (Swiss National Science Foundation) and ERC StG HBMAP (European Research Council, G.A. 677013).
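The combination step mentioned above can be illustrated with entropy-regularized optimal transport: given a matrix C of pairwise similarities between the local environments of two structures, Sinkhorn iterations produce a soft matching P, and sum(P * C) serves as a global similarity. The sketch below is simplified relative to the actual framework (real applications would use SOAP-type environment kernels, and the regularization constant here is arbitrary); the toy descriptors are random.

    # Entropy-regularized matching of local environments into a global similarity.
    import numpy as np

    def rematch_similarity(C, gamma=0.1, n_iter=200):
        n, m = C.shape
        K = np.exp(C / gamma)                     # entropic scaling of similarities
        u, v = np.ones(n) / n, np.ones(m) / m
        for _ in range(n_iter):                   # Sinkhorn iterations
            u = (np.ones(n) / n) / (K @ v)
            v = (np.ones(m) / m) / (K.T @ u)
        P = np.diag(u) @ K @ np.diag(v)           # transport plan with uniform marginals
        return float(np.sum(P * C))

    rng = np.random.default_rng(0)
    envA = rng.normal(size=(4, 10))               # 4 local environments, toy descriptors
    envB = rng.normal(size=(5, 10))
    envA /= np.linalg.norm(envA, axis=1, keepdims=True)
    envB /= np.linalg.norm(envB, axis=1, keepdims=True)
    C = envA @ envB.T                             # cosine similarity between environments
    print("global similarity:", rematch_similarity(C))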
Inferring diffusion in single live cells at the single-molecule level
Robson, Alex; Burrage, Kevin; Leake, Mark C.
2013-01-01
The movement of molecules inside living cells is a fundamental feature of biological processes. The ability to both observe and analyse the details of molecular diffusion in vivo at the single-molecule and single-cell level can add significant insight into understanding molecular architectures of diffusing molecules and the nanoscale environment in which the molecules diffuse. The tool of choice for monitoring dynamic molecular localization in live cells is fluorescence microscopy, especially so combining total internal reflection fluorescence with the use of fluorescent protein (FP) reporters in offering exceptional imaging contrast for dynamic processes in the cell membrane under relatively physiological conditions compared with competing single-molecule techniques. There exist several different complex modes of diffusion, and discriminating these from each other is challenging at the molecular level owing to underlying stochastic behaviour. Analysis is traditionally performed using mean square displacements of tracked particles; however, this generally requires more data points than is typical for single FP tracks owing to photophysical instability. Presented here is a novel approach allowing robust Bayesian ranking of diffusion processes to discriminate multiple complex modes probabilistically. It is a computational approach that biologists can use to understand single-molecule features in live cells. PMID:23267182
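A simple, non-Bayesian illustration of discriminating diffusion modes: compute the mean square displacement of a short simulated 2D track, fit Brownian (MSD = 4Dt) and anomalous (MSD = 4Dt^α) models, and compare them with BIC. The paper's approach instead ranks models probabilistically from the raw track, which is better suited to the short, noisy tracks typical of single fluorescent proteins; all parameters below are invented.

    # MSD-based model comparison on a simulated Brownian track (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    def msd(track, max_lag):
        return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                         for lag in range(1, max_lag + 1)])

    def bic(residuals, n_params):
        n = len(residuals)
        return n * np.log(np.mean(residuals**2)) + n_params * np.log(n)

    rng = np.random.default_rng(2)
    dt, D = 0.05, 0.1
    track = np.cumsum(rng.normal(0, np.sqrt(2 * D * dt), size=(200, 2)), axis=0)

    lags = np.arange(1, 21) * dt
    m = msd(track, 20)
    brown = lambda t, D: 4 * D * t
    anom = lambda t, D, a: 4 * D * t**a
    pb, _ = curve_fit(brown, lags, m, p0=[0.1])
    pa, _ = curve_fit(anom, lags, m, p0=[0.1, 1.0])
    print("BIC Brownian :", bic(m - brown(lags, *pb), 1))
    print("BIC anomalous:", bic(m - anom(lags, *pa), 2))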
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
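The objective described above can be written down directly: find coefficients c, constrained to sum to one, that minimize the squared Frobenius norm of the commutator between the combined density and Fock matrices built from past iterates. The sketch below feeds this quartic objective to a generic constrained optimizer, whereas the paper solves it with a constrained Newton method; the random symmetric matrices merely stand in for actual SCF iterates (and the overlap matrix is taken as the identity).

    # Minimize || [sum_i c_i D_i, sum_j c_j F_j] ||_F^2 subject to sum(c) = 1.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, m = 6, 4                                     # matrix size, number of iterates
    sym = lambda X: 0.5 * (X + X.T)
    D = [sym(rng.normal(size=(n, n))) for _ in range(m)]
    F = [sym(rng.normal(size=(n, n))) for _ in range(m)]

    def objective(c):
        Dc = sum(ci * Di for ci, Di in zip(c, D))
        Fc = sum(ci * Fi for ci, Fi in zip(c, F))
        comm = Dc @ Fc - Fc @ Dc
        return np.linalg.norm(comm, "fro") ** 2

    res = minimize(objective, x0=np.full(m, 1.0 / m),
                   constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}])
    print("coefficients:", np.round(res.x, 3), " commutator norm^2:", res.fun)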
Composition of corn dry-grind ethanol by-products: DDGS, wet cake, and thin stillage.
Kim, Youngmi; Mosier, Nathan S; Hendrickson, Rick; Ezeji, Thaddeus; Blaschek, Hans; Dien, Bruce; Cotta, Michael; Dale, Bruce; Ladisch, Michael R
2008-08-01
DDGS and wet distillers' grains are the major co-products of the dry grind ethanol facilities. As they are mainly used as animal feed, a typical compositional analysis of the DDGS and wet distillers' grains mainly focuses on defining the feedstock's nutritional characteristics. With an increasing demand for fuel ethanol, the DDGS and wet distillers' grains are viewed as a potential bridge feedstock for ethanol production from other cellulosic biomass. The introduction of DDGS or wet distillers' grains as an additional feed to the existing dry grind plants for increased ethanol yield requires a different approach to the compositional analysis of the material. Rather than focusing on its nutritional value, this new approach aims at determining more detailed chemical composition, especially on polymeric sugars such as cellulose, starch and xylan, which release fermentable sugars upon enzymatic hydrolysis. In this paper we present a detailed and complete compositional analysis procedure suggested for DDGS and wet distillers' grains, as well as the resulting compositions completed by three different research groups. Polymeric sugars, crude protein, crude oil and ash contents of DDGS and wet distillers' grains were accurately and reproducibly determined by the compositional analysis procedure described in this paper.